*Hello and welcome to Linear Algebra, welcome to educator.com.*0000

*This is the first lesson of Linear Algebra course, here at Educator.com.*0004

*It is a complete Linear Algebra course from beginning to end.*0010

*So, Linear Algebra: I am going to introduce just a couple of terms right now, just to give you an idea of what to expect in this course.*0014

*It is the study of something called Linear Mappings or Linear transformations, also known as linear functions between vector spaces.*0023

*And this is a profoundly important part of mathematics, because linear functions are the heart and soul of Science and Mathematics; everything that you enjoy in your world today consists essentially of a study of linear systems.*0032

*So, don't worry about what these terms mean: vector space, linear mapping, transformation, things like that; we will get to that eventually.*0047

*Today's topic, our first topic is going to be linear systems and it's going to be the most ubiquitous of the topics, because we are going to use linear systems as our fundamental technique to deal with all of the other mathematical structures that we deal with.*0055

*In one form or another, we are always going to be solving some set of linear equations.*0070

*So having said that, welcome again, let's get started.*0075

*Okay, so let's just start with something that many of you have seen already, if not, no worries.*0081

*If we have something like AX = B, this is a linear equation; one reason the term linear is used is that this is the equation of a straight line.*0091

*However, as it turns out, although we use the term linear because it comes from the straight line, later on in the course we are actually going to get the precise definition of what we mean by linear.*0104

*And believe it or not, it actually has nothing to do with a straight line. *0114

*It just so happens that this equation, AX = B, can be represented by a straight line on a sheet of paper, on a two-dimensional surface.*0117

*It happens to be a straight line, so we call it linear; but the idea of linearity is actually a deeper algebraic property about how this function behaves when we start moving from space to space.*0129

*Okay, so this is sort of the single-variable case; we have AX = B, something like, for example, [inaudible].*0143

*Well, that's okay we will just leave it like that.*0153

*If I can write this, A_{1}X_{1} + A_{2}X_{2} + A_{3}X_{3} = B, well, these are just different coefficients, 5, 4, 6, (-7).*0155

*These X_{1}, X_{2} and X_{3} are the variables; so now, instead of just the one variable in the equation up here...*0175

*We have three variables, X_{1}, X_{2}, X_{3}; we can have any number of them, and B.*0183

*So a solution to something like this is a series of X's that satisfy this particular equation.*0189

*That's all that's going on here: a linear equation. This "linear" essentially means that the exponent up here is 1; that pretty much is what we are used to seeing when we deal with linear equations.*0197

*But again linearity is a deeper algebraic property, which we will explore a little bit later in the class, and that's when linear algebra becomes very, very exciting.*0210

*Okay, so let's use a specific example; so if I had something like 6X_{1} - 3X_{2} + 4X_{3} = (-13).*0220

*I might have something like...*0234

*...X_{1} = 2, X_{2} = 3 and X_{3} = (-4); well, this 2, this 3, this (-4) for X_{1}, X_{2} and X_{3} is a solution to this linear equation.*0239

*That's it; that's all we are looking for: values of the variables that satisfy this equality; that's all that's happening here.*0256

*Note, however, that we can also have X_{1} = 3...*0264

*X_{2} = 1 and X_{3} = (-7). So if we put 3, 1, (-7) in for X_{1}, X_{2} and X_{3} respectively, we also get this equality, (-13); so, as it turns out, solutions don't necessarily have to be unique.*0271

*Sometimes the solution can be unique; other times a whole set of numbers can satisfy that equality; so we want to find all of the solutions that satisfy that equality, okay.*0287
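As a quick numerical check of the two solutions above (this snippet is an editorial illustration, not part of the lecture; the helper name `is_solution` is made up):

```python
def is_solution(coeffs, xs, b):
    # Does coeffs[0]*x1 + coeffs[1]*x2 + ... equal b?
    return sum(a * x for a, x in zip(coeffs, xs)) == b

# 6x1 - 3x2 + 4x3 = -13 is satisfied by both sets of values:
print(is_solution([6, -3, 4], [2, 3, -4], -13))  # True
print(is_solution([6, -3, 4], [3, 1, -7], -13))  # True
```

Both choices of values make the single equality hold, which is exactly what it means for a single linear equation to have more than one solution.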

*Now let's generalize this some more and talk about a system of equations, so I am going to go ahead and represent this symbolically, so see we have...*0302

*A_{11}X_{1} + A_{12}X_{2} + ... + A_{1n}X_{n} = B_{1}; so this is just our first equation; we have n variables, that's what the X_{1} to X_{n} are, and these are just the coefficients in front of those variables, the X's, and this is just some number.*0315

*So this is just one linear equation; now we'll write another one, A_{21}X_{1}, and I'll explain what these subscripts mean in just a moment, + A_{22}X_{2} + ... + A_{2n}X_{n} = B_{2}.*0344

*Now we have our second equation, and then we go down the line; so I am going to put a "..." there; the "..." means we are dealing with several equations here.*0367

*And then I am going to write A_{m1}X_{1} + A_{m2}X_{2} + ... + A_{mn}X_{n}, I know that's a little small but that's an mn right there, equals B_{m}; so notice we used two subscripts here; for example, we usually use the subscripts i, j.*0379

*And the first subscript represents the row, or the equation, so in this case 1, 2, 3, all the way to the mth equation; so A_{11} is in the first equation, and the second subscript, j, represents that particular column, that particular entry.*0415

*So, A_{11} represents the first coefficient in the first equation; if I had something like, let's say, A_{32}, that would mean the third equation, the second entry, the second coefficient: the coefficient for X_{2}.*0437

*That's all this means; so here, notice, I have X's all the way to n, X_{n}, X_{n} all the way down; oops, I forgot an X_{n} right here; so I have n variables...*0457

*...and I have as many rows, m equations; and this is exactly what we mean when we say we have m equations and n variables, this many and this many.*0473

*We just arrange it like this, so this is a system of linear equations. *0486

*What this means is that when we are looking for a solution to a system of linear equations, as opposed to just one linear equation, we are looking for...*0490

*We want a set of X_{1}, X_{2}, all the way to X_{n}, such that all of these equations are satisfied simultaneously...*0503

*... such that all equalities, I'll say equalities instead of equations, since we know we are dealing with equations; we want all of these equalities to be satisfied...*0521

*... simultaneously...*0535

*In other words we want numbers such that, that holds, that holds, that holds, that holds if one of them doesn't hold, it's not a solution.*0540

*Let's say you have seven equations, and let's say you found some numbers that satisfy six of them, but they don't satisfy the seventh; then that system doesn't have that solution.*0547

*It has to satisfy all of them, that's the whole idea.*0558

*Let's see what we've got here....*0565

*... okay, we are going to use a process called elimination...*0571

*To solve systems of linear equations, now we are going to start in with the examples to see what kind of situations we can actually come up with.*0583

*One solution, infinitely many solutions, no solutions: what are the things that can happen when dealing with a linear system?*0592

*How many variables, how many equations, and what's the relationship that exists; just to get a sense of what's going on, just to get us back into the habit of working with these.*0598

*Now of course many of you have dealt with these in algebra. *0605

*You have seen the method of elimination; you have used the method of substitution.*0608

*Essentially, elimination is transforming one equation; let's say you have two equations and two unknowns: you are going to manipulate one of the equations so that you can eliminate one of the variables.*0612

*Because again in algebra, ultimately when you are solving an equation, you can deal with one variable at a time.*0620

*Let's just jump in, and I think the technique itself will be self-explanatory...*0628

*...okay, so our first example is X + 2Y = 8, 3X - 4Y = 4; we want to find X and Y such that both of these hold simultaneously, okay.*0636

*In this particular case, with elimination it really doesn't matter which variable you eliminate, so a lot of times it's a question of personal choice.*0649

*Some people just like one particular variable; often you look at what looks like it's easy to do, and that will guide your choice.*0658

*In this particular case, I notice that this coefficient is 1; so chances are, if I multiply this whole equation by (-3) to transform it, and then add it to this equation, the -3X and the 3X will disappear.*0665

*So let us go ahead and do that.*0681

*Let us go ahead and multiply everything by (-3) and when I do that, I tend to put a (-3) here, (-3) there to remind me. *0684

*What this ends up being is...*0692

*-3X - 6Y= (-24) and of course this equation we just leave it alone.*0698

*We don't need to make any changes to it.*0708

*3X - 4Y = 4.*0711

*And now we can go ahead and then, the -3X + 3X, that goes away, -6Y - 4Y gives us -10Y, -24 + 4 is -20.*0717

*And when we divide through by -10, we get Y = 2.*0731

*We are able to find our first variable, Y = 2. Now, I can put this Y = 2 back into any one of the original equations; you could put it into these two, it's not a problem.*0735

*Multiplying by a constant doesn't change the nature of the equation, because again, you are retaining the equality; you are doing the same thing to both sides. So, Y = 2.*0745

*Let's go ahead and use the first equation; therefore, I will go ahead and draw a little line here; we will say X + 2 times 2 (the 2 being Y) = 8, so X + 4 = 8, oops...*0757

*Let us put the X on the left-hand side: X = 4. So there you have it, a solution: X = 4, Y = 2; if X = 4 and Y = 2, that will solve both of these simultaneously.*0780

*Both of these equalities will be satisfied, so in this particular case, we have one solution.*0796

*We do this in red.....*0804

*...one solution, okay....*0817
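The elimination just performed can be mirrored step by step in code; this is an editorial sketch (the helper name `solve_2x2` is invented), using exact fractions so the arithmetic cannot drift:

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    # Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination;
    # assumes a1 is nonzero and a unique solution exists.
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    m = -a2 / a1                         # multiply row 1 by m so the x's cancel
    b2, c2 = b2 + m * b1, c2 + m * c1    # row 2 := row 2 + m * row 1
    y = c2 / b2                          # solve the remaining one-variable equation
    x = (c1 - b1 * y) / a1               # back-substitute into row 1
    return x, y

# x + 2y = 8 and 3x - 4y = 4 give the unique solution x = 4, y = 2:
print(solve_2x2(1, 2, 8, 3, -4, 4))
```

The multiplier `m` here plays the role of the (-3) in the worked example: it is chosen precisely so that adding the scaled first row to the second eliminates X.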

*Now let's try X - 3Y = -7, 2X - 6Y = 7, so let's see what happens here.*0826

*Well, in this particular case, again I notice that I have a 2 and a coefficient of 1, so we are going to go ahead and eliminate the X again; in order to eliminate the X, I need this to be a -2X, so I am going to multiply everything by (-2) on top.*0834

*-2 times X is -2X; -2 times -3Y is +6Y; and -2 times -7 gives 14.*0849

*A small digression: I can pretty much guarantee you that the biggest problem in linear algebra is not going to be the linear algebra; it is going to be the arithmetic: just keeping track of the negative signs and positive signs, and just the arithmetic, addition, subtraction, multiplication and division.*0861

*My recommendation, of course, is: you can certainly do this by hand, and it is not a problem, but at some point you are going to want to start to use mathematical software, things like Maple, MathCAD, Mathematica; they make life much, much easier.*0881

*Now, obviously you want to understand what is going on with the mathematics; but as we get into the course, a lot of the computational procedures are going to be kind of tedious, in the sense that they are easy, except that they are arithmetically heavy, so they are going to take time.*0897

*You might want to avail yourself of the mathematical software, okay.*0912

*Let us continue on and then this one doesn't change, so it's 2X - 6Y = 7 and then when we add these, we get +6Y and -6Y, wow these cancel too, so we end up with 0 = 14 + 7 is 21.*0917

*We get something like this, 0 = 21; well 0 does not equal 21, okay, so this is no solution.*0936

*We call this an inconsistent system, so any time you see something that is not true, that tells you that there is no solution.*0946

*In other words there is no way for me to pick an X and a Y that will satisfy both of these equalities simultaneously.*0954

*It is not possible, no solution also called inconsistent.*0960
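In code, the inconsistency shows up exactly the way it did on paper: after eliminating X, the left side becomes 0 while the right side does not. A quick editorial sketch (the function name `eliminate_x` is invented):

```python
from fractions import Fraction

def eliminate_x(a1, b1, c1, a2, b2, c2):
    # Add a multiple of equation 1 to equation 2 so the x term cancels;
    # the result is an equation of the form b*y = c.
    m = Fraction(-a2, a1)
    return b2 + m * b1, c2 + m * c1

# x - 3y = -7 and 2x - 6y = 7:
b, c = eliminate_x(1, -3, -7, 2, -6, 7)
print(b, c)  # 0 21 -- the row reads 0 = 21, so the system is inconsistent
```

Seeing a left side of 0 paired with a nonzero right side is the computational signature of an inconsistent system.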

*Okay...*0969

*Example three, okay, now we have got three equations and three unknowns, X, Y and Z. *0974

*Well, we deal with these equations two at a time; so let's go ahead: we see an X here, and a 2X, a 3X.*0979

*I am going to go ahead and just deal with the first two equations, and I am going to multiply, I am going to go ahead and eliminate the X, so I am going to multiply by -2 here.*0986

*And again, just be very, very systematic in what you do; write everything down. The biggest problem that I have seen with my students is that they want to do things in their head, and they want to skip steps.*0997

*Well, when you are dealing with multiple steps, let us say you have a seven-step problem, and each one of those steps requires maybe three or four sub-steps; if you skip a step in each sub-portion of the problem, you have skipped about seven steps.*1007

*I promise, when there has been a mistake, which there always will be, and when it comes to arithmetic, you are going to have a very hard time finding where you went wrong; so just write everything down.*1019

*That's the best thing to do.*1027

*You will never, ever go wrong if you write everything down; and yes, I am guilty of that myself.*1028

*Okay, so this becomes, let us write it over here: -2X - 4Y; -2 times 3Z is -6Z; -2 times 6 is -12.*1035

*And let us bring this equation over unchanged, that is the whole idea: 2X - 3Y + 2Z = 14; so the X's are eliminated, and then we end up with -4Y - 3Y is -7Y, -6Z + 2Z is -4Z, and -12 + 14 = 2; so that's our first equation.*1051

*And now we have reduced these two, eliminated the X; so now we have an equation in two unknowns.*1078

*Now, let us deal with the first and the third; in this particular case, I am going to do this one in blue. I want to eliminate the X again, because I eliminated the X here, so I am going to eliminate the X here.*1086

*I am going to multiply by a -3 this time.*1100

*When I do that, I end up with -3X...*1103

*-3 times +2Y is -6Y, -3 times 3Z is -9Z, and -3 times 6 is -18; and I am hoping that you are going to confirm my arithmetic here.*1112

*And then again I leave this third one unchanged, 3X + Y - Z = -2.*1127

*I eliminate those; -6Y + 1Y is -5Y; and then I get -9Z - 1Z is -10Z; -18 - 2, I get -20.*1141

*Now, I have my second equation, and this one was first equation, so now I have two equations and two variables, Y and Z, Y and Z.*1155

*Now, I can work with these two, so let me go ahead and bring them over and rewrite them, -7Y - 4Z = 2, and -5Y - 10Z = -20.*1164

*Good, so now we have a little bit of a choice to make, do we eliminate the y or do we eliminate the Z now. *1184

*Again, it's a personal choice; I am going to go ahead and eliminate the Y's for no other reason besides that I am just going to work from left to right; not a problem.*1192

*I need the Y's to disappear, and they are both negative, so I think I am going to multiply the top equation by a -5.*1201

*And I am going to multiply the bottom equation, I will write that in black, no actually I will keep it in blue, the bottom equation by 7 , 7 here, 7 here.*1214

*This will give me a positive value here and a negative value here, this should take care of it.*1225

*Let me multiply the first one: what I get is -5 times -7Y is 35Y, right; -5 times -4Z is +20Z; -5 times 2 is -10. Then, 7 times -5Y is -35Y.*1231

*So far so good, 7 times -10 is -70Z and 7 times a -20 is -140.*1252

*Now, when we add these, the Y's go away, and we get +20Z - 70Z for a total of -50Z = -10 - 140 = -150.*1263

*That means Z is equal to 3.*1279

*Okay, so now that I have Z = 3, I can go back and put it into one of these equations to find Y, so let me go ahead and use the first equation, so let me move over here next.*1285

*I would write -7Y - 4 times Z which was 3 = 2, I get -7Y -12 = 2, -7Y = 14, Y = -2, notice I didn't skip these steps, I wrote down everything, yes I know its basic algebra.*1298

*But it's always going to be the basic stuff that is going to slip you up, so Y = -2.*1320

*I have done my algebra correctly, my arithmetic, that's that one, now that I have a Z and I have a Y, I can go back to any one of my original equations and solve for my X.*1325

*Okay, I am going to go ahead and take the first one, because that coefficient of 1 is there; so I get X + 2 times Y, which is -2, plus...; write it out exactly like that.*1335

*Don't multiply this out and make sure you actually see it like this again. *1350

*Write it all out: + 3 times 3 = 6; we get X - 4 + 9 = 6; oops, those are little straight lines here.*1355

*I'll erase these if you guys are bothered; okay, X - 4 + 9: what is -4 + 9? That's 5, right?*1371

*X + 5 = 6, we get X = 1, and there you have it, you have X = 1, Y = -2, Z = 3.*1381

*Three equations, three unknowns and we have one solution.*1392

*Again, one solution; notice what we did: we picked two equations and eliminated a variable, then used the first and the third to eliminate the same variable; we dropped it down to two equations and two unknowns.*1401

*Then we eliminated the common variable, got down to one, and worked our way backward; very, very simple, very straightforward, nice and systematic.*1413

*Again nothing difficult, just a little long, that's all, okay.*1421
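As a quick editorial check (not from the lecture), we can verify that X = 1, Y = -2, Z = 3 satisfies all three equations of this example simultaneously; the coefficients below are read off the system as worked above:

```python
def satisfies_all(system, xs):
    # system is a list of (coefficients, right-hand side) pairs;
    # every equation must hold for xs to count as a solution.
    return all(sum(a * x for a, x in zip(coeffs, xs)) == b
               for coeffs, b in system)

system = [([1,  2,  3],  6),   # x + 2y + 3z = 6
          ([2, -3,  2], 14),   # 2x - 3y + 2z = 14
          ([3,  1, -1], -2)]   # 3x + y - z = -2
print(satisfies_all(system, [1, -2, 3]))  # True
```

Note that `all(...)` enforces the point made earlier: a candidate that fails even one equation is not a solution of the system.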

*Let's see what else we have in store here, example four, okay so we have X +2Y - 3Z = -4, 2X + Y - 3Z = 4, notice in this case we have two equations and we have 3 unknowns, so let's see what's going to happen here.*1427

*Well, this is a coefficient 1, this is 2, so let's multiply this by a -2, let's go ahead and use a blue here so we will do -2 here and -2 there.*1444

*And now let's move over in this direction: so we have -2X - 4Y, and this is going to be +6Z, right, equals +8; and then we will leave this one alone, because we want to eliminate the variable X: 2X...*1457

*Excuse me, + Y - 3Z = 4; okay, let's eliminate those; now we have -4Y + Y, that should be -3Y; 6Z - 3Z is +3Z; and 8 + 4, that is equal to 12, okay.*1478

*Now we have -3Y + 3Z = 12, we can simplify this a little bit because every number here, all the coefficients are divisible by 3, so let me go ahead and rewrite this as, let me divide by (-), actually it doesn't really matter.*1501

*I am going to divide by -3 just to make this a positive, so this becomes....*1520

*Right, now let me actually draw a little arrow out; so, divide by -3: this becomes Y, this becomes a -Z, and 12 divided by -3 becomes -4; is that correct? Yes; so now we have this equation, Y - Z = -4; that's as far as we go.*1529

*Now let's, what we are going to do is again we need to find the solutions to this, so we need to find the X and the Y and the Z.*1554

*Let's go ahead and move, solve for one of the variables, so Y = Z -4, so now I have Y = Z - 4.*1564

*And I have this; I can solve for X, but what do I do with this? As it turns out...*1579

*Whenever I have something like this, Z = any real number, so basically when you have a situation like this, you can put in any real number for Z, and whatever number you get, let's say you choose the number 5.*1584

*If you put 5 in for Z, that means 5 - 4, well let's just do that as an example, so if Z = 5, well 5- 4 = 1.*1600

*That makes Y = 1 , and now I can go back and solve this equation, so let me just do this one quickly.*1611

*We get X + 2 times 1, which is 2, - 15 = -4; 2 - 15 is -13, so X - 13 = -4; that means X = -4 + 13, which is 9; so X = 9.*1620

*This is a particular solution, but it's a particular solution based on the fact that I chose Z = 5, so notice any time you have two equations three unknowns, more unknowns than equations, you are going to end up with an infinite number of possibilities depending on how you choose Z.*1644

*Z can be any real number; once you choose Z, you have specified Y, and once you have specified Y, you can go back and specify X.*1660

*Here we have an infinite number of solutions....*1670

*...okay, so an infinite number of solutions is also another possibility, so we have seen one solution, a system that has one solution only, we have seen system that has no solutions , that was inconsistent and now we have seen the system that has an infinite number of solutions, okay.*1683
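The infinite family of solutions can be written as a function of the free variable Z; here is an editorial sketch (the function name `particular_solution` is invented):

```python
def particular_solution(z):
    # For x + 2y - 3z = -4 and 2x + y - 3z = 4, elimination gave y = z - 4;
    # back-substituting into the first equation then gives x.
    y = z - 4
    x = -4 - 2 * y + 3 * z
    return x, y, z

# Each choice of the free variable z produces a valid solution:
print(particular_solution(5))   # (9, 1, 5), the particular solution worked above
print(particular_solution(0))
```

Every real value of `z` yields a triple satisfying both equations, which is what "infinitely many solutions" means concretely.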

*Now let's see what else we can do here.*1698

*I just want to do one more nice example, to make sure that all the steps are covered, all the bases are covered, so we know what we are dealing with.*1703

*Okay, this particular system is X + 2Y = 10, 2X - 2Y = -4, 3X + 5Y = 26. *1711

*Okay, let's start off by eliminating the X here; so I am going to multiply this by -2, -2, to give us -2X - 4Y = -20; and of course this one stays the same, 2X - 2Y = -4; when I add these, I get -6Y = -24, Y = 4, okay.*1720

*I get Y = 4; now notice, I have three equations, so this Y = 4 deals with these first two.*1754

*I need all three equations to be handled simultaneously; so I can't just stop here and plug back in; it's not going to work.*1766

*I need to make sure; so far I have just done the first and the second, and now I am going to do the first and the third; so this is the first and second equations.*1774

*Now I need to do the first and third.*1784

*So now, and we do this one in red: this is X, this is 3X, so I am going to multiply by -3; in this case I have -3X - 6Y = -3 times 10, which is -30 (you cross these out); and I make sure my negative signs work here.*1789

*And I have 3X + 5Y = 26.*1814

*Now let's go ahead and do that.*1822

*Okay, the 3X's cancel; -6Y + 5Y is -Y, and -30 + 26 is -4; divide by -1, so we get Y = 4. Okay, so notice: from our first and second equations, we get Y = 4; from our first and third equations, we get Y = 4; with these equations that we come up with, we have transformed this original system.*1829

*Now our original system has been transformed into X + 2Y = 10, because that's what we are doing, we are just changing equations around: X + 2Y = 10; and then we did this one, we got Y = 4, and we got Y = 4, because...*1858

*...it worked out the same; now I can take this Y, put it in here, and solve for X.*1876

*Let me make this a little clearer; select this, and we will write an X here; it is definitely a Y; so now I take X + 2 times Y, which is 4, = 10; so I get X + 8 = 10; I get X = 2.*1883

*And that's my solution: one solution, X = 2, Y = 4. So be very, very careful with this, just because you end up eliminating an equation or eliminating a variable; in this particular case, notice, we have three equations and two variables; you can eliminate a variable and end up with Y = 4, but you can't stop there.*1905

*You have to account for the third equation; so now you do the first and the third, and if there is consistency there, you end up with this system.*1926

*This system is equivalent to this system, that's all you are doing.*1935

*Every time you make the change, you are creating a new set of equations, you are just, you know, now you are dealing with this system because this and this are the same.*1940

*You are good, now you can go back and solve for the X, okay.*1949

*Let's look at what we have here; again, we have a system of three equations and two unknowns, so we are going to treat it the same way; let's start off by doing the first and second equations, so we write "first and second" over here; we are going to multiply this by -2, -2, so we are going to get -2X - 4Y = -20.*1956

*And this one we leave: 2X - 2Y = -4; when we add, the X's cancel, and we are left with -6Y = -24, excuse me; and then we are left with Y = 4, again.*1984

*That's just the same thing that we had before; now we will take care of the first and third equations; this time we multiply again, by -3.*2002

*Let me do this one in blue, -3, -3 and we are left with, so the first equation becomes -3X - 6Y = -30, and then this one becomes 3X + 5Y = 20.*2011

*Now, when I do this, the X's cancel; I am left with -Y = -30 + 20, which is -10; I get Y = 10.*2034

*Y = 4, Y = 10.*2048

*There is no way to reconcile these two and make all three equalities satisfied simultaneously, so this is no solution.*2051

*Again, just because you found a solution here, don't stop here and (inaudible) into one of these equations, because you just did it for the first two; and certainly don't throw it into the third, because that won't give you anything.*2062

*No solution, these have to be consistent, first and second, this is first and third.*2074

*Again we are looking for this whole thing.*2084

*What we just did here is the equivalent system that we have transformed to is X + 2Y = 10, Y = 4, Y = 10.*2088

*There is your inconsistency okay.*2100
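The last two examples differ only in the third equation's right-hand side (26 versus 20), and that alone decides between one solution and no solution. Here is an editorial sketch of the check (the helper name `solve_then_check` is invented):

```python
from fractions import Fraction

def solve_then_check(system):
    # Solve the first two equations (in x and y) by elimination, then
    # verify every remaining equation; return (x, y), or None if inconsistent.
    (a1, b1, c1), (a2, b2, c2) = system[0], system[1]
    m = Fraction(-a2, a1)                  # multiplier that cancels x
    y = (c2 + m * c1) / (b2 + m * b1)
    x = (c1 - b1 * y) / a1
    for a, b, c in system[2:]:
        if a * x + b * y != c:
            return None
    return x, y

print(solve_then_check([(1, 2, 10), (2, -2, -4), (3, 5, 26)]))  # the unique solution x = 2, y = 4
print(solve_then_check([(1, 2, 10), (2, -2, -4), (3, 5, 20)]))  # None
```

This mirrors the lesson's warning: the values obtained from the first two equations only count as a solution if every remaining equation agrees.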

*All of these examples that we have done show the same thing: we see that we either have one solution, a unique solution; or we have no solution; or we have infinitely many solutions. Those are the only three possibilities for a linear system: one solution, no solution, or infinitely many solutions.*2110

*Back in algebra, we were dealing with lines; again, these are all just equations of lines, the ones in two variables, X and Y. Well, the no-solution case, that's when you have parallel lines; they never meet.*2133

*The one solution case was when you had...*2148

*... they meet at a point and the infinitely many solutions is when one line is on top of another line, infinitely many solutions.*2154

*But again, we are using the word linear because we have dealt with lines before we developed a mathematical theory; mathematics tends to work from specific to general.*2163

*And in the process of going to the general, the language that we use to talk about the general is based on the stuff that we have dealt with in the specifics.*2173

*We have dealt with line before we dealt with linear functions, once we actually came up with a precise definition for a linear function, we said let's call it, well the ones who decided to give it a name said let's call it a linear function, a linear map, a linear transformation.*2182

*It actually has nothing to do with a straight line, it just so happens that the equation for a line happens to be a specific example of a linear function.*2197

*But linearity itself is a deeper algebraic property which we will explore and which is going to be the very heart of, well linear algebra.*2204

*Okay, let me just go over one more thing here, the method of elimination, so let's recap.*2214

*Using the method of elimination, we can do three things, essentially. We can interchange any two equations; and interchange just means switch the order; so if a particular equation has a coefficient of 1 on one of the variables, it's usually a good idea to put that one on top.*2221

*But maybe you prefer it in a different location, it just means switching the order of the equations, nothing strange happening there.*2234

*Multiply any equation by a non-zero constant, which is really what we did most of the time here; multiply by -3, -2, 5, 7, whatever you need to do in order to make the elimination of the variables happen.*2241

*And then third, add a multiple of one equation to another, leaving the equation you multiplied by a constant in its original form; so recall, we had X + 2Y = 8, 3X - 4Y = 4.*2253

*When we multiplied the first equation by -3, then added it to equation 2, we ended up with the following equivalent system: we end up converting this to -3X - 6Y = -24, and then we brought this one over, 3X - 4Y = 4; once we add, we actually find the answer to this, which is -10Y = -20.*2271

*We ended up with a solution; well, once we get that solution, this is now the new equation, so that's over here: Y = 2.*2305

*But the original equation stays; so that's what we were doing when we do this.*2317

*We are changing a system to an equivalent system, that's what we have to keep in mind when we are doing these eliminations.*2322

*Notice the first equation is unchanged, when we rewrite our entire system.*2330

*Okay, thank you for joining us here at educator.com, first lesson for linear algebra, we look forward to see you again, take care, bye, bye.*2336

*Welcome back to educator.com; this is a continuation of Linear Algebra. Today, we are going to be talking about vectors in the plane; the plane is also represented as something called R^{2}, which just means the real numbers, squared.*0000

*Basically, you have the X axis, which is the real numbers, and we just take another copy of the real numbers and make it perpendicular, which is why we call it R^{2}; by that analogy, normal space would be called R^{3}.*0014

*And n space is called R*^{n}, R raised to the n power.0027

*Okay, let's go ahead and get started.*0031

*Okay, in math and science we talk about two types of quantities, one is a scalar, which is just a fancy word for a number and the other is something called a vector.*0040

*And in case you are wondering why it is that we actually differentiate, why would we even need something like a vector.*0053

*What I like to tell my students is: think of a pushing analogy.*0059

*If somebody comes, let's say, in front of you and pushes you with a certain force, let's just say it's 100 newtons of force; that's a number.*0062

*Well, you end up going backward in one direction, so let's say you are standing over here.*0073

*If they push you this way, you are moving, you are going to end up being pushed that way.*0079

*Let's say somebody comes from the other direction and pushes you in that direction, well you end up moving that way as it turns out, you end up in different places.*0085

*You end up moving in a different direction, but they are both pushing with the same force; so there is a difference, because this is not the same motion. So, as it turns out, in the real world...*0094

*...we need something more than just a number; certain situations, not all of them, need to have some other quality associated with them.*0105

*And that quality is a direction; so if I say I am going to push you with 100 newtons of force in this direction...*0116

*That, we call a vector, having a length, what we call a magnitude, of 100, in that direction.*0124

*If it were the other way, well, we say it's a vector whose magnitude is still 100, but it's in this direction.*0133

*And these are two very different vectors, because they have different directions, even though their magnitude is the same; so that is sort of the qualitative description of what a vector is.*0140

*Okay, let's go ahead and talk about a reference frame for vectors. We take as our reference frame the standard XY coordinate plane, the Cartesian plane; to the right on the X axis is positive, and up on the Y axis is positive; negative, negative the other way.*0150

*Nothing that you don't already know.*0171

*Now, if I start at the origin and if I draw, well let's not draw it, first let me just pick a point, so I have this point, let's say the point is (2, 4).*0176

*Well, yes, it's true, it represents a point in space; but if I start from the origin and put the tail of an arrow there, and then go and put the head of the arrow at the point, something like that...*0186

*Notice, I have actually now given you an explicit direction from a particular point, the origin; well, this Cartesian coordinate plane is our frame of reference.*0196

*The origin will be our ultimate point of reference; so now I have an arrow associated with this coordinate here, (2, 4).*0207

*Well, this coordinate is called a vector, and this arrow is also called a vector; they are just two different representations of it; so if I call this vector U...*0219

*I can certainly represent it as (2, 4).*0232

*And another representation of it that I will also use: I will write it as a column matrix instead; I will write it like this, (2, 4) stacked vertically; so notice, this is two rows and one column.*0235

*And remember, anything that has either one row or one column we call a vector, so now we can see why we can associate this idea of a vector with a matrix.*0248

*We don't necessarily have to have this coordinate with a comma in between; we can just represent a vector as the point (2, 4).*0258

*And again, you are also welcome to write it as a row vector (2, 4), not necessarily as a coordinate, the only difference being that little comma there.*0266

*This is still saying move 2 in the X direction, 4 in the Y direction, and now we have introduced this other notion of it actually being an arrow from the origin to this particular point.*0275

*With this arrow, now we have a physical something, and now there's something else that we can associate with it.*0290

*We can associate a length with this arrow, because it has a particular length, and we can associate an angle from a reference line.*0297

*Well we take our reference line as the X axis and we measure all angles in counter clockwise direction.*0305

*Well, that would be considered a positive angle and if I go this way, this would be consider a negative angle.*0312

*If I go all the way around once, that is 360 degrees; if I go around twice, that is 720 degrees, two times 360.*0318

*Even though we end up in the same place, the angle measure is actually different, so again...*0330

*...Counter clockwise positive angle, clockwise negative angle.*0338

*X axis is our reference line, the origin is our reference point, okay, let's define a couple of things.*0343

*If we have a vector, let's just take a generic vector U, and again vectors have the little arrow on top of them, and I will do it as a column this time.*0353

*I will often do both and it's not really a problem, later on when we get into certain aspects of linear algebra, it's going to be important on how we actually use a vector, whether we do it as a column or a row, but for right now it's not really much of a problem.*0365

*We define the magnitude, the symbol for the magnitude is oops, excuse me...*0379

*U is our vector, we have the symbol for the vector, and we put two double lines around it, that's the magnitude and that's just the length.*0388

*Well, you know that if this is our vector, and let me actually draw it again over here so make it little more clear.*0396

*Well, if I have this particular thing and here is the point (X, Y), well, you know that we moved to the right X units and we have moved up Y units, so this is Y and this is X.*0403

*Well the Pythagorean theorem tells us that X^{2} + Y^{2} = this length^{2}, so we define the magnitude as X^{2} + Y^{2} under the √ sign.*0416

*That gives us the length of the vector, or...*0430

*...The magnitude...*0436

*...Of these fancy words, now we can also define angle, so if we call this angle θ*0440

*Well we have Y, we have X, and the relationship between them is the tangent, so I have the tangent of θ = Y over X.*0448

*Well, that implies that the angle θ itself is going to be the arc tangent or the inverse tangent, of what oops, little lines.*0463

*I don't want them to get in the way here of what we do.*0476

*Y/X, so when you are given a vector in this form, you can find out the length, and you can find out the angle that it makes with the positive X axis, and remember again we are measuring that way, okay.*0481

*Okay, let's do an example here, so let's say our vector is (3, 7), okay, so 3 in the X direction, 7 in the Y direction; it's in the first quadrant, they are both positive.*0505

*The magnitude of U...*0524

*...Equals 3^{2} + 7^{2} under the √ sign; 9 + 49 is 58, so √58.*0529

*That is our magnitude, it is that long, and again you will find that when I come up with these numbers that are irrational under the square root sign, I often don't simplify.*0541

*I just leave them like that; it's not a problem at all.*0551

*Oftentimes, believe it or not, things like reduction and simplification obscure the mathematics; once you get a particular number, you are more than welcome to leave it like that.*0555

*The angle...*0566

*...is we said it's the arc tangent of Y/X, so 7/3, and we end up with 66.8 degrees.*0570

*Now notice we haven't drawn anything here, here we were talking about a physical object, we are talking about an angle which is a geometric notion, and we have expressed it in degrees.*0581

*You can express it in radians if you would like, if you remember 180 degrees is π radians, 3.14, you are welcome to do it either way, it's not a problem.*0591

*We have this geometric notion, we have a point that represents an arrow; you notice we haven't drawn any pictures here.*0602

*Now, you can certainly deal algebraically with vectors, it's not a problem; it's one of the reasons why we are doing linear algebra.*0609

*Ultimately we want to take the geometric notion and bring it into the realm of algebra, so we will give it a firmer foundation than just drawing pictures.*0615

*But pictures are a big help, and you want to make sure that you know what it is that you are dealing with, so we know that we are dealing with something in this quadrant.*0622

*When we get a number like 66.8 degrees, it actually makes sense: 3 this way, 7 this way; it actually should have been a little bit steeper, I apologize.*0631

*But, again we want our numbers to make sense, so this is the number, 66....*0640

*....8 degrees, and again we are measuring it from the positive X axis in that direction.*0645
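The lecture is purely spoken, so as a quick sanity check, here is a short Python sketch (my addition, not part of the lecture; the helper names are my own) that reproduces the √58 magnitude and the 66.8-degree angle of the vector (3, 7):

```python
import math

# The worked example: the vector (3, 7)
u = (3, 7)

magnitude = math.sqrt(u[0] ** 2 + u[1] ** 2)      # sqrt(3^2 + 7^2) = sqrt(58)
angle_deg = math.degrees(math.atan(u[1] / u[0]))  # arctan(7/3) ≈ 66.8 degrees
```

Note the plain arctan is fine here only because (3, 7) sits in the first quadrant; the third-quadrant example later in the lecture needs a quadrant correction.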

*Okay, let's do another example, let's take a vector, let's actually draw this one out first, yeah let's draw it out over here, so I am going to have one of my vectors, so I give the names...*0653

*I will call one of them T, how's that for tail, and that's going to be (3, 2), this time written as a row vector, and this other one I will call H for head, as in the arrow head.*0675

*And let's put this one at (7, 4); okay, so (3, 2), oops, (3, -2) actually, so it will go 1, 2, 3 over and 1, 2 down.*0691

*This is the tail, that's one thing and the head is (7, 4), 4, 5, 6, 7 and we go up 1, 2, 3, 4 and we are over here, so we have this vector right here.*0704

*Now, notice this didn't begin at the origin; it's not a problem, any vector we can move to the origin, so when we actually try to find...*0718

*Well, think of it this way: if we just move this vector over to the origin, it's going to end up being something like this, so as it turns out, any vector in the plane, whether it begins at the origin or not, is in some sense equivalent to a vector that actually begins at the origin.*0733

*When we actually solve, for the magnitude and the angle θ, we are just going to be dealing in the same way we did before.*0750

*We just take the difference between these two, the difference from the head to the tail, so in this particular case our X value, in other words, this distance is just a difference between the X values here and here.*0758

*And our Y value for the vector is the difference between the Y values here and here, so let's find our X, it's equal to 7 -3, always do the head - tail, equals 4.*0771

*And our Y = 4 - (-2), which is 6, 4 - (-2), which is 6.*0789

*What we have is this thing here which doesn't begin at the origin is equivalent to a vector that does begin at the origin that has, well whose algebraic representation is (4, 6).*0799

*This vector is the (4,6) vector, well so is this, except it doesn't start at the origin, that's why it's represented by two different points, so for our practical purposes we can just deal with this one, they are equivalent, okay.*0813

*Let's do our, let's give this vector a name, let's just call it S.*0828

*The magnitude of S, again with double lines, is equal to 4^{2} + 6^{2} under the √ sign; 16 + 36 is 52, so √52, if my arithmetic is correct.*0839

*I often make arithmetic mistakes, and again there is always going to be somebody there to check your arithmetic, there won't always be somebody to check your mathematics.*0855

*If you have to make a choice, mathematics comes first, not arithmetic.*0863

*And our θ is equal to the inverse tangent of Y/X, 6/4.*0869

*You don't have to reduce, 56.3 degrees, it's...*0878

*... That is this angle, which is the same as this angle because they are equivalent, not a problem; so again, when you are dealing with a vector that's been expressed with a head and a tail somewhere other than the origin, we just treat it the same way.*0889

*You just take the coordinate for the head minus the coordinate for the tail, and you end up with a vector as if it were starting at the origin.*0903
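The head-minus-tail rule is easy to verify numerically; here is a small sketch (my addition, not part of the lecture) using the T = (3, -2), H = (7, 4) example:

```python
import math

# Head minus tail: the vector from T = (3, -2) to H = (7, 4)
tail = (3, -2)
head = (7, 4)

# Component-wise head - tail gives the equivalent origin-based vector
s = (head[0] - tail[0], head[1] - tail[1])        # (4, 6), as in the lecture

magnitude = math.sqrt(s[0] ** 2 + s[1] ** 2)      # sqrt(4^2 + 6^2) = sqrt(52)
angle_deg = math.degrees(math.atan(s[1] / s[0]))  # arctan(6/4) ≈ 56.3 degrees
```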

*Okay, let's do one more example; this time our vector is (-6, -3), so again, a picture is worth a 1000 words.*0917

*We use pictures to help us understand what's going on; pictures are not proofs, algebra is the proof, so (-6, -3)...*0935

*...We're somewhere here, so we are looking at something like that, just so we know what we are dealing with.*0947

*We are dealing with something in the third quadrant, so when we get our numbers, we want to make sure that the numbers match our geometric intuition, the picture.*0953

*Let's take the magnitude of U; well, it is (-6)^{2}, which is 36, plus (-3)^{2}, which is 9, all under the √ sign, which gives us √45, if I am not mistaken, and our angle θ.*0962

*Here is where it's going to get interesting: the inverse tangent, and I apologize, I am a little older so I was actually taught it as arc tangent, but inverse tangent is fine.*0984

*Y/X, -3/-6; when you enter this into a calculator, here is what you are going to get: 26.5 degrees, and just on the surface that doesn't make sense.*0995

*We are in the third quadrant; we said that angles are measured from the +X axis over that way, so I know that my angle has to be more than 180 degrees and less than 270 degrees; 26.5 doesn't really jibe with that.*1014

*Here's what's going on, if you remember from your trigonometry, whenever you take the inverse function, remember the graph of the tangent function...*1028

*Well, I am not going to draw it, let's not worry about the graph; let me just say that whenever you use your calculator, the value that it is going to give you for your angle is going to be an angle between -90 and +90.*1040

*Okay, because the period of the tangent function is π; so, for what this value represents, remember you are doing a tangent, so you drop a perpendicular down to the X axis, and it's always down to the X axis, you never drop a perpendicular to the Y axis.*1057

*This angle right here is the 26.5; remember you are taking the arc tangent of a distance, okay.*1076

*-3/-6, well here is your -3, here is your -6, it's just a distance over a distance.*1084

*That's what the calculator is calculating, so for all practical purposes it is acting as if the angle is somewhere here; that's why it's important to know that we are in the third quadrant.*1093

*Now, when we formally decide to measure this angle, we take 180 + the 26.5, so our actual θ is not 26.5, but based on our standard of this being our reference line, this part is 26.5.*1104

*From here, the formal θ is 180 + 26.5, which is 206.5, positive.*1124

*Okay, so we need to differentiate that, so there are a couple of things that we need to be aware of, which is pretty characteristic if you remember from working with angles in trigonometry.*1139

*You have to be aware of which quadrant you are working in, and you also have to be aware of the signs of the trigonometric functions, because the cosine is positive in the fourth quadrant.*1147

*The sine is positive in this quadrant and the tangent is positive in this quadrant, so we need the picture to help us to make sense of the numbers, okay.*1158
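The quadrant correction the lecture does by hand is exactly what the two-argument arctangent automates; here is a sketch (my addition, not part of the lecture) for the (-6, -3) example:

```python
import math

# The third-quadrant example (-6, -3): a naive arctan loses the quadrant
x, y = -6, -3

naive = math.degrees(math.atan(y / x))   # ≈ 26.5°: only the reference angle

# atan2 keeps the signs of y and x separate, so it lands in the right
# quadrant; it returns a value in (-180°, 180°], so shift negative results
# by 360° to measure counterclockwise from the positive X axis.
theta = math.degrees(math.atan2(y, x))
if theta < 0:
    theta += 360
# theta ≈ 206.5°, i.e. 180° + 26.5°, matching the lecture's correction
```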

*Let's see what we have got: vector addition and scalar multiplication; okay, so now that we have these things called vectors, we need to do things with them, and we can multiply them by scalars.*1174

*And we actually add vectors together.*1189

*Kind of the same way as numbers; let's see what we have.*1193

*Vector addition, we will, vector U = let's do (U1, U2) and the vector V = (V1, V2), these are just the X and Y components of this vector.*1202

*Notice there are no arrows over there; then we define the sum U + V, and all you do is you add them component-wise.*1224

*You add the X component of U with the X component of V, so it is U1 + V1, and in this case I think I am going to go ahead and put commas just to separate them, and then you have U2 + V2.*1236

*We can also write it as equivalent to U1 + V1, U2 + V2, this is the column representation, so all I have done is I have added the X components, added my Y components and now I have a new vector which is U + V.*1252

*Let's talk about what this looks like geometrically okay, I am going to put my U vector right there, and I am going to put my V vector, I'll make it, I will make it kind of short, okay.*1275

*I don't want to run out of room, all this means is that do U, then do V.*1291

*In other words, U is here, and then just do V; well, V is a vector that goes in this direction with that length, so just lay it on top like that, and you end up at that point.*1298

*Well, as it turns out as you remember, that point forms a parallelogram, that is the end, that's your beginning point.*1312

*This is your ending point, so this vector, once you put the head here, this vector is our U + V vector.*1320

*Again all we have done is we have done U first, and then we have done V, if you add three vectors, four vectors, five vectors, you just keep adding them and moving along and where you end up, that's where the head of the final vector goes, and they all begin at the origin.*1332

*This is U + V, okay.*1348

*Let's also do U - V; now, U - V, well, there is no such thing as vector subtraction really, what you are doing is U + (-V), and the -V is just V, same length, in the opposite direction.*1353

*This would be -V, so now when we do U - V, that means do U first and then go V's distance in the opposite direction; so we do U first and then we go.*1374

*Well V goes this way, so -V is this way, so we go in the opposite direction, we go down that way and we end here.*1391

*That vector...*1403

*... Is U - V, and you have treated the same way, if it's U - V, well it's this entry - this entry that forms the X coordinate, this entry - this entry, that forms that coordinate.*1408

*And again, all you are doing is you are going along the vectors; U + V is do U first, then do V, and wherever you end up, that's where the head of the final arrow goes.*1423

*This is the vector U + V, this is the vector U - V, this is the original U , this is the original V and this is the -V, okay.*1432
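The component-wise rules above can be sketched in a few lines of Python (my addition, not part of the lecture; the helper names and sample vectors are my own):

```python
# Component-wise addition and subtraction of 2-vectors

def vadd(u, v):
    # U + V: add the X components and the Y components separately
    return (u[0] + v[0], u[1] + v[1])

def vsub(u, v):
    # U - V is just U + (-V)
    return (u[0] - v[0], u[1] - v[1])

u, v = (2, 3), (1, -1)
s = vadd(u, v)   # (3, 2)
d = vsub(u, v)   # (1, 4)
```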

*Okay, now let's talk about scalar multiplication....*1448

*...Okay, once again we will let U = let's say (X, Y); we can also write it as an (X, Y) column matrix, so let U = that, and A is a scalar, just a number.*1460

*Then, A times U = well, A times X, A times Y, or (AX, AY) written as a column.*1488

*All I am doing is taking this scalar and multiplying it by every entry in the actual vector itself, what this means geometrically is the following.*1514

*If this is my vector U, well, whenever I multiply by a constant, all I am doing is expanding it if the constant is greater than 1, shrinking it if it is less than 1, and pointing it in the opposite direction if it is negative.*1526

*If I have this vector, let's say it's (X, Y), and if I multiply it by 5, that means I take that vector and increase its length by a factor of 5.*1551

*That means I have multiplied this X value by 5 and the Y value by 5; if I multiply by 1/5, I shrink it down to a fifth.*1560

*If I multiply it by -5, that means it is the length of 5 but in the opposite direction, that's all that's happening pictorially, geometrically, okay.*1569
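A quick sketch (my addition, not part of the lecture) confirms that scaling by a multiplies the length by |a|, with a negative scalar flipping the direction:

```python
import math

def scale(a, u):
    # multiply every component of u by the scalar a
    return tuple(a * c for c in u)

def magnitude(u):
    return math.hypot(u[0], u[1])

u = (3, 4)            # magnitude 5
v = scale(5, u)       # (15, 20): five times as long
w = scale(-5, u)      # (-15, -20): same length, opposite direction
```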

*Now let's see what we have got, so let's take U = 6 and -9, so (6, - 9), we will put it in the fourth quadrant, no, yes, fourth quadrant.*1585

*And V = (3, 4), which we would put it in the first quadrant, so let's do U + V, that's equal to, I am going to write this as a column matrix, so 6 + 3 is 9.*1600

*Okay, and -9 + 4...*1623

*... 6 + 3 is 9, -9 + 4 is -5, so our U + V is that vector, how about U - V, well we do 6 - 3, and we do -9 - 4.*1631

*6 - 3 is 3, -9 - 4 is - 13...*1647

*...Algebraic, now let's see what this actually looks like geometrically to get a sense of what's going on.*1667

*We want our intuition and our algebra to match, so (6, -9) puts me somewhere down here, okay.*1675

*This is my U, and again we don't have to be exact here, you are welcome to (inaudible 2813) if you want, that's always nice.*1687

*And (3,4), may be somewhere up here, okay so this is U and this is V, U + V means do U first and then do V.*1695

*That puts us right there; this is U + V, (9, -5), yeah, seems about right, it should keep us in the fourth quadrant.*1716

*While U + V, I am sorry U - V, we have U and then -V, which is down this direction.*1729

*It's actually off the page, so it's going to come out as (3, -13); yes, looks pretty good, it jibes, it's exactly right, so it's going to be a vector a little further down.*1740

*Yes, everything seems good but again when doing these, it's the algebra that matters, we use the pictures to help us understand what's happening in order to make sense of the algebra, not the other way round, okay.*1756

*Angle between two vectors; okay, let's draw a picture here.*1775

*Let's just take one vector randomly there and another vector randomly there, there is an angle between those vectors.*1782

*Let's call that angle θ, and notice this is not the same θ as the angle of one of the vectors, which is measured from the positive X axis.*1792

*This has a θ, this angle also has a θ but we are talking about the angle actually between them, and as it turns out there is a beautiful formula that allows us to work with this.*1799

*Let's say this is U and let's say this is V; we have two vectors in the plane, and as it turns out, the cosine of the angle between them, which we call θ, is equal to the dot product of those two things, U.V, over...*1813

*...The product of the magnitudes of those two vectors, and again θ in this case is going to be greater than 0, less than 180.*1836

*When you actually work this out, you are going to get some angle from 0 to 180, an angle being 0, that means the vectors are pointed in the same direction; the angle between them is 0.*1846

*If the angle is 180, that means you have vectors that point in opposite directions; if it goes past that, well, the answer you are going to get is this angle, not that angle, okay, that's all this means.*1858

*Let's do an example, so if we have U = let's say (2,5) and V = (-3, 6).*1874

*Well let's do the dot product, U.V and you remember it's the product of the X values + the products of Y values down the line.*1889

*Two times -3 is -6 + 5 times 6, +30, -6 + 30 = 24, so that's our dot product.*1900

*That's going to be our numerator, and now the magnitude of U is going to be 2^{2}, which is 4, plus 5^{2}, which is 25, under the √ sign: √29.*1912

*The magnitude of V is (-3)^{2}, which is 9, plus 6^{2}, which is 36.*1930

*That is 45, so √45; therefore, our cosine of θ, using our formula up here, is equal to 24/(√29 times √45), which is 0.664.*1939

*And when I take the inverse cosine of that, okay so θ when we go over here.*1964

*θ = the inverse cosine of 0.664, I get 48.4; let me write this a little more clearly, my apologies.*1973

*48.4 degrees, so that tells me that if I have U (2, 5), that's in this quadrant, (-3, 6), that's in the second quadrant.*1992

*I am dealing with an angle between them of 48.4 degrees.*2006

*That's really nice: to be just given the vector values and to be able to extract some geometric property that is not necessarily implied by anything.*2009

*These are just sort of numbers representing things, and yet here we are able to tell you what the angle between those vectors is.*2022

*This is very extraordinary, okay I see.*2028
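The angle formula cos θ = U.V / (‖U‖‖V‖) is easy to check on the worked example; here is a Python sketch (my addition, not part of the lecture):

```python
import math

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    # magnitude via the dot product of the vector with itself
    return math.sqrt(dot(u, u))

u, v = (2, 5), (-3, 6)
cos_theta = dot(u, v) / (norm(u) * norm(v))   # 24 / (sqrt(29)*sqrt(45)) ≈ 0.664
theta = math.degrees(math.acos(cos_theta))    # ≈ 48.4 degrees
```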

*If U is perpendicular to V, at right angles, then of course θ = 90 degrees.*2036

*Well, what's the cosine of 90 degrees? 0, so that means that 0 = I use U and V here.*2051

*Let me...*2065

*...U.V over the magnitude of U times the magnitude of V, well I just multiply both sides by the magnitude of U and the magnitude of V and I end up with U.V = 0.*2070

*Here we go so, as it turns out if the two vectors are perpendicular to each other, the dot product is 0.*2094

*If the dot product of two vectors is 0, they are perpendicular to each other; actually, we don't say perpendicular, we use the word orthogonal.*2104

*U and V are...*2117

*...Orthogonal if and only if U.V = 0, so if I am given two vectors, I take the dot product; if it is equal to 0, I know that they are perpendicular.*2128

*If I know that they are perpendicular, I know that the dot product is 0, and the reason we say orthogonal instead of perpendicular comes up when we move to higher dimensions, and later on for those of you who go on in mathematics.*2141

*You'll speak of actual functions that are orthogonal, and it's defined in a similar way.*2153

*This is the power of abstract mathematics: we start with the things that we know, two space, three space, pictures that we can deal with, and we generalize to all kinds of mathematical structures that share the same properties.*2158

*We need a more general language to deal with them, so we don't talk about perpendicular functions, we speak about orthogonal functions.*2171

*And so we might as well start now and start dealing with orthogonal vectors, okay.*2179
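The orthogonality test is a one-liner; this sketch (my addition, not part of the lecture; the sample vectors are my own) shows the "dot product zero" criterion:

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def orthogonal(u, v):
    # U and V are orthogonal if and only if U.V = 0
    return dot(u, v) == 0

a = orthogonal((1, 2), (-2, 1))   # True:  1*(-2) + 2*1 = 0
b = orthogonal((2, 5), (-3, 6))   # False: the dot product is 24
```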

*Well, let's do something else here, what if we had θ = 0 and let's take U.U, okay.*2184

*Well the cosine of 0 is 1, so let's actually write out our formulas.*2201

*Cosine of this, so the cosine of 0 degrees = 1, well the cosine of θ = let's take U.U, let's just dot it with itself.*2213

*Just put it into our definition for the angle between two vectors; in other words, I have U and I have U on top of it, the angle between them is 0, so let me see if I can extract some information from this.*2228

*The magnitude of U times the magnitude of U; okay, multiply through and I end up with U.U = the magnitude of U squared.*2243

*This is just a number squared, and if I take the square root of both sides, I end up with the magnitude of U = U.U under the √ sign.*2259

*Now I have another way of actually finding the magnitude, what I can do is I can just take U dotted by itself, and then just take the square root of that number.*2275

*Very good, okay....*2287
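The two routes to the magnitude, the Pythagorean formula and the square root of the self dot product, agree; a quick sketch (my addition, not part of the lecture):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = (3, 7)
m1 = math.sqrt(u[0] ** 2 + u[1] ** 2)  # Pythagorean formula: sqrt(58)
m2 = math.sqrt(dot(u, u))              # square root of the self dot product
# m1 and m2 are the same number
```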

*...Let's talk about some properties of the dot product and unit vectors; okay, all these properties are going to be reasonably familiar because we have mentioned them before.*2293

*U.U is greater than 0 if U is not equal to 0, and U.U = 0 if and only if U = 0.*2309

*In other words, if U is not the zero vector, then the dot product with itself is always going to give you a positive number.*2328

*B, U.V = V.U, so the dot product is commutative; C, (U + V).W = U.W...*2341

*... + V.W, in other words the dot product is distributive; and the final one, D, C times (U.V) = I can pull the C inside...*2365

*(CU).V, or I can pull the C the other way and do U.(CV); again, just some properties to manipulate vectors when you start to deal with them.*2385

*Okay, we are moving along very nicely here, let us define a unit vector.*2402

*Unit vector is a vector...*2411

*...Whose length is 1, that’s it...*2418

*...A vector whose length is 1, okay; so let X be any vector, and the unit vector, which I will actually write as X with a little "unit" written down below...*2426

*Is equal to 1 over the magnitude of X, times the vector itself.*2449

*In other words I take the vector and I divide each entry of that vector by the magnitude of that vector.*2460

*Think of it this way, if I have the number 15 and if I want to turn it into 1, I divide it by itself right.*2467

*Yes, I just divide by 15 and I get 1; it's a way of taking that number and converting it to a 1.*2474

*Well, with a vector we are also dealing with direction, and I can't divide by a vector, that's not defined in mathematics, but I can divide by a number; so I take the actual vector itself, all of the components.*2481

*And if I divide each of the components which are numbers by the magnitude, which is a number, I essentially just scale it down by its magnitude.*2493

*In other words, I turn it into a vector of length 1 in the same direction, and this works for any vector, okay.*2504
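Normalization, dividing every component by the magnitude, looks like this in a short sketch (my addition, not part of the lecture; the helper name is my own):

```python
import math

def unit(x):
    # divide every component by the vector's magnitude
    m = math.sqrt(sum(c * c for c in x))
    return tuple(c / m for c in x)

x = (-2, -3)
ux = unit(x)                 # (-2/sqrt(13), -3/sqrt(13))
length = math.hypot(*ux)     # 1.0, up to rounding
```

This is the same computation as the lecture's closing example with X = (-2, -3).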

*Let's see we have two very important unit vectors.*2514

*Okay, we have the vector in the X direction, which we symbolize as I, and we have the vector in the Y direction, which we symbolize as J.*2530

*In other words, there is a unit vector length 1 that way, that's called I, and there is a unit vector right here and that's called J.*2549

*Well, as it turns out we can express any vector in the plane by a linear combination of these two and what that means is the following.*2564

*Let's say I have a vector X and let's say it is, I'll write it in, I'll write it in multiple forms, (7,9), which is equivalent to (7,9).*2577

*I want to express this as a combination of these unit vectors; well, a unit vector is just a vector of length 1, and I multiply it by this value in the X direction.*2590

*That means moving in the direction of the unit vector that many units, so another expression for this would be 7 times the unit vector I.*2600

*That means move 7 units in the direction of I + 9 units in the direction of J, and remember these are vectors.*2613

*Vector addition just means do this one first, then do this one, all of this is saying is, well, you know this is 7 in the X direction, 9 in the Y direction.*2623

*Well, this is saying 7 in the I direction, which is the X direction, and 9 in the J direction, so I have just combined vector addition and scalar multiplication, and I have represented it with these very unit vectors, the I and the J.*2634

*And any vector in here can be represented as a linear combination, and a linear combination just means a sum.*2649

*That's 1, that's 2, move seven units to the right, move 9 units up.*2656

*I might have something like -6I - 3J, that means move 6 units in the opposite direction of I, that means this way.*2662

*And move 3 units down in the J direction that means this way, that put's it somewhere over here.*2673

*Any vector in R^{2} can be represented by a linear combination of these two vectors, as you will discover later.*2681

*For any vector in, let's say, 13 space, well, I need 13 unit vectors, and I can represent any of those vectors by those 13 little unit vectors in that particular coordinate frame.*2690

*We will talk about that little bit later.*2704
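The linear-combination idea is concrete enough to write down; this sketch (my addition, not part of the lecture) rebuilds the (7, 9) and (-6, -3) examples from the standard unit vectors:

```python
I = (1, 0)   # unit vector along the X axis (the lecture's I)
J = (0, 1)   # unit vector along the Y axis (the lecture's J)

def combo(a, b):
    # the linear combination a*I + b*J, computed component-wise
    return (a * I[0] + b * J[0], a * I[1] + b * J[1])

p = combo(7, 9)     # (7, 9): 7 units along I, 9 units along J
q = combo(-6, -3)   # (-6, -3)
```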

*Okay, let's finish off with an example here; let's say we have the vector X, which is (-2, -3), so that puts us in the third quadrant.*2708

*Okay so we want to find a unit vector in this direction, okay.*2722

*Well let's go out and find what the magnitude of X is first.*2729

*It's going to be (-2)^{2}, 4, plus (-3)^{2}, 9, under the √ sign = √13, so our unit vector in the direction of X is equal to 1 over √13 times...*2735

*...-2, -3, which is equal to, just multiply it through.*2758

*-2 over √13 and -3 over √13 that is my new vector.*2765

*Notice I have an X coordinate and a Y coordinate; I have divided it by the magnitude of the vector itself.*2773

*This vector has a length of 1; in other words, if I were to find the magnitude of this vector, I would do (-2/√13)^{2} + (-3/√13)^{2} under the √ sign, and I end up getting 1.*2780

*That's the whole idea, very important concept, the unit vector.*2795

*And again notice that I have left this √13 in the denominator, it's not a problem, and it’s perfectly valid mathematics.*2800

*Don't let anybody tell you otherwise.*2806

*Thank you for joining us here at educator.com, we will see you next time for linear algebra.*2810

*Welcome back to educator.com and welcome back to linear algebra, this is going to be lesson number 11, and we are going to talk about N vectors today.*0000

*In the last lesson we talked about vectors in the plane, which are 2-vectors, because each vector is represented by two numbers; so when we talk about a vector in three space, it's called a 3-vector.*0008

*We also call it R^{3}, which we will symbolize in just a little bit; when we speak about n-vectors, it's just any number of them, so if we are talking about a 10 dimensional space, it just means a 1 by 10 matrix, or a 10 by 1 matrix, you remember.*0021

*It's just 10 numbers, so that's the nice thing about mathematics: you are not tied to what you can represent as far as reality is concerned, it is just as real, but obviously we don't know how to draw 10 space or n space or 13 space.*0034

*But these things do exist and mathematics is actually exactly the same, so let's get started....*0050

*... Okay, so let's just throw out a few examples, a four vector...*0059

*... Something like (1, 3, -2, 6), again we are just talking about a 4 by 1; I could also have written this vector as (1, 3, -2, 6).*0069

*It really doesn't matter, later on it will make a difference depending on how we want to arrange it, because we are going to be multiplying these things by matrices, so sometimes we want it this way, sometimes we want it this way.*0083

*Another representation is just regular coordinate representation, X, Y, Z, so on, so I could also write this as (1, 3, -2, 6).*0094

*They all mean the same thing, it just depends on what it is that you are doing, okay, a seven vector, let's do...*0106

*... Now again, so you would have (0, 5, 0, 6, 9, 7, 2), something like that.*0116

*And again you can write it as a row, you can write it as a list in coordinate form, this just means that you have this many dimensional space.*0125

*Two dimensional space is two numbers, three dimensional space is 3 numbers, this is a seven dimensional space, perfectly valid, perfectly real.*0136

*And the mathematics is handled exactly the same way as it was last times, okay let's talk about vector addition.*0143

*last time we talked about vector addition, when we said will you add two vectors together, you are just adding the individual components, in other words the individual numbers of those vectors together.*0151

*Let's just do an example, so let's say...*0162

*... Vector addition; let's do a vector, so we have U as (1, -2 and 3), and let's take V as (2, 3 and -3), so when we add them we are just adding the 1 and the 2, the -2 and the 3, the 3 and the -3, and so our U + V...*0172

*... Equals...*0197

*... Again it's a, it's a three vector, 1 + 2 is 3, -2 + 3 is -1, I am sorry, +1.*0199

*And 3 - 3 is 0, so we have the vector (3, 1, 0)...*0209

*.. If we do scalar multiplication, we take a vector and we just multiply it by a scalar, I will write it with our number...*0217

*... If I wanted to do, let's say, let's use U again, if I wanted to do 5U, well I just multiply everything in there by 5, 5 times 1 is 5, 5 times -2, -10, 5 times 3 is 15.*0230

*5U equals, and it's again, it's a three vector, nothing changes, we have (5, -10 and 15), so vector addition, scalar multiplication just like when we did it for vectors in the plane, you just have more numbers.*0244
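The same component-wise rules carry straight over to n-vectors; here is a sketch (my addition, not part of the lecture) reproducing the U + V and 5U computations above:

```python
def vadd(u, v):
    # add n-vectors component by component
    return tuple(a + b for a, b in zip(u, v))

def smul(c, u):
    # multiply every component by the scalar c
    return tuple(c * a for a in u)

u = (1, -2, 3)
v = (2, 3, -3)
s = vadd(u, v)   # (3, 1, 0), as in the lecture
t = smul(5, u)   # (5, -10, 15)
```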

*Okay, now let us write down the theorem, this is going to be a little bit of writing and again a lot of this you have already seen before, but it's sort of nice to write it over and over again, because it solidifies it in your mind.*0261

*And it's nice to see it formally in a mathematical sense; again, in mathematics we try to be as precise as possible to leave no room for error, so let's write this one out, and there is one aspect of this theorem that I am going to digress on.*0277

*It's going to be a very important aspect and you will see it in just a minute, so we will let U, V, W...*0291

*... Be N vectors, or vectors in N space, we also talk about it that way, so 3 vectors are vectors in 3 space.*0303

*N space, and just to let you know, the symbol for, let's say, 3 space is R^3; the R here with the double line stands for the real numbers.*0315

*And we put a little 3 up there; it means that we are taking one number from each real number line. If you think of 3 space, you think of it as a Z axis, an X axis and a Y axis.*0328

*Each one of those axes represents the real number line, so since we are using three real number lines that are mutually perpendicular to each other, that's where this 3 comes from.*0340

*If we talk about N space, it is symbolized R^n.*0351

*Okay, so let's put a little 1 here and we will start with an A, so if we have U, V and W as N vectors, U + V...*0358

*... Is a vector in N space, you know this already. If I take two 3 vectors and add them together, I get a 3 vector, so it's not like I land in some other space.*0374

*I start with two 3 vectors, I do something to them called addition, and the result that I get is still a 3 vector.*0385

*Now, this might seem natural to you, but as it turns out it's not quite so natural, there are things, situations, mathematical structures where this is not true, and this is the digression that I am going to go on in a moment.*0392

*When we say that, so U + V is a vector in N space, this property is called closure, okay, and we say that vector addition is closed...*0405

*... Under addition...*0418

*... We talk about the property of closure, or we also say that this operation of vector addition is closed under addition, and here is what that means; again, you might think that this is perfectly natural.*0423

*Why would it be anything else, for example if I take the numbers 5 and 6 and if I add them together, I get 11, which is just another number.*0435

*In other words I still end up with a number, well consider this, let's just take the set of odd numbers, so (1, 3, 5, 7, 9) etc.*0443

*And let's take the set of even numbers, so all I have done is split the number system into even and odd; this is odd, this is even.*0456

*Now, let's just start with some even numbers, if I take any two even numbers, and let's just take the number 4 and the number 8, and if I add them together, so I perform an operation with two elements of that set 8 and 4, I end up with 12.*0468

*But 12 is an even number, so an even + an even gives me an even number.*0485

*But what that means is that I start with two things, I do something to them, and I end up back in my same set, I don't leave; this is called closure.*0491

*That means I don't land someplace else, but now try this with the odd numbers, so let's take an odd number like 3 and let's add it to another odd number let's say 5.*0500

*But when I add these together, I get 8, I get an even number, so I start with two elements in this set, I do something to them, I add them and yet all of a sudden I end up in a different space.*0510

*I have separated these two; why is it that when you add two even numbers, you don't leave that space, you still end up with an even number, but if I add two odd numbers, I end up outside of that space?*0523

*I didn't end up with an odd number, this is not odd, so as it turns out, this property of closure is actually a very deep property, and we have to specify it.*0535

*As it turns out, when you add two N vectors together, you get an N vector, but this counter example demonstrates that it doesn't always have to be the case.*0545

*That's why it's important for us to specify it; something that seems obvious in mathematics usually has a very deep reason underlying it, and that's why we say this.*0557
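The even/odd closure observation above is easy to spot-check numerically; a small sketch (my own variable names), checking that sums of evens stay even while sums of odds land outside the odd numbers:

```python
# A quick numeric check of the closure observation: the even numbers
# are closed under addition, the odd numbers are not.
evens = list(range(0, 20, 2))
odds = list(range(1, 20, 2))

# Every sum of two evens is even: the set is closed under addition.
all_even_sums_even = all((a + b) % 2 == 0 for a in evens for b in evens)

# Every sum of two odds is even, i.e. lands OUTSIDE the odd numbers.
all_odd_sums_even = all((a + b) % 2 == 0 for a in odds for b in odds)

print(all_even_sums_even, all_odd_sums_even)  # True True
```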

*Okay, let's continue, B, U + V = V + U, that means you can add in either order, so vector addition is commutative.*0567

*C, U + (V + W) = exactly what you think, the quantity (U + V) + W, so vector addition is associative, okay.*0583

*D, there exists a unique: remember this symbol, the reversed E means there exists, and that little exclamation point means unique, there is 1, only 1.*0599

*When we say there exists, it could be more than one, but when we say there exists a unique, we are making a very specific statement that it's the only one that exists.*0609

*There exists a 0 vector...*0617

*... Such that U + the 0 vector equals the 0 vector + U, which is just U back; that is called the additive identity, identity meaning you start with the vector.*0621

*You add something to it, nothing changes you get that vector back, it's an identity.*0638

*And last but not least, okay... *0643

*... For each U, there exists a unique vector symbolized -U, such that U + this -U vector...*0649

*... Gives me the 0 vector; this is called the additive inverse. Again, 5 + (-5) gives you 0, and 5 + 0 gives you 5, so the 0 vector is the additive identity and the -U is the additive inverse.*0663

*And additive because it's specific to this property of addition, doesn't apply to multiplication, we will get to multiplication in just a little bit, and for each U for every vector in N space, there exists a vector -U, such that when you add them together, you get a 0 vector, so they come in pairs okay...*0678

*Now the second part, which actually deals with scalar multiplication, if I take some constant times U, well that's also closed.*0701

*As it turns out, if I have a vector in N space and if I multiply it by some scalar, I end up with a vector in N space, I don't jump to another space.*0714

*Again it seems natural, it's obvious, you have been doing it all your life, but there is something deeper going on, it doesn't have to be that way, and you saw an example of something that you deal with every day, the odd numbers.*0722

*The odd numbers don't satisfy this property, the odd numbers are not closed under addition when you add two odds together, you end up outside of the set, you end up with an even not an odd.*0732

*We have to specify closure; it’s a very important property...*0742

*We say this is closed under scalar multiplication, whereas before it was closed under vector addition, okay.*0749

*C times (U + V) = C times U + C times V, so the distributive property holds under scalar multiplication; I can distribute the scalar over the vectors that I add.*0759

*If I add two scalars together and multiply the sum by some vector, well, I can distribute the vector over the scalars.*0776

*(C + D) times V = C times V + D times V; I guess I chose a V here, I meant to use a U, but that's fine, it doesn't really matter, okay.*0784

*And C times (D times U) = (CD) times U, so if I have some vector and I multiply it by a number, then I multiply it by another one.*0795

*I can take a vector and I can just multiply the two numbers together and then multiply it by the vector, and again all these are very common properties that you are accustomed to.*0808

*And 1 times U = U, this 1 again we are talking about scalar multiplication here, scalar...*0817

*... Scalar multiply, this is just the number 1, it is not the vector 1, it is not the unit vector that we talked about before, it is the number 1 times U, gives me U back.*0832

*This is called the multiplicative identity, it is the element that when I multiply by the vector, it gives me back the vector, nothing changes, before we talked about the additive identity, the 0, so that when I added it to a vector, I got the vector back.*0842

*Okay, let's talk a little bit about coordinate systems; as it turns out, there are two types of coordinate systems, something called right handed and left handed, and generally, unless there is a reason for doing otherwise,*0859

*it has just been conventional in mathematics to use a right handed coordinate system, and we will show what that means here; we will draw both of them so that you know.*0871

*Z...*0883

*... Okay, there is going to be times when I forget my arrows, forgive me I sometimes just don't write my arrows, Z, Y and X, okay...*0886

*... This is a right handed system; notice that the Z and the Y are actually in the plane of the paper.*0907

*And it's as if we draw this going back also, the X axis is the one that's out, coming out towards you and away from you, this is the right handed system.*0916

*And the reason it's called right handed because if we actually take our right hand and sort of make a little L with this like this.*0927

*Some people do it with finger like this, I don't know, I think it's a little less intuitive, just sort of keep your hand at an angle like that, your arm is, end up being the Y axis, your thumb is the Z axis, and your fingers are the X axis, what we would consider like the primaries.*0935

*Once we establish our fingers moving in the direction of X axis, the Z and the Y sort of take this particular shape, left handed would be the other way around, and we will draw that, just so you see what it looks like.*0952

*And this is of course in R^3, in 3 space, because we can actually represent it; as you know we can't represent 4 dimensional, 5 dimensional or higher spaces.*0964

*R^3 is where we have the right handed and left handed systems, okay, so we have that goes there...*0973

*That's that, that's that, and all you have done with the left handed system is switched X and Y, so X and Z are in the plane, and Y is the thing that comes forward and away from you.*0981

*And again, X is always your fingers, X, Y, Z; if you arrange your hand like this, you will actually see this is the left handed system, but again we are concerned with the right handed system.*0996

*This is what we are going to be dealing with primarily, right handed..*1009

*... The Z and the Y are in the plane, it's the x that's coming towards you, it's as if we have taken the X, Y plane that we are used to and we have flipped it forward toward you, and now that X is pointing towards you, and the Z is up.*1017

*Okay...*1029

*... Let's talk about the...*1034

*... Projection of a point onto a coordinate plane, very important operation...*1040

*... Projection of a point...*1050

*... Onto a...*1054

*... Coordinate line or plane actually, because the first example I am going to do is a two dimensional example, so that you can see it, and then we will do the three dimensional one.*1060

*When we talk about R^2, two space, we have our X, Y coordinates, this is X, this is Y...*1070

*... That's fine, I don't need to label them, let's say we pick a point right there, and let's label that point, let's say it's the point (3, 4), so 3 in the X direction, 4 in the Y direction.*1083

*When I project something onto one of the coordinate axes, it's as if I am shining a light down that way, and what I do is I drop a perpendicular from that point onto that axis.*1093

*I end up at the point 3, if I project this way, I project onto the Y axis, I end up here because that's all you are doing with projections, is you are starting with your point and you are going down to one of your axes or to the plane.*1109

*And you are literally sort of dropping off that whatever coordinates you are not talking about, so if I project onto the X axes, I drop a perpendicular onto the X axis and where I end up, this is my projection right here.*1127

*Now let's do it for 3 space...*1144

*Okay, when I draw the vector, it's going to seem a little strange, but once I do the projection, it will be very clear what's happening, so let me label these: this is Y, this is X, this is Z.*1151

*I have a vector, okay, let's say that the vector is (2, 3, 4)...*1165

*... Now, I want to project this onto the XY plane; in other words I just want to shine a light on it and see the shadow, that's what the projection is, a shadow on the XY plane.*1175

*I am going to shine a light on it from above, which means I drop a perpendicular...*1188

*... Down to...*1196

*... The X, Y, and now from the origin, I draw that point, and because I dropped it down to the X, Y plane, this point is (2, 3).*1199

*Now we are in the X, Y plane, the shadow of this vector on the X, Y plane is that, that is the actual shadow of that thing, makes complete sense.*1213

*You can project it onto the ZY plane, you can project it onto the ZX plane, in fact let's do that.*1223

*If I project it onto the ZX plane, I would have something like....*1231

*... Something like that, and you might have perpendicular that way, and then you would have a vector in the ZX plane, and that would be, so you take the Z and the X.*1239

*It would be (2, 4), 2 in this direction, 4 in this direction, because now I ignored the Y.*1251

*Here I have projected it, cast a shadow onto the XY plane, which means I only take the X and Y coordinates, so I have a vector in the XY plane, which is the shadow, the projection of this.*1257
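The projections just described amount to dropping one coordinate; a small sketch of that idea (the function name and index convention are my own):

```python
# Projecting the point (2, 3, 4) onto a coordinate plane: casting the
# shadow just discards the coordinate perpendicular to that plane.
def project(v, drop_index):
    # Keep every coordinate except the one being dropped.
    return [x for i, x in enumerate(v) if i != drop_index]

v = [2, 3, 4]            # the (x, y, z) point from the lecture
print(project(v, 2))     # onto the x-y plane -> [2, 3]
print(project(v, 1))     # onto the x-z plane -> [2, 4]
```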

*Projections are going to be very important in linear algebra, because any time you drop a perpendicular to something, you are talking about the shortest distance to something.*1269

*The shortest distance between this point and this point is that length; we will talk more about that in a little bit.*1278

*Okay, let's move on...*1289

*... Let U be an N vector, so now we are not specifying the space, we are just saying generally speaking, the magnitude of U is exactly what you think it is.*1299

*You just take, oh, let's actually list this, so it will be something like U_{1}, U_{2}, and so on all the way to U_{n}, that many entries, so it is going to be U_{1}^{2} + U_{2}^{2} + so on and so on...*1312

*... Until U_{n}^{2}, all of it under the radical; this is just the Pythagorean theorem in N dimensions.*1330

*The Pythagorean was just for...*1338

*... U_{1} and U_{2}; in 3 space, it's U_{1}, U_{2}, U_{3}, in 15 space it's U_{1}, U_{2}, U_{3}, all the way to U_{15}; the mathematics is handled exactly the same way.*1343

*Square each entry, add them all together, take the square root, perfectly valid...*1354
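The recipe just stated ("square each entry, add them all together, take the square root") is one line of code; a sketch, with a function name of my own choosing:

```python
import math

# The n-dimensional Pythagorean theorem: square each entry,
# add them all together, take the square root.
def magnitude(u):
    return math.sqrt(sum(x * x for x in u))

print(magnitude([3, 4]))         # 5.0, the familiar 3-4-5 triangle
print(magnitude([1, -2, 3, 4]))  # works identically in 4-space
```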

*Okay, let's see, let's also define the distance between two points...*1363

*... Well points are nothing but vectors, so we can speak of them as points or we can speak of them as vectors, which is an arrow from the origin to that point, so the distance between two points.*1373

*And I know you have seen the distance formula before, the distance between two points and vector form is the magnitude...*1385

*... If one of the points, so if point 1, is represented by some vector U, and point 2 is some vector V, the distance between them is the magnitude of the vector U - V.*1397

*In other words take U, subtract V, you will still get an N vector, and then take the magnitude of that, meaning apply this; well, when you write it out, U - V in component form gives you (U_{1} - V_{1})^{2}...*1413

*... + (U_{2} - V_{2})^{2} + so on all the way to (U_{n} - V_{n})^{2}, and all of this under the radical sign.*1433

*I prefer to use a radical sign instead of putting parentheses and raising to the power of one half, just a personal preference.*1447

*Symbolism is important, but ultimately it's about your understanding, so...*1455

*... Okay, so you see that everything that we discussed in lesson 2 for 2 space is exactly the same, it's completely analogous, you just have more coordinates, more numbers to deal with, that's the only thing that's different.*1466

*And if you remember we also had another...*1478

*... Way of representing the magnitude in terms of the vector itself: if we take U, dot it with itself and take the square root, that's also another way of finding the magnitude.*1482

*Okay, now we are going to discuss a very important inequality in mathematics, well, a profoundly important inequality; it's called the Cauchy-Schwarz inequality.*1496

*And the name might sound a little strange, but it actually does make sense, so put an arrow there...*1510

*... Cauchy-Schwarz...*1520

*... Excuse me...*1524

*... Real briefly, I just want to speak generally about inequalities, because in a minute we are going to introduce a second inequality called the triangle inequality, and there are many inequalities in mathematics.*1531

*The branch of mathematics that most of you know as calculus, most mathematicians refer to as analysis, and analysis is exactly what you think, it's just like any other kind of analysis.*1542

*You have a certain amount of data in front of you, and you are trying to come up with some sort of conclusion about what that data is implying.*1551

*Well, often you don't really have all of the information at your disposal, so you have to analyze the situation; you are basically breaking it up, seeing what you do have, and seeing how the pieces fit together.*1559

*Well, as it turns out, what you are really doing in analysis is establishing relationships between the bits of information that you have at your disposal, and one such relation is an equality relation.*1571

*But in analysis, often you cannot say anything about the equality of two things, yet you can say something about the inequality between them.*1582

*So as it turns out, in mathematical analysis inequalities play a central role, because they allow us to order things in a certain way and extract information that way.*1591

*Analysis really is about dealing with inequalities, relationships among bits of information; so the Cauchy-Schwarz inequality says, if I have two vectors U and V...*1602

*... Let's write that out actually, let U and V be members of R^n, in other words N vectors, you know...*1615

*... Then, excuse me, the absolute value of U.V...*1628

*... Is less than or equal to...*1638

*... The magnitude of U...*1641

*... times the magnitude of V, okay so let's stop and think about this for a second, let's make sure that our symbolism is understood.*1644

*A vector with the two double lines, that's magnitude, that's the length of the vector; those are numbers, and this single one here is absolute value.*1652

*Now, U.V is a number, it's a scalar, so you can take the absolute value of a number; sometimes U.V will be negative, sometimes it will be positive.*1663

*That's why these absolute value signs are here, well magnitudes are always positive, so we don't have to worry about absolutes here.*1673

*What this says is: if I have two random vectors, and if I take the dot product of those vectors,*1676

*the absolute value of the dot product is always going to be less than or equal to the product of the magnitudes, what this is saying, it's placing an upper limit on what the dot product can be.*1690

*That's profoundly important; we need to know that the number is not just going off to infinity, so it actually places an upper limit on what this dot product can be, and it's going to turn out to be very important.*1701
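The inequality is easy to spot-check numerically before the informal justification; a sketch (a numeric check on random vectors, not a proof, with names of my own):

```python
import math
import random

# Spot-checking the Cauchy-Schwarz inequality |u.v| <= ||u|| ||v||
# on random 5-vectors (small tolerance for floating-point rounding).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def magnitude(u):
    return math.sqrt(dot(u, u))

random.seed(0)
ok = True
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(5)]
    v = [random.uniform(-10, 10) for _ in range(5)]
    if abs(dot(u, v)) > magnitude(u) * magnitude(v) + 1e-9:
        ok = False
print(ok)  # True
```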

*I'll go ahead and give you an informal justification for this, as opposed to an actual proof; I want you to take this informal justification lightly.*1716

*This is not a proof; in general, what we do is we end up proving it, and we end up using the Cauchy-Schwarz inequality to...*1727

*... I am doing something a little backwards; we use the Cauchy-Schwarz inequality, once we have proved it, to go through this justification to define the angle between two vectors.*1737

*Now, when we did that, we were working in R^2, and we just gave the definition; however, I am going to use it in order to sort of justify that this is possible, just to let you know that this doesn't just drop out of the sky, okay.*1747

*Remember last lesson, we said that the cosine of an angle between the two vectors is equal to...*1762

*... U.V divided by the magnitude of U times the magnitude of V.*1775

*Well, the cosine of an angle is always...*1785

*Is between -1 and +1; you know this, the cosine curve goes like that, the +1 is the upper limit, -1 is the lower limit, and therefore, since it is between -1 and +1, I can actually write this, this way.*1793

*And say that the absolute value of the cosine of theta is less than or equal to 1; the absolute value sign just allows me to avoid writing both bounds, a little shorthand, okay.*1810

*So take the absolute value of that; well, that's the absolute value of this whole thing, and since the bottom is positive, it doesn't really matter, I don't need the absolute value sign there, okay.*1821

*Magnitude of U times the magnitude of V is less than or equal to 1, so now I have this thing, okay.*1839

*I started off with the definition that I had, and I noted that the cosine of θ is between -1 and +1, which means that this value is between -1 and +1.*1848

*Take the absolute value of the cosine so that I can eliminate the two bounds and just write it this way; well, the absolute value of the cosine is the absolute value of this thing.*1858

*Now I have that, now just multiply through and what you get is...*1866

*... The absolute value of the dot product of two vectors is, I should probably make this a little more clear here, less than or equal to the product of their magnitudes.*1882

*This is the Cauchy-Schwarz inequality, and it always holds; so again, this is just an informal justification to let you know that this makes sense based on what you know about cosine θ, and we actually do it the other way around.*1892

*Okay, let's move forward...*1909

*... Just do a little example here, so...*1914

*... We said this also holds for N vectors; the cosine of θ is U.V divided by the magnitude of U times the magnitude of V, writing all these out is exhausting.*1920

*Okay, so we will let U = (1, 0, 0, 1), that's our 4 vector U, and we will let V = (0, 1, 0, 1), okay...*1936

*... U.V, this times that + this times that + this times that + this times that, all of these are 0, so U.V = 1.*1958

*Magnitude of U...*1968

*...This squared + that squared + this squared + that squared; these are (0, 0), these are (1, 1), square root, radical 2.*1973

*The magnitude of V, same thing, that squared + that squared + that squared + that squared, under the radical sign we get rad 2, therefore we have using this formula, just putting them in.*1983

*Cosine of θ equals 1 over the magnitude of one times the magnitude of the other, radical 2 times radical 2, so we end up with one half; cosine θ equals one half, and if you remember your trigonometry, θ equals the inverse cosine of one half, which is a 60 degree angle, or in terms of radians, π over 3.*1998

*If I have this vector (1, 0, 0, 1) and I have the vector (0, 1, 0, 1), I know that the angle between them is 60 degrees, π over 3 radians.*2022
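The worked angle example can be replayed in a few lines; a sketch (function names my own) of cos θ = u.v / (‖u‖ ‖v‖) applied to these two 4-vectors:

```python
import math

# The angle between (1,0,0,1) and (0,1,0,1), computed via
# cos(theta) = u.v / (||u|| ||v||).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def magnitude(u):
    return math.sqrt(dot(u, u))

u = [1, 0, 0, 1]
v = [0, 1, 0, 1]
cos_theta = dot(u, v) / (magnitude(u) * magnitude(v))
theta_deg = math.degrees(math.acos(cos_theta))
print(round(cos_theta, 10), round(theta_deg, 6))  # 0.5 60.0
```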

*Okay, now another property U.V equals 0 if and only if, meaning it's equivalent to U and V or...*2035

*... Orthogonal...*2055

*... In two space and in three space, orthogonal is the same as perpendicular, but when we are dealing with N vectors, we don't really have a way of visualizing, let's say 13 space.*2059

*But we know a 13 vector exists, we can write it, we can do the math with it, it's a very real thing; so we don't use the term perpendicular, because that's more geometric as far as the real world is concerned.*2069

*We use the term orthogonal, so orthogonal is a generalized term for perpendicular, so U.V is 0, that means that U and V are orthogonal, this if and only if means well if U and V are orthogonal, then I know that U.V equals 0.*2080

*The implication goes in both directions, that's all this if and only if means; you can also write this with three lines, an equivalence: this is the same as that, either one is fine, you can replace this with this.*2096

*Okay, U.V...*2109

*... When the absolute value of U.V actually equals, when there is a strict equality with the product...*2118

*... Of the magnitudes...*2125

*... If and only if U and V are parallel, well which makes sense, if you have U, this way and if you have V this way.*2131

*Well the angle between them is a 180 degrees, they are parallel, or the other possibility is U that way and V that way, if they are in the same direction, the angle between them.*2145

*If I put them right on top of each other, the angle is 0; well, the cosine of 0 is 1, and the cosine of 180 degrees is -1.*2155

*That's where the inequality in Cauchy-Schwarz becomes a strict equality; so if I take the dot product and it equals 0, I know that they are orthogonal, perpendicular.*2166

*If I take the dot product and it happens to equal the product of the magnitudes, I know that they are actually parallel.*2178

*Okay, now let's introduce the other inequality that we talked about; this is called the triangle inequality, also a very important inequality, and this one is very intuitive, because there is a picture for it that makes sense.*2187

*In fact, those of you who remember algebra 1 and 2 probably spent about half a day deciding whether a certain triangle is possible when they give you the lengths of the sides; you were using the triangle inequality, that's what you were doing.*2204

*For the triangle inequality, I'll do the algebra first, then I'll do the picture; I don't want to do it the other way around, since we are dealing with linear algebra, and we want to deal with things algebraically.*2218

*Pictures will help us, but pictures are not proof; we want to become accustomed to actually letting the algebra do the work for us.*2226

*It says that the magnitude...*2235

*... And again U and V are N vectors; the magnitude of U + V, once I add U and V, is less than or equal to the magnitude of U + the magnitude of V.*2239

*It places an upper limit on the sum of two vectors: the biggest that the length of the sum of two vectors can ever be is the length of one vector + the length of the other vector.*2254

*This is an inequality; here's what it means, and this is why it's called the triangle inequality.*2268

*Let's draw two vectors...*2273

*... Let's say I have vector U...*2277

*... And I will label vector U, and let's say I have vector V, notice that I didn't draw them from any, you know any frame of reference, or just random vectors...*2281

*... Adding vectors means you start with one, and then wherever you end up, you add the other one, you just put it on top of it and follow it; the vector from your original starting point to your final ending point is the sum.*2295

*That's your vector addition, it just means add them in order, do U first, then do V, so in this case, let's do U, it's here, and here, and then we will do V, which is here.*2313

*Okay, so that's V, we end up here; this vector right here is our U + V, notice what this says.*2327

*It says that the length of this vector U + V is less than or equal to the length of this + the length of this; all that means is that the third side of the triangle is less than or equal to the sum of the other two sides.*2340

*That's all that means, because if it were longer than the sum of the two sides, what you would get is a triangle like this: that's one side, that's another side, that's another side, and the triangle doesn't close.*2352

*These would just collapse onto there; in order to have a triangle, the sum of any two sides has to be bigger than the third side.*2375

*Any time it's equal, well that situation is just when...*2378

*... They basically lie on top of each other; what you have is a line, it collapses...*2385

*... Precisely to a line, so again, in order to have a triangle, there actually has to be a strict inequality, that's all it means.*2393

*Once again, the sum of two sides of a triangle is always going to be bigger than the third side, that's the only way a triangle can actually exist, that's why it's called the triangle inequality.*2399

*As it turns out, it has nothing to do with the pictures, just because we can draw a picture, and we call this thing a triangle, this is an algebraic property, this is true in any number of dimensions.*2407

*And in fact it has absolutely nothing to do with a picture, pictures are our representations of making things clear, this is a deep mathematical algebraic property, okay...*2418
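Since the triangle inequality is an algebraic property in any number of dimensions, it can be spot-checked numerically just like Cauchy-Schwarz; a sketch (a numeric check, not a proof, names my own):

```python
import math
import random

# Spot-checking the triangle inequality ||u + v|| <= ||u|| + ||v||
# on random 4-vectors (small tolerance for floating-point rounding).
def magnitude(u):
    return math.sqrt(sum(x * x for x in u))

random.seed(1)
holds = True
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(4)]
    v = [random.uniform(-10, 10) for _ in range(4)]
    s = [a + b for a, b in zip(u, v)]
    if magnitude(s) > magnitude(u) + magnitude(v) + 1e-9:
        holds = False
print(holds)  # True
```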

*... Unit vectors...*2435

*... Again a unit vector is just a vector, with a length of 1...*2439

*... And our symbol, my symbol for that is just X unit, what you do is you take the particular vector you are dealing with X, and you multiply it by the reciprocal of its magnitude, that's it.*2451

*All you are doing is taking the vector and dividing it by its length, just like when you take the number 10 and divide by 10 you get 1; well, you can't divide by a vector, but you can divide by the magnitude of the vector, because the magnitude is a number, okay...*2465

*... In the last section we introduced the vector I, and the vector J, they were unit vectors in the X direction...*2481

*... X direction and a unit vector in the Y direction, now we are going to introduce the unit vector K, it is a unit vector in the Z direction.*2492

*Let me draw my right handed coordinate system again...*2503

*... Let me darken this up; that is a vector of length 1 in the X direction, that is I; that is a unit vector in the Y direction, called J.*2510

*And the unit vector of length 1 that points in the Z direction is called K...*2523

*... Any...*2537

*... Vector in R^3, any 3 vector in 3 space...*2540

*... Can be represented...*2549

*... As a...*2555

*... Linear...*2559

*... Combination...*2563

*... Of...*2566

*... The vectors I, j and K, in other words I can take any vector and I can actually write it as a sum, that's what linear combination means, you are just adding.*2569

*Of these unit vectors, very important unit vectors; so for example if I had...*2582

*... U = (0, 4, 2, 3), now let's say I have V is equal to (0, -1, 2, 0)...*2596

*... I might say I can write U as 0I + 4J + 2K; actually, excuse me, for this we should treat U as a 3 vector, not a 4 vector, since I, J and K live in 3 space.*2615

*All I have done is I have taken this vector and I have represented it, that means I move 0 in this direction, I move 4 in this direction, and I move 2 in this direction.*2632

*And that's all it is, that's all these unit vectors do; they are a sort of frame of reference that allows any vector in R^2 or R^3 to be represented as a linear combination, a sum, of these vectors.*2643

*We will get a little bit more into this later, when we actually break things up, okay.*2658
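The linear-combination idea just described can be sketched directly (list-based vectors, function name my own): a·I + b·J + c·K rebuilds the vector (a, b, c).

```python
# Writing a 3-vector as a linear combination of the unit vectors
# I, J, K, computed component by component.
I = [1, 0, 0]
J = [0, 1, 0]
K = [0, 0, 1]

def combine(a, b, c):
    # a*I + b*J + c*K
    return [a * i + b * j + c * k for i, j, k in zip(I, J, K)]

print(combine(0, 4, 2))  # [0, 4, 2]
```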

*Now let's see what we have got; okay, so let's do a little bit of a recap, and we will finish off with some examples...*2665

*... Orthogonal vectors, these are the important points, orthogonal vectors, when U.V is equal to 0, if and only if U and V are...*2675

*... Orthogonal, or ortho, so really important, orthogonal vectors, perpendicular vectors, or when the dot product of those vectors equals 0 and the other way around.*2694

*Cauchy-Schwarz inequality, a very important inequality: it says that the absolute value of U.V is less than or equal to the magnitude of U times...*2704

*... The magnitude of V, a profoundly important inequality; the triangle inequality says that the magnitude of the sum of U and V is less than or equal to the magnitude of U +...*2718

*... The magnitude of V...*2735

*... Okay, let's do some examples here, let's let U = what it is we had before, so (0, 4, 2 and 3).*2740

*We will let V = (0, -1, 2, 0) and I wrote it in coordinate form, makes no difference, let's calculate U.V.*2752

*U.V is, you multiply: 0 times 0 is 0, 4 times -1 is -4, 2 times 2 is +4, 3 times 0 is 0, and 0 - 4 + 4 + 0 = 0.*2769

*Dot product is 0, so U and V are orthogonal...*2787
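If you want to verify this on a computer, here is a minimal Python sketch of the dot product check; the helper name `dot` is mine, not from the lesson.

```python
# Minimal dot product check, using plain Python lists as vectors.
def dot(u, v):
    """Multiply corresponding components and add them up."""
    return sum(a * b for a, b in zip(u, v))

u = [0, 4, 2, 3]
v = [0, -1, 2, 0]
print(dot(u, v))  # 0 - 4 + 4 + 0 = 0, so u and v are orthogonal
```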

*... let's find a unit vector in the direction of U, okay, so we are looking for U unit, well, I know that, that's equal to 1 over the magnitude of U times U itself.*2799

*1 over the magnitude is just a scalar, so we are multiplying the scalar by the vector.*2817

*Okay, let's see what the magnitude of U is, magnitude of U equals...*2822

*... 0 + 16, 2 times 2 is 4, 3 times 3 is 9, so 0 + 16 + 4 + 9, all under the radical sign...*2833

*... Radical 29, therefore our unit vector is 1 over radical 29 times...*2847

*(0, 4, 2, 3), which is equal to (0, 4 over radical 29,...*2862

*2 over radical 29, and 3 over radical 29); this is a 4 vector, but now this vector has a length of 1; if you were to find the magnitude of this vector, it would be 1; it's in the direction of U, but it has a length of 1.*2875
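The normalization recipe, 1 over the magnitude times the vector itself, can be sketched like this in Python, assuming the same vector U (the function names are mine):

```python
import math

def magnitude(u):
    """Length of a vector: square root of the sum of squared components."""
    return math.sqrt(sum(a * a for a in u))

def unit(u):
    """Scale u by 1/||u|| to get a length-1 vector in the same direction."""
    m = magnitude(u)
    return [a / m for a in u]

u = [0, 4, 2, 3]
print(magnitude(u))        # sqrt(29), about 5.385
print(magnitude(unit(u)))  # 1.0, up to floating point error
```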

*Alright, okay let's do one final example, a little bit more complex, to sort of tie in some other things that we did in previous lessons.*2893

*We want to find...*2905

*... A vector V, which is (A, B, C), such that...*2910

*... V is ortho...*2920

*... To both W, which is (1, 2, 1) and X, which is equal to (1, -1, 1).*2926

*Okay, so we have the vector W and the vector X, and we want to find a vector (A, B, C), in other words we want to find at least one (A, B, C), there may be many that work, but at least one, such that V is orthogonal to both.*2940

*Well we know what orthogonal is, orthogonal means that V.W is 0, and V.X is also 0, so let's use that definition, write out some equations and see what we get, so...*2956

*... V.W is A times 1 is A, + B times 2 is 2B, + C times 1 is C, and we know that that's equal to 0, that's all I have done here.*2971

*I have used the definition of dot product and I have written out a linear equation, A + 2B + C = 0; well V.X, I also know that it's equal to 0, and V.X is just A times 1 is A.*2983

*B times -1 is -B, and C times 1 is C, that's equal to 0; well, I have two equations, three unknowns, so let's go ahead and subject this to reduced row echelon form, Gauss-Jordan elimination, and let's see what we can do.*3000

*let's form our augmented matrix here; the first row is (1, 2, 1, 0), and I put the whole row there so we know that we are dealing with the 0's from the right hand side over here.*3018

*And the second row is (1, -1, 1, 0)...*3029

*... We are going to subject this to reduced row echelon form, and when we do that, we end up with the following: (1, 0, 1, 0) in the first row and (0, 1, 0, 0) in the second.*3035

*This is reduced row echelon form; the first column is A, the second column is B, and those two are fine, they have leading entries; this third column does not, there is no leading entry here, it's free.*3055

*As it turns out, this third variable C can be absolutely anything, therefore our solution is the following, C = anything...*3067

*... Well B = 0...*3079

*... That's what this does, it allows us to just read off what's there, so B + 0 = 0, so B = 0, and now A is equal to well actually let me write it differently A + C = 0.*3087

*Therefore A equals -C, or negative anything, because I can choose anything for my C, so let's just say that C = 5; that means B = 0, and A = -5.*3106

*One possible answer is (-5, 0, 5) for my vector V...*3124

*... This vector is orthogonal to that vector and that vector, and again C can be anything, so it's not the only vector.*3134

*There is a whole slew of vectors, an infinite number of them, so this has an infinite number of solutions.*3143

*And what we did is we just used the definition of dot product, and we used the fact that any time the dot product of two vectors equals 0, they are orthogonal to each other.*3148
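To see the infinite family of solutions concretely, here is a small sketch that checks v = (-c, 0, c) against both w and x for several choices of the free variable c (the helper and variable names are mine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

w = [1, 2, 1]
x = [1, -1, 1]

# From the reduced system: B = 0 and A = -C, with C free.
for c in [5, 1, -3.5]:
    v = [-c, 0, c]
    assert dot(v, w) == 0 and dot(v, x) == 0
print("every choice of c gives a vector orthogonal to both w and x")
```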

*Okay, thank you for joining us here at educator.com, linear algebra, we will see you next time.*3158

*Welcome back to educator.com and welcome back to linear algebra, today's lesson we are going to be discussing linear transformations, so this is going to be the heart and soul of linear algebra.*0000

*It is this notion of a linear transformation, also called a linear mapping; more often than not, I will probably refer to it as a mapping instead of a transformation, simply because it's habitual for me to think of it as a mapping, as opposed to a transformation.*0012

*It is true that what you are doing is you are actually transforming something, but when we actually define what we mean by a mapping and get into it, it will probably make more sense to call it a mapping, because we are actually taking something and literally mapping it onto something else.*0028

*Let's go ahead and get started, this is probably for those of you who are physicists, engineers, and also mathematicians.*0041

*this is where it's going to be probably the first introduction to something that you haven't necessarily seen before, or something that you have seen before that's going to be discussed in a very different way, a mathematical way, and in a more abstract way if you will.*0053

*A lot of it may seem a little odd, however just by diving in a little bit, taking a look at some of the examples, and letting them wash over you a little bit, you will realize that it is not altogether different from what you have already been accustomed to.*0071

*It's just looking at it from a different angle, from a more general angle, and again that's what we do in mathematics, we take something and we try to generalize it as much as we can.*0084

*To take that generalization, that process of abstraction, as far as we can... okay, let's go ahead and get started...*0093

*... This is an introduction to linear transformations, an introduction to linear maps; the first thing that we want to do is generalize this notion of a function that you have been dealing with for years now.*0102

*We want to generalize that notion and that's what it is that we are going to be calling a map, so let's start with something that we do know, let's take the function F(X) = X*^{2}.0115

*Now let's talk about what this, what this means and what it is that you are actually doing here.*0127

*It's saying, take a number X from the real number system, do something to it, in this case square it, and then you are going to get back another number, so you are starting some place.*0131

*You do, you have an input, this X value, you are doing something to it, that is your function, and then you are going to end up with something else.*0143

*In this case if I take a 2, I end up with a 4, if I take a 3 I end up with a 9, if I take a 4, I end up with a 16, you know how to deal with this, you have been dealing with it for years now.*0153

*Okay, let's represent this in a slightly different way; I am going to draw a couple of pictures here, okay...*0163

*... Okay, and I am going to say that this is my space of real numbers, you are used to thinking of real numbers as a real number line, that's fine.*0173

*This is just another way of representing it as a set, as just a collection, a bag of numbers if you will, so and I call it R, because this is the real numbers.*0181

*And I am going to sort of duplicate that over here; now I am going to show you what is actually going on, let me pick a couple of numbers in the real numbers, let's say 2, 4, and 6.*0191

*Here is what this function is doing, you are taking a number from the real, of this set of real numbers, you are performing an operation on it, this, so called F, which is defined by this, okay.*0201

*And you are getting back another number 4, you are taking a 4 and you are coming over here, you are squaring it, you are getting a 16, you are taking a 6 and you are coming over here, you are getting a 36.*0216

*As it turns out, what you are doing is your mapping 2 to 4, you are mapping 4 to 16 and you are mapping 6 to 36 and so on.*0229

*In other words, to every number in the set of real numbers, you are associating another number in the set of real numbers, so you have taken something from the reals and you have ended up back in the reals.*0241

*You can think of it as sort of ending up back in the same set, but we like to represent it this way, we like to think of them as two different sets, and we actually denote this like this.*0256

*We say F is a mapping, from the real numbers to the real numbers...*0266

*... Defined by F(X) = X*^{2}, so this new symbolism is the symbolism that we are going to be using, this sort of implies that you know what's going on, but now we are sort of breaking it down, to, to say what it is that we are really doing.0276

*We are taking one number, we are fiddling with it, and we are spitting out another number, and so forth, so we actually treat these two spaces as separate; in fact we call this the departure space...*0295

*... Not everyone refers to like this, but I think it is the best way to refer to it, and this is the arrival space.*0313

*In other words you are taking a number from the departure space, you are leaving that space, you are doing something to it, performing an operation on it, whatever it is you want to call it, performing a function.*0319

*And then you are arriving at some other number, some other place, the arrival space, now in this case, you are starting with a number and you are ending with a number, but that doesn't mean you always have to do that.*0329

*As it turns out, in a minute you will see we can start with a number and end with a vector, we can start with a vector and end with a number, or we can get even more bizarre.*0339

*We can start with one mathematical structure, and end up with another mathematical structure, that's why this representation is the most general, so again F is a function, is a mapping from R to R.*0348

*What this means is that I pick a number from the real numbers, I do something to it, and I end up in the real numbers, that's what this symbol represents; and "defined by", I actually give the definition of what the function is, what it is that I am doing, what operation I am performing.*0363

*In this case I am actually squaring a number...*0378

*... Let's do another example, let's say I have the function F(XY), now I have two variables, is equal to X*^{2} + Y^{2}.0383

*Let's just do a simple example, if I take the point (1, 2), okay, it's X*^{2} + Y^{2}, so 1^{2} + 2^{2} = 1 + 4.0395

*That's equal to 5, and you have probably never even thought about this before, but take a look at what's going on.*0407

*Now I am taking two numbers from R, and the way this is represented, notice this is actually a vector representation; when I have two numbers like this, a point in two space, which is what this is.*0414

*The point (1, 2) is also a vector in two space, so we represent that of course with R*_{2}, so when I symbolize this, according to how we did it here, here is what I am writing.0425

*The function is a mapping from R*_{2}, to R...0442

*Defined by F(XY) = X*^{2} + Y^{2}; I could also write this in vector form, F of the vector (X, Y) = X^{2} + Y^{2}.0447

*And again this coordinate XY is the same as a two vector, so what this symbolism means is that my departure space if you will is now R*^{2}, it's the space of 2 vectors, my arrival space is R, the set of real numbers, I have taken a vector.0467

*A vector, I have done something to the individual components of that vector, and I spit out a number, so these are two different spaces.*0486

*Even though I am picking numbers, there, all the numbers are from the real number line, we actually consider this thing that we take, this vector that we take as a single unit.*0495

*I took a two vector, I did something to the components of that two vector and I spit out a number, that's what the symbolism means, that's why this is a very powerful symbolism, and it generalizes.*0507

*This is a mapping from R*_{2} to R that means to the space of two vectors I am associating a number.0518

*To the space of two, to every element in the space of two vectors, I am associating a number, I am mapping a vector to a number.*0527

*I am mapping a two vector to a number, that's what's going on here...*0534

*OK. Let's do something like this, let's define F(XYZ) =...*0544

*... X + Y, X+ Z, let's do an example to show what this actually looks like, if I take F(3, 2, 1), well it's just telling me X, Y, Z, this is a 3 vector.*0558

*In other words it is a vector in three space, if I take this, the answer that I get is well X, the first component is X + Y, so 3 + 2 is 5, and X + Z, 3 + 1 is 4.*0573

*I have taken a three vector, I have done something to it according to the definition of the function here, and I have spit out a two vector, so this is represented this way, pictorially.*0589

*This is R*_{3}, it's the space, the collection, the set of all three vectors, (5, 6, 9), (2, 4, 6), (1, 3, 5), (0, 0, 9), a set of three vectors, and here is my set of two vectors, R_{2}.0601

*I am taking something from here, I am doing something to it, and I am ending up in a completely different space, I am mapping it from one space to another space, this according to the symbolism is written like this.*0619

*F is a mapping from R*_{3} to R_{2} defined by F of, I am going to write it in vector form.0633

*X, Y, Z is equal to X + Y, X + Z; this tells me that I am taking a three vector, I am performing an operation on it, this operation specifically, and I am ending up with a two vector.*0647

*That's kind of extraordinary when you think about it; you have been doing this all along, so it's not the first time you have seen this, but to actually step back and realize that you are jumping from space to space.*0664

*That's pretty extraordinary and in a minute when we define what we mean by a linear mapping, it's going to be even more extraordinarily, extraordinary that you can actually retain structure, when you jump from one space to another space.*0676

*Okay, let's do another example...*0689

*... This time we will do F(XY) and we will define it as X*^{2}, the second component as Y^{2}, then X + Y, then X^{2} + Y^{2}.0693

*In this case, let's do an example, F(2, 3); well X*^{2} is 2^{2}, that's 4, Y^{2} is 3^{2}, that's 9, X + Y is 2 + 3, that's 5, X^{2} + Y^{2} is 4 + 9, that's 13.0710

*F(2, 3) = 4, 9, 5, 13, I have taken a two vector, and I spit out a four vector, I went from two space to four space.*0729

*I went from R*_{2} to R_{4}...0739

*... Another way of representing this is I have mapped (2, 3) to the vector (4, 9, 5, 13); just like in the previous example, I have mapped one vector to another vector, each vector belonging to a different space; that's why this symbolism is more general and works out beautifully, and it is a mapping from R*_{2}, a two vector, to R_{3}, I know, not R_{3}, R_{4}...0743

*... Defined by this, defined by F(XY) = X*^{2}, Y^{2}, X + Y, X^{2} + Y^{2}...0783

*... To each vector in two space, I am associating a vector in four space; the picture looks like this, the set of all two vectors, this is R*_{2}...0802

*... Here is the set of all four vectors, R*_{4}...0815

*... That's what's happening, I treat them as separate spaces, that's what makes this beautiful, I am moving from one space to another, I can do whatever I want, something that I pull from my first space, and then I end up somewhere else.*0825

*It's really very extraordinary that you can do this, and it's even more extraordinary that we can actually represent the real world as operating like this...*0837
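These mappings are easy to play with as ordinary functions; here is a sketch of the two examples above in Python, with tuples standing in for vectors (the function names are my own):

```python
# F: R^3 -> R^2, defined by F(x, y, z) = (x + y, x + z)
def f_r3_to_r2(v):
    x, y, z = v
    return (x + y, x + z)

# F: R^2 -> R^4, defined by F(x, y) = (x^2, y^2, x + y, x^2 + y^2)
def f_r2_to_r4(v):
    x, y = v
    return (x * x, y * y, x + y, x * x + y * y)

print(f_r3_to_r2((3, 2, 1)))  # (5, 4)
print(f_r2_to_r4((2, 3)))     # (4, 9, 5, 13)
```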

*Okay, so this is called a mapping, again, now what we are concerned with are linear mappings, because we are dealing with linear algebra.*0847

*Now we are going to actually give a definition of what we mean by linear; the examples that we gave a second ago are just standard mappings, and as it turns out, of all the possible mappings, there is only a small portion of them that are actually linear.*0859

*And they have special properties, so now we are going to define what we mean by linear mapping, and any time we are faced with the mapping we want to check that it's linear, we are going to check this definition.*0874

*We will do that in a second, okay, a very important definition, probably the single most important definition in linear algebra, okay...*0883

*... Mapping, or a mapping, sorry, a mapping from RN to RM, let me...*0899

*... Make this a little bit better here...*0912

*... RN to RM, where N and M could be anything, two space to six space, 1 space to 1 space, 3 space to 3 space, it's more general now...*0918

*... A mapping from RN to RM...*0931

*... Is an operation...*0939

*... Which to each vector...*0945

*... In RN, our departure space...*0951

*... Assigns a unique...*0955

*... Vector symbolized...*0966

*LU, in RM; don't worry about what this means, it will make sense in a minute when we actually start doing the examples; I'll discuss what these symbols mean.*0971

*Again we just want the mathematical formalism, so that our bases are covered, such that...*0981

*... The following hold...*0991

*Okay, (a) L of (U + V) = L of U + L of V.*0994

*And (b) L of (C times U) = C times L of U; okay, let me just talk about what this means real quickly and we will get to the examples, so a mapping from the space RN to the space RM.*1009

*Or the mapping from N space to M space is an operation, which to each vector in RN, okay associates a unique vector that is symbolized, that way...*1027

*... To RM, such that the following hold, notice I haven't drawn any pictures yet, I will draw pictures in a minute.*1043

*But it's very important to understand that these are algebraic properties; we are talking about a linear mapping, and yet we have drawn no lines.*1049

*We have drawn nothing else, this is an algebraic property, so the following has to hold, I have to check these two things, when I am presented with a mapping.*1058

*It says if I take, if I map U, to it's...*1067

*... Whatever I am associating with it, and if I map V, to whatever I am associating with it, and then I add those, it's the same as if I add them first, and then perform the operation on it, so in either order.*1075

*that's what this, that's what this says, that's what linear means, it means that I can either add the two elements from my departure space and then operate on it, or I can operate on it and then add them.*1087

*But in either case, they have to end up in the same place, and the same thing here, if I start with the vector in my departure space, multiply it by some scalar, and then operate on it.*1099

*meaning my function, whatever my function happens to be, it's the same as operating on the vector first, and then multiplying it by its scalar, this might seem obvious.*1110

*You are going to discover in a minute it's not so obvious, okay, let's draw a picture and show what this means exactly, so we have our departure space, we have our arrival space.*1120

*And again we are considering the spaces as collections of two vectors, three vectors, whatever, so this is going to be our N space, and this our M space; they are totally different spaces, they don't have to be, but often they are.*1132

*Let's say I have U, and let's say I have V, and let's say I have U + V, which I can do right, if I have two vectors, I could just add them and I get another vector, because vector addition is closed.*1148

*I end up in the same space, over here, let's say I operate on U, and I end up with LU.*1160

*Let's say I operate on V, I have LV, well these are just vectors in M space, I can add vectors in M space, and I get a vector in M space.*1169

*This is another LU + L...*1180

*... Of V...*1190

*... Here's what linear mapping says, it says that if I take U and V, and if I do U first, and then if I operate on V separately, and then over here if I add them...*1193

*... I will end up here...*1208

*... And then if I do it the other way, if I add them first to come here, and then I operate on them, it says I have to end up in the same place, with the same answer.*1212

*Think about what that means let me say that again, this is really important distinction.*1224

*If I have a mapping like X*^{2}, well, this says that if I square X, that puts me over here, and if I square Y, that puts me over here.1230

*Well, if I add the X*^{2} + Y^{2}, I am going to get some number, now if I do it the other way around, if I add my X and Y first before I square them, if I take X + Y, square it, and then I do my operation.1241

*If I reverse the order, it tell me that if I end up in the same place, my mapping is linear, if I don't end up in the same place, my mapping is not linear.*1254

*that's what linear means, it means I can do my function or my addition in either order; I still have to end up in the same place, that's what's important.*1263

*And we will see in a minute that that's not always the case, that linear mappings are very special things.*1272

*And the same thing of course with the scalar multiplication; I have U, and you know...*1279

*If I, I can multiply it by some scalar first, and then operate on it, or, I can operate on it and then multiply it by a scalar, I will end up in the same place, I should end up in the same place.*1289

*This is what I have to check every time I am faced with a mapping; I have to actually put each value in there and manually check it, you have to go through the rigorous process of actually checking to see that a mapping is linear; later of course we will come up with quicker ways.*1299

*But for the time being we want to get a feel for what linear mappings are.*1314

*Okay, I cannot emphasize enough how profoundly important this definition is, it is very, very, very important, this notion.*1318

*Spend some time with this idea, and again operate first and then add or add first then operate, in either way you have to end up in the same place.*1330

*If you end up in the same place, it's a linear mapping, if you don't, it's not a linear mapping.*1342
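The "check both orders" idea can be automated on sample vectors. This is only a numerical spot check on the samples you feed it, not a proof of linearity, but a single failure does prove a mapping is not linear; all names here are my own:

```python
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, u):
    return tuple(c * a for a in u)

def looks_linear(L, samples, scalars):
    """Spot-check L(u + v) == L(u) + L(v) and L(c*u) == c*L(u) on samples."""
    for u in samples:
        for v in samples:
            if L(add(u, v)) != add(L(u), L(v)):
                return False
        for c in scalars:
            if L(scale(c, u)) != scale(c, L(u)):
                return False
    return True

proj = lambda v: (v[0], v[1])    # drop the third component: R^3 -> R^2
square = lambda v: (v[0] ** 2,)  # squaring, viewed as a map on 1-vectors

print(looks_linear(proj, [(1, 2, 3), (0, -1, 4)], [2, -1]))  # True
print(looks_linear(square, [(1,), (2,)], [2, -1]))           # False
```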

*Okay, examples will always clear up everything, let's do our first example which we talked about, let's say I am going to use the symbolism that I used before.*1348

*F is a mapping from R to R, so I am starting with a number, I am spitting out a number...*1358

*... Defined by F(X) = X*^{2}, so you are very familiar with this function, been dealing with it for years.1367

*We want to know is it linear...*1374

*... You might already know the answer to this, but let's actually go through the definition, and then we will talk about what linearity really means.*1381

*Okay, well we have to check the two properties that we talked about; we have to check that the mapping respects vector addition and scalar multiplication, that it satisfies the two properties from before.*1387

*Okay, let's do part A...*1399

*... We need F(X*_{1} + X_{2}), to actually be equal to F(X_{1}) + F(X_{2}), it means we need, we add the two X's first and then operate on it.1404

*It has to be the same as operating on each separately, then adding them, okay, that's what this means, that's why the parentheses are arranged the way they are, so let's check that this is the case, let's do this one first...*1421

*... Let's move on to a blue ink here, so I will do F(X*_{1} + X_{2}), well the definition of the function is you square it.1434

*That's equal to (X*_{1} + X_{2})^{2}, that's what this parenthesis means, anything in the parenthesis, you, that's what you do to it.1446

*Well that equals X*_{1}^{2} + 2X_{1}X_{2} + X_{2}^{2}, so that takes care of that.1455

*Now let's do F(X*_{1}) F(X_{2}), F(X_{1}) is equal to, well X_{1}^{2}, that’s the definition of our function right here.1468

*All we were doing is we are using the definition of the function, putting in the values seeing what we get, F(X*_{2}) = X_{2}^{2}.1480

*We have F(X*_{1} + X_{2}), we calculated it, that's here...1493

*... Here, we have F(X*_{1}), that's here, we have F(X_{2}), now let's see if they are actually equal, now let's check this thing.1498

*X*_{1}^{2} + 2X_{1} X_{2} + X_{2}^{2}, if the question is, is it equal to F(X_{1}), which is X_{1}^{2} + F(X_{2}), which is X_{2}^{2}.1508

*Are these two equal? No, they are not; this 2X*_{1}X_{2} is an extra term, they are not equal, therefore...*1525

*We don't even have to bother checking the scalar multiplication, this is not linear.*1532

*It's not a linear mapping; it is a mapping from R to R, yes of course it's a mapping, in fact it's a specific type of mapping called a function.*1539

*But again we don't want to use the word function again, we want to use the word mapping or transformation, it is a mapping, it's a perfectly valid mapping, it's a very common mapping.*1549

*It shows up everywhere in math and science, but it's not a linear mapping, okay...*1557
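Numerically, one counterexample is enough to show F(x) = x² fails the additivity check, matching the extra 2X₁X₂ term worked out above (a quick sketch):

```python
f = lambda x: x ** 2

x1, x2 = 1, 2
print(f(x1 + x2))     # 9: add first, then square
print(f(x1) + f(x2))  # 5: square first, then add; the two orders disagree
```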

*... Let's try another one, let's let...*1568

*... Let's see, let's go back to our black ink here, let F be a mapping, again, from R to R, from the real numbers to the real numbers, meaning we start with a real number, we fiddle with it, and we get a real number back.*1576

*It is defined by F(X) = 5X, okay.*1591

*we need to check that F(X*_{1} + X_{2}) equals F(X_{1}) + F(X_{2}).1599

*Okay, let's do that...*1611

*... F, let me actually work in red, so we go, let me go to red...*1616

*... F(X*_{1} + X_{2}), well, here is our definition of our function, this is what, it's 5 times the thing in parenthesis, so it's 5 times X_{1} + X_{2} = 5X_{1} + 5X_{2}, distributive property.1626

*Well now let's do F(X*_{1}) = 5X_{1}, F(X_{2}) = 5X_{2}.1646

*And now let's check to see if they are equal; we have F(X*_{1} + X_{2}) = 5(X_{1} + X_{2}) = 5X_{1} + 5X_{2}.1655

*The question is does it equal F(X*_{1}) + F(X_{2}), 5X_{1} + 5X_{2}, yes.1670

*It checks out, the left hand side and the right hand side are the same; adding them together first, then operating on the sum, versus operating on each and then adding them.*1679

*they end up being the same, so far so good...*1688

*... Now we want to check scalar multiplication...*1693

*... F(ZX) equals, we want to check to see whether it equals, Z times F(X), well...*1700

*... F of...*1713

*... ZX...*1716

*... This is our 5ZX and Z times F(X), so we are checking this one, we are checking this one, equals Z times...*1719

*... 5X; well, 5ZX does equal Z times 5X, so (b), scalar multiplication, checks out, so yes.*1732

*This is a linear mapping...*1743
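The same kind of numerical spot check passes for F(x) = 5x at every sample, as the algebra above predicts (a small sketch, with the scalar named z as in the lesson):

```python
f = lambda x: 5 * x

for x1 in range(-3, 4):
    for x2 in range(-3, 4):
        assert f(x1 + x2) == f(x1) + f(x2)  # add first or operate first
    for z in (-2, 0, 7):
        assert f(z * x1) == z * f(x1)       # scale first or operate first
print("F(x) = 5x passes both linearity checks")
```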

*... Okay, this is a really important example; look at this, F(X) = 5X, you know that if I write it as Y = 5X.*1748

*This is the equation of a line, and that's where we get the name linear mapping; this is why you know that anytime you see an exponent of 1, you are talking about a line, a linear function.*1759

*Now you know why we call it a linear function, however it's really important to understand that linearity is an algebraic property, not a geometric property, geometry and pictures are just there to help us out.*1774

*We get the language that we use, for example, we call it a linear mapping, because we think of it as a line, but believe it or not, just because we can draw something, doesn't necessarily mean that there is such a thing as a line.*1786

*This is an algebraic property, it's a deeper more mathematical property, that has to do with mapping of moving something from one place to another, that's why it's called linear.*1798

*We of course do it the other way: we study lines, we know that lines are equations where the exponent on the X is 1, but this is what's going on.*1808

*Now you understand something very real and very deep about mathematics, this is an algebraic property, pictures are not proofs, they can help.*1817

*Okay, let's see what we have got; let's stick with blue; okay, now let's do L, we can call it any letter we want, as a mapping from R*_{3} to R_{2}, defined by...1829

*... L of XYZ = XY, this says that if I have a 3 vector, I start with the vector, I operate on it, I get a 2 vector, and the operation that I am performing is, just take the first component, take the second component, drop the third component.*1852

*I am converting, transforming a 3 vector to a 2 vector, a mapping from the R*_{3} space to R_{2} space, question is, is it linear, well let's check.1870

*We have to pick a U and a V, so we will let U be the vector U*_{1}, U_{2}, U_{3}, it's really important to keep track of all of your indices, all of your subscripts.1886

*There is a lot of notational intensity going on, there is nothing particularly difficult, it's just standard arithmetic, but it's true, this does tend to get a little notationally intensive, notationally busy rather.*1904

*Be very careful, I mean I am going to make mistakes, believe me; V equals V*_{1}, V_{2}, V_{3}; okay, let's calculate L(U + V).1916

*Well, L(U + V), that parenthesis means add them first; well, the first component is just U*_{1} + V_{1}, right.1935

*We are adding components U*_{2} + V_{2}, U_{3} + V_{3}, now we can apply L to this one vector, this way.1948

*That's equal to, just take the first component, U*_{1} + V_{1}...1963

*... U*_{2} + V_{2}, so that's the first thing that we have...1972

*... Now let's take L(U), well that's just equal to...*1980

*... U*_{1}, U_{2}, and let's take the L(V) based on this definition, it’s just the first component, the second component, right.1992

*And now let's see if these two actually equal each other, so the question is does L(U + V), does it equal, L(U) + L(V).*2008

*Well, L(U + V) this thing, that we got before, U*_{1} + V_{1} U_{2} + V_{2}.2023

*Does it equal L(U), which is U*_{1}, U_{2} + this one, V_{1}, V_{2}.2037

*Well, we can perform this addition, again we have to go as far as we can with the actual arithmetic; these are just a 2 by 1 plus a 2 by 1, so we add components.*2051

*U*_{1} + V_{1}, U_{2} + V_{2}; are they equal, yes they are, so the first part checks out.2062

*We still have to check scalar multiplication, so let's go ahead and do that...*2077

*... Okay, now let me move on to the next page, okay.*2085

*We have to check that L(CU), does it equal C times L of U; well, let's do L(CU), L of C times U = L of...*2091

*... CU*_{1}, CU_{2}, CU_{3}, that's equal to, we just take the first two entries, CU_{1}, CU_{2}, right.2108

*And then we will calculate this other one, C times L(U) = C times, well L(U) is U*_{1}, U_{2}.2122

*It's equal to, yeah, CU*_{1}, CU_{2}, these...2134

*... These two are definitely equal, so as it turns out, scalar multiplication also satisfies this equality, the second part of the definition of linearity, so yes...*2146

*... This is a linear mapping, okay...*2163

*... This example that we just did is a very important mapping, let me go back to black...*2173

*... This mapping...*2183

*Is called the projection mapping; remember, we mentioned projection a little bit earlier, in the last lesson I think, the projection mapping...*2188

*... In this particular case, L of...*2202

*... XYZ is XY; this is a mapping from R*_{3} into R_{2}; all we have done is project onto the XY plane: we take a 3 vector, shine a light on it, and we have just taken the projection onto the XY plane.2207

*Let's draw this one more time; this is our right hand coordinate system, we have the X axis, we have the Y axis, and we have the Z axis; let's take a vector, for example (2, 3, 5).*2229

*We have projected on to the XY plane, which means we drop a perpendicular onto the XY plane.*2245

*And this vector that we get, this two vector, because now we are only talking about two space, we are not dealing with the Z, this is that.*2254

*We have taken a three vector, we have projected onto the XY plane, in other words we have eliminated the Z component, and now we have this vector, which is in the XY plane, a two vector; when you think about it, it's kind of extraordinary.*2266

*Sometimes it seems so natural, we can actually do this, we can move from one space to another space, the space of three vectors to a space of two vectors, and not only that, we can actually retain structure, and what that means is that if you add in one space, and then operate, it's the same as operating and then adding in the second space.*2280

*That's what you are doing, that's amazing that you can actually do that, that structure is maintained in moving from one space to another, and the two spaces really have nothing to do with each other, they are completely different...*2299
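As a quick sketch of what this structure preservation means concretely, here is a small Python check of both linearity properties for the projection mapping; the sample vectors and the helper names `add` and `scale` are my own illustrative choices, not from the lesson.

```python
# Checking that the projection L(x, y, z) = (x, y) satisfies both
# linearity properties on a pair of sample vectors and a sample scalar.

def L(v):
    """Projection from R^3 onto the xy-plane: drop the z component."""
    x, y, z = v
    return (x, y)

def add(u, v):
    """Componentwise vector addition."""
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Scalar multiplication of a vector."""
    return tuple(c * a for a in v)

u, v, c = (2, 3, 5), (-1, 4, 0), 7

# Property 1: L(u + v) == L(u) + L(v)
print(L(add(u, v)) == add(L(u), L(v)))   # True
# Property 2: L(c * u) == c * L(u)
print(L(scale(c, u)) == scale(c, L(u)))  # True
```

Adding first and then projecting lands in the same place as projecting first and then adding, which is exactly the structure preservation described above.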

*... Okay, just a bit of terminology here, so let's go back to our picture, we have our departure space, depart, arrive.*2312

*Sorry about that, departure space, arrival space; we are taking some element from here, we are operating on it under some mapping, a linear mapping in this case, and we are ending up with some other object over here, and they could be two completely different spaces.*2329

*If we call this U, then of course we call this L(U), because we have operated on U, it's a different element now, okay, this L(U), this thing right here is called...*2343

*... The image...*2360

*... Under L, the mapping, which makes sense, you are taking this, you are doing something to it, so this is the image under the mapping, L of this original element.*2364

*It's the unique element associated with this, the element from the departure space, okay, this set...*2374

*... Of all images...*2386

*... For a given set...*2394

*... Our given set of U's is called, you know this, seen it before, it's called the range.*2397

*In other words, if I have this set, let's say I only take five elements from that set, okay, (1, 2, 3, 4, 5), and let's say I am only mapping those five elements, and I map over here to, let's say, let's symbolize what the images look like, like that.*2409

*If I take the collection of these things in this space, that's what I call the range, the range is not the entire space, okay, it's very important to differentiate that.*2428

*Sometimes it can be, sometimes we will map everything in one space over onto another space, and it might actually end up where everything here is associated with everything here, so that there are no blanks, no gaps.*2439

*But that's often not the case; the range, okay, the range is just those things that actually end up as images under the choices that we make from the departure space, the range is not the entire space, the range is just everything that happens to be mapped over here (1, 2, 3, 4, 5).*2451

*In this case the range consists of five elements, and what we might say is, usually we call this the domain, and this whole domain-range terminology is actually not often used when you move on to speaking about linear mappings more generally.*2475

*We still use the term range, we don't often really speak about the domain; we speak of the departure space, the arrival space, the individual element is an image, and the set of images for the things that you do map is called the range.*2492

*Okay, let's see here, what else have we got?...*2509

*... Okay, let's do one more example, we will let...*2520

*... A mapping L be a mapping from R_{3} to R_{2} again, defined by...*2528

*... U_{1}, U_{2}, U_{3}, this is equal to U_{1} + 1...*2541

*... U_{2} - U_{3}, so we are taking a three vector, mapping it to a two vector, and this is what we are doing to it: we are taking the first entry and adding 1 to it, and we are taking the second and subtracting the third from it, okay.*2550

*An example might be L(3, 2, 2), that's going to equal: 3 + 1 is 4, 2 - 2 is 0, just an example of what that mapping looks like; notice I have changed a three vector to a two vector, okay.*2563

*Let's check our first one, we need to check L(U + V) okay.*2580

*L(U + V) is equal to L of...*2589

*... U_{1}, U_{2}, U_{3}...*2596

*... + V_{1}, V_{2}, V_{3} =...*2601

*... Let's do, we are in the parentheses, so let's actually do it: L of U_{1} + V_{1}, U_{2} + V_{2}, U_{3} + V_{3}.*2609

*And now I can apply L to this using the definition, that means I take the first entry and add 1 to it, so that's equal to U_{1} + V_{1} + 1, and then...*2623

*... (U_{2} + V_{2}) - (U_{3} + V_{3}), take the second entry, subtract the third from it, so let me put a circle around this.*2637

*This is what we have for that, I will go back here, and now let's calculate L(U), which is equal to first entry + 1, oops.*2649

*I forgot my U...*2665

*... This is supposed to be a U_{3}, so L(U) is U_{1} + 1, U_{2} - U_{3}, okay; then I will do L(V), it's equal to V_{1} + 1, V_{2} - V_{3}.*2669

*Now I actually check that these are equal, okay, let's see...*2692

*... Let me move forward, I am checking that L of...*2702

*... Okay, now I take L(U) + L(V), let me calculate that, because that's the right side of the thing that we are going to check; that's going to be U_{1} + 1, U_{2} - U_{3} +...*2709

*Let me make this clear...*2729

*... V_{1} + 1, V_{2} - V_{3} = U_{1} + V_{1}...*2734

*... + 2, and...*2747

*... U_{2} - U_{3} + V_{2} - V_{3}, okay.*2752

*When we did...*2766

*L(U + V), back to that thing that I circled, we ended up with U_{1} + V_{1} + 1.*2771

*And U_{2} - U_{3} + V_{2} - V_{3}, no I am sorry, that's not right...*2781

*... 2...*2798

*... Like I said, mistakes are easy: (U_{2} + V_{2}) - (U_{3} + V_{3}), we got this; the question is, is it equal to the thing that we just got, which is L(U) + L(V)?*2802

*Well, is it equal to U_{1} + V_{1} + 2, and U_{2} + V_{2}...*2822

*... I am going to rewrite this so that it actually looks like - (U_{3} + V_{3}), so notice...*2835

*... This matches that, that's fine, however this is not the same as this, when we wrote it all out, we ended up with two things that are not equal, so this particular mapping...*2847

*... is not linear, so when we are given a mapping, we have to use examples, general examples, U and V, we have to check that, adding these two vectors, then operating on it, is equal to operating on those two vectors separately and then adding them.*2864

*If those two are equal, that's half of it; then we have to check whether we can take just any old vector, multiply it by a scalar, then operate on it.*2884

*If it's the same as operating on the vector, and then multiplying it by a scalar, if it ends up in the same place, the mapping is linear.*2893

*That's the definition of a linear mapping, it has to satisfy those two properties, that operation and addition, operation and multiplication, you could reverse the order.*2901

*You still end up with the same place in your arrival space; if you end up in different places, the mapping is not linear. Profoundly important, profoundly important definition, and we are going to spend the rest of this, the rest of the semester, the rest of all the lessons, discussing everything that this ultimately implies.*2910
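The two checks described above can be packaged as a small helper. This is a sketch with names of my own invention (`is_linear_on_samples` is not from the lesson), and note that passing on a few sample vectors is only evidence, not a proof, while a single failure does prove a map is not linear.

```python
# A small helper that tests the two linearity conditions on sample vectors:
# additivity L(u + v) == L(u) + L(v) and homogeneity L(c*u) == c*L(u).

def is_linear_on_samples(L, u, v, c):
    add = lambda a, b: tuple(x + y for x, y in zip(a, b))
    scale = lambda k, a: tuple(k * x for x in a)
    additive = L(add(u, v)) == add(L(u), L(v))
    homogeneous = L(scale(c, u)) == scale(c, L(u))
    return additive and homogeneous

# The mapping from this example: L(u1, u2, u3) = (u1 + 1, u2 - u3)
M = lambda u: (u[0] + 1, u[1] - u[2])
print(is_linear_on_samples(M, (3, 2, 2), (1, 0, 1), 5))  # False: the "+1" breaks additivity

# The projection map, by contrast, passes
P = lambda u: (u[0], u[1])
print(is_linear_on_samples(P, (3, 2, 2), (1, 0, 1), 5))  # True
```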

*Thank you for joining us today at educator.com for linear algebra, we will see you next time.*2929

*Welcome back to educator.com and welcome back to linear algebra, in the last lesson we introduced the idea of a linear transformation or a linear mapping.*0000

*They are synonymous; I will often say linear mapping, occasionally transformation, but they are synonymous.*0009

*Today I am going to continue the discussion, a couple of more examples, just to develop more of a sensitive intuition about what it is that's going on.*0015

*This is a profoundly important concept; as we move on from here, we are going to move on to studying structure, actually after we discuss lines and planes.*0022

*We are going to talk about the structure of something called a vector space, and linear mappings are going to be profoundly important, and how we discuss the transformations from one vector space to another, so this idea of linear mapping for many of you really is the first introduction to this abstraction, you know up to now you have been dealing with functions.*0033

*X^{2}, radical X, 3X + 5, but now we are going to make it a little bit more general, and make the spaces from which we pull something, manipulate it, and land someplace else...*0052

*A lot more abstract; we are not necessarily going to, I mean we will work with specific examples, namely N space, R_{2}, R_{3}, R_{N}, but the underlying notions are what we really want to study, the underlying structure, that's what's important.*0063

*Let's go ahead and get started, and recap what we did with linear transformations, and do some more examples, okay, so recall what a linear map means.*0081

*And again we are using this language linear line, but as it turns out we are using it as a language, because historically we did lines before we came up with a definition of what linear means, that's the only reason we call it linear, linearity is an algebraic property, it actually has nothing to do with lines at all.*0094

*Something is linear means: if we have a mapping or transformation from R_{N} to R_{M}, it has to satisfy the following properties...*0114

*... L(A + B) = L(A) + L(B); these are vectors of course, because we are taking something from R_{N} and moving it over to R_{M}, and it also has to satisfy this other property, that if I multiply...*0135

*take a vector, multiply it by a scalar, and then do something to it, it's the same as taking the vector, doing something to it, and then multiplying it by that scalar, so these two properties have to be satisfied for any particular function or mapping that we are dealing with.*0153

*Let's show what that looks like pictorially, so remember we are talking about two spaces; one of them we call the departure space, and we call it the departure space because we are taking something from this space, fiddling around with it and landing someplace else.*0171

*Now they could be the same space, like for example the function F(X) = X^{2}; I am pulling a number like 5 and I am squaring it and I am getting back another number, 25, so the two spaces are the same, but they don't necessarily have to be the same, that's what makes this beautiful.*0190

*Okay, so let's say we have the vector A, and we have the vector B, well in this space we can of course add, so let's say we end up with this vector A + B, and we know that we can do that with vectors.*0205

*Let's see, now when we add here, addition in this space might be defined a certain way; now mind you, it doesn't have to be the same as addition in this space, the operations are different, because the spaces may be different.*0219

*Okay, so addition in these two spaces may not necessarily be the same, usually they will be, won't be a problem, you know we will always specify when it is different, but understand it, there is no reason to believe that it has to be the same.*0239

*Okay, so in this case we take A, so this left part here, it means I add A + B first, and then I do L to it.*0252

*And I end up some place, well what this says is that if this is a linear transformation, it has to satisfy the following properties, that mean if I add these two first, and I, then I transform it and move it to this space, what I end up with.*0266

*I should be able to get the same thing if I take A first under L and then B under L, and of course I am going to end up in two different places, and if I add these two, I should end up with the same thing.*0280

*In other words, adding first and then applying the transformation, or applying the transformation separately and then adding, if I can reverse those, and if I still end up in the same place, that's what makes this a linear transformation.*0294

*And again that's pretty extraordinary, and the same thing, if I take a vector A, if I multiply it by some scalar 18, and then I operate on it with the linear transformation, I am going to end up some place.*0308

*Let's say I end up here, that means I should, if I take the vector A, map it to L, and then multiply by 18, I should end up in the same place.*0320

*Again these two things have to be satisfied for something to make it linear, and again not all maps as we saw from the previous examples satisfy the property, this is a very special property, that the structure from one space to another, the relationship is actually maintained, that's what makes this beautiful, now we are getting into deep mathematics.*0334

*Okay, let's actually represent this a little bit better, so that you can see it, so A, I can transform A under L, it becomes L(A), I can transform B under L, it becomes L(B).*0354

*Now, I can...*0371

*... Add these two in my departure space, so I get A + B, and then I can apply L to it, to get L(A + B); or, you can do L first, L for A and L for B, and then add to get here.*0375

*This is more of an expanded version of what it is that I sort of drew up here; it's up to you, if you want to work pictorially, if you want to work algebraically, this is what's going on, again a profoundly important concept.*0400

*And again addition in this space does not necessarily need to be the same as addition in the arrival space; they often will be, like for example, if this is R_{2} and this is R_{3}, well addition of vectors is the same, you know from space to space you are adding components, but it doesn't necessarily need to be that way.*0415

*And again that's the power of this thing...*0432

*... Okay let's state a theorem, so...*0436

*... We will let L...*0446

*... From R_{N} to R_{M}, and you will notice, sometimes I will do a single line, sometimes a double line, it's just the real numbers.*0452

*Let it be a linear mapping...*0460

*... Excuse me...*0468

*... then L of C_{1} times A_{1} + C_{2} times A_{2} + and so on all the way to C_{k} times A_{k} = C_{1} times L(A_{1})...*0471

*Oops, no yes that's correct, let me erase this here, + C_{2} times L(A_{2}) and so on.*0495

*Essentially this is just an extension of linearity, so I can do, I can add more than just two things, you know A + B, I can add a whole bunch of things, then I can multiply each of those vectors by a constant, so essentially what's happening here.*0507

*If you think about this algebraically from what you remember as far as distribution, the linear mapping actually distributes over each of these.*0522

*It says that I can take K vectors, multiply each of them by some constant, and then apply the linear transformation to it.*0531

*Or, well that, this theorem says that it is actually equal to taking each individual vector, applying the linear transformation to it, and then multiplying it by a constant.*0543

*It's just a generalization to any finite number of vectors that you take, that's all this says.*0552
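This theorem can be spot-checked numerically. A minimal sketch, using the projection map from the previous lesson as the linear L, with sample constants and vectors of my own choosing:

```python
# Verify L(c1*a1 + c2*a2 + c3*a3) == c1*L(a1) + c2*L(a2) + c3*L(a3)
# for the projection L: R^3 -> R^2 and some sample data.

def L(v):
    return (v[0], v[1])  # projection R^3 -> R^2

vectors = [(1, 2, 3), (0, -1, 4), (5, 5, 5)]
consts = [2, -3, 0.5]

# Left side: form the linear combination first, then map it
combo = tuple(sum(c * a[i] for c, a in zip(consts, vectors)) for i in range(3))
left = L(combo)

# Right side: map each vector, then form the same combination
right = tuple(sum(c * L(a)[i] for c, a in zip(consts, vectors)) for i in range(2))

print(left == right)  # True
```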

*And the second theorem...*0560

*... Okay, again we will let L from R_{N} to R_{M} be a linear map; L of the 0 vector in R_{N}...*0565

*... Maps to the 0 vector in R_{M}, okay; this notation is very important, notice this 0 with a vector sign, this 0 is a vector, because we are talking about a particular space, let's say in this case R_{2}.*0580

*This 0 point is actually considered a vector; well, the 0 vector in R_{N} and the 0 vector in R_{M} are not the same, one might be a two vector, one a three vector, it might be an N vector.*0595

*What this is saying is that if I take the 0, and if I subject it to the linear transformation, it actually maps to the 0 in the other space; that's kind of extraordinary, so again if I draw a quick little picture, you know, two different spaces.*0608

*Let's say this is R_{3}, and let's say this is R_{4}, 3 space and 4 space; if I have this 0 vector here, and the 0 vector here, they are not the same thing, but they fulfill the same role in their respective spaces, they are still the 0 vector.*0623

*The additive identity; but if I subject it to the transformation L, I actually map the 0 in this space to the 0 in that space, again it's maintained, it doesn't just end up randomly some place, the 0 goes to the 0.*0637

*And another one which is actually pretty intuitive if I take the transformation of U - V...*0653

*That's the same as L(U) - L(V), and again you know that the (-) sign is basically the just the addition of the negatives, so it's not a problem, okay.*0662

*Lets see if we can do an example here...*0677

*... Should I go for it, yeah that's okay, we can start over here, let me do that, let me change over to a red ink here, okay.*0684

*We will let L in this particular case be a transformation from R_{3} to R_{2}, so a three vector, we are going to do something to it and we are going to end up with a two vector...*0700

*... Be defined by... *0715

*... L(1, 0, 0), so in this case my definition is, I don't actually have the specific mapping that I am doing, but in this case this example is going to demonstrate that I know something about the unit vectors in this particular space.*0720

*Or, as you will see, I know something about three of the vectors and we will see what happens; L(1, 0, 0) equals (2, -1)...*0738

*... L(0, 1, 0) is equal to (3, 1), excuse me, and L(0, 0, 1) is equal to (-1, 2), so again it says that if I take the vector (1, 0, 0) in R_{3}, in three space.*0749

*Under this transformation I am defining it, I am saying that it equals this, that the vector (0, 1, 0) under the transformation L is equal to this, so I have made a statement about three vectors.*0772

*Now recall...*0782

*... That (1, 0, 0), we have specific symbols for these, we call them E_{1}; they are unit vectors, they are vectors of length 1, and we happen to give them special symbols because they are very important, (0, 1, 0) in three space.*0790

*They actually form the unit vectors that are mutually orthogonal, remember X coordinate, Y coordinate, Z coordinate; E_{1}, we also call it I.*0809

*Remember, and we call this one J, so there are different kinds of symbols that we can use, they all represent the same thing, and (0, 0, 1) is called E_{3}, and it is represented by the vector K.*0820

*Okay, our task is to find L of the vector (-3, 4, 2), so again we are given that the three unit vectors map to these three points under the transformation.*0836

*Can we find, if we take a random vector, (-3, 4, 2), can we actually find the point in R_{2} that L maps it to, knowing just these three vectors? Well, as it turns out, yes we can.*0852

*Let me, over here, well let's see, now (-3, 4, 2) can be written as...*0869

*... -3I + 4J + 2K, right, we are just representing them as a linear combination of the unit vectors I, J, K, so L...*0883

*... Of (-3, 4, 2) is equal to L of -3I + 4J + 2K.*0898

*Well that's equal to, and again this is linear, so I can just sort of distribute this linearity if you will, it is -3 times L(I)...*0913

*... + 4 times L(J) + 2 times L(K); well, we already know what these are, we already know what L(I), L(J), L(K) is.*0927

*It's the L(1, 0, 0), this is L(0, 1, 0), this is L(0, 0, 1), so we write -3 times, and I am going to write these as vectors, column vectors, (2, -1), + 4 times (3, 1).*0939

*+ 2 times (-1, 2), because this (2, -1) is L(I), we defined it earlier, that was part of the definition.*0958

*We know that the linear transformation maps these three vectors to these three points, that much we know; now we just sort of set it up in such a way, and we end up with -6 and 3.*0968

*I am going to write everything out here, 12 and 4, please check my arithmetic because I will often make arithmetic mistakes, -2 and 4.*0985

*And then when we add these together, we end up with 4 and 11, or we can do it in coordinate form, (4, 11); so there we go, knowing where the linear transformation actually maps the unit vectors allows us to find the linear transformation of any other vector in that space.*0996

*That's kind of extraordinary, okay...*1019
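The worked example can be sketched in a few lines of Python. The images of the unit vectors come from the lesson's definition; the helper name `L` and the structure of the computation are my own.

```python
# Knowing only where L sends the unit vectors i, j, k, we can compute
# L of any vector in R^3 by linearity:
# L(x, y, z) = x*L(i) + y*L(j) + z*L(k).

L_i = (2, -1)   # L(1, 0, 0), given in the definition
L_j = (3, 1)    # L(0, 1, 0)
L_k = (-1, 2)   # L(0, 0, 1)

def L(v):
    """Expand v over i, j, k and apply linearity."""
    x, y, z = v
    return tuple(x * a + y * b + z * c for a, b, c in zip(L_i, L_j, L_k))

print(L((-3, 4, 2)))  # (4, 11), matching the hand computation
```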

*... Now let's do another example here...*1029

*... Okay...*1035

*... Let F, this time I use the capital F, be a mapping from R_{2} to R_{3}, so I am mapping something from two space to three space, okay.*1039

*Be defined by...*1053

*... The following, F of the vector XY is equal to, now I am going to represent this as a matrix, so again this is just a mapping that I am throwing out there.*1059

*(1, 0, 0, 1, 1, -1), this is a 3 by 2...*1075

*... This matrix multiplication is perfectly well defined, so it says F is a mapping, notice I haven't said anything about it being linear, I just said it's a mapping, that takes a vector in R_{2}, transforms it and turns it into a vector in R_{3}.*1087

*Let's see exactly what happens here; the definition says take XY, some 2 vector, multiply on the left by this matrix, and you actually do end up getting, so this is 3 by 2, this is a 2 by 1.*1100

*Well sure enough, you will end up getting a 3 by 1 matrix, which is a 3 vector, so we have taken a vector in R_{2}, mapped it to R_{3}; now our question is, is it linear?...*1115

*... That's just kind of interesting, I have this matrix multiplication, now I want to find out if it's linear, again the power of linearity, this has nothing to do with lines at all.*1130

*Okay, so again when we check linearity, we check two things, we check the addition and we check the scalar multiplication, we will go through the addition here, I will have, you go ahead and check these scalar multiplication if you want to, so check this, check that F of...*1139

*... U + V for any two vectors equals F(U) + F(V), that we can exchange the addition and the linear, and the actual function itself, okay.*1158

*We will say that U is equal to U_{1}, U_{2}, oops, let me make that like that; we will say that V is...*1175

*... V_{1} and V_{2}, okay, now U + V is exactly what you think it is, it is U_{1} + V_{1}, U_{2} + V_{2}, okay.*1188

*I am going to write that, let me actually write it a little differently, let me write it as a, as a 2 vector, column vector, I think it might be a little bit clear.*1206

*I will do this, because we are dealing with matrix multiplication, we will just deal with matrices, so U_{1} + V_{1}, U_{2} + V_{2}.*1219

*Let me make sure I have my indices correct, yes, okay, now...*1229

*I will do a little 1 here, and now let's transform, let's do F(U + V), okay, well that's going to equal the matrix...*1236

*... (1, 0, 0, 1, 1, -1) times...*1252

*... U_{1} + V_{1}, U_{2} + V_{2}, okay; so again, it's this times that + this times that, and then this times that + this times that.*1261

*And then this times that + this times that, that's how matrix multiplication works: you choose a row and you go down the column; there are two elements in this row, two elements in this column, you multiply and add them together.*1274

*What you end up with is the following: U_{1} + V_{1}, U_{2} + V_{2}, and you get...*1286

*... U_{1} + V_{1} - (U_{2} + V_{2}).*1300

*This is the three vector, that's the first entry, that's the second entry, that whole thing is the third entry, so we have done this first part, the left, okay.*1313

*Now let's do the right...*1323

*F(U) is equal to...*1327

*... (1, 0, 0, 1, 1, -1) times U_{1}, U_{2}, that's equal to U_{1}, U_{2}, U_{1} - U_{2}...*1333

*... Okay, now let's move to the next page...*1351

*... We will do F(V)...*1360

*... That's equal to (1, 0, 0, 1, 1, -1) times V_{1}, V_{2}, that's equal to V_{1}, V_{2}, V_{1} - V_{2}.*1363

*Now we have to add F(U) and F(V), so F(U), which we just did, + F(V), which was the second thing we just did, is equal to U_{1}, U_{2}, U_{1} - U_{2}, +...*1380

*... V_{1}, V_{2}, V_{1} - V_{2}, that's equal to U_{1} + V_{1}...*1400

*... U_{2} + V_{2}...*1411

*... U_{1} + V_{1}, I have just rearranged and grouped the U_{1} with the V_{1}, the U_{2} with the V_{2}...*1417

*... - (U_{2} + V_{2}), there we go, and as it turns out, F(U) + F(V) does in fact equal F(U + V), so yes, so let me write that out.*1432

*F(U + V), does in fact equal F(U) + F(V).*1448

*Now when we check the scalar multiplication, it will also check out, so yes this map is linear...*1460

*... This is rather extraordinary, matrix multiplication is a linear mapping; matrix multiplication allows you to map something in N space, like R_{5}, into, let's say, seven space, R_{7}.*1472

*And to retain the structure: being able to add the vectors in five space first, and then do the linear transformation, or do the linear transformation first, end up in seven space, and then add, you end up in the same place.*1490

*That's extraordinary, matrix multiplication is a linear mapping, notice it has nothing to do with linear, with a line, this is an algebraic property.*1505

*An underlying structure of the mapping itself...*1513
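The linearity check that was just done by hand can be replayed numerically. A sketch with my own sample vectors, using the 3 by 2 matrix from the example and only plain lists, no external libraries:

```python
# The matrix from the example, as rows: F(v) = A v maps a 2-vector
# to a 3-vector.
A = [[1, 0],
     [0, 1],
     [1, -1]]

def F(v):
    """Multiply the 2-vector v on the left by A (row dot column)."""
    return tuple(row[0] * v[0] + row[1] * v[1] for row in A)

u, v, c = (3, 5), (-2, 7), 4
add3 = lambda a, b: tuple(x + y for x, y in zip(a, b))

# Additivity: F(u + v) == F(u) + F(v)
print(F((u[0] + v[0], u[1] + v[1])) == add3(F(u), F(v)))       # True
# Homogeneity: F(c*u) == c * F(u)
print(F((c * u[0], c * u[1])) == tuple(c * x for x in F(u)))   # True
```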

*... Okay...*1517

*... Therefore if you have some mapping...*1521

*... L, defined by the following L of some vector X, is actually equal to some matrix, some M by N matrix, multiplied by X...*1533

*... Then L is linear...*1548

*... We just proved it, always...*1552

*... Okay, now let's state a theorem here...*1557

*... If L is a mapping from R_{N}...*1564

*... To R_{M}, is a linear mapping...*1571

*... here is what's amazing, then there exists a unique M by N matrix...*1582

*... A, such that the mapping L(X) is actually equal to that matrix A, times X...*1591

*... For any vector in R_{N}...*1602

*... Okay, this is profoundly important...*1612

*... We just proved that matrix multiplication is a linear mapping; the other way around is also true. If I have a linear mapping that has nothing to do with a matrix, because remember the examples that we have been dealing with up to this point have nothing to do with matrices necessarily.*1616

*They were just mappings, functions; if it turns out that, that mapping is linear, what this theorem tells me is that there is some matrix.*1630

*Some matrix somewhere that actually represents that mapping; in other words, I may not even need to find it, but the theorem tells me that the matrix actually exists, that every linear mapping is associated with some M by N matrix.*1641

*And every M by N matrix is associated with some linear mapping, that's extraordinary; there is a correspondence between the set of all linear mappings and the set of all M by N matrices, that's extraordinary.*1657

*Actually there is way to find the matrix and here is how it is, so...*1673

*... The matrix A...*1678

*... And it's quite beautiful, is found as follows...*1683

*... The matrix of A is equal to the matrix of...*1691

*... I take the unit vectors in my space, in my departure space, I subject them to the transformation, whatever the linear mapping happens to be, and then the vectors that I get, I set them up as columns in a matrix.*1700

*And that's actually the matrix of my transformation, of my linear transformation, L of...*1717

*... L of E_{N}, okay...*1725

*... Yes, alright...*1730

*... I will write it out, so here's what I am doing: the ith column, let's say the fourth column, is just the linear transformation of the fourth unit vector for that space; we should probably just do an example, that will work out much better.*1735

*Okay...*1754

*... Let's...*1756

*Let this be a mapping, defined, R_{3} to R_{2}, so we are taking a three vector, mapping, transforming it into a two vector.*1764

*Let it be defined by the following, L(XYZ) is equal to...*1777

*... I am sorry, no, this is a mapping from R_{3} to R_{3}, so we are mapping it onto itself essentially, so it's mapping from three space onto three space, which by the way, when the spaces that you are mapping to and from happen to be the same, it's called an operator, a linear operator...*1788

*... X + Y is the first entry, Y- Z is the second entry, X + Z is the third entry, so I take a vector, do something to it, and arrange it like this, this is what the mapping is defined by.*1809

*Now the question is this, we said that any linear mapping has a matrix associated with it, I can always represent a linear mapping as a matrix multiplication, very convenient, let's find that matrix...*1823

*... And it's called the standard matrix by the way, I myself am not too big on nomenclature, am more interested that you actually understand what's happening, you could call it whatever name you want.*1839

*Okay, we said that all we have to do is take the linear transformation of the unit vectors in the space, in this case R_{3}, our departure space, and just subject them to this transformation, and then set up these columns, and that's our matrix.*1851

*L of E_{1} equals L of, well, in three space (1, 0, 0) is the first unit vector, the X, the I; well that equals, well, let's go up here, 1 + 0...*1866

*... 0 - 0, and 1 + 0...*1886

*... We end up with (1, 0, 1) okay, this is going to be column 1 of our matrix...*1891

*L of E_{2} equals L(0, 1, 0); well, X + Y, 0 + 1, Y - Z, 1 - 0, and X + Z, 0 + 0.*1901

*Then I should end up with (1, 1, 0), and that's going to be column 2; if I take L(E_{3}), which is L(0, 0, 1), well X + Y, 0 + 0, Y - Z, 0 - 1, and X + Z, 0 + 1.*1917

*I end up with (0, -1, 1)...*1945

*... This is my column 3, so A, the standard matrix is (1, 0, 1), (1, 1, 0), (0, -1, 1)...*1950

*... That is my answer...*1969

*Let me change it to blue, this was how the linear mapping was defined, the linear mapping. Therefore I know that there is some matrix associated with this linear mapping, I could represent it as a matrix multiplication, which is very convenient.*1973

*Well, I take the unit vectors for this space, I subject them to this transformation, I get these things.*1989

*I arrange these things one after the other as the columns of the matrix, and I end up with my matrix.*1996

*This means that...*2003

*... If I want to do this mapping, all I have to do is take any vector X, and multiply by this matrix on the left, profoundly important.*2010
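The whole construction can be sketched in code: apply L to each unit vector, use the images as the columns of A, and confirm that multiplying by A reproduces L. The function names and the test vector are my own illustrative choices.

```python
# Build the standard matrix of L(x, y, z) = (x + y, y - z, x + z)
# from the images of the unit vectors, then verify A x == L(x).

def L(v):
    x, y, z = v
    return (x + y, y - z, x + z)

E = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
cols = [L(e) for e in E]                                # images of e1, e2, e3
A = [[cols[j][i] for j in range(3)] for i in range(3)]  # use images as columns

def matvec(M, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

x = (2, -1, 3)
print(A)                     # [[1, 1, 0], [0, 1, -1], [1, 0, 1]]
print(matvec(A, x) == L(x))  # True: the matrix reproduces the mapping
```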

*Every linear mapping is associated with an M by N matrix, and every M by N matrix represents some linear mapping somewhere.*2020

*That's extraordinary, so now you are not just talking about numbers arranged randomly in a square, or in some rectangular fashion, that this actually represents a linear mapping, a linear function from one space to another.*2029

*Okay we will talk a little bit more about this next time, thank you for joining us here at educator.com, we will see you again, bye, bye.*2044

*Welcome back to educator.com, welcome back to linear algebra.*0000

*The last lesson we continued to talk about linear transformations. Today we are going to take a brief respite from that, talk about something a little bit more practical, something that you have seen before.*0003

*But we are just going to discuss lines and planes, before we actually launch into the study of the structure of linear mappings.*0009

*When we get into vector spaces next, but... So some of this that we do today will be familiar, perhaps some of it will be different, and maybe some of the techniques will be a little new.*0021

*In any case let's just dive right in, so again we are going to talk about lines and planes, okay...*0030

*... Let's talk about lines in R_{2}, we know that...*0039

*... We know that AX + BY + C = 0 is the equation of a line; you are often used to seeing this written with the constant on the other side.*0053

*It doesn't really matter where you put it, believe it or not, it's actually better to put it this way, to have this 0 over here, that way.*0060

*All of the constants and all of the variables are on one side, and the 0 is over on this side, because this idea of a homogeneous system is going to be very important for us, because remember we discussed homogeneous systems and the conditions under which a homogeneous system has a solution, where the determinant of the particular matrix...*0066

*... Coefficient matrix is equal to 0 and things like that, so it's often best to write it this way, and it's more consistent when you move on to planes and equations of things called hyperplanes in N space, which are just the analogs of lines and planes in the spaces that you know, R_{2} and R_{3}.*0085

*Okay, well, if I have some point P, which is (X, Y)...*0103

*And, actually this is P_{1}, so (X_{1}, Y_{1}), and if I have a point P_{2}, which is (X_{2}, Y_{2}).*0110

*Well, if these two points are on that line, then they satisfy the following; basically just put the X_{1}, Y_{1} in for X and Y, so you get AX_{1}...*0123

*... + BY*_{1} + C = 0, actually I don’t need that C_{1}, because C is constant.0136

*And I have A times X_{2} + B times Y_{2} + C = 0. So if these two points are on that line, they satisfy this relation, right; well, let's write all three, the generic version and these two, one on top of the other.*0148

*We have AX + BY + C = 0...*0168

*... AX*_{1} + BY_{1} + C = 0, AX_{2} + BY_{2} + C = 0.0179

*And now take a look at this, this is a system. We have three equations in the three unknowns A, B and C, and we have some coefficients; let me highlight these in red.*0192

*We have A, we have B, C, A, B, C, A, B, C...*0205

*... And then we have the X, the X_{1}, the X_{2}, the Y, the Y_{1} and the Y_{2}; we can actually write this as a matrix times a vector.*0214

*We were looking... Of course, here are the A, the B and the C, so these X's and Y's are actually the things that become the coefficients, and the coefficient in front of the C is the 1.*0227

*Let me rewrite this in matrix form as, let me actually do it this way...*0237

*... X, Y, 1, that's this, this and the coefficient here is 1, X*_{1} Y_{1}, 1, X_{2}Y_{2}, 1 this is our matrix.0245

*And then multiplied by the A, the B and the C, so the A, B and C are the three numbers that we are actually looking for, and that's equal to, well (0, 0, 0), again it's 0 on the right.*0259

*This is the homogeneous system, and this is its matrix representation; so given two points and this equation that we know, we can set up this homogeneous system, and we know how to solve it.*0271

*As it turns out, the subscripted entries are actual numbers, while X and Y stay as variables, which is why they have no subscripts; and because this is 3 by 3, it's an N by N, homogeneous system.*0286

*We know that it has a solution, a non-trivial solution, if the determinant of this matrix is equal to 0, that's one of our theorems from some lessons past.*0299

*Let's go ahead and set up this determinant; now let me actually write that down specifically, so the determinant of, in this case let's call the matrix A, equals 0.*0310

*It implies that there exists a solution...*0325

*... In other words, A, B, C can be found, they represent actual numbers; so, with the straight lines as the symbol for the determinant: X, Y, 1, X_{1}, Y_{1}, 1, X_{2}, Y_{2}, 1.*0330

*We want the determinant to be equal to 0, so let's expand the determinant and come up with some equation, conditions on X and Y such that this is satisfied.*0348

*Okay, so let's go ahead and expand; because X and Y are the variables, we are going to expand along the first row.*0359

*Okay, so I take my, let me actually do this in blue here, so I am going to knock that out, so what I end up with is X times...*0368

*... Let me see...*0386

*... You know what, let's do this a little bit differently; let's actually stop here, and since we are dealing with X and Y, when we actually put some numbers in here, then we will do the expansion.*0394

*Again you are going to expand along the first row, but this is what's going on here, so when you have two points X*_{1}, Y_{1}, X_{2} Y_{2}.0404

*You can set up this matrix, solve for the determinant and you will actually get your equation of your particular line which is...*0412

*The original line that we were looking for, so let's actually do an example...*0421

*Let's take the point, actually let me move forward one, let's take the point P*_{1} as -1, 3, and let's take point P_{2} as 4 and 6.0427

*Okay, and now let's set up our...*0444

*... Determinant as before: we have X, Y and 1, the variables stay as variables, and of course we want this determinant to equal 0; we take the first point, which gives -1, 3 and 1, this stays 1, and the second point is 4 and 6, so we put 4 and 6.*0449

*And now we can go ahead and expand this, and I actually see what the equation turns out to be, so again I am going to expand along the first row, so when I do that, I end up with X times...*0469

*This 2 by 2 determinant, 3 times 1 minus 1 times 6, which is 3 - 6; okay, and I move to the next one, which is going to be a minus, remember +, -, +, -, +, -, we alternate signs.*0481

*It's going to be - that entry, that I am crossing out, times -1, times 1...*0495

*... -1 times 4...*0504

*... + 1 times, well, -1 times 6 is -6, minus 3 times 4 is...*0509

*... 12, and we end up with 3 - 6 is -3, so -3X; here we end up with -1 - 4, which is -5, times -Y.*0520

*And this is going to be + 5Y, and then this is going to be -6 - 12, which is -18, and again this determinant is equal to 0.*0533

*This is the equation of our line; now, can you write this as -3X + 5Y = 18? Yes, you can.*0546

*Can you write this as 3X - 5Y = -18, yes you can, it’s a personal choice, I personally like to see everything in the same form that I did the mathematics.*0558

*I don't believe in simplification, simply for the sake of making something aesthetically more pleasing.*0569

*As it turns out, the more you see of the mathematics, even though it might look more complicated, with more symbolism, it actually is more clear; there is more on the table that you can see.*0578
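To see the expansion concretely, here is a small sketch (in Python, not part of the lecture) that computes the line coefficients from the cofactor expansion of the 3 by 3 determinant along its first row; the function name `line_through` is just an illustrative choice.

```python
def line_through(p1, p2):
    """Coefficients (A, B, C) of A*x + B*y + C = 0 for the line through
    two points, by expanding |x y 1; x1 y1 1; x2 y2 1| along the first row."""
    (x1, y1), (x2, y2) = p1, p2
    A = y1 - y2              # cofactor of x:  det [[y1, 1], [y2, 1]]
    B = -(x1 - x2)           # cofactor of y, with the alternating minus sign
    C = x1 * y2 - y1 * x2    # cofactor of 1:  det [[x1, y1], [x2, y2]]
    return A, B, C

# the lecture's example: P1 = (-1, 3), P2 = (4, 6)
print(line_through((-1, 3), (4, 6)))  # -> (-3, 5, -18), i.e. -3X + 5Y - 18 = 0
```

Note that both points satisfy the resulting equation, which is exactly the determinant-equals-zero condition.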

*Again it's a personal choice, I will go ahead and leave it like that, okay, now let's talk about lines in R*_{3}...0588

*... We just talked about lines in the plane, now we are going to talk about lines in actual space; slightly more complicated, but not too bad, okay, so recall...*0601

*... That vectors...*0617

*... And points...*0621

*... are just different representations of the same thing...*0628

*... Notations of the same thing, sometimes we think of a point in the space as the point, the coordinate XYZ, sometimes we think it is a vector from the origin to that point.*0637

*We can write the point, let's say P(0), which is some (X, Y, Z); okay, it's equivalent to, well, two symbols, the vector P(0), okay.*0649

*Okay, now we will let...*0662

*U be a vector in R_{3}, so this is the set symbolism, U is a member of R_{3}, and we will let U be represented by its component form, point form (U_{1}, U_{2}, U_{3}).*0667

*Vector form, point form, okay, the equation of a line in space, equation...*0690

*... Of a line in space...*0700

*... Is...*0704

*... Now, we were talking about points in space, we were talking about vectors; I am going to give you the vector representation, then I will go ahead and talk about the breakdown.*0708

*It is some point, a reference point, plus some scalar times a vector in a given direction: X = P(0) + TU, for T in R; and I will explain what all of this means in a second, so P(0) is a reference point.*0719

*It is some point on the particular line; I have to pick some point to start off with, U the vector...*0741

*... Is the direction from that reference point, either this way or that way...*0755

*... And again some of this will make sense when we actually do a problem...*0765

*... Let me draw a picture here of our coordinate system, our right-handed coordinate system; we have X, we have Y, we have Z, and I am not going to label them, I hope you don't mind.*0771

*This is that; let me move on to red, so I am going to put point P(0) over here, that's my reference point; now my vector, I am sorry, U, is in that direction.*0785

*I want the line through point P(0) in the direction of U; well, it's one direction, that's another direction, and that's what's happening.*0801

*This point P also happens to be a vector...*0816

*... My reference point, my reference vector, and then plus TU; U can be any vector of any length, but T is the scalar that you multiply by to make it bigger or smaller, so starting at P(0), I can go this way or that way by changing T and having it run through all real numbers...*0823

*... Moves in this direction, or if I multiply it in the negative direction, and go this way, that way, that way, so this is actually how it's represented.*0846

*Now, this is a vector representation; well, as you know, a vector just represents points, so...*0856

*... Let me see if we can't...*0865

*... Well if I am going to hold off on the parametric representation for a second, but I would like you to just sort of see what this is, so the equation for line is this equation right here, it says that any point, any point can be gotten to, by starting at a reference point here.*0870

*And moving in the direction of U, depending on what this is, either forward or backward to get the entire line, that's what's happening...*0888

*... Okay, so let's see; let me go back to blue here, let's write this again: the vector X is equal to P(0) + T times U, okay.*0898

*Well, this can be represented as, well, (X_{1}, X_{2}, X_{3}); again, points and vectors are the same thing, if you want to actually expand it, this is just a shorthand notation; P(0), what did we call P(0)?*0916

*Let’s just say...*0932

*... I don't know, let's call it... let's use W_{1}, W_{2}, W_{3}; the symbolism doesn't matter.*0939

*W*_{1}, W_{2}, W_{3} + T times U_{1}, U_{2}, U_{3}, so this is actually equivalent to three equations, notice it's the same as...0953

*... X*_{1} is equal to W_{1} + TU_{1}, X_{2} equals W_{2} + T times U_{2}.0969

*And X*_{3} equals W_{3} + T times U_{3}, okay, vector form...0980

*... Component form, I broke it up, these are called the parametric equations...*0991

*... The component parametric equations, this is also called the parametric equation, and it's called the parametric equation...*1004

*... because you have a parameter, your parameter is T, that's something that varies, and it expresses relationship between some point and another point based on some parameter.*1014

*That's what's changing; you are not actually expressing a direct relationship between the two points, you are expressing it using a third thing, a parameter.*1027

*In this case a third entity if you will...*1036

*... Okay...*1040

*... Okay, so let's do an example...*1045

*... Example will be, let's find parametric equations, so find...*1052

*... Parametric equations...*1064

*... For the line, through the point 3...*1071

*... And now let me make this a little bit clear here...*1078

*... (3, 2, -1), I am not sure if that's much clearer, in the direction of the vector (2, -3, 4).*1084

*We want the line to pass through this point and we want it to move in the direction of that vector, in both directions, positive and negative; well, you can just read it right off.*1100

*Again, the equation of a line says you just take your reference point and you add some parameter T times the vector U. Well, in component form it's like this.*1109

*We just take X_{1} = W_{1} + TU_{1}, okay...*1120

*... We want to pass through this point, so our X value is equal to 3, the point that it passes through, plus, in the direction of that vector, T times 2.*1126

*I will write it this way just to be consistent; you could write 2T, it's not a problem, but I will keep it consistent, because here I put the T before the vector component.*1145

*X_{2} is equal to 2 + T times -3.*1153

*And X*_{3} is equal to -1 + T times 4, or if you prefer something a little bit more normal, X = 3 + 2T.1162

*Y = 2 - 3T, and Z = -1 + 4T.*1181

*These equations, they give you the X, the Y and the Z coordinate of all the points that are on this line, that pass through this point and are in the direction of this vector.*1193
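As a quick check, the parametric equations can be evaluated at a few values of the parameter; this is a minimal sketch, where the helper name `line_point` is a choice made here, not from the lecture.

```python
def line_point(p0, u, t):
    """Point on the line x = p0 + t*u, the parametric vector form."""
    return tuple(p + t * c for p, c in zip(p0, u))

# line through (3, 2, -1) in the direction of (2, -3, 4)
print(line_point((3, 2, -1), (2, -3, 4), 0))  # -> (3, 2, -1), the reference point
print(line_point((3, 2, -1), (2, -3, 4), 1))  # -> (5, -1, 3)
```

Letting t run through all real numbers traces out the whole line, exactly as described above.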

*Okay...*1206

*... Okay, let's do another example here, let's see, find, let me move on to blue...*1212

*... Find parametric equations...*1228

*... for the line through the point P(0), which is (2, 3, -5)...*1237

*... And P(1), which is (3, -2, 7); okay, well, we have a point, in fact we can pick either point as our reference, because we are given two of them.*1248

*We need a direction vector, well the direction vector is, well if you have a point P(0), and a point P(1), that's usually a pretty good direction vector, so just take this one - that one, that's how we get a vector in that direction, right.*1263

*What we want is the direction vector P(0)P(1), and I have chosen P(1) as my head and P(0) as my tail; I could do it either way, I just happen to have made a choice, so that's going to equal, let's see, (3 - 2)...*1281

*... -2 - 3, 7 - (-5)...*1303

*... That's how we get a vector, we just take the, the arrow, the head - the tail, so we end up with (1, -5 and 12), so this is my direction vector.*1312

*And now I can just read it right off; once again, X is equal to P(0) + T times my direction vector, which is now that one.*1328

*I will call these X, Y and Z; I want parametric equations; P(0) is (2, 3, -5), so 2, 3, -5...*1343

*... here is my direction vector: X = 2 + 1T, Y = 3 - 5T, Z = -5 + 12T; through these two points I have a direction vector, and I have chosen one of the points, (2, 3, -5), as my reference.*1358

*These are the parametric equations for a line passing through those two points...*1382

*... Okay, and again, I could have chosen the other point; it doesn't really matter, as long as it is a point on that line.*1390
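The head-minus-tail computation above can be sketched in a couple of lines (the helper name `direction` is illustrative):

```python
def direction(tail, head):
    """Direction vector: head minus tail, componentwise."""
    return tuple(h - t for t, h in zip(tail, head))

# from P(0) = (2, 3, -5) to P(1) = (3, -2, 7)
print(direction((2, 3, -5), (3, -2, 7)))  # -> (1, -5, 12)
```

Swapping the two arguments just negates the vector, which gives the same line traversed the other way.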

*Okay, let's do something here, make some room, let me rewrite...*1397

*... We have X = X_{0} + TU_{1}, Y is equal to Y_{0} + TU_{2}.*1409

*And Z is equal to Z_{0} + TU_{3}; that's all we have done here, this is just the generalized version. Now T is the same in all three equations, whatever value it takes: 5, 6, -6, radical 2, 18, 40.*1424

*Well, because this T is the same, I can solve each of these equations for T, so I end up with the following...*1442

*... T = (X - X_{0}) over U_{1}, which equals (Y - Y_{0}) over U_{2}, which equals (Z - Z_{0}) over U_{3}.*1456

*This version of it, when I actually solve for the parameter and express the relationship this way, is called the symmetric form of the line...*1469

*... We have the vector representation, which is this, we have the parametric, well this is a parametric vector representation, this is a parametric component representation, it just breaks this one up, so we actually see everything.*1481

*And then we can rearrange this, and this is called the symmetric form of the line, again it's just different ways of representing something for different possible techniques.*1494

*Certain problems require one form over another, or one form makes things easier; excuse me...*1504
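Since all three ratios in the symmetric form equal the same T, a point lies on the line exactly when the ratios agree; here is a small sketch of that test (assuming no component of U is zero, and with an illustrative function name):

```python
def on_line(p, p0, u, tol=1e-9):
    """Symmetric-form test: (x - x0)/u1 = (y - y0)/u2 = (z - z0)/u3.
    Assumes every component of u is nonzero."""
    ratios = [(a - b) / c for a, b, c in zip(p, p0, u)]
    return max(ratios) - min(ratios) < tol

# P(1) = (3, -2, 7) lies on the line through P(0) = (2, 3, -5) with direction (1, -5, 12)
print(on_line((3, -2, 7), (2, 3, -5), (1, -5, 12)))  # -> True
print(on_line((0, 0, 0), (2, 3, -5), (1, -5, 12)))   # -> False
```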

*... We will leave it with that; okay, let's move on, so we did lines in R_{2}, and then we moved to three space and did lines in R_{3}; now we are going to move on to planes in R_{3}...*1518

*... Okay...*1538

*... A plane in R*_{3}...1543

*... Is obtained...*1551

*... By specifying a point in the plane...*1558

*... And a vector perpendicular to it...*1571

*... It will make sense in a minute when we draw the picture...*1577

*... This vector is called...*1583

*A normal vector; any vector that's normal to a plane is perpendicular to every vector lying in that plane, that's what normal actually means.*1591

*Okay let's draw a picture of this, this is important, so let's draw our right hand coordinate system, and I am going to pick three points, one on this axis, one on this axis, one on this axis, so that we actually can have a good visual of a plane.*1602

*We have a plane passing through these three points, in the X, Y, Z coordinate system, now if I take a...*1615

*... Point there, and if I have a vector, okay and let's say I have A, so this is a plane here, and let's say I draw another...*1627

*... Point there, so I will call this one P(0), and I will just call this one P; and the perpendicular, this is N, that's our normal vector, and this is what's happening.*1642

*As it turns out, you can take a point in the plane, and as long as you specify a vector that's actually normal to it, you have defined every single point in that plane, and we will talk about it in just a minute.*1655

*Let me represent these with their components, so the vector N is (N_{1}, N_{2}, N_{3})...*1670

*... P*_{0} is let's call it X_{0}, Y_{0}, Z_{0}, and point P, we will just call it XYZ, so we want a way to represent all of the points of that plane, so we are looking for ultimately an equation in X Y and Z, some relationship that exist among these values.1683

*Okay, so let's take this vector right here, because we know that two points in a plane determine a vector, if I just take the head of the vector minus the tail; so let me also specify the vector P_{0}P.*1705

*That vector from here to P; this is not P_{0}, excuse me, that's just a regular P...*1721

*... Well, that equals, well, we take this one minus that one, so X - X_{0}...*1731

*Y - Y_{0}, and Z - Z_{0}; so now I have all of my components, okay. Well, this vector, made from the two points in the plane, is perpendicular to this vector, right.*1739

*Right, and what do we know about two vectors that are perpendicular to each other? They are orthogonal to each other, and orthogonal means that their dot product is 0, so let's set up that equation and see what happens.*1754

*My vector P_{0}P, which is this vector right here, dotted with N, is equal to 0; that's the definition of the normal vector.*1766

*Now let's go ahead and expand this dot product based on the components; well, that's equal to...*1782

*This times that + this times that + this times that = 0, so I get N_{1} times (X - X_{0}) + N_{2} times (Y - Y_{0})...*1790

*... + N_{3} times (Z - Z_{0}) = 0...*1810

*... This is my equation for a plane; now I can expand this out, I can do N_{1} times X, minus N_{1} times X_{0}, and you are going to get numbers, you are going to get an equation in X, Y, Z, precisely because you have variables X, Y and Z.*1821

*Remember these things with subscripts, they are actual numbers, and these are going to be actual numbers, it's the X, Y and Z, the things without subscripts that are your variables.*1837

*Now this right here, this is the equation of a plane...*1847

*... equation of a plane...*1853

*... In R_{3}; and again, we just got it by taking one point and specifying the vector that's normal to it, because if it's normal at that one point, well, it's going to be normal at every other point in the plane; okay, let's do an example.*1858

*Find the equation of the plane passing through...*1877

*... (2, 4, 3) and perpendicular to...*1889

*... Perpendicular to (5, -2, 3); so we want to find the equation of the plane that passes through this point and is perpendicular to this vector, and again, if the normal is perpendicular to the plane at one point, it is perpendicular to every vector lying in that plane.*1903

*That's the whole idea behind choosing a normal vector, well let's just multiply it out, based on the equation that we just had on the previous slide, we take 5 times...*1921

*... X - 2 + (-2) times Y - 4...*1935

*... + 3 times Z - 3 = 0, 5 times X - 2 -...*1947

*... two times Y - 4, + 3 times Z - 3 is equal to 0...*1965

*... we can stop there, that's fine, this is actually a perfectly valid equation, I like this because I like to see again, I like to see the dot product, that the multiplication that I am doing, one component times another, + one component times another.*1978

*If you want you can multiply this out, it's not a problem, so if you end up with something like 5X - 10 - 2Y + 8 + 3Z - 9 is equal to 0.*1992

*You end up with 5X - 2Y + 3Z - 11 = 0.*2009

*This is perfectly valid, and it's the one that you are accustomed to seeing, of course; it's a personal choice, I like the other one, because I like to see everything that I am doing.*2019

*I prefer a longer expression, a more complicated expression, as opposed to things being hidden; I see what's going on, I can see the normal vector, the components of the normal vector.*2029

*I can see my point (2, 4, 3); once I have multiplied it out, they are hidden in here, so simplification makes things look prettier, but doesn't necessarily simplify the material.*2039
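The dot-product derivation translates directly to code; this is a minimal sketch (the function name is hypothetical) that returns (A, B, C, D) for A·x + B·y + C·z + D = 0, where (A, B, C) is the normal and D = -N · P(0):

```python
def plane_from_point_normal(p0, n):
    """(A, B, C, D) of the plane n . (p - p0) = 0:
    the normal components become A, B, C, and D = -n . p0."""
    d = -sum(nc * pc for nc, pc in zip(n, p0))
    return (*n, d)

# plane through (2, 4, 3) with normal (5, -2, 3)
print(plane_from_point_normal((2, 4, 3), (5, -2, 3)))  # -> (5, -2, 3, -11)
```

This matches the multiplied-out form of the example: 5X - 10 - 2Y + 8 + 3Z - 9 = 0.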

*Okay...*2052

*... Lets do one final example...*2055

*... Find an equation of the plane passing through the three points, so this time, passing through three points, well there are couple of ways to do this.*2061

*Let's do it the way we did for R*_{2}, so our three points are going to be (2, -2, 1)...2075

* (-1, 0, 3) and (5, -3, 4).*2086

*Okay, if it passes through these, well, we know that the equation of a plane is AX + BY + CZ + D = 0; this is the generic equation for a plane.*2099

*For these three points to be on this plane, they actually have to satisfy this, so I can just put each of them in here, giving three equations for the three points, plus my original generic equation.*2123

*I end up with four equations and four unknowns, just like I did the first time around: 1, 2, 3, 4 rows and 1, 2, 3, 4 columns, so I end up with a 4 by 4 matrix.*2139

*Well, because it's homogeneous, with all the right-hand sides equal to 0, a 4 by 4 system has a non-trivial solution if the determinant of that matrix is equal to 0, therefore I can set up the following.*2152

*X, Y Z 1, it's that coefficient, that coefficient, that coefficient and...*2166

*That coefficient; what I am looking for is A, B, C and D, those are my variables, and then I take the point (2, -2, 1).*2174

*2, -2, 1, 1; I take the other point: -1, 0, 3, 1; and (5, -3, 4) gives 5, -3, 4, 1.*2187

*This determinant has to equal 0, and again I am going to expand along the first row; it actually doesn't matter where you expand, you are going to get the same answer.*2205

*Don't feel like you are stuck with the first row, it's just, that's the row where the variables happen to be, so I just sort of prefer it.*2216

*When you do this expansion, and again it's a little more complicated, but you have done the expansion before, you end up with the following: 8X + 15Y - 3Z + 17 = 0.*2224

*In this case we do end up with this sort of simplified version, because here we don't have a normal vector given to us.*2240

*There you go; if you are given a point and a vector, you can go ahead and just write it out, take the dot product and get the very simple equation for a plane.*2249

*Or if you are given the three points separately, you can just set up this determinant, set it equal to 0 and solve that way for the particular equation.*2259
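Rather than expanding the 4 by 4 determinant by hand, the same plane can be found with a cross product: the normal is (Q - P) × (R - P), and then D = -N · P. This is an equivalent technique, not the one worked in the lecture, and the function name is illustrative.

```python
def plane_through_points(p, q, r):
    """(A, B, C, D) for the plane through three points:
    normal n = (q - p) x (r - p), then D = -n . p."""
    v1 = [b - a for a, b in zip(p, q)]
    v2 = [b - a for a, b in zip(p, r)]
    n = (v1[1] * v2[2] - v1[2] * v2[1],   # cross product, component by component
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0])
    d = -sum(nc * pc for nc, pc in zip(n, p))
    return (*n, d)

# the lecture's three points
print(plane_through_points((2, -2, 1), (-1, 0, 3), (5, -3, 4)))  # -> (8, 15, -3, 17)
```

All three points satisfy the resulting equation 8X + 15Y - 3Z + 17 = 0, agreeing with the determinant expansion.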

*Thank you for joining us for lines and planes, thank you for joining us at educator.com, and linear algebra, we will see you next time.*2268

*Welcome back to educator.com and welcome back to linear algebra.*0000

*Last time we discussed lines and planes. Today we are going to move on to discuss the actual structure of something called a vector space.*0004

*So, for those of you who come from the sciences and the engineering, physics, chemistry, engineering disciplines... this notion of a vector space may actually seem a little bit strange to you.*0014

*But, really, all we are doing is we are taking the properties that you know of, and are familiar with, with respect to 2 space and 3 space, the world that we live in, and we are abstracting them.*0027

*In other words we are looking for those properties of that space that are absolutely unchangeable. That are very, very characteristic of a particular space and seeing if we can actually apply it to spaces of objects that have mainly nothing to do with what you know of as points or vectors in this 3-dimensional space.*0037

*As it turns out, we actually can define something like that, and it is a very, very powerful thing.*0058

*So, again, when a mathematician talks about a space, he is not necessarily talking about n-space, or 3-space or 2-space, he is talking about a collection of objects that satisfies a certain property.*0064

*We give specific names to that particular space, for example, today we are going to define the notion of the vector space.*0076

*Those of you that go on into mathematics might discuss something called a Banach space, or a Hilbert space.*0081

*These are spaces, collections of objects that satisfy certain properties.*0087

*Before we actually launch into the definition of a vector space and what that means, let us recall a little bit what we did with linear mappings.*0094

*So, you remember your experience has been with lines and planes and then we introduced this notion of a linear mapping.*0103

*We told you this idea of a linear mapping has nothing to do with a line, we are just using our experience with lines as a linguistic tool to sort of... the terminology that we use comes from our intuition and experience.*0110

*But, a linear mapping has nothing actually to do with a line. It has to do with algebraic properties.*0123

*In a minute we are going to be defining this thing called a vector space, which is a collection of objects that satisfies certain properties.*0129

*As it turns out, even though we call it a vector space, it may or may not have anything to do with vectors. Directed line segments, or points in space.*0137

*Granted, most of the time we will be working with our n-space, so actually we will be talking about what you know of as vectors or points, but we are also going to be talking about say, the set of matrices.*0146

*The set of, say, 5 by 6 matrices. It has nothing to do with points, and they certainly do not look like vectors. They are matrices, they are not directed line segments.*0159

*But, we use the terminology of vectors and points because that is our intuition. That is our experience.*0167

*In some sense we are working backwards. We are using our intuition and experience to delve deeper into something, but the terminology that we use to define that something deeper actually has to still do with our more superficial experience about things.*0174

*I do not know if that helped or not, but I just wanted to prepare you for what is coming. This is a very, very beautiful, beautiful aspect of mathematics.*0190

*Here is where you sort of cross the threshold from -- we will still be doing computation, but now we are not going to be doing computation strictly for the sake of computation.*0198

*We are doing it in order to understand deeper properties of the space in which we happen to be working.*0207

*This is where mathematics becomes real. Okay.*0214

*Okay. Let us start off with some definitions of the vector space.*0220

*Now, these definitions are formal and there is a lot of symbolism and terminology. I apologize for that. We will try to mitigate that as much as possible.*0224

*A lot of what is going to happen here is going to be symbols, writing, and a lot of discussion.*0233

*We want to get you to sort of start to think about things in a slightly different way, but still using what you know of, regarding your intuition.*0237

*Not relying on your intuition, because again, intuition will often lead you astray in mathematics. You have to trust the mathematics.*0247

*Okay. So, let us define a vector space. A vector space is a set of elements with 2 operations.*0254

*Now, I am going to have different symbols for the operations. The symbols are going to be similar to what you have seen as far as addition and multiplication, but understand that these operations do not have to be addition and multiplication.*0275

*They can be anything that I want them to be, plus, with a circle, and a little dot with a circle, which satisfy the following properties. *0285

*Okay. There are quite a few properties. I am going to warn you.*0314

*We will start with number 1. If u and v are in v... a set of elements, let us actually give it a name instead of elements... v.*0320

*Then, u + v is in v. This is called closure.*0343

*If I have a space and I take two elements in that space, okay? Oops -- my little symbol.*0350

*If I, in this case -- that is fine -- we can go ahead and call this addition, and we can call this multiplication as long as we know that this does not necessarily mean addition and multiplication the way we are used to as far as the real numbers are concerned.*0355

*It could be any other thing, and again, we are just using language differently. That is all we are doing.*0370

*So, if u and v happen to be in v, then if I add them, or if I perform this addition operation on those 2 elements that I still end up back in my set.*0378

*Remember what we did when we added two even numbers? If you add two even numbers, you end up with an even number.*0390

*In other words, you come back to the set... but if you add two odd numbers, you do not end up back in the odd number set, you end up in the even number set.*0394

*So, the odd numbers do not satisfy the closure property. That means you can take two elements of them, add them, but you end up in a different set all together. That is very odd.*0403

*That is why we specify this property. So, if you take two elements of this space, the vector space, then when you add them together, you stay in that vector space, you do not land someplace else.*0410
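The even/odd closure observation is easy to check by brute force; a small sketch:

```python
# Even integers are closed under addition; odd integers are not.
evens = range(0, 20, 2)
odds = range(1, 20, 2)
print(all((a + b) % 2 == 0 for a in evens for b in evens))  # -> True  (even + even is even)
print(all((a + b) % 2 == 1 for a in odds for b in odds))    # -> False (odd + odd is even)
```

Adding two odd numbers always lands in the evens, so the odds fail the closure property.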

*Okay. The others are things that you are familiar with.*0422

*u + v = v + u, this is the commutativity property.*0427

*By the way, I am often not going to write this... these circles around it. I will often just symbolize it like that and that.*0440

*Again, they do not necessarily mean addition and multiplication, they are just symbols for some operation that I do to two elements.*0448

*Okay... B. u + (v + w) = (u + v) + w... associativity.*0457

*This is the associativity of addition operation.*0480

*C. There exists an element, a symbolized 0-vector, in v such that u + 0... u plus that element... excuse me... is equal to 0 + u... commutativity... = u.*0486

*This is just the additive identity. There is some element in this vector space that when I add it to any other vector in that vector space, I get back the vector, nothing changes.*0513

*Okay... and d, for each u in the vector space, for each vector in the vector space, there exists an element symbolized -u, such that u + this -u = that 0 vector element.*0524

*This is called the additive inverse, 5... -5... 10... -10... sqrt(2)... -sqrt(2).*0552

*This says if I have any vector, pick any vector in a vector space, in order for it to actually satisfy the vectors of the vector space, somewhere in that vector space there has to be an element, the opposite of which when I add those two together, I end up with a 0 vector.*0558

*That is what it is saying. Okay.*0572

*2... so this first one is the set of properties having to do with this addition operation.*0575

*Number 2 happens to deal with scalar multiplication operation.*0580

*If u is in the vector space v, and c is some real number, a scalar, again, then c × u is in v.*0589

*Again, this is closure with respect to this operation.*0603

*Okay. We did closure up here. We said that if we do this addition operation, we still end up back in the set.*0609

*Well, this one says that if I multiply some vector in this space by a number, I need to end up back in that set... I cannot jump someplace else.*0614

*This is closure with respect to that operation. That one was closure with respect to that operation.*0625

*Okay. I am not going to restart at (a, b, c, d), I am going to continue on with e... c × (u + v), and again, you have seen a lot of these before... equals c × u + c × v... this says that it has to satisfy the property of distribution.*0631

*F. (c + d) × u = c × u + d × u... distribution the other way.*0654

*The distribution of the vector over 2 scalar numbers. *0663

*G is c × (d × u)... I can do it in any order.*0669

*(c · d) × u.*0682

*H, I have 1, the number one × u is equal to u.*0688

*Okay. Let us recall again what this means. If I have a set of elements that have 2 operations, 2 separate operations, two different things that I can do to those elements.*0698

*Either I can take two of them, and I can add them, or two of them and multiply them.*0709

*They have to satisfy these properties. When I add two elements, they still end up in the set, and they commute.*0714

*I can add them in any order. There exists some 0 in that set such that when I add it to any vector I get the vector back.*0720

*And, there exists... if for every vector u... there exists its opposite so to speak, its additive inverse.*0728

*So, when I subject it to addition of those two, I end up with a 0 vector, and scalar multiplication.*0735

*Closure under scalar multiplication, it has to satisfy the distributive properties and what you might consider sort of an associative property here.*0742

*And, that when I multiply any vector here × the number 1, I end up getting that vector again.*0752

*Set two operations. They have to satisfy all of these. I have to check each and every one of these. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.*0758

*Now, it is true. You do have to check every one of these, but later we will develop... very shortly we will develop some theorems that will allow us to bypass a lot of these and just check 1 or 2 to check to see if a particular space satisfies these properties.*0767

*When we do the examples in a minute, we are not going to go through excruciating detail.*0781

*I imagine some of your homework assignments require you to do that, and that is part of the process: going through this excruciating detail of proving... of making sure that every one of these properties is satisfied.*0787

*Going through that process is an important process of wrapping your mind around this concept of a vector space, because it is precisely as you go through the process that you discover surprises...*0801

*That what you thought was a vector space actually is not a vector space at all. Do not trust your intuition, trust the math.*0812

*Many, many years and hours of labor have gone into these particular definitions. This is a distillation of many years of experience, hundreds of years of experience with mathematical structures.*0818

*We did not just pull them out of thin air. They did not just drop out of the sky. These are very carefully designed. Very, very specific.*0828

*Okay. Let us move forward.*0838

*Now, the elements in this so-called vector space, we call them vectors.*0842

*But, they have nothing to do with arrows. They can be any object, we are just using the language of a n-space, 3-space, 2-space, 4-space to describe these things.*0849

*We call them vectors, but they do not have any... they may not have anything to actually do with directed line segments or points.*0860

*Okay. Let us see... vector addition, scalar multiplication... okay.*0869

*When the constant c is actually a member of the real numbers, it is called a real vector space.*0877

*We will limit our discussion to real vector spaces.*0885

*However, if those constants are in the complex numbers, it is called a complex vector space.*0890

*Complex vector spaces are very, very important. As it turns out, many of the theorems for real vector spaces carry over beautifully for complex vector spaces, but not entirely all of them. *0906

*Again, this and this are just symbols. They are abstractions of the addition and multiplication properties.*0918

*When we speak about a specific vector space, like for example the vector space of real numbers, then addition and multiplication mean exactly what you think they are.*0930

*But we need a symbol to represent the operations in other spaces. These are the symbols that we choose, because these are the symbols that our experience has allowed us to work with.*0943

*Okay. Let us just do some examples. That is about the best way to treat this.*0957

*Okay. Let us consider, so our first example, let us consider RN, n-space... with that and that, meaning exactly what they mean in n-space.*0968

*The addition of vectors, multiplication by... multiplication by a scalar.*0990

*As it turns out, RN is a vector space.*0998

*Now, again, we are not going to go through and check each one of those properties. That is actually going to be part of your homework assignment.*1002

*For a couple of these in a minute, we will be checking a few of them, just to sort of show you how to go about doing it.*1009

*You are going to go through and check them just as you normally would, so it is a vector space.*1016

*Well, for example, if you wanted to check the closure property, here is how you would do it.*1022

*Let us take -- excuse me -- let us deal with a specific one, R3, so you let u = u1, u2, u3... and you let v = v1, v2, v3, then you want to check the closure of addition.*1027

*So, you do this: you write u, that little symbol, v... u is (u1, u2, u3), and in n-space, that symbol is defined as normal addition of vectors.*1047

*Remember, we are adding vectors, we are not adding numbers, so this is still just a symbol... v1 + v2 -- I am sorry, not plus.*1065

*(v1, v2, v3)... well, that is equal to... we are going to write it in vector form... (u1 + v1, u2 + v2, u3 + v3)... well, these are just numbers.*1074

*So, you end up with a number, a number, a number. Well, that is it. You just end up with a number, a number, a number, and that definitely belongs to R3. This is a vector in R3.*1093

*We started off with vectors in R3, you added them, you ended up with a vector in R3, so closure is satisfied. That is one property that you just checked.*1105

*You break it up into its components, and you check that these properties actually hold... Okay.*1114
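The closure check just walked through can be sketched in code. This is not from the lecture, just a hypothetical Python illustration, with the addition symbol realized as a small helper function:

```python
# Closure of addition in R^3: adding two 3-tuples of real numbers
# component-wise yields another 3-tuple of real numbers.
def vec_add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

u = (1.0, 2.0, 3.0)
v = (4.0, 5.0, 6.0)
w = vec_add(u, v)

# The result is again a 3-tuple of reals, so it lies in R^3:
# closure is satisfied for this pair.
assert len(w) == 3 and all(isinstance(x, float) for x in w)
```

Of course, code can only spot-check particular vectors; the actual proof works with arbitrary components u1, u2, u3 exactly as in the lesson.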

*Alright, let us check this one. Let us see. This one we will do in some detail.*1123

*Consider the set of all ordered triples, (x,y,z), so something from R3, but we are only taking a part of that.*1133

*And... define this addition operation as... so (x,y,z) + (r,s,t) = x + r, y + s, z + t.*1157

*So, this addition operation is defined the same way we did for regular vectors. No difference there.*1181

*However, we are going to define this multiplication operation differently. We are going to say that c ×... I will use my symbols here, since that is the point we are trying to make... (x,y,z) = (cx, y, z).*1188

*Now, I am defining this multiplication differently. I am saying that when I have c × a vector in R3, that I only multiply the first component by c, I do not multiply the second and the third.*1212

*I can define it any way that I want. Now, we want to check to see that under these operations, is this a vector space? Well, let us see.*1228

*As it turns out, if you check most of them, they do. However, let us check that property f.*1238

*So, we want to check the following... property f was (c + d) × u... does it equal c × u + d × u?*1244

*So, we want to check this property for these vectors under these operations.*1265

*Notice, this particular property relates this operation with this operation as some kind of distribution. So, let us see if this actually works.*1271

*Alright. Now, once again, we will let u, component form, u1, u2, u3, well let us check the left side.*1280

*So, (c + d) × u is equal to, well, we go back to our definition, how is it defined... it says you multiply only the first component by the thing on the left side of this symbol.*1296

*So, it is equal to ((c + d)u1, u2, u3)... the second and third components stay the same. I hope that makes sense.*1318

*Our definition is this multiplication gives this. That is what we have done. c + d, quantity, this symbol × u.*1330

*Well c + d quantity × that. Okay.*1337

*It is equal to cu1 + du1, u2, u3.*1342

*So, we will leave that one there for a second.*1357

*Now, we want to check cu + du, so that is that... now we are checking this one here.*1362

*cu + du, well, c... should put my symbols here, I apologize. Let me erase this.*1380

*c · u + d · u, okay.*1400

*So, c · u, again we go back to our definition, *1407

*That gives me (cu1, u2, u3) + (du1, u2, u3), which equals (cu1 + du1, u2 + u2, u3 + u3), because now this symbol is just regular addition...*1411

*Now, watch this. cu1 + du1, well u2 + u2 is 2u2, u3 + u3 is 2u3.*1457

*Okay. That is not the same as that.*1473

*This is cu1 + du1, yeah the first components check out, but I have u2 and u3, here.*1481

*I have 2u2 and 2u3 here. That is not a vector space.*1487

*So, if I take the set of ordered triples, vectors in R3, and if I define my operations this way, it does not form a vector space.*1494
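The failed check can also be seen numerically. Here is a small sketch (my own illustration, not part of the lecture) of the modified operations, showing property f fail for c = 2 and d = 3:

```python
# The modified operations from the example: addition is the usual
# component-wise addition, but scalar multiplication only scales
# the FIRST component.
def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scal_mul(c, u):
    x, y, z = u
    return (c * x, y, z)  # note: y and z are left untouched

c, d = 2.0, 3.0
u = (1.0, 1.0, 1.0)

left  = scal_mul(c + d, u)                       # (c + d) * u
right = vec_add(scal_mul(c, u), scal_mul(d, u))  # c * u + d * u

# left is (5.0, 1.0, 1.0) but right is (5.0, 2.0, 2.0):
# the second and third components got doubled on the right,
# so distributivity fails and this is not a vector space.
assert left != right
```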

*You are probably wondering why I would go ahead and define something this way.*1507

*As it turns out, I am just going to make a general comment here. The real world and how we define things like distance... they are defined in a specific way to jibe with the real world.*1511

*There is no necessary reason for defining them that way. In other words, there is no underlying reality or truth to how we define things mathematically.*1527

*What is important is the underlying mathematical structure and the relationships among these things.*1535

*This is why something like this might seem kind of voodoo, like I have pulled it out of nowhere.*1540

*I have not pulled it out of nowhere. As it turns out, you run across things like this. In this particular case, we know how vector spaces behave.*1548

*Well, if I can check this one property, having come up with a new mathematical object... let us say I happen to have to deal with something like this and I discover it is not a vector space.*1555

*That means that everything that I know about a vector space does not hold here. So, I can throw that thing out. I do not have to go and develop an entire new theory for each new space that I work with, that is why we do what we do.*1564

*Okay. Another very, very important example. Let us consider the set of m by n matrices.*1579

*So, if I take the set of 2 by 3 matrices... all the 2 by 3 matrices of all the possible entries, there is an infinite number of them.*1595

*The matrix by itself is an element of that set. The question is, is it a vector space?*1603

*We define, of course we have to always define the operations, we define the addition of matrices as normal matrix addition.*1613

*We define the scalar multiplication thing as normal scalar multiplication with respect to matrices which we have done before.*1630

*As it turns out, if you check those properties, yes, the set of m by n matrices, which is symbolized like this, is a vector space.*1644

*So, once we have checked the properties, and that will definitely be part of one of the homework assignments, I guarantee you that that is one of the assignments you have been given.*1660

*It is just something that we all have to do: check that all of these properties hold for matrices.*1667
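As a sketch of what those homework checks amount to for matrices, several of the properties can be spot-checked numerically. This is my own illustration assuming NumPy, not part of the lecture:

```python
import numpy as np

# Spot-checking vector space properties for M(2,3),
# the set of 2 by 3 matrices, on two concrete matrices.
A = np.array([[1., 2., 3.], [4., 5., 6.]])
B = np.array([[6., 5., 4.], [3., 2., 1.]])

assert (A + B).shape == (2, 3)        # closure under addition
assert (2.0 * A).shape == (2, 3)      # closure under scalar multiplication
assert np.array_equal(A + B, B + A)   # commutativity (property a)

Z = np.zeros((2, 3))                  # the zero matrix plays the role of
assert np.array_equal(A + Z, A)       # the 0 vector (property c)
assert np.array_equal(A + (-A), Z)    # additive inverse (property d)
```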

*Now, when I speak about a set of let us say 2 by 2 matrices, I speak of any random matrix as a vector.*1674

*Because, again, it satisfies the certain properties and I am using the language of n-space to talk about a collection of objects that has nothing to do with a point.*1684

*A vector is not a point as you know it from your experience, but I can call it a point in that space, in that collection of 2 by 2 matrices.*1691

*That is the power of the abstract approach. Okay. Let us see... here is an interesting example.*1704

*Let us let v be the set of all real valued functions. Real valued functions just means functions that give you a real value: x^2, 3x, 5x + 2, sqrt(x), x^3, something like that.*1715

*... functions f(t) defined on a given interval. We will actually just pick an interval, we do not have to, this is true when it is defined on all the real numbers, but we will just choose this particular interval a, b.*1739

*We define our addition as follows... (f + g)(t) = f(t) + g(t), and c dot f... the symbolism is going to be kind of strange, I will talk about this in a minute... (c · f)(t) = c × f(t).*1759

*Okay. I am taking my set of all real valued functions, just this big bag of functions, and I am saying that I can treat that as a space.*1792

*I can define the addition of two of those functions the way I would normally add functions.*1802

*I can define this scalar multiplication the way I would normally define scalar multiplication... just c × that function.*1808

*Notice the symbolism for it. Here on the left, I have (f + g)(t).*1816

*In other words, I am somehow combining two functions in a way that, again, the symbolism is a little unusual and you probably are not used to seeing it like this.*1823

*But these are the actual definitions of what it is that I have to do when I am faced with two functions, or if I am multiplying a function by a scalar in this particular space.*1834

*Now, as it turns out, this is a vector space. What that means is that the functions that you know, x^2 and 3x^2, you can consider as points in a function space.*1845

*We call them vectors. They behave the same way that matrices do. They behave the same way that actual vectors do.*1862

*This is what is amazing. There is no reason to believe that a matrix, or a function, or a point should behave the same way.*1870

*As it turns out, their underlying algebraic properties are the same. I can just treat a function as if it were a point.*1878

*Okay. Let us see. In general, when you are showing that a given space is a vector space, what you want to do is you want to check properties -- excuse me -- 1 and 2.*1890

*You want to check closure for the addition property, and you want to check closure for the multiplication, for the scalar multiplication operation.*1914

*Then, if those are okay... and if they are not, no worries, you can stop right there; it is not a vector space.*1921

*If those are okay... then you want to check property c next.*1929

*Okay. A little bit of a notational thing, from now on, when we write u + v, just to save ourselves some writing, we are just going to write it as normal addition.*1939

*That does not mean that it is normal addition, it just means that this is a symbol describing that operation.*1956

*We just have to keep in mind what space we are dealing with, and what operation we are dealing with.*1963

*Let us see... let us talk about some properties of spaces.*1970

*So, if I am given a vector space, these are some properties that are satisfied.*1977

*0 × u = 0 vector. Notice this 0 is a vector, this 0 is a number.*1984

*This says that if I take the number 0, multiply it by a vector, I get the 0 vector.*1993

*So, they are different. They are symbolized differently. b... c... × the 0 vector is the 0 vector. c is just a constant.*1998

*If cu = 0, if I take a vector and I multiply it by some scalar and I end up with a 0 vector, then I can say that either c is 0, or u is 0.*2018

*Either the scalar was 0 or the vector itself was the 0 vector.*2050

*d. (-1) × the vector u will give me the additive inverse of that vector, -u.*2056
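As an aside, property a is not an axiom but a consequence of the axioms. A standard one-line argument (not given in the lecture) uses distributivity and then adds the additive inverse of 0·u to both sides:

```latex
0 \cdot u = (0 + 0) \cdot u = 0 \cdot u + 0 \cdot u
\quad\Longrightarrow\quad
\mathbf{0} = 0 \cdot u + \bigl(-(0 \cdot u)\bigr) = 0 \cdot u
```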

*Okay. Now, what we are going to do is we are going to show in detail that the set of all real valued functions, what we did before, for all real numbers is actually a vector space.*2068

*Okay. Let me see here. Yes. Okay, let us go ahead and check property one which is closure.*2086

*So, again, we are talking about the set of all real valued functions.*2102

*Remember our definitions? We defined f... actually we are not using that symbol anymore, we are just going to write it this way.*2107

*We said that (f + g)(t) = f(t) + g(t). Okay.*2116

*So, we want to check closure.*2124

*Does it equal f(t) + g(t), that is not a question, that is our definition... the question is, is that a member of s, which is the set of real valued functions.*2135

*If I take a function, and I add another function to it, I still end up with another function.*2157

*So, yes, closure is satisfied. If I add two functions, I get another real valued function, so closure is satisfied, it stays in the set.*2161

*2. If I multiply, I know that c... f(t), the definition was c × f(t)... well, if I take some function and I multiply it by a scalar, like if I have x^2 and I multiply it by 5 to get 5x^2, it is still a real valued function.*2172

*So yes, it stays in the set. So closure it satisfied for scalar multiplication. Property 2.*2191

*Okay. Let us check property c. In other words the existence of a 0.*2200

*So, property c, in other words, does there exist a function g(t) such that f(t) + g(t) gives me back my f(t).*2208

*Well, yes, as it turns out, there is such a g(t): the 0 function... I will put a subscript f for function... it is a function, the zero function, and it is a real valued function.*2238

*So, for example, 0 f(3), well, it gives me 0. It is a function... if I use 3 as an argument, it gives me the real number 0.*2252

*It actually exists, it is part of that space, so it does exist, so yes this property, so there is such a thing and it is called the 0 function.*2264

*Okay. Let us check property d in a little bit more detail. Let us see.*2275

*So, we want to check the existence of an inverse: -f(t)... we want to know if something like that exists.*2285

*Well, now the question is... if I have some given function and I just take the negative of that function, is that a real valued function?*2303

*Well, yes, it is. If I have the function x^2, if I take -x^2, it is a perfectly valid, real-valued function and it is still in the set of real valued functions.*2316

*So, yes, there is your answer.*2327

*That additive inverse actually exists, and it is in this set, so that property is satisfied.*2331

*Let us check property f. We want to check whether (c + d) · f(t) = c · f(t) + d · f(t).*2340

*Well, this left hand side is going to be this one: ((c + d) · f)(t) is defined as (c + d)f(t), which equals c × f(t) + d × f(t).*2368

*Now, this part I will bring down here: (c · f)(t) = c f(t), and (d · f)(t) = d f(t), and of course, when I... this is this... this is this.*2401

*When I add them together, normal addition, normal addition... I end up with exactly this.*2439

*So, it turns out that the two sides do end up being equal, so yes, once again, the set of all real valued functions on a given interval, or in this case over the real numbers, does form a vector space.*2447
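The verifications above can be mirrored in code by treating functions as first-class values. This is my own sketch, not from the lecture, with the two operations defined exactly as in the lesson:

```python
# Function-space operations: (f + g)(t) = f(t) + g(t) and
# (c . f)(t) = c * f(t), with functions represented as callables.
def add(f, g):
    return lambda t: f(t) + g(t)

def scale(c, f):
    return lambda t: c * f(t)

f = lambda t: t ** 2        # f(t) = t^2
g = lambda t: 3 * t + 5     # g(t) = 3t + 5

h = add(f, g)               # h(t) = t^2 + 3t + 5, still a function: closure
assert h(2) == 15

zero = lambda t: 0          # the zero function
assert add(f, zero)(7) == f(7)       # property c: f + 0 = f
assert add(f, scale(-1, f))(7) == 0  # property d: f + (-f) = 0
```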

*So, again, that is actually pretty extraordinary when you think about it, that functions behave the same way that points in space do.*2468

*For those of you that actually continue on into mathematics, it is going to be a really, really interesting thing when you can actually define the distance between two functions.*2474

*When I say distance... as it turns out, the notion of a distance in a function space is entirely analogous to the distance between 2 points in, say, 3-space.*2486

*Because again, 3-space, and the function space are both vector spaces.*2500

*They satisfy a certain basic kind of properties, and that is what we are doing when we define a vector space. We are looking for deeper properties, fundamental properties that are satisfied for any collection of objects.*2506

*That is why it is so amazing that there ultimately is no difference between this space of functions, real space, R2, the space of 2 by 3 matrices... their algebraic behavior is the same, and that is what makes this important, that is what makes this powerful.*2518

*Thank you for joining us here at Educator.com, we will see you next time. Bye-bye.*2536

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*The last lesson, we talked about a vector space. We gave the definition of a vector space and we gave some examples of a vector space.*0003

*Today, we are going to delve a little bit deeper into the structure of a vector space, and we are going to talk about something called a subspace.*0011

*Now, it is very, very important that we differentiate between the notion of a subset and a subspace. *0020

*We will see what that means, exactly, in a minute. We will see subsets that are not subspaces, and we will see subspaces... well, all subspaces are subsets, but not all subsets are subspaces.*0025

*So, a subspace, again, has a very, very clear sort of definition, and that is what we are going to start with first.*0036

*So, we will define a subspace, and we will look at some examples. Let us see what we have got.*0043

*We will define a subspace here, we will... let us go to a black ink here.*0050

*Okay. We will let v be a vector space... okay? And w a subset, so we have not said anything about a subspace yet... a subset.*0064

*One quality of that subset is very important to specify. It might seem obvious, but we do have to specify it... and w a non-empty... non-empty subset of v.*0094

*Okay. If w is a vector space with respect to the same operations as v, then w is a subspace of v.*0114

*In other words, if I am given a particular vector space, and I am given a particular subset of that vector space... if the w, if that subset itself is a vector space in its own right, with respect to those two operations... then I can say that w is an actual subspace of that vector space.*0153

*Again, subset, that goes without saying, but a subset is not necessarily a subspace. A subspace has very, very specific properties.*0175

*So, we will start off with the first basic example, the trivial example, which is probably not even worth mentioning, but it is okay.*0187

*The 0 vector... and v. So, the 0 vector itself, that element alone, is a subspace.*0196

*It is non-empty, and it actually satisfies the property, because 0 + 0 is 0, c × 0 is 0, so the closure properties are satisfied, and in fact all of the other properties are satisfied. *0206

*And v itself, a set is a subset of itself. Again, we call these the trivial examples. They do not come up too often, but we do need to mention them to be complete. Okay.*0219

*The second example. Let us do example 2.*0228

*Okay. We will let v = R3, so now we are talking about 3-space, and w be the subset all vectors of the form (a,b,0).*0236

*In other words, I am dealing with all 3 vectors, and I am going to take as a subset of that everything where the z, where the z component is 0.*0266

*In other words, I am just looking at the xy plane, if you will. So, there is no z value. That is clearly a subset obviously this is just the z component is 0, so it is clearly a subset.*0279

*Now, let us check to see if it is actually a subspace.*0291

*Okay. So, with respect to the addition, well, let us see, we will let a1 be (a1,b1,0), and we will let a2 = (a2,b2,0).*0296

*a1 + a2 = (a1 + a2, b1 + b2, 0). Well, yes, that is a number, that is a number, that is a number... this is a three vector. It does belong to w.*0320

*So, the closure is satisfied.*0337

*Now, notice, when we say closure with respect to the subset, that means when I start with something in that subset, I need to end up in that subset.*0342

*I cannot just... if this is the bigger set and if I take a little subset of it, when I am checking closure now, I am checking to see that it stays in here, not just that it stays here.*0351

*If I take two elements, let us say (a,b,0), and another (a,b,0)... if I add them and I end up outside of that set, that cannot be so.*0363

*I might end up in the overall vector space, but the idea is I am talking about a subset. I want closure with just respect to that subset.*0371

*So, be very, very clear about that. Be very careful about that.*0380

*Now, let us check scalar multiplication. c × a = c × (a,b,0).*0384

*Well, it is equal to (ca, cb, 0). Sure enough, that is a number, that is a number, the third element is still 0, so that is also in w.*0393

*So, yes, we can certainly verify all of the other properties. It is not a problem.*0405

*As it turns out, this is a subspace. So, w is a subspace of v.*0409
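A quick numerical sketch of this subspace check (an illustration of mine, not from the lecture): membership in w just means the third component is 0, and both operations preserve it:

```python
# w = { (a, b, 0) : a, b real } as a subset of R^3.
def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scal_mul(c, u):
    return tuple(c * x for x in u)

def in_w(u):
    return len(u) == 3 and u[2] == 0   # third component must be 0

u1 = (1.0, 2.0, 0.0)
u2 = (4.0, -3.0, 0.0)

assert in_w(vec_add(u1, u2))     # sum stays in w
assert in_w(scal_mul(5.0, u1))   # scalar multiple stays in w
```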

*This gives rise to a really, really nice theorem which allows us to actually not have to check all of the other properties... a,b,c,d,e,f,g,h... we only have to check two... closure.*0420

*Since most of the time we are not talking about the entire space anyway, we are only talking about a part of it, the sub-space, a subset of a given space that we are working with, this theorem that we are about to list comes in very, very handy.*0434

*So, let us see. v is a vector space... v be a vector space with operations of addition and scalar multiplication... let w be a non-empty subset of v, of course.*0448

*Then, w is a subspace if and only if, equivalently: a, closure is satisfied with respect to the addition operation, and b, closure is satisfied with respect to the scalar multiplication.*0483

*In other words, if I have a given vector space and if I take a subset of that, if I want to check to see if that subset is a subspace, I only have to check two properties.*0510

*I have to check that if I take two elements in that subset and add them together, I stay in that subset, and that if I take an element in that subset and multiply it by a scalar, I stay in that subset; I do not jump out of it.*0518

*Checking those two takes care of all of the others. This lets me know that I am dealing with a subspace, and this if and only if means equivalent.*0529

*If I had some subset of a space, and if I check to see that, you know, the closure is satisfied with respect to both operations, I know that I am dealing with a subspace and vice versa.*0538

*Okay. Let us do another example. Example 3.*0552

*We will let v = the set of 2 by 3 matrices, so v = m 2 by 3.*0560

*So, the vector space is the set, the collection of all two by three matrices.*0579

*We will let w be the subset of the two by 3 matrices of the form... the subset of m 2 by 3, of the form (a,0,b), (0,c,d).*0584

*Basically I am taking, of all of the possible 2 by 3 matrices, I am taking just the matrices that have the second column of the first row 0, and the first entry of the second row 0.*0615

*Everything else can be numbers. Okay. We just need to check closure, and we just need to check closure with respect to addition, closure with respect to scalar multiplication.*0628

*So, let us take (a1,0,b1), (0,c1,d1), and let us add to that another element of that subset.*0639

*(a2,0,b2), (0,c2,d2), well when we add these, we get a1 + a2, 0 + 0 is 0, b1 + b2, 0 + 0 is 0, c1 + c2, and d1 + d2.*0655

*Sure enough, I get a 2 by 3 matrix where that -- oops, let me get my red here -- where that entry and that entry are still 0.*0681

*So, yes, this is a member of w, the closure. It stays in the subset.*0695

*If I check closure with respect to scalar multiplication, well c × (a,0,b), (0,c,d) = ca, c × 0 is 0, cb, c × 0 is 0, cc, cd.*0703

*Sure enough, that entry and that entry are 0. There are our numbers. That is also in w.*0725

*So, yes, now I do not have to say all of the other properties are fulfilled. Our theorem allows us to conclude because of closure properties, this is a subspace, not just a subset, a very special type of subset.*0732
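The same two closure checks for this matrix subset can be sketched with NumPy. This is my own illustration; the positions (0,1) and (1,0) are the entries the example requires to be 0:

```python
import numpy as np

# w: 2 by 3 matrices of the form [[a, 0, b], [0, c, d]].
def in_w(M):
    return M.shape == (2, 3) and M[0, 1] == 0 and M[1, 0] == 0

A = np.array([[1., 0., 2.], [0., 3., 4.]])
B = np.array([[5., 0., 6.], [0., 7., 8.]])

assert in_w(A + B)     # closure under addition: zeros stay zeros
assert in_w(3.0 * A)   # closure under scalar multiplication: c * 0 = 0
```

By the theorem just stated, these two closures are all that is needed to conclude w is a subspace.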

*It is a subspace. Okay.*0746

*Let us do example 4. We will let p2 be the vector space of polynomials of degree < or = 2.*0750

*So, t^2 + 3t + 5... t + 6... -t^2 + 7.*0777

*All of the polynomials of degree < or = 2, remember a couple of lessons ago, we did show that the set of polynomials is a vector space. I mean -- yes, is a vector space.*0784

*Now, we are going to let w, which is a subset of p2, be the polynomials of exactly degree 2.*0795

*Okay, so not degree 2, degree 1, degree 0, but they have to be exactly of degree 2. Clearly this is a subset of the set of polynomials up to and including degree 2.*0818

*The question is, is this w, the polynomials of degrees 2, is it a subspace of this... well, let us see.*0829

*I mean, we do not know, our intuition might say yes it is a subset, it is a subspace, let us make sure... it has to satisfy the properties.*0837

*Okay. We will take 3t^2 + 5t + 6, and we will take -3t^2 + 4t + 2.*0845

*Both of these are of degree 2, so both of these definitely belong to w. Now, we want to check to see if we add these, will they stay in w?*0859

*Okay. Well, if I add them, I get 3t^2 + 5t + 6 + (-3t^2 + 4t + 2).*0872

*When I add them together, that and that cancel... I end up with 5t + 4t, which is 9t, and 6 + 2, which is 8.*0894

*Well, notice, 9t + 8, this is a polynomial of degree 1. It is not a member of w, which was the set of polynomials of exactly degree 2.*0904

*So, closure is not satisfied. Therefore, just by taking a subset, I am not necessarily dealing with a subspace.*0916

*So, you see the subspace is a very special type of subset.*0924

*So, no, w is not a subspace.*0929
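Here is a sketch of the failed closure check in code (not from the lecture), representing each polynomial by its coefficient list [c2, c1, c0]:

```python
# Polynomials c2*t^2 + c1*t + c0 as coefficient lists [c2, c1, c0].
def poly_add(p, q):
    return [a + b for a, b in zip(p, q)]

def degree(p):
    for i, c in enumerate(p):
        if c != 0:
            return len(p) - 1 - i
    return -1  # convention for the zero polynomial

p = [3.0, 5.0, 6.0]    # 3t^2 + 5t + 6, degree exactly 2
q = [-3.0, 4.0, 2.0]   # -3t^2 + 4t + 2, degree exactly 2

s = poly_add(p, q)     # the t^2 terms cancel: 9t + 8
assert degree(p) == 2 and degree(q) == 2
assert degree(s) == 1  # the sum left the subset: closure fails
```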

*Okay. Now, we are going to deal with a very, very, very important example of a subspace.*0941

*This subspace will show up for the rest of the time that we study linear algebra. It is profoundly important.*0950

*It will actually show up in the theory of differential equations as well. So, it is going to be your friend.*0959

*So, write... example 5.*0969

*Okay. Let v = RN, so we will just... vector space, n space... nice, comfortable, we know how to deal with it... it is the set of all n vectors.*0975

*Okay. We will let w be the subset of RN consisting of the solutions to the homogeneous system ax = 0, where a is m by n.*0987

*The question is... is w a subspace? Okay, let us stop and take a look at what it is we are looking at here.*1035

*We are starting with a vector space RN and we are picking some random matrix a, okay? Just some random m by n matrix a, and we can set up... we know that that matrix a, multiplied by some vector in RN... we can set up that linear system.*1043

*We can set it equal to 0 and we can check to see whether that system actually has a solution. Sometimes it will, sometimes it will be unique, sometimes it will have multiple solutions, sometimes it will have no solution at all.*1063

*As it turns out, for those values, for those vectors in RN that are a solution to that particular homogeneous system, that is what we want to check to see.*1078

*So, there is some set... let us say there are 17 vectors that are a solution to a particular homogeneous system that we have set up.*1093

*They form a subset, right? of the vectors in RN. There are 17 vectors in this infinite collection of vectors that happen to satisfy this equation.*1103

*When I multiply a matrix by a vector in that subset, I end up with 0, so there is a collection.*1114

*I want to see if that collection actually satisfies the properties of being a subspace. So, let us go ahead and do that.*1120

*Okay. Now, we need to choose of course 2 from that subset.*1127

*So, let us let... we will let u be a solution, in other words, a × u = 0, and we will let v be a solution.*1134

*So, I have picked 2 random elements from that subset. So av = 0, I know this because that is my subset. It is the subset of solutions to this homogeneous equation. Okay.*1160

*Let us check closure with respect to addition. That means we want to check this: a × (u + v).*1176

*In other words, if I add u and v, do I end up back in that set?*1197

*The set is the set of vectors that satisfy this equation.*1201

*Well, that means if I add u and v, and if I multiply by a, do I get 0? Well, let us check it out.*1206

*a × (u + v)... we know that matrices are linear mappings, right?*1214

*So, I can write this as au + av. Well, I know that u and v are solutions already, so au = 0... av = 0.*1223

*Well, 0 + 0 is 0, so yes, as it turns out, a(u + v) does equal 0. That means it is in... that means u + v is in w... the set of solutions.*1234

*Okay. Now, let us check the other one... c ×... well, actually what we are checking is a(cu).*1256

*We want to see if we take a vector that we know is a solution, if we multiply it by a scalar, is it still a solution... so what we want to check is this.*1268

*Does it equal 0? Well, a(cu)... we know that multiplication by a matrix, again, is a linear mapping, so I can write this as c × a(u).*1279

*Well, a(u) is just 0. c × 0... and I know from my properties of a vector space that c × 0 = 0.*1294

*So, yes, as it turns out, c(u) is also a solution.*1303

*Therefore, yes, the subset of RN that satisfies a given homogeneous system is not just a subset of RN. It is actually a subspace of RN.*1311

*That is a very, very, very special space. A profoundly important example. Probably one of the top 3 or 4 most important examples in linear algebra and the study of analysis.*1328

*We give it a, in fact, we give it... it is so special that we give it a special name. A name that comes up quite often. It is called the solution space for... this one I am going to write in red... it is called the null space.*1341

*So, once again, if I have some random matrix, some m by n matrix, I can always set up a homogeneous system by taking that matrix and multiplying it by an n-vector.*1364

*Well, the vectors that actually satisfy that equation form a subset, of course, of all of the vectors.*1378

*Well, that subset is not just a subset, it is actually a subspace. It is a very special kind of subset, it is a subspace and we call that space the null space of the homogeneous system. Null just comes from this being 0.*1387

*The null space is going to play a very, very important role later on.*1400

*Now, let us talk about something called linear combinations. We have seen linear combinations before, but let us just define it again and deal with it more formally.*1409

*Definition. We will let v1, v2, and so on all the way to vk be vectors in a vector space. *1420

*Okay. A vector z in v, big vector space, is called a linear combination of v1, v2... etc. if z, the vector itself in the vector space, can be represented as a series of constants × the individual vectors.*1435

*These are... these constants are of course just real numbers. Again, we are talking about real vector spaces.*1489

*So, again, a linear combination is if I happen to have 3 vectors, if I take those 3 vectors and if I multiply them by any kind of constant to come up with another vector and I know I can do that... I can just add vectors.*1496

*The vector that I come up with, it can be expressed as a linear combination of the three vectors that I chose.*1508

*Again, because we are talking about a vector space we are talking about closure, so I know that no matter how I multiply and add these, I still end up back in my vector space, I am not going anywhere.*1515

*Let us do an example. Okay. In R3, so we will do a 3-space, we will let v1 = (1,2,1), we will let v2 = (1,0,2), we will let v3 = (1,1,0).*1525

*Now, the example says find c1, c2, c3, such that some random vector z... so for example (2,1,5) is a linear combination of v1, v2, v3.*1553

*So, in other words, I have these three vectors, (1,2,1), (1,0,2), (1,1,0), and I have this random vector (2,1,5).*1583

*Well, they are all vectors in R3. Is it possible to actually express this vector as a linear combination of these 3?*1592

*Let us see. Well, what is it asking?*1600

*It is asking us to find this... c1 × (1,2,1) + c2 × ... this is not going to work.*1606

*Maybe I should slow my writing down a little bit... (1,0,2) + c3 × (1,1,0)... we want it to equal (2,1,5).*1622

*This should look familiar to you. It is just a linear system.*1637

*This is what we want. We want to find these constants that allow this to be possible.*1642

*Okay. So, let us move forward. c... c... c... c2... c2... c2... c3... c3... c3...*1648

*What you have is a linear system. A linear system that looks like this... (1,1,1), (2,0,1), (1,2,0) × (c1,c2,c3) = (2,1,5).*1660

*That is what we want to find out. We want to find c1, c2, c3... solving a linear system. Okay.*1687

*Well, let us just go ahead and form the augmented matrix... (1,1,1,2), (2,0,1,1), (1,2,0,5), and let me actually show the augmentation.*1692

*I am just taking the coefficient matrix and I am taking the solutions... and this one I am going to subject to Gauss Jordan elimination and turn it into reduced row echelon form.*1707

*I end up with the following... (1,0,0,1), (0,1,0,2), (0,0,1,-1), there we go.*1719

*c1 = 1, c2 = 2, c3 = -1. There you have it, these are my constants.*1731

*So, I end up with 1 × v1 + 2 × v2 - ... oh, erasing everything... - v3 = (2,1,5).*1747

*In this particular case, yes, I was able to find constants such that any random vector could be expressed as a linear combination of other vectors in that space.*1774
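The constants found above are easy to verify directly; here is a quick Python check of 1 × v1 + 2 × v2 - 1 × v3 = (2,1,5):

```python
# Verifying the constants found by Gauss-Jordan elimination:
# c1 = 1, c2 = 2, c3 = -1 should rebuild the target vector (2, 1, 5).
v1 = [1, 2, 1]
v2 = [1, 0, 2]
v3 = [1, 1, 0]
c1, c2, c3 = 1, 2, -1

z = [c1*a + c2*b + c3*d for a, b, d in zip(v1, v2, v3)]
assert z == [2, 1, 5]   # the linear combination hits the target exactly
```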

*Okay. The definition here.*1787

*If s is a set of vectors in v, then the set of all linear combinations of the elements of s is called the span of s.*1810

*In other words, if I have 6 vectors and I just chose them randomly from the space, if I arrange those in any kind of combination I want, okay?*1857

*5 × the first one + 10 × the second one - 3 × the third one... in any combination.*1867

*All of the vectors that I can generate from the infinite number of combinations, I call that the span of that set of s, which sort of makes sense.*1872

*You are sort of taking all of the vectors, seeing what you can come up with, and if it spans the entire set of vectors. That is why it is called the span of s.*1883

*So, let us see. Our example, we will let v = R3 again and we will choose 2 vectors in R3.*1895

*We will choose (1,2,3), and we will choose (1,1,2).*1915

*The span of s = the set of all linear combinations of these 2 vectors... c1 × (1,2,3) + c2 × (1,1,2).*1924

*If I choose 1 and 1, it means I just add these... 1 + 1 is 2, 2 + 1 is 3, 3 + 2 is 5, that is the vector.*1942

*The set of all possibilities, that is what the span is. Okay.*1950
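The span of these two vectors can be sketched in Python; `combo` below is a hypothetical helper that just forms the linear combination c1 × (1,2,3) + c2 × (1,1,2):

```python
# The span of s = {(1,2,3), (1,1,2)} is every vector of the form
# c1*(1,2,3) + c2*(1,1,2).  Picking c1 = c2 = 1 gives the (2,3,5) above.
def combo(c1, c2):
    return [c1*a + c2*b for a, b in zip([1, 2, 3], [1, 1, 2])]

assert combo(1, 1) == [2, 3, 5]
assert combo(0, 0) == [0, 0, 0]   # the zero vector is always in a span
```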

*Let us see. Let us now list a very, very important theorem.*1957

*Okay. Let s be a set of vectors from a vector space... let s be a set of vectors in a vector space v.*1969

*Then, the span of s is a subspace. In other words, let us just take R3.*1995

*If I take 2 vectors in R3, like I did here, the span of that is all of the linear combinations that I can come up with of those 2 vectors.*2010

*Well, this set right here, this span of s is actually a subspace of R3.*2024

*That is extraordinary. No reason for believing that should be the case, and yet it is.*2030

*Okay. Let us finish up with a final detailed example.*2040

*Once again, we will deal with p2. Do you remember what p2 was? That was the vector space of polynomials of degree < or = 2.*2045

*Okay. So, we are going to choose... I am going to go back to black, I hope you do not mind... we will let v1 = 2t^{2} + t + 2.*2053

*We will let v2 = t^{2} - 2t... we will let v3 = 5t^{2} - 5t + 2.*2072

*We will let v4 = -t^{2} - 3t - 2.*2088

*So, again, this is a vector space and I am just pulling vectors from that vector space.*2096

*Well, the vectors in the space of polynomials happen to be polynomials of degree < or = 2. Here I have a 2, 2, 2, 2, they all happen to have a degree of 2, but I could have chosen any other ones.*2100

*Now, I am going to choose a vector u, I am going to pull it randomly, and it is t^{2}... this one I am actually going to write in red, excuse me.*2111

*u... it is t^{2} + t + 2, well, this is also in p2.*2128

*Here is the question. Can we find scalars (a,b,c,d), such that u is actually a linear combination of these other 4.*2136

*In other words, can I take a random vector and express it as a linear combination of these other 4.*2159

*Well, you are probably thinking to yourself... well, you should be able to, you have 4 vectors and you have another one... let us see if it is possible, it may be... it might not be.*2165

*a1v1 -- I am sorry it is av1 -- + bv2 + cv3 + dv4.*2175

*In other words, is this possible? Can we express u as a linear combination of these vectors.*2187

*Well, let us find out. Let us write this out and see if we can turn it into a system of equations and solve that system of equations and see if there is a solution for (a,b,c,d).*2194

*Okay. Let me go back to my blue ink here. So, av1 is... one second, I will do blue here... a × (2t^{2} + t + 2) + b × v2, which is (t^{2} - 2t), + c × v3, which is (5t^{2} - 5t + 2), and then + d × (-t^{2} - 3t - 2).*2205

*So, again, I have this thing that I am checking, and I just expand the right side to see what I can get. Okay.*2256

*Another question is... is all of this equal to u? Which is t^{2} + t + 2.*2268

*Okay. I multiply all of these out. When I multiply this, when I distribute and simplify and multiply, I am going to end up with the following.*2286

*When I combine terms, I get (2a + b + 5c - d)t^{2} + (a - 2b - 5c - 3d)t + (2a + 0b + 2c - 2d), again, multiplied all of these out, expanded it, collected terms, put all of my t^{2}'s together, all of my t's together, my constants together. Okay.*2298

*I need that to equal t^{2} + t + 2.*2346

*Well, this is an equality here. I need to know if this is possible. So, here I have a t^{2}, let me go to red... here I have t^{2}, and here I have t^{2}.*2352

*The coefficient is 1. That means all of these numbers have to add to 1.*2363

*Here I have a t, here I have a t, the coefficient is 1, that means all of these numbers have to add to 1.*2367

*This one has no t at all, it is just a constant, that means all of these numbers have to add to 2. That is what this equality means.*2372

*I have myself a system of equations. Let me write that system of equations.*2380

*Okay. I have got... 2a + b + 5c - d, I need that to equal 1.*2388

*I have a - 2b - 5c - 3d, I need that to equal 1.*2400

*Then I have 2a + 0b, it is up to you whether you want to put the 0 there or not... it does not really matter, + 2c - 2d = 2.*2410

*Okay. I have a linear system. I am going to set up the coefficient matrix... (2,1,5,-1,1), (1,-2,-5,-3,1), (2,0,2,-2,2).*2422

*Of course my solution is there, so I just remind myself... and again I subject this to reduced row echelon form.*2453

*Let me, well, let me do it this way. So, Gauss Jordan elimination to reduced row echelon for which I use Maple, and I get the following.*2463

*(1,0,1,-1,0), (0,1,3,1,0), (0,0,0,0,1). Okay.*2474

*There is a problem with this. These are all 0's, this is a 1. This tells me that 0 = 1, this is not true.*2490

*This is inconsistent. That is what it means, when we... remember when we reduce to reduced row echelon form, if we get something like this, we are dealing with a system that is inconsistent. There is no solution.*2503

*So, because this is inconsistent, that means there is no solution. In other words, u, that vector u that we had... it does not belong to the span of v1, v2, v3, v4.*2514

*Just because I can pick 4 vectors at random does not mean a vector in that space can necessarily be represented as a linear combination.*2538

*In other words, it is not in the span. So, that is what we did. We took those 4 vectors, we set up the equality that should be the case, we solved the linear system, and we realized that the linear system is inconsistent.*2548

*Therefore, that polynomial, t^{2} + t + 2, is not in the span of those vectors. Very, very important.*2561
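The elimination that exposes the inconsistency can be reproduced with a short exact-arithmetic sketch; the `rref` helper below is an illustrative implementation, not the Maple routine used in the lesson:

```python
# A sketch of Gauss-Jordan elimination with exact rational arithmetic,
# reproducing the inconsistent system above.  `rref` is an illustrative
# helper, not the Maple routine from the lesson.
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    n_rows, n_cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(n_cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, n_rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        pivot = M[pivot_row][col]
        M[pivot_row] = [x / pivot for x in M[pivot_row]]   # scale pivot to 1
        for r in range(n_rows):                            # clear the column
            if r != pivot_row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * p for x, p in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == n_rows:
            break
    return M

# augmented matrix for a*v1 + b*v2 + c*v3 + d*v4 = t^2 + t + 2
aug = [[2,  1,  5, -1, 1],
       [1, -2, -5, -3, 1],
       [2,  0,  2, -2, 2]]

R = rref(aug)
# bottom row is (0, 0, 0, 0 | 1): the impossible equation 0 = 1,
# so the system is inconsistent and u is not in the span
assert R == [[1, 0, 1, -1, 0],
             [0, 1, 3, 1, 0],
             [0, 0, 0, 0, 1]]
```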

*Okay. So, today we discussed subspaces and again a subspace is a subset of a vector space, but it is a very special kind of subset that satisfies certain properties.*2571

*In order to check to see that something is a subspace, all you have to do is check two properties of that subset.*2582

*You have to make sure that closure with respect to addition is satisfied, and closure with respect to scalar multiplication is satisfied.*2587

*Again, what that means is that this might be our bigger vector space... if we take a subset of that, closure means if I take two elements in that space, I need to end up back in that space.*2594

*So, I do not end up outside in the bigger space, I need to stay in that subset. That is what turns it into a subspace. Very, very important.*2605

*Okay. Thank you for joining us here at Educator.com, we will see you next time. Bye-bye.*2614

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to talk about something called the span of a set of vectors.*0004

*It means exactly what you think that it means. If I have a collection of vectors, 2, 5, 10, the number does not actually matter that much.*0011

*We want to talk about all of the possible linear combinations of those vectors, that are possible... all of the vectors that can be built from that particular set.*0021

*So, for example, if I take R2... the normal plane, I know that I have the vector in the x direction, I know that I have the vector in the y direction, and if I take any collection of those, multiplied by constants, let us say 5i + 6j, I can represent every single vector in R2.*0033

*So, those two vectors, we say it actually spans R2.*0050

*So, that is just sort of the general description that a span is.*0055

*Unfortunately, in this case, the name actually gives you an idea of what it is that you are talking about, so it is not strange. So, let us start with a couple of definitions and let us see what we can do.*0059

*Okay. Now, in a vector space, there is an infinite number of elements and then the reason for that is if I have at least one element in that space, and I know there is at least one, I know that I can multiply that element by any number that I want. Any constant.*0072

*Therefore, since that constant is just a real number, there are an infinite number of elements in that vector space.*0091

*However, what we want to do is we want to see if we can find a finite number of elements in that vector space that when I take certain combinations of them, linear combinations of them, that we can describe the entire space.*0098

*That means all infinite vectors based on just that finite set of vectors.*0114

*So, let us actually write our definition down.*0122

*Okay... vectors v1, v2 and so forth onto vk are said to span v, which is our vector space.*0134

*If every vector in v can be written as a linear combination of the v1, v2, v3.*0156

*SO, now we have actually written it down. If I have vectors v1 through vk, let us say 6 of them.*0188

*And... if any linear -- excuse me -- if any combination of those vectors, they do not have to all be included, you know some of the constants can be 0, but if some combination of those vectors can represent every single vector in that vector space, then we say that that set of vectors actually spans the vector space.*0194

*Okay. Now, let us list the procedure to check to see if a set of vectors actually spans a vector space.*0218

*Procedure to check if the set of vectors spans a vector space, so vs, for vector space.*0233

*So, let us see... you know what... let us leave it as blue for right now.*0253

*Choose an arbitrary vector v in the vector space, so when you are given a vector space, just choose some arbitrary vector.*0260

*So, if you are given 4-space, R4, then just choose the random vector (a,b,c,d), or you can call it (x,y,z,t), just some random vector and label it... to determine if v is a linear combination of the given vectors.*0273

*So, this is basically just an application of the definition, which is what definitions are all about.*0311

*Let me just take a quick second to talk about definitions real quickly. *0315

*Often in mathematics, we begin with definitions. They are a basic element.*0319

*We use those definitions to start to create theorems and we sort of build from there, build our way up from the bottom if you will.*0325

*If you find that you have lost your way in mathematics, more often than not, you want to go back to your definitions, and 90% of the time, the problem is something is either missing from a definition, or there is a definition that the student has not quite wrapped his mind fully around.*0333

*Again, mathematics is very, very precise. It says exactly what it wants to say, no more and no less.*0349

*Okay. Determine if v is a linear combination of the given vectors... definition.*0357

*If so, if it is a linear combination, then yes, the vectors actually span the vector space.*0365

*If not, if there is no way to form a linear combination, then no. Okay.*0373

*So, again, when you are forming a linear combination, you are taking a constant, multiplying it by a bunch of vectors and setting it equal to some, in this case an arbitrary constant.*0385

*So, once again, we are going to investigate the linear system, the linear systems are ubiquitous in linear algebra. Okay.*0394

*So, let us start with an example here.*0403

*Let us go to the next page. So, first example. Let us consider R3.*0407

*So, regular 3 space... (x,y,z), the space we live in.*0418

*We are going to let v1 = (1,2,1), we will let v2 = (1,0... -- oops, excuse me -- (1,0,2), and we will let our third vector be (1,1,0).*0424

*I wrote these out in the form of a list, in terms of their coordinates... I could write them vertically, I could write them horizontally without spaces, however you want.*0447

*Now, the question is these three vectors that I have chosen randomly... do they... so do v1, v2, v3 span R3?*0455

*Are these vectors enough to represent all of R3.*0469

*Is a linear combination, any linear combination, any of these three vectors... can I find any vector in R3 and use these three to represent it?*0475

*Well, let us see. Okay.*0484

*Well, first thing we do from our procedure, let us choose an arbitrary vector in R3... arbitrary v in R3.*0489

*So, let us choose, let us say that v is just equal to (a,b,c,d), and again variables do not matter. It is just some random vector.*0503

*Okay. Now, we want to see the second thing that we are going to check is are there constants c1, c2, c3 such that, well, c1v1 + c2v2 + c3v3, a linear combination equals v.*0510

*That is what we are doing. That is all we are doing. We are taking an arbitrary vector, we are setting it equal to the constant times the vectors that we have in our set, and we are going to solve this linear system.*0538

*So, this is the vector representation, the simplest representation... now we are going to break it down a bit.*0548

*This is, of course, equal to c1 × the first vector, which is (1,2,1)... I am going to go ahead and write them vertically.*0554

*That is just a personal choice of mine. You are welcome to write them any way you please. I like to do them vertically because it keeps the coefficient of the ultimate matrix that we are going to do... systematic.*0561

*These becomes the columns of the particular matrix.*0571

*Plus c2 × (1,0,2)... I hope that 2 is clear... + c3 × third vector, our third vector is (1,1,0), and we are setting it equal to (a,b,c).*0576

*We want to find out if this linear system actually has a solution... so, let us write this system... when we actually multiply these c1's out, we get this... I would like you to at least see one of them.*0596

*We get c1 + c2 + c3 = a.*0607

*We get 2c1 + 0c2, I will just leave that over here, + c3 = b, and I get c1 + 2c2, and there is nothing over here = c.*0615

*This is, of course, equivalent to the following augmented matrix... (1,1,1,a)... (1,1,1,a).*0632

*Again, I am taking the coefficients of c1, c2, c3... c1, c2, c3 is what I am looking for.*0640

*Can I find a solution to this? If I can, then yes... (2,0,1,b), (1,2,0,c), so this is the system that we are going to solve.*0646

*We are going to subject it to reduced row echelon form. I will not go ahead and show you the elimination steps... I of course did this with my computer, with my Maple software.*0663

*Fast and beautiful. As it turns out, this does have a solution. In other words, it has a non-trivial solution, and here is what it looks like.*0672

*You end up with c3 = (4a - b - 2c)/3 -- oops, these strange lines that show up on here.*0683

*Let us see... c2 = (a - b + c)/3... let me go ahead and put parentheses around so that you know the numerator is that way, because I am writing my fractions not two dimensional, but in a line.*0703

*c1 = (-2a + 2b + c)... and again, all divided by 3.*0729

*So, for any choice of a, b, or c, I can just put them in here and the constants that I get end up being a solution to this system.*0741

*So, because there is a solution to this system... this system... that means there is a solution to this because these are all equivalent.*0750

*That means that any vector that I choose, and again, I just chose it (a,b,c)... is... I can find constants for them and these are the explicit values of those constants no matter what vector I choose.*0760

*So, yes... let us try this again... so here the answer is yes... v1, v2, v3 do span R3.*0771

*Turn this and make this red... that vector, that vector and that vector are a perfectly good span for R3.*0794
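The explicit formulas for c1, c2, c3 can be spot-checked in Python; `coefficients` below is a hypothetical helper that just evaluates them with exact fractions and confirms they rebuild any chosen vector:

```python
# Spot-checking the explicit constants derived above: for any (a, b, c),
# c1 = (-2a + 2b + c)/3, c2 = (a - b + c)/3, c3 = (4a - b - 2c)/3
# should rebuild (a, b, c) from v1, v2, v3.  `coefficients` is an
# illustrative helper, not something from the lesson.
from fractions import Fraction

def coefficients(a, b, c):
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    c1 = (-2*a + 2*b + c) / 3
    c2 = (a - b + c) / 3
    c3 = (4*a - b - 2*c) / 3
    return c1, c2, c3

v1, v2, v3 = [1, 2, 1], [1, 0, 2], [1, 1, 0]

for target in ([3, -1, 4], [0, 0, 1], [7, 7, 7]):
    c1, c2, c3 = coefficients(*target)
    rebuilt = [c1*x + c2*y + c3*z for x, y, z in zip(v1, v2, v3)]
    assert rebuilt == target   # every arbitrary vector is reachable
```

Because this works for every (a, b, c), the three vectors really do span R3.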

*Now, you know, of course, that R3, or if you do not, I am telling you right now... that R3, the standard 3 vectors that we use as the set which spans the space is of course the i vector, the j vector in the y direction, and the k vector in the z direction.*0806

*Those are mutually perpendicular. Well, as it turns out, you can have any collection of vectors that may span a vector space.*0825

*There is no particular reason for choosing one over the other, so there are an infinite number of them, but in certain circumstances... it makes sense to choose one over the other.*0833

*In the case of the i,j,k, we choose it because they have the property that they are mutually orthogonal -- excuse me.*0845

*Which actually -- excuse me -- makes it easier to deal with certain things.*0855

*Okay. Let us consider another example. Let us consider... we will do this example in red... p2.*0863

*If you recall p2 is the vector space of all polynomials... all polynomials of degree 2 or less.*0877

*So, we will let our set s, this time we will actually do it in set notation... p1(t), p2(t), oh this p2 right here has nothing to do with the space p2.*0889

*This is a general symbol for p2, the space of polynomials of degree 2. This just happens to be number 2 in the list.*0903

*p1... let us define that one as t^{2} + 2t + 1, and we will say that p2 of t is t^{2} + 2.*0912

*Okay. We want to know does s... do these 2 polynomials, are they enough to span all of p2.*0928

*In other words, can I take two constants c1 and c2, and multiply them by this... by these 2... c1 × this one, c2 × this one.*0942

*Can I always find constants such that every single polynomial, every second degree polynomial, or first degree polynomial, remember it is degree 2 or less... can be represented by just these 2 vectors... well, let us find out.*0953

* So, again, the first thing that we do is we choose an arbitrary vector in p2... and an arbitrary vector in the space of polynomials looks as follows... at^{2} + bt + c, right? because it is degree 2 or less, right?*0968

*And now... we want to show the following... we want c... c1 × p1t + c2 × p2t = our arbitrary vector that we picked.*0985

*This one up here equals at^{2} + bt + c, so again, we are just setting up a basic equation... a linear combination of the vectors that we are given... does it equal our arbitrary vector?*1008

*Well, let us go ahead and expand this out based on what they are... so, we have c1 × p1(t) which is (t^{2} + 2t + 1) + c2 × (t^{2} + 2)... okay.*1020

*We want that to equal at^{2} + bt + c... let us multiply this out.*1045

*Now, when you multiply this out, this is going to be c1t^{2} + 2c1t + c1, and then c2t^{2} + 2c2.*1053

*I am going to skip that step... and just... imagine that I just multiplied it, basic distribution, you know, something from algebra 1.*1063

*Then I am going to combine terms in t^{2}, in t, and in t to the 0 power.*1068

*So, it is going to look like this when I actually expand it. It is just one line that I am skipping.*1075

*It is going to be (c1 + c2)t^{2} + 2c1t + (c1 + 2c2) = at^{2} + bt + c.*1080

*Well, we have an equality sign here. We have some -- change this to blue -- this is the coefficient of t^{2} here.*1103

*This is the coefficient of t^{2} in the equality, so that equals that.*1112

*That is our first equation... c1 + c2 = a.*1117

*Well, 2c1 is the coefficient of t, here b is the coefficient of t. They are equal on both sides, so this one becomes 2c1 = b.*1125

*Then, we do this one over here. We have c1 + 2c2 = c... c1 + 2c2 = c.*1139

*This is going to be equivalent to the following... (1,1,a)... I am just taking coefficients... (2,0,b), and (1,2,c).*1150

*Now, when I subject that to reduced row echelon form, this one I do want you to see...*1164

*So, we subject that to Gauss Jordan elimination for reduced row echelon... we get (1,0,2a-c).*1173

*(0,1,c-a), and we get (0,0,b-4a+2c). We get this as the reduced row echelon of the system that we just took care of.*1185

*Now, this is possible if and only if this thing right there... so you see this 0 right here, this... that means this thing right here... the b - 4a + 2c has to equal 0 for this thing to be consistent.*1201

*Well, b - 4a + 2c is not equal to 0 for an arbitrary choice of a, b, and c... it only happens to be 0 for certain special vectors, not for all of them.*1223

*So, there is no solution to this system for a general vector. So, the answer to this one is no.*1234

*p1 and p2, t, those two polynomials do not span p2, which is the vector space of polynomials of degree 2 or less.*1239

*So, it is not enough. I need some other vector, I do not know, maybe 2 or 3 of them.*1254
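If you like, the consistency condition b - 4a + 2c = 0 can be packaged as a one-line membership test; a small Python sketch, where `in_span` is an illustrative helper assuming the condition derived above:

```python
# Membership test for the span of p1 = t^2 + 2t + 1 and p2 = t^2 + 2,
# using the consistency condition b - 4a + 2c = 0 derived above.
# `in_span` is an illustrative helper, not from the lesson.

def in_span(a, b, c):
    """Is a*t^2 + b*t + c a linear combination of p1 and p2?"""
    return b - 4*a + 2*c == 0

# p1 itself must lie in the span: 2 - 4 + 2 = 0
assert in_span(1, 2, 1)
# p2 itself must lie in the span: 0 - 4 + 4 = 0
assert in_span(1, 0, 2)
# but an arbitrary polynomial like t^2 + t + 1 fails: 1 - 4 + 2 = -1
assert not in_span(1, 1, 1)
```

So some second-degree polynomials are reachable, but not all of them, which is exactly why the answer is no.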

*Let us try... let us go back to red here for example number 3.*1268

*We will just list these, you know them already... i and j span R2.*1278

*We know that i,j, and k, the three unit vectors in the (x,y,z) direction span R3, and so on, onto R4... R5.*1287

*So, e1, we do not give the letters anymore, after 4... we just call them e.*1298

*e1, e2, and so on all the way to eN... They span n-space, which is RN.*1305

*Now, you have probably already noticed this, but notice, I have 2 vectors that span R2, three vectors that span R3, N vectors needed to span RN, that is a general truth.*1318

*We will talk more about that in a minute... well, actually the next lesson.*1330

*Okay. Now we are going to get to a profoundly, profoundly important example.*1336

*This one, we want to do very, very carefully... therefore, so, let us consider the following homogeneous equation... ax = 0, such that a, and we are actually going to explicitly list this vector... this matrix, excuse me, (1,1,0,2), (-2,-2,1,-5), (1,1,-1,3), (4,4,-1,9). Okay.*1343

*So, we have this homogeneous system, matrix × some vector x is equal to the 0 vector. Here is our matrix a.*1386

*In other words, what are my x values that will actually make this true such that when I multiply this by some vector x, which I will put in blue...*1397

*Let us just say this is going to be x1, x2, x3, x4... what are the values of x that will actually make it true so that when I multiply these, I end up with 0.*1407

*It might be one vector, it might be an infinite number of vectors, it might be no vectors.*1417

*So, if you remember what we called the set of vectors that actually make this true, we called it the null space.*1423

*The solution space of the homogeneous system... in other words, the vectors that make this equation true. We called it the null space.*1430

*So, let me just write that down again... the null space is a very, very important space. Okay.*1437

*Now, one thing that we also know about the null space is the null space is a subspace... of... in this case, R4, remember?*1445

*So, if we happen to be dealing with a null space in R4, you have these 1, 2, 3, 4... 4 by 4, we are looking for a vector which has 4 entries in it, so it is from 4-space.*1465

*And... remember that we proved that this null space is actually a subspace of R4.*1476

*Not just a subset... it is a very special kind of subset... it is an actual subspace.*1480

*In other words, it is a vector space in its own right. It is as if I can ignore the rest of R4, just look at that, and I can treat it the same way I would the rest of that vector space, it has special properties... all of the properties of a vector space.*1486

*Okay. Now, can we find a set of vectors... here is our problem here... can we find a set of vectors that spans the null space?*1499

*Let me write that down... can we find a set of vectors... set which spans... singular because it is set... this null space.*1511

*So, again, we are not looking to span the entire R4. We are just looking for something that will span this particular null space. The null space based on this particular matrix.*1535

*Well, let us solve this system... Let us see what x's we can come up with. *1546

*So, when we solve the system, we create the augmented matrix. When I take this and I put 0's over on the final column in matrix a, and I end up subjecting that to reduced row echelon form, which I will not do... well, actually you know what, let me go ahead and write it all out. It is not a problem.*1555

*So, I have 1...1... yes, (1,1,0,2,0), (-2,-2,1,-5,0), (1,1,-1,3,0)... and (4,4,-1,9,0).*1581

*So, this is our augmented matrix. This is just this thing in matrix form, and then I am going to subject this to Gauss Jordan elimination to convert it into reduced row echelon, so let us write out the reduced row echelon form of this.*1613

*It is going to be (1,1,0,2,0), (0,0,1,-1,0), (0,0,0,0,0)... 0's everywhere.*1626

*Okay. So, this is our reduced row echelon, and remember... this represents x1, x2, x3, x4. This is the -- we are looking for a vector... a 4 vector.*1646

*So, let us take a look here. This one is fine. This one has a leading entry -- let me go to red -- so, this one is good, and this one has a leading entry.*1661

*This one does not have a leading entry. The x2. So... x2... x3, x4, yes.*1672

*So, that second does not have a leading entry and the fourth does not have a leading entry.*1681

*Remember, when we do something, when we convert it to reduced row echelon, the columns that do not have a leading entry actually are free parameters... I can call them s, t, x, y, they can be any number I want.*1686

*Then I solve for the other two. So, let us go ahead and set x2 = r, and we will set x4 = s, they can be any numbers, they are free parameters.*1697

*Then, what I get is x1 + x2 + 2x4, that is what this line here says... x1 + x2 +2x4 = 0.*1716

*Therefore, x... x2 is r, x4 is s, so x1 = -r - 2s.*1734

*So, I have x2, I have x4, I have x1 that I just calculated, and now I will do x3... x3 here... x3... this right here...x3 - x4 = 0.*1745

*So, x3 = x4, which is equal to s.*1768

*So, we have x2 is r, x4 is s, x3 is s, excuse me, and x1 is -r - 2s. Let me rewrite that a little bit differently.*1774

*I am going to write that as the following -- let me go back to blue here.*1785

*x1 = -r - 2s, x2 = r, and there is a reason why I am writing it this way, you will see in a minute... x3 = s -- I will put the s over there in that column.*1791

*x4 = s, well take a look at this x1, x2, x3, x4... the r's have coefficients -1, 1, 0, 0, and the s's have coefficients -2, 0, 1, 1.*1811

*I can rewrite this as the following in vector form. This is equivalent to (x1, x2, x3, x4) equal to, let me pull out the r from here, and I can just take these coefficients (-1,1,0,0).*1823

*And... plus, now I can pull out an s from here, and I can take these coefficients, (-2,0,1,1), if that is not clear, just stop and take a look at what it is that I have done.*1850

*I will just treat this as a column... treat this as a column... Okay.*1864

*So, notice what I have done. I have taken the solution space which is x1, x2, x3, x4, this is because I have solved the system... these are all the possible solutions.*1870

*There is an infinite number of them because r and s can be anything. I have written it in vector form this way, as a linear combination of this vector and that vector.*1884

*These are just arbitrary numbers, right? The r and the s are just arbitrary numbers, therefore I have expressed the solution set, which is this thing of the homogeneous system based on that matrix.*1893

*I have expressed it as the linear combination of this vector and this vector.*1906

*Therefore, those 2 vectors (-1,1,0,0) and (-2,0,1,1), they actually span the null space.*1912

*The null space has an infinite number of solutions, that is what our system tells us here.*1930

*Well, I know that I can describe all of those solutions by reducing it to two vectors, any linear combination of which will keep me in that null space.*1937

*It will give me all of the vectors, all of the vectors are represented by a linear combination of this vector and that vector.*1947

*They span the null space of the homogeneous system.*1953
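The two spanning vectors can also be produced mechanically. Here is a minimal sketch using SymPy's `nullspace`, assuming the reduced homogeneous system from the example (x1 + x2 + 2x4 = 0 and x3 - x4 = 0) as the coefficient matrix; that matrix is reconstructed from the solution above, not quoted from the lesson.

```python
from sympy import Matrix

# Reduced homogeneous system (reconstructed from the solution above):
#   x1 + x2 + 2*x4 = 0
#   x3 -        x4 = 0
A = Matrix([[1, 1, 0, 2],
            [0, 0, 1, -1]])

# nullspace() returns one basis vector per free variable (x2 and x4 here),
# which is exactly the r and s bookkeeping done by hand above.
basis = A.nullspace()
print(basis)
```

The two vectors returned are (-1, 1, 0, 0) and (-2, 0, 1, 1), matching the r and s columns pulled out by hand.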

*Again, profoundly important example. Go through this example again carefully to understand what it is that I did.*1957

*I had a homogeneous system, I solved that homogeneous system for the solution space. I represented that solution space... once I have that... I represent it as vectors, and these vectors that I was able to get -- because these are arbitrary constants -- well, that is the whole idea behind a span.*1965

*I was able to represent the entire solution space with only 2 vectors. That is extraordinary.*1983

*Okay. Thank you for joining us here at Educator.com for the discussion of span, we will see you next time.*1990

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*In the last lesson we talked about something called the span of a given set of vectors.*0004

*In other words, the span is the set of all vectors that can be represented as a linear combination of those particular vectors.*0010

*Today we are going to be talking about a related concept, again, a very, very profoundly important concept called linear independence, and its dual, linear dependence.*0019

*So, let us write out some definitions and get started right away and jump into the examples, because again, examples are the things that make it very, very clear... I think.*0032

*So, let us define linear dependence. Even though we speak about linear independence, the actual definition is written in terms of linear dependence.*0041

*Well, let us actually -- okay -- vectors v1, v2... vk are linearly dependent.*0058

*We will often just say dependent or independent without saying linear. We are talking about linear algebra so it goes without saying.*0090

*If there exist constants, c1, c2... all the way to ck... not all 0 -- that is important, because that is easy -- such that c1v1 + c2v2 + ... + ckvk = 0, or the 0 vector.*0096

*So, let us look at this again. So, if vectors v1 to vk, if I have a set of vectors and if I can somehow, if there exist constants -- no matter what they are, but not all of them 0.*0134

*If I can arrange them in such a way, some linear combination of them, if they add up to 0, we call that linearly dependent, okay?*0148

*A couple of vectors are linearly dependent, and let us talk about what that actually means. Okay.*0155

*Oh, by the way, if it is not the case, that is when they are linearly independent.*0165

*If you cannot find this, or if the only way to make this true is if all of the individual c's are 0, or there is no way to find it otherwise, that means that they are linearly independent.*0170

*Here is what the meaning is, just so you get an idea of what is going on.*0181

*If we solve this equation... actually, I am going to number this equation. I have not done this before but this is definitely 1 equation that we are going to want to number.*0189

*We will call it equation 1, because we are going to refer to it again and again. It is a very important equation.*0193

*If you solve this equation for any one of these vectors, so let us just choose one arbitrarily, let us choose that.*0200

*I am going to write that as c2v2 = well, we are going to move all of these over to the right... -c1v1 - c3v3 - so on, all the way to the end... -ckvk.*0209

*Then I am going to go ahead and divide by c2, so I get v2 = this whole thing.*0232

*Let me just... I hope you do not mind, I am going to call this whole thing the capital Z.*0241

*Z/c2, well, as you can see, if this is true, then you can always solve for one of these vectors and this vector is always going to be some linear combination of the other vectors.*0249

*That is what dependence means. Each of those vectors is dependent on the other vectors.*0265

*In other words, it can be represented as some combination of the others. That is why it is called dependent.*0271

*So, that is all that means. It is nothing strange, it makes perfect sense, and if this relationship does not exist, then it is independent.*0279

*In other words, 1 vector cannot be represented as a combination of the others. It is independent. That is what independence means.*0287

*Okay. Let us jump into examples, because that is what is important.*0297

*Actually, let me talk about it... let me list at least the procedure. It is analogous to what we did before.*0302

*The procedure for determining if a given list of vectors is linearly independent or linearly dependent... LD, LI... abbreviations.*0309

*The first thing we do, well, we form equation 1. Remember when we were dealing with span over on the right hand side?*0335

*We did not have 0, we had some arbitrary vector. Now, for linear independence or linear dependence, we set it equal to 0.*0343

*So, we form equation 1, which is a homogeneous system.*0352

*Then, 2, then we solve that system, and here are the results. *0365

*If you find out it only has the trivial solution, that means all of the c's the constants are 0... that implies that it is independent.*0373

*The other thing is, if there exists a non-trivial solution... just one, could be many, but if there is just one... so again, you remember this backwards E (∃) means "there exists".*0391

*So, if there exists a non-trivial solution, that implies that it is dependent.*0411

*Again, we are just solving linear systems. That is all we are doing, and the solutions to these linear systems give us all kinds of information about the underlying structure of this vector space.*0422

*Whether something spans it, whether something is linearly independent or dependent, and of course, all of these will make more sense as we delve deeper into the structure of the vector space. Okay.*0432

*So, let us start with our example... this is going to be not a continuation of what we did for the span, but I guess kind of a further discussion of it.*0442

*You remember in the last example of the last lesson, we found we had this homogeneous system and we found solutions for x, and we found 2 vectors that actually span the entire solution space, the null space.*0455

*Those vectors were as follows... (-1,1,0,0), (-2,0,1,1).*0472

*So, we know that these two vectors span the solution space to this particular equation, based on what A was.*0485

*I will not write down what A is. It is not necessary.*0492

*Now, the question is... we know that they span the null space... the question is are they linearly independent or dependent.*0497

*So, our question here is... are these two vectors LI or LD.*0506

*Our procedure says form equation 1. So, form equation 1.*0523

*That is just c1 × this vector, (-1,1,0,0) + c2 × (-2,0,1,1)... and we set it equal to the 0 vector which is just all 0's.*0530

*So, we do not need the vector mark anymore... (0,0,0,0). Okay.*0546

*Then, what we end up having is the following... this is equivalent to the following... (-1,-2,0), we are just taking the coefficients, that is all we are doing.*0552

*(1,0,0), (0,1,0), (0,1,0), and when we subject this to Gauss Jordan elimination, reduced row echelon, we end up with the following... c2 = 0, c1 = 0.*0566

*That means that all of the constants, we only have two constants in this case, so this is only the trivial solution.*0585

*Therefore, they are independent. There you go, that is it. It is that simple.*0596

*You set up the homogeneous system, you solve the homogeneous system, and you decide whether it is dependent or independent. Fantastic technique.*0605
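The whole test can be compressed into a rank check. A sketch, assuming SymPy, with the two null-space vectors from the example as columns: only the trivial solution exists exactly when the rank equals the number of vectors.

```python
from sympy import Matrix

# Put the candidate vectors in as the columns of a matrix; solving
# c1*v1 + c2*v2 = 0 is the homogeneous system  M c = 0.
v1 = [-1, 1, 0, 0]
v2 = [-2, 0, 1, 1]
M = Matrix.hstack(Matrix(v1), Matrix(v2))

# Only the trivial solution exists iff the null space is empty,
# i.e. the rank equals the number of vectors.
independent = (M.rank() == M.cols)
print(independent)  # True
```

An empty `M.nullspace()` says the same thing: no non-trivial solution, so the vectors are linearly independent.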

*Okay. Let us consider the vector space of polynomials again. Let us consider p2, again.*0617

*p2, the set of all polynomials of degree < or = 2.*0626

*Let us look at 3 vectors in there... we have p1(t), which is equal to t^2 + t + 2.*0631

*We have p2(t), which is equal to 2t^2 + t, and we have p3(t), which is equal to 3t^2 + 2t + 2.*0642

*So, they are just, you know, random vectors in this particular space, in other words random polynomials.*0656

*Well, we want to know if these three as vectors are the linearly dependent or independent.*0661

*Well, do what we do. We set up equation 1, which is the following. We take arbitrary constants... c1 × p1t + c2 × p2t, we will write everything out here... we want things to be as explicit as possible.*0671

*Plus c3 × p3t, and we set it equal to 0, that is our homogeneous system.*0688

*Now, we actually expand this by putting in what these p1, p2, p3 are. Okay.*0695

*We get c1 × (t^2 + t + 2) + c2 × (2t^2 + t) + c3 × (3t^2 + 2t + 2) = 0.*0702

*Now, let us actually... this one I am going to do explicitly... there is no particular reason why, I just decided that it would be nice to do this one explicitly.*0725

*So, I have c1t^2 + c1t + 2c1 + 2c2t^2 + c2t + 3c3t^2 + 2c3t + 2c3 = 0.*0734

*Algebra makes me crazy, just like it makes you crazy, because there are a whole bunch of things floating around to keep track of it all.*0762

*Just go slowly and very carefully and be systematic. That is... do not ever do anything in your head.*0769

*That is the real secret to math, do not do anything in your head. You will not be impressing anyone.*0774

*I collect the terms... the t^2 terms, so I have that one, that one, and that one, and I end up with... so let me write these out as t^2 × (c1 + 2c2 + 3c3).*0780

*Then, I will take the t terms... there is a t, there is a t, there is a t, and I will write that as a second line here, just to be clear what it is that we are doing.*0800

*t × (c1 + c2 + 2c3)... then I have plus the... well, the rest of the terms.*0811

*That one... and that one... and is there one that I am missing? No. It looks like it is okay.*0821

*So, it is going to be + 2c1 + 2c3 and all of this... sum... is equal to 0.*0829

*Again, that means that this is 0, this is 0, this is 0. That is what this system is.*0843

*So, we will write that, because everything is 0 on the right, so all of these have to be 0 in order to make this left side 0.*0850

*So, I get c1 + 2c2 + 3c3 = 0.*0858

*Note, we do not want these lines floating around. We want to be able to see everything here.*0867

*c1 + c2 + 2c3, is equal to 0.*0875

*2c1 + 2c3 = 0, this is of course equivalent to... I will just take the coefficients... (1,2,3,0), (1,1,2,0), (2,0,2,0), okay.*0885

*So, this is the system that we want to solve, and we are going to subject that to reduced row echelon.*0906

*So, I put a little arrow to let you know what is happening here and what you end up with is (1,0,1,0), (0,1,1,0), (0,0,0,0).*0913

*So, let us take a look at our reduced row echelon. We have this is fine, yes. That is a leading entry... that is fine, that is a leading entry.*0934

*There is no leading entry here. Remember when we solved reduced row echelon for a homogeneous system, this means we have infinite number of solutions, because this one can be any parameter.*0944

*If this is any parameter... well, I can choose any value for it, and then these other two will be determined by it.*0954

*Therefore, we have infinite solutions. In other words, there does exist a non-trivial solution.*0963

*So, there exists a non-trivial solution, which implies dependence... that means that those three polynomials that I had, one of them can be expressed as a linear combination of the other two. *0974

*So, they are not completely independent. At least one of them depends on the others.*0996
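The same conclusion can be checked by machine. A sketch, assuming SymPy, with each polynomial written as a coefficient vector in the basis (t^2, t, 1); the null-space vector even names the dependence explicitly.

```python
from sympy import Matrix, symbols, expand

t = symbols('t')
p1 = t**2 + t + 2
p2 = 2*t**2 + t
p3 = 3*t**2 + 2*t + 2

# Coefficient vectors as columns: rows are the t^2, t, and constant terms.
M = Matrix([[1, 2, 3],
            [1, 1, 2],
            [2, 0, 2]])

# rank 2 < 3 vectors => a non-trivial solution exists => dependent
print(M.rank())  # 2

# The null space names the dependence: (-1, -1, 1) means
# -p1 - p2 + p3 = 0, i.e. p3 = p1 + p2.
print(M.nullspace()[0].T)
```

So here the dependence is completely explicit: the third polynomial is just the sum of the first two.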

*So, we have dependence. Again, today we talked about linear independence and dependence.*1006

*The previous lesson we talked about the span, so, make sure you recall... we are still studying a linear system when we do that, but with a span we choose an arbitrary vector... that is our solution on the right hand side of the equation, that linear combination that we write.*1013

*For linear dependence and independence we are solving a homogeneous system. We just set everything equal to 0. Make sure to keep those straight.*1028

*Thank you for joining us here for a discussion of Linear Algebra at Educator.com. We will see you next time.*1036

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*Last couple of lessons, we talked about linear independence, and we talked about the span.*0004

*Today we are going to talk about something called basis and dimension, and we are going to use linear independence and span to define those things.*0013

*So, let us get started. Okay. Let us start with a definition here.*0021

*Again, math usually always starts with some definition. Okay.*0031

*Vectors v1, v2, and so on... all the way to vk are said to form a basis for vector space v.*0037

*If 1... v1, v2, all the way to vk, span v.*0065

*And 2... v1, v2... vk are independent... linearly independent, but I will just write independent.*0086

*So, again, in the case of a set of vectors that is both independent and happens to span a vector space or some subspace, span something that we are, that we happen to be interested in dealing with. *0103

*We actually give it a special name. It is called a basis. Now, you can have vectors that are independent, but do not necessarily span a space.*0119

*So, for example, if I had 3-space, I could take the i vector and the j vector.*0130

*Well, they certainly are independent, they are orthogonal, they have nothing to do with each other... and yet they do not span the entire space. They only span a part of the space... in other words, the xy plane.*0135

*Or, you can have vectors that span the entire space, but are not necessarily independent.*0145

*So, again, let us take 3-space. I can have i,j,k, and let us say I decided to take also the vector 5k, another vector in the direction of k, but 5 times its length.*0152

*Well, it is a different vector. So, there are four vectors but... and they span the space, you know, every single vector can be written as a linear combination of i,j,k, and 5k, but they are not linearly independent.*0165

*They are dependent, because 5k can be written as a, well, constant × k. It is just that they are parallel; they are the same thing.*0179

*So, again, you can have something that spans a space but is not independent, and you can have vectors that are independent but do not necessarily span the space.*0190

*What we want is something that does both. When it does both, it is called a basis for that space... profoundly important.*0199

*Okay. Let us see what we can do. Let us just throw out some basic examples.*0207

*Okay. So, the one we just discussed, e1, e2, and e3, they form a basis for R3.*0218

*Just like e1, e2, e3, e4, e5 would form a basis for R5.*0235

*Let us do a computational example here.*0243

*We are going to take a list of four vectors.*0249

*v1 = (1,0,1,0), v2 = (0,1,-1,2), these are vectors by the way, I better notate them as such.*0254

*v3 = (0,2,2,1) and v4 = (1,0,0,1), again I just wrote them in a list, you can write them as vectors, anything you want.*0273

*Let me see. We want to show that these four vectors are... so, show that these form a basis for R4.*0290

*Well, what do we need to show that they form a basis. Two things, we need to show that the vectors span R4, in this case, and we need to show that they are linearly independent.*0309

*So, let us get started, and see which one we want to do first.*0321

*Let us go ahead and do independence first. So, again, we form the following. *0326

*So, equation 1. Remember, c1v1 + c2v2, I am not going to write out everything, but it is good to write out the equation which is the definition of dependence and independence... c3v3 + c4v4 = the 0 vector.*0336

*When I put the vectors, v1, v2, v3, v4 in here, multiply the c's, get a linear system, convert that to a matrix, I get the following (1,0,0,1,0).*0354

*Again, that final 0 is there... (0,1,2,0,0), (1,-1,2,0,0), again the columns of the matrix are just the vectors v1 through v4... (0,2,1,1,0). Okay.*0368

*When I subject this to reduced row echelon, I get the following. c1 = 0, c2 = 0, c3 = 0, c4 = 0.*0392

*Again, you can confirm this with your mathematical software. This is only the trivial solution, which implies independence. Good.*0403

*So, part of it is set, now let us see about the span. Well, for the span, we need to pick an arbitrary vector in R4, since we are dealing with R4.*0415

*We can just call it -- I do not know -- (a,b,c,d), and, we need to find to set up the following equation.*0425

*I will not use c because we used them before, I will use k... k1v1, constant, k2v2 + k3v3 + k4v4... symbolism in mathematics just gets crazy. Very tedious sometimes.*0432

*And... I will just call it v arbitrary... just some vector v.*0452

*Although, again, we can set up the solution, we can go (1,0,1,0), (0,1,-1,2), (0,2,2,1), (1,0,0,1), and we can do (a,b,c,d).*0458

*You know what, let me go ahead and just write it out, so you see it.*0470

*We have (1,0,0,1), (0,1,2,0), (1,-1,2,0), (0,2,1,1), and of course our vector, this time it is not a (0,0,0,0), it is going to be (a,b,c,d).*0476

*Again, the nice thing about mathematical software is that it actually solves this symbolically. Not necessarily numerically.*0497

*So, it will give you a solution for k1, k2, k3, k4 in terms of a, b, c, and d. Well, there does exist a solution.*0503

*Okay. There does exist a solution. That means that any arbitrary vector can be represented by these 4 vectors.*0513

*So, let us see, so v1, v2, v3, and v4, which are just v1, v2, v3, v4, span R4. *0527

*We found something that spans R4, and we also found that they are linearly independent, so yes, these vectors are a good basis.*0543

*Are they the best basis? Maybe, maybe not, it depends on the problem at hand... but they are a basis and it is a good basis for R4.*0560
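For exactly n vectors in R^n, the span test and the independence test collapse into a single determinant. A sketch, assuming SymPy, with the four vectors of the example as the columns of a 4 × 4 matrix:

```python
from sympy import Matrix

# The four candidate vectors as columns of a 4x4 matrix.
M = Matrix([[1, 0, 0, 1],
            [0, 1, 2, 0],
            [1, -1, 2, 0],
            [0, 2, 1, 1]])

# For n vectors in R^n:
# det != 0  <=>  rank n  <=>  independent AND spanning  <=>  a basis.
print(M.det())   # 1 -- non-zero, so these four vectors form a basis of R^4
print(M.rank())  # 4
```

The determinant shortcut only applies when the number of vectors matches the dimension; otherwise the two rank computations in the lesson are still needed.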

*Okay. Let us list a theorem here... theorem... if s, the set of vectors, v1 so on and so forth onto vk, is a basis for v.*0575

*So, if the set is a basis for v, then every vector in v can be written in 1 and only 1 way, as a linear combination of the vectors in s.*0603

*That is not s, we should write the vectors in s. *0655

*So, in other words, if I know that s is a basis for the vector space, any vector in that vector space can only be represented 1 way.*0665

*That means the particular representation, the constants that are chosen is unique. Not multiple ways, it is unique.*0675
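Uniqueness is easy to see computationally: write the basis vectors as the columns of an invertible matrix B, and the coordinates of any vector w are the single solution of B·c = w. A sketch, assuming SymPy and reusing the R4 basis from the earlier example; the target vector (1, 2, 3, 4) is chosen arbitrarily for illustration.

```python
from sympy import Matrix

# Basis vectors (as columns) from the earlier R^4 example.
B = Matrix([[1, 0, 0, 1],
            [0, 1, 2, 0],
            [1, -1, 2, 0],
            [0, 2, 1, 1]])

# An arbitrary target vector, chosen just for illustration.
w = Matrix([1, 2, 3, 4])

# Since B is invertible, B*c = w has exactly one solution c:
# the coordinates of w in this basis are unique.
c = B.solve(w)
print(B * c == w)  # True
```

An empty null space of B is another way of saying the same thing: two different coordinate vectors for the same w would differ by a null-space vector, and there are none.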

*Another theorem... actually, let me write this one in blue because we are possibly going to do something with this one.*0684

*Let s be v1... vk be a set of non-zero vectors in v, and we will let w equal the span of s.*0702

*So, we have this set s, there is a span of it, we will call that w, because it may not span the entire vector space; that is why we are giving it a different name... but obviously, it is in v, so it is going to be some subset of it.*0736

*Then, some subset of s is a basis for w. Okay, let us stop and think about what this means.*0752

*I have a vector space v, I have some arbitrary collection of vectors that I have taken from v and I call that set s, just a list of vectors.*0771

*I know that these vectors span some part of v.*0779

*I call that w, if I need to give it a name, or I can just refer to it as the span of s.*0785

*Well, if I take some subset of this, maybe all of it, but so... either k vectors or less than k vectors, some subset of it, it actually forms a basis for the span.*0791

*That makes sense. Again, you have some set of vectors that spans an entire space, well, either all of the vectors together are independent, in which case that is your basis. *0805

*Or, they might be dependent, which means that you should be able to throw out a couple of them and reduce the number.*0820

*But to get something, some set of vectors from here, some collection that actually forms a basis for the span.*0825

*Let us see how this works in terms of... a real life example. Okay.*0834

*We are going to list a procedure for finding the subset of s, of any s that is a basis for the span of s.*0841

*Let me actually move forward. Let me write down what this is.*0854

*Procedure for finding a subset of s, that is a basis for this span of s.*0866

*Okay. First thing we are going to do. We want to form c1v1 + c2v2 + so on and so forth... ckvk = 0.*0890

*We want to set up the homogeneous system, okay?*0907

*Now, we want to solve the system by taking it to reduced row echelon form.*0911

*Now, here is the best part. The vectors corresponding to the leading entries form a basis for span s.*0930

*This is actually kind of extraordinary. I love this, and I do not know why, but it is amazing.*0963

*I have this collection of vectors that spans a particular space.*0969

*I set up the homogeneous system and I subject it to Gauss Jordan elimination, bring it down to reduced row echelon form, and as you know, not every column needs to have a leading entry.*0974

*Well, the columns that do have a leading entry, that means I throw out all of the others.*0986

*The original vectors that correspond to those columns that have leading entries, they actually form a basis for my span of s.*0992

*So, let us just do an example and see what happens. Let us take the following vectors.*0998

*Let me do this in red, actually... so v1 = (1,2,-2,1).*1007

*v2 = (-3,0,-4,3)... v3, and of course these are vectors, so let me notate them as such... v3 = (2,1,1,-1).*1029

*v4 = (-3,3,-9,6).*1054

*v5 = (9,3,7,-6).*1065

*So, we have 5 vectors... we want to find a subset of these vectors, it might be all 5, it might be 2, it might be 3, it might be 4... that form a basis for the span of s.*1073

*Okay? Okay. So, we form for step... we do this thing right here.*1091

*So, we set up this equation and we put these vectors in for this equation, and we end up with the following system.*1105

*Columns... these vectors just going down... (1,2,-2,1)... or you can do them across... either way.*1115

*(1,-3,2,-3,9), 1, 2, 3, 4, 5 because we have 5 vectors... 5 columns, and of course the augmented is going to be 0.*1126

*(2,0,1,3,3,0), (-2,-4,1,-9,7,0), (1,3,-1,6,-6,0)... good.*1140

*We are going to subject that to reduced row echelon form.*1161

*When we do that, let me just put that there and write that there. Let me see...*1165

*Let me move on to the next page, that is not a problem.*1172

*So, we have subjected that matrix to reduced row echelon and we end up with the following... the first two columns are (1,0,0,0) and (0,1,0,0), the remaining columns are (1/2,-1/2,0,0), (3/2,3/2,0,0), and (3/2,-5/2,0,0), and 0's everywhere else.*1176

*So our reduced row echelon looks like this.*1206

*Well, leading entry, leading entry, no leading entries anywhere else.*1210

*So, vector number 1, vector number 2, v1 and v2 form a basis.*1217

*So, it is not these, it is not (1,0,0,0), (0,1,0,0).*1231

*This is the reduced row echelon from the matrix, the columns of which are the vectors that we are talking about.*1236

*So, those vectors, the actual columns from the original matrix, those 2 vectors, so we started off with 5 vectors, and we found two of them that actually span the entire space.*1242

*We threw out the other three, they are not that important. We can describe the entire span with just these 2 vectors.*1254

*Form a basis for the span of s.*1260

*Again, this is really, really extraordinary.*1266
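The pivot-column procedure is exactly what SymPy's `rref` reports. A sketch with the five vectors from the example as columns; `rref` returns the reduced matrix together with the indices of the pivot columns.

```python
from sympy import Matrix

# The five vectors from the example, as columns.
vecs = [[1, 2, -2, 1], [-3, 0, -4, 3], [2, 1, 1, -1],
        [-3, 3, -9, 6], [9, 3, 7, -6]]
M = Matrix.hstack(*[Matrix(v) for v in vecs])

# rref() returns (reduced matrix, tuple of pivot column indices).
R, pivots = M.rref()
print(pivots)  # (0, 1) -- so v1 and v2 form a basis for span(s)
```

Remember the caution from the lesson: the basis consists of the original columns v1 and v2, not the (1,0,0,0) and (0,1,0,0) columns of the reduced matrix.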

*Okay. Let us... another theorem... if s, v1... so on and so forth all the way to vk, and t, which is, let us say, w1 all the way to wn... okay?*1270

*wN, so if s is the set of vectors v1 to vk, k could be 13 so we might have 13 vectors in this one... and t is equal to w1 all the way to wN.*1308

*So, k and n do not have to necessarily be the same, but here is what the theorem says.*1320

*If these 2 sets are bases for v, then k = n.*1328

*In other words, if I have a given vector space, and if I have the bases for them, the bases have the same number of vectors.*1340

*So, the basis set has the same number of vectors. In other words, I cannot have a vector space that has one basis that is 3 vectors and another that is 5 vectors.*1349

*That is not what a basis is. A basis spans the space, and it is linearly independent.*1358

*Therefore, if I have 2 bases, they have to have the same number of elements in them.*1364

*It makes sense. Okay. Now, because of this, once again, every basis of a given vector space has the same number of vectors in it.*1368

*There are an infinite number of bases for a vector space... but of that infinite number, they all have the same number of vectors in them.*1382

*Therefore, we define... again, very, very important definition... the dimension of a non-zero vector space dimension -- fancy word -- is the number of vectors in a basis for the vector space.*1393

*So, read this again, the dimension of a non-zero vector, of a non-zero vector space is the number of vectors in the basis for that space.*1440

*So, dimension is kind of a fancy word that a lot of people throw around.*1452

*So, we talk about 3-dimensional space, the space that we live in. Well, 3-dimensional space, there are a couple of ways to think about it.*1458

*Yes, it means 3-dimensional space because it will require 3 numbers to actually describe a point, (x,y,z)... three coordinates.*1467

*However, the actual mathematical definition is 3 space is 3 dimensional because any basis for 3-space has to be made up of 3 vectors... 5 dimensional space.*1475

*Any basis for 5-dimensional space has to be made up of 5 vectors. I cannot have 4 vectors describing it for 5 dimensional space. It is not going to happen.*1489

*Can I have 6 vectors that actually describe it? Yes, I can have 6 vectors that span the 5-dimensional space, but that set is not linearly independent.*1501

*So, because... and that is the whole idea. The dimension of a space is the number of vectors that form a basis, and a basis spans the space and is linearly independent.*1513

*Okay. Let us see here. p2, which we have used a lot.*1526

*That is the vector space of all polynomials of degree < or = 2.*1534

*The dimension is 3, and here is why. The basis, we will list a basis, and that should tell you.*1548

*This is the best part, if you just want to list a basis you can just count the number of vectors... that is how many dimensions that space is.*1555

*t^2, t, and 1. Any linear combination of t^2, t, and 1 will give you every single possible polynomial of degree < or = 2.*1562

*For example, 3t^2 + 6t - 10. Well, it is 3 × t^2, 6 × t, -10 × 1.*1580

*3t + 2... that is 3 × t plus 2 × 1... it is not of degree 2, it is degree 1, but that is fine... degree less than or equal to 2.*1592

*So, this one has to be in there. So, p2 has a dimension 3.*1602

*pn has dimension n + 1.*1615

*Okay. Now, here is where it gets really, really interesting and sort of just a sideline discussion, something sort of think about a little bit of mathematical culture. A little bit of abstraction... *1621

*Notice that this p2 has a dimension of 3. Well, our 3-space, our normal 3-space that we live in also has a dimension of 3.*1636

*As it turns out, for all vector spaces of a given dimension, the only difference between the vector spaces is the identity of their elements.*1649

*In one vector space, R3, we are talking about points, or vectors, arrows.*1658

*In this vector space, where this is a basis, it is a dimension of 3... the elements are actual polynomials.*1664

*As it turns out, the identity of the elements is the only thing that is different about those 2 spaces. These two spaces have the exact same algebraic properties.*1675

*They behave exactly the same way. In fact, I do not even need to think about it... if I can find myself 15 other vector spaces that have a dimension of 3, the identity of those elements completely does not matter.*1683

*In fact, it does not even matter, I can treat it completely symbolically. I can call them whatever I want. I can label them whatever I want.*1697

*What is important is the underlying algebraic property, and it is the same for every single vector space of a given dimension. That is what is extraordinary, that is what gives mathematics its power.*1704

*Once I understand, let us say R3, and we understand R3 pretty well... we live in this space, we enjoy the world around us, look at what we have done with the world around us.*1715

*If I find any other vector space with strange objects in it, if it has a dimension of 3, I know everything about it. I know absolutely everything about it because it behaves the same way that R3 does.*1724

*Again, that is really, really extraordinary... the last thing that I want to leave you with in this particular lesson is that what we have dealt with are finite dimensional vector spaces.*1737

*In other words, we know that R3 has an infinite number of vectors in them, but the basis, the dimension is finite... 3.*1748

*That means I only need 3 vectors in order to describe the entire space.*1757

*Now, that is not always true. There are infinite dimensional vector spaces that require an infinite number of vectors to actually describe them.*1761

*Those of you that go on into higher mathematics, or not even that, those of you who are engineering and physics majors, at some point you will be discussing something called the Fourier series, which is an infinite series of trigonometric polynomials.*1772

*sin(x), cos(x), sin(2x), cos(2x), sin(3x), cos(3x), and so on. That is an infinite dimensional vector space.*1785

*Okay. So, I will list... let us see... 2 infinite dimensional vector spaces; we, of course, are not going to deal with them.*1796

*Linear algebra, mostly we stick with finite dimensional vector spaces, but I do want you to be aware of them.*1804

*p, the space of all polynomials... all polynomials, that is an infinite dimensional vector space.*1811

*It requires... it has an infinite number of vectors in its basis. Not like p2 or R3, that only have 3.*1817

*The other one is the space of continuous functions on the real line.*1824

*So, the space of continuous functions, you will see it represented like this... from negative infinity to infinity... that is defined on the entire real line.*1838

*That space has an infinite number of dimensions. I need an infinite number of functions in order to be able to describe all of the other functions, if I need to do so.*1845

*I just wanted to throw that out there. Really, what I wanted you to take away from this is that for a vector space of any given dimension, the identity of the elements is completely irrelevant.*1859

*The underlying behavior is what we are concerned with, and the underlying behavior is exactly the same.*1870

*Thank you for joining us here at Educator.com, linear algebra, we will see you next time.*1876

*Welcome back to educator.com and welcome back to linear algebra, today we are going to talk about matrices.*0000

*Matrices are the work horses of linear algebra; essentially everything that we do in linear algebra of a computational nature uses matrices.*0007

*Even the theory that we are not necessarily discussing computationally is going to somehow use matrices.*0016

*You have dealt with matrices before; you have seen them a little bit in algebra-2 if I am not mistaken.*0022

*You have added, you have subtracted, you have maybe multiplied matrices.*0027

*Today we are going to talk about them generally talk about some of their properties, we are going to go over addition, we are going to go over scalar multiplication, things like that, the transpose of a matrix.*0031

*Having said that, let's just jump right in and familiarize ourselves with what these things are and how they operate.*0041

*The definition: a matrix is just a rectangular array of MN entries, arranged in M rows and N columns; so, for example, if I had a three row and two column matrix, the number of entries in that matrix is 3 × 2 = 6, because they are arranged in a rectangular fashion.*0050

*That's all this MN means.*0067

*Let's, most matrices will be designated by a capital letter and it will look something like this, it will be symbolized most generally A11, A12... A1N.*0069

*Notice this is very similar to the arrangement that we had for the linear systems, and of course there is a way, in a subsequent lesson, to represent the linear system by a matrix, and we will see what that is.*0088

*A21, excuse me, A22... A2N and will go down, will go down, this will be AM1, AM2... AMN.*0103

*The top-left entry is A11, bottom-right entry is AMN, this is an M by N matrix.*0122

*This M is the rows, sorry, rows always come first, this is the row and N is a column, so M rows, N columns...*0133

*...Which is why this first subscript here is an M and this second subscript here is an N, okay.*0149

*Basic examples of something like 1, 5, 6, 7, oh, a little thing about notation and matrices.*0163

*You are going to see matrices represent a couple of ways, you are going to see it with these little square brackets, you are going to see it the way that I just did it, which is just 1,5,6,7 with parenthesis like that.*0175

*And sometimes in this particular course, probably not in your book, but in this particular course, often time when I write a matrix, I'll arrange it in a rectangular fashion, *0190

*And it will be clear that it's a matrix, because we will be discussing and talking about it as a matrix, but I often will not put the little parenthesis around it.*0199

*Don't let that throw you; there is no law that says notation has to be this way or that way, these are just conventions.*0206

*As long as we know what we are talking about, the notation for that is actually kind of irrelevant, okay.*0212

*This is a 2 by 2 matrix, there are two rows, two columns, you might have something like 3, 4, 7, 0, 6, 8.*0218

*This is going to be three rows, two columns, so this is a 3 by 2 matrix.*0233

*You might have something like this, which is a 1 by 1 matrix.*0240

*1 by 1 matrix is just a number, which is actually an interesting notion, because those of you go on to...*0244

*We go on to study some higher mathematics, perhaps even complex analysis.*0253

*As it turns out, numbers and matrices share many properties; we are actually going to be talking about a fair number of those properties.*0258

*The idea of thinking of a number as a 1 by 1 matrix, or the idea of thinking of a square matrix as some kind of a generalized number.*0264

*It's actually a good way to think about it, so...*0274

*...Not really something that we are going to deal with, but it's something to think about, you know, maybe in the back of your mind if you are wondering.*0278

*Well you know they seem to behave kind of similarly, well there is a reason they behave similarly, because numbers and matrices, their underlying structure which we are going to examine later on is actually the same.*0285

*Okay, so we speak about the Ith row and the Jth column, so let me do this in blue.*0295

*We talk about the Ith row, we talk about the Jth column, so remember the I, J, this was the notation.*0304

*The first refers to the row, the second refers to the column, so if you have something like A(5,7), we are talking about the entry that's in the fifth row and the seventh column; go down 5, go over 7, and that's your entry.*0316

*Okay, the third, well let's actually do this specific example here.*0332

*Let's say we have a matrix A, which is (1, 2, 3, 4, 7, 9, 10, 4, 6, 5, 9, 6, 0, 0, 1, 8), and they can be negative numbers too.*0339

*I just happen to have picked all positive numbers here, so we might talk about the third row, that's going to be this thing.*0357

*you have (6, 5, 9, 6), you might talk about the second column, the second column is going to be that (2, 9, 5, 0)*0363
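As an aside (not from the lecture): the row and column picking described above can be sketched in a few lines of Python, storing the matrix as a list of rows. Note that Python counts from 0, so the third row is index 2.

```python
# The 4 by 4 matrix A from the example, stored as a list of rows.
A = [[1, 2, 3, 4],
     [7, 9, 10, 4],
     [6, 5, 9, 6],
     [0, 0, 1, 8]]

third_row = A[2]                       # Python counts from 0, so row 3 is index 2
second_column = [row[1] for row in A]  # collect entry 2 of every row

print(third_row)      # [6, 5, 9, 6]
print(second_column)  # [2, 9, 5, 0]
```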

*A 1 by N matrix, or an M by 1 matrix.*0374

*Okay, so just single columns or single rows, you can arrange them any way you like, so 1 by N would be something like this, if I took, let's say the fourth row, I would have 0, oops, lines showing up.*0394

*We don't want that, now let's do it over here, so if I take (0, 0, 1, 8), this is 1 by N.*0401

*In this particular case 1 by 4, or if I take let’s say (4, 4, 6, 8)...*0416

*... This is an M by 1, in this case a 4 by 1; in general it's not really going to make much of a difference, because we are going to give them special names, they are called vectors.*0428

*In this particular case it's called a four vector, because there are four entries in it; you might have a seven vector, which has seven entries in it.*0438

*In general it really doesn't matter whether you write vectors as columns or rows, as long as there is a degree of consistency when you are doing your mathematical manipulations.*0446

*Sometimes it's better to write them as columns or rows, because it helps to understand what's going on, especially when we talk about matrix multiplication.*0455

*But in general both this and this are considered four vectors, so...*0463

*... Okay, let's see here, if M = N, if the number of rows equals the number of columns, we call it a square matrix....*0470

*... Call it a square matrix...*0495

*... Something like K = let's say (3, 4, 7, 10)...*0500

*... (11, 14, 8, 1, 0, 5, 6, 7, 7, 6, 5, 0), so this is four this way, four this way.*0510

*This is a 4 by 4 matrix, it is a square matrix; these entries, the ones that go from the A11, A22, A33, A44, the ones that have...*0527

*... AIJ where I = J, those entries are called entries on the main diagonal.*0543

*This is called, let me do this in red, this is the main diagonal from top-left to bottom-right, so the entries on the main diagonal of this square matrix are (3, 14, 6, and 0), and again notice I haven't put the parenthesis around them, simply because it's just my own personal notational taste.*0549

*You have a square matrix; you have entries along the main diagonal, well a diagonal matrix...*0567

*... Matrix...*0580

*... Is one where every entry...*0585

*... Off the main diagonal...*0592

*...Is 0, so something like, I have A, I have (3, 0, 0, 0, 4, 0, 0, 0, 7), so notice I have entries along the main diagonal, 3, 4 and 7, but every other entry is 0.*0602

*This is called a diagonal matrix.*0623

*A diagonal matrix is a square matrix where everything off the main diagonal is 0; only the main diagonal is represented, good.*0625
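A quick sketch (my own illustration, not from the lecture) of the diagonal-matrix condition in Python: every entry AIJ with I not equal to J must be 0.

```python
def is_diagonal(A):
    # True when every off-diagonal entry a_ij (i != j) is 0; A is assumed square.
    return all(a == 0
               for i, row in enumerate(A)
               for j, a in enumerate(row)
               if i != j)

print(is_diagonal([[3, 0, 0], [0, 4, 0], [0, 0, 7]]))  # True
print(is_diagonal([[3, 1, 0], [0, 4, 0], [0, 0, 7]]))  # False
```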

*Okay, so let's start talking about some operations with matrices, let me go back to blue here, the first thing we are going to talk about is matrix addition....*0634

*... addition, so let's start with a definition here, try to be as mathematically precise as possible.*0652

*If A = [AIJ], and this symbol, brackets around the single entry AIJ, represents the matrix of all the entries AIJ... and if B is equal to the matrix [BIJ]...*0661

*... Are both M by N, then A + B is the M by N matrix, C...*0687

*... C = [CIJ], where CIJ = AIJ + BIJ, okay.*0709

*That is...*0725

*...We get C by adding, by adding corresponding entries....*0733

*... Of A and B.*0750

*That's all that means; a big part of linear algebra, and a lot of the subsequent lessons, are going to start with definitions.*0753

*In mathematics the definitions are very important, they are the things that we start from, and often times there is a lot of formalism to these definitions.*0761

*We give the definitions for the sake of being mathematically precise, and of course we do our best to explain them subsequently; often times the definitions will look a lot more complicated than they really are, simply because we need to be as precise as possible.*0771

*And we need to express that precision symbolically, so that's all that is going on here.*0786

*All this definition is essentially saying is: if I have a 3 by 3 matrix, and I have another 3 by 3 matrix, and I want to add the matrices, all I do is add the corresponding entries.*0791

*First entry, first entry, second entry, second entry, then all the way down the line, and then I have at the end a 3 by 3 matrix.*0801

*Let's just do some examples, so I think it will make sense; A = (1, -2, 4, 2, -1, 3), so this is a 2 by 3 matrix, and let's say B is also a 2 by 3 matrix, (0, 2, -4, 1, 3, 1).*0809

*Notice in the definition both A and B are M by N, and our final matrix is also M by N.*0832

*They have to be the same in order to be able to add them, in other words if I have a 3 by 2 matrix and if I have a 2 by 3 matrix that I want to add it to, I can't do that.*0841

*The addition is not defined, because I need corresponding entries; I need a 3 by 2 matrix added to a 3 by 2 matrix, a 5 by 7 matrix added to a 5 by 7 matrix.*0852

*Addition actually needs to be defined, so they have to be the same size, both row and column, for addition to actually work; in this case both are 2 by 3, so A + B...*0865

*... I just add corresponding entries 1 + 0 is 1, -2 + 2 is 0, 4 -4 = 0, 2 + 1 is 3, -1 + 3 is 2, 3 + 1 is 4, and now I have my A + B matrix.*0878

*Just add corresponding entries, nice and easy, basic arithmetic.*0895
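The entry-by-entry addition rule is easy to sketch in Python (the helper name mat_add is mine, not from the lecture); it first checks that the two matrices have the same dimensions, exactly as the definition requires.

```python
def mat_add(A, B):
    # Addition is defined only when A and B are both M by N.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, -2, 4],
     [2, -1, 3]]
B = [[0, 2, -4],
     [1, 3, 1]]
print(mat_add(A, B))  # [[1, 0, 0], [3, 2, 4]]
```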

*Okay, now let's talk about something called scalar multiplication.*0902

*Many of you have heard the word scalar before, if you haven't, it's just a fancy word for number, real number specifically.*0912

*Okay, so let's, let A = the matrix [AIJ], again with that symbol.*0922

*Let A be M by N, and R a real number.*0935

*We have a matrix A and we have R which is a real number.*0947

*Then...*0953

*... The scalar multiple of A by R, which is symbolized RA, R times A is the, again M by N matrix....*0961

*... B = [BIJ], such that...*0980

*... The IJth entry of the B matrix equals R times the corresponding entry of A; that is, BIJ = R times AIJ.*0989

*In other words all we are doing is we are taking a matrix and if I multiply by the number 5, I multiply every entry in the matrix by 5.*0996

*Let's say if R = -2, and A is the matrix, (4, -3, 2, 5, 2, 0, 3, -6, 2).*1006

*Let's go ahead and put those, then RA is equal to -2 times each entry, -2 times 4, we get -8.*1026

*-2 times -3 is 6, -2 times 2 is -4, -2 times 5 is -10, -2 times 2 is -4, -2 times 0 is 0, -2 times 3 is -6.*1038

*-2 times -6 is 12, -2 times 2 is -4, and this is our final matrix.*1054
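The same example as a short Python sketch (scalar_mul is my own helper name, not from the lecture): multiply every entry by the scalar.

```python
def scalar_mul(r, A):
    # Multiply every entry of A by the scalar r.
    return [[r * a for a in row] for row in A]

A = [[4, -3, 2],
     [5,  2, 0],
     [3, -6, 2]]
print(scalar_mul(-2, A))  # [[-8, 6, -4], [-10, -4, 0], [-6, 12, -4]]
```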

*Now okay, now let's talk about something called the transpose of a matrix, okay....*1065

*... It's going to be a very important notion, it's going to come up a lot in linear algebra, so let's go ahead, transpose of a matrix.*1081

*Let's start with a definition: if A = [AIJ] is M by N.*1095

*Then the N by M, notice I switched those, the N by M matrix A with a little T on top, which stands for transpose, equals [AJI].*1109

*Well actually I mean....*1132

*... [ATIJ], where, I will write it down here.*1138

*ATIJ, the IJth entry of the transpose matrix, is equal to AJI.*1147

*Okay, so if A is an M by N matrix, then the N by M matrix A transpose is this thing whose IJth entry, ATIJ, equals AJI; the indices have been reversed.*1157

*This is called the transpose of A, in other words what we are doing here is we are just exchanging rows for columns, so the first row of A becomes the first column of A transpose.*1177

*The third row of A becomes the third column of A transpose, and that's the best way to think about it.*1189

*Pick a row and then write it as a column, then move to the next one, pick the next row, write it as the next column.*1195

*That's all you are doing, you are literally just switching, you are flipping the matrix, so let's do some examples.*1202

*If A = (4, -2, 3, 0, 5, 2), well, A transpose... again we are writing the rows now as columns, so I take (4, -2, 3) and I write it as a column, (4, -2, 3), and I take the next one, (0, 5, 2), and write it as a column, (0, 5, 2).*1211

*Now what was a 2 by 3 has become a 3 by 2.*1234

*Definition, M by N becomes N by M, that's all you are doing with the transpose is you are flipping it in some sense, so another example, let's say you have a square matrix, so (6, 2, -4) and (3, -1, 2), (0, 4, 3).*1241

*Well, the transpose is going to be (6, 2, -4) written as a column, (3, -1, 2), written as a column, and (0, 4, 3) written as a column.*1268

*All I have done here is literally flip it along the main diagonal, as if this main diagonal were a mirror; I have moved the 3 here, the 2 here.*1282

*See that 3 is now here, the 2 is here, 0 and up to there, the -4 moved down here, this 4 moved there, the 2 moved there.*1292

*That's all we were doing with the square matrix; again, all you are doing is taking the rows, writing them as columns, doing it one by one systematically, and you will always get the transpose that you need.*1301

*Okay, let's see, let's do one more for the transpose, so C = let's say (5, 4, -3, 2, 2, -3),*1313

*This is a 3 by 2 so I know that my C transpose is going to have to be a 2 by 3.*1332

*Take a row, write it as a column, (5, 4); take the next row, write it as a column, (-3, 2); next row, (2, -3), write it as a column, going from top to bottom, and you get your C transpose.*1339

*Again all you have done is flip this.*1356
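In Python, "write each row as a column" is exactly what zip does when you unpack the rows (a sketch of my own, not from the lecture):

```python
def transpose(A):
    # Row i of A becomes column i of the result.
    return [list(col) for col in zip(*A)]

C = [[5, 4],
     [-3, 2],
     [2, -3]]
print(transpose(C))  # [[5, -3, 2], [4, 2, -3]]  (a 3 by 2 becomes a 2 by 3)
```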

*If we write a 1 by, let's say 3, so let's say we have (3, 5, -1), this is technically a, it is a 1 by 3 matrix, so 1 by 3.*1362

*When we take its transpose, it is going to be a 3 by 1.*1375

*3 by 1, and it's going to be, well, (3, 5, -1), that same thing written as a column.*1379

*But again, these ones that are single rows or single columns, we generally call them vectors.*1386

*We will talk more about that specifically more formally in a subsequent lesson, okay so let's recap what we have done here.*1391

*We have a regular matrix, so we are talking about matrix here, regular matrix, let me use red.*1401

*It's just an M by N matrix of M rows and N columns; so let's say we have (1, 6, 7, 3, 2, 1), this is two rows and three columns, this is a 2 by 3.*1409

*A square matrix is N by N, meaning the number of rows equals the number of columns; an example might be (1, 6, , 2), two rows, two columns. Square matrices are very important and will play a major role throughout, particularly in the latter part of the course when we talk about eigenvalues and eigenvectors.*1425

*A diagonal matrix is a square matrix where all of the entries off the main diagonal are 0; so let's do a 3 by 3 diagonal matrix, let's take 1, let's take 2, let's take 3, and let's just put them along the main diagonal.*1449

*Let me erase these random lines, and we put 0's in everywhere else.*1463

*This is a diagonal matrix, entries along the main diagonal, 0's everywhere else, okay.*1471

*Okay, we did something called a matrix addition where the addition of corresponding entries in two or more M by N matrices of the same dimensions.*1479

*Okay, so they have to be the same dimension in order for matrix addition to be defined, if you are going to take a 5 by 7, you have to add it to a 5 by 7.*1488

*You can't add a 5 by 7 matrix to a 2 by 3 matrix, it's not defined because you have to add corresponding entries.*1497

*Scalar multiplication is the multiplication of each entry of an M by N matrix, oops, let me erase these random lines again, so the multiplication of each entry of an M by N matrix by a real constant.*1504

*If I have a matrix, could be 3 by 6 and I multiply by 5, I multiply every entry in that matrix by the 5.*1519

*The transpose is where you are exchanging the rows and columns of an M by N matrix, thus creating an N by M matrix; so if I start with a 6 by 3 and I transpose it, I get a 3 by 6.*1529

*If I start with a 4 by 4 and I transpose it, it's still a 4 by 4, but it's a different matrix, because the entries have switched places, okay.*1541

*Let's go ahead and finish off with one more example of everything that we have discussed, so let's start off with matrix A, let's go ahead and define that as (3, 1, 2) (2, 4, 1).*1555

*And let's go ahead and put parenthesis around that, let's, matrix B as (6, -5 and 4), (3, 0, -8), good, so these are both 2 by 3.*1570

*Certainly matrix addition is defined here, let's find two times A - 3 times B and take the transpose of the whole thing, so now we are going to put everything together.*1589

*We are going to put addition, subtraction, we are going to put multiplication by scalar, and we are going to put transpose all in one.*1603

*Okay, so let's find 2A; well, 2A = 2 times (3, 1, 2), (2, 4, 1), and that's going to equal (6, 2, 4), (4, 8, 2): 2 times 2 is 4, 2 times 4 is 8, 2 times 1 is 2, that's that one.*1610

*Let's find -3B, so -3B = -3 times B, which is (6, -5, 4, 3, 0, -8), and that ends up being.*1635

*When I do it I get (-18, 15, -12), actually I am going to squeeze it in here.*1652

*I want to make it a little more clear, so let me write it down here: I have (-18, 15, -12, -9, 0, 24), and I hope you are checking my arithmetic, because I make a ton of arithmetic errors.*1663

*This is -3B, so now we have 2A - 3B; notice this -3B already has the minus sign built in, we took care of it already.*1681

*This is the same as 2A + -3B, okay.*1694

*That means I add that matrix with that matrix, so when I add those two, I get (6,...*1701

*... Wait a second, did I get this right, let me double check: -18, this is 15, -12, -9, 0, 24, okay.*1718

*And it's -3B, yes okay, so now let's add 6 + -18 should be -12, 2 + 15 is 17, 4 - 12 is -8.*1733

*4 - 9 is -5, 8 + 0 is 8, and 2 + 24 is 26, so this is our 2A - 3B.*1757

*Now if I take 2A - 3B and I transpose it, I am going to write the rows as columns, (-12, 17, -8), and please check my arithmetic here.*1773

*(-5, 8, 26); this is 2 by 3, this is 3 by 2, everything checks out.*1795

*This is our final answer; so we have got scalar multiplication, matrix addition, and we took the transpose.*1805
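Putting it all together in a small Python sketch (again, my own check, not part of the lecture); with B's last entry equal to -8 as defined, the bottom-right entry of 2A - 3B works out to 26:

```python
A = [[3, 1, 2], [2, 4, 1]]
B = [[6, -5, 4], [3, 0, -8]]

# Entry-wise 2A - 3B, then swap rows for columns to get the transpose.
C = [[2 * a - 3 * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
T = [list(col) for col in zip(*C)]

print(C)  # [[-12, 17, -8], [-5, 8, 26]]
print(T)  # [[-12, -5], [17, 8], [-8, 26]]
```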

*We have the diagonal matrix, which has entries only on the main diagonal, top-left to bottom-right.*1814

*And we have square matrices, where the number of rows equals the number of columns okay.*1821

*We will go ahead and stop here for now and we will continue on with matrices next time.*1827

*Thank you for joining us here at educator.com, let's see you next time.*1831

* Welcome back to Educator.com and welcome back to linear algebra.*0000

*We have been investigating the structure of vector spaces and subspaces recently getting a little bit more deeper into it to understand what it is that is actually going on in the vector space.*0004

*Today we are going to continue to talk about that of course, and we are going to talk about homogeneous systems.*0015

*As you gleaned already, homogeneous systems are profoundly important, not only in linear algebra, but they end up being of central importance in the theory of differential equations.*0022

* In fact, at the end of this lesson, I am going to take a little bit of a digression to talk about the nature of solutions in the field of differential equations as related to linear algebra.*0033

*Linear algebra and differential equations, they are very, very closely tied together. So, it is very difficult to separate one from the other.*0043

*Let us go ahead and get started.*0050

*Okay, so, we know that the null space of... so what we have is this homogeneous system, for example, ax = 0, where a is some m by n matrix.*0054

*This is just the matrix form of the normal linear system that we are used to, in this case a homogeneous system because everything on the right hand side is 0.*0072

*x is, again, a vector; it is an n by 1 vector that is a solution to this particular equation.*0080

*Now, we know that given this, we know that the null space, or the space of all solutions to this equation.*0091

*In other words, all of the vectors that satisfy, all of the vectors x that satisfy this equation is a subspace of RN.*0102

*Well, very important question, a very important problem is finding the basis for this subspace.*0120

*If you recall what a basis is, a basis is a set of vectors that actually spans the entire space that we are dealing with and is also linearly independent.*0137

*So, you remember, you might have a series of vectors that spans a space, but it might not be linearly independent.*0148

*Or, you might have some vectors that are linearly independent, but they might not span the space.*0153

*A basis is something that satisfies both of those properties.*0158

*Again, it spans the space that we are talking about, and the vectors in it are linearly independent.*0162

*In this case, the space that we are talking about is the null space. The space of solutions to the homogeneous system ax = 0.*0167

*So, the procedure for finding a basis for the null space of ax = 0, where a is m by n; of course, I will not keep mentioning that.*0177

*The first thing we do, well, we solve ax = 0 by Gauss Jordan elimination to reduced row echelon form, as always.*0210

*Now, if there are no arbitrary constants, in other words, if every column has a leading entry, then the null space equals the set containing only the 0 vector.*0229

*What that means is that there is no basis, no null space. There is no null space, essentially -- well, there is, it is the 0 vector, but there is no basis for it.*0263

*In other words, there is no collection of vectors. Okay.*0272

*The dimension of the null space, which we called the nullity if you remember, equals 0.*0277

*Okay. Our second possibility is if arbitrary constants do exist after we reduce it to reduced row echelon form.*0293

*What that means is that if there are columns that do not have leading entries, those are... the x... the values corresponding to those columns, let us say it is the third and fifth column, so x3 and x5, I can give them any value I want. That is what the arbitrary constant means.*0302

*So, if arbitrary constants exist, then, write the solution x = c1x1 + c2x2 + ... + ckxk, however many of these vectors and constants there are.*0321

*Well, once you do that, the set s, which consists of this x1, this x2, all the way up to xk... x1, x2... xk... is a basis for the null space.*0358

*Let me write space over here. So, again, what we do when we want to find the basis of a null space of a homogeneous system.*0390

*We solve the homogeneous system with reduced row echelon form.*0402

*We check to see if there are no columns that do not have a leading entry, meaning if all of the columns have a leading entry, there are no arbitrary constants.*0407

*Our null space is the 0 vector. It has no basis, and the dimension of the null space, the nullity in other words equals 0.*0414

*Let me go ahead and put nullity here. Nullity is the dimension of the null space. In other words it is the number of vectors that span that space, it is 0.*0423

*If arbitrary constants do exist, meaning if there are columns that do not have a leading entry, then we can read off the solution for that homogeneous system.*0435

*We can write it this way, and the vectors that we get end up being our basis.*0442

*Let us do an example, and as always, it will make sense, hopefully.*0450

*Let us see, so our example... find, not find the... find a basis.*0458

*So, there is usually more than one basis. Find a basis. Basis is not necessarily unique.*0476

*Find a basis for, and the nullity of the null space for the following system.*0484

*(1,1,4,1,2), (0,1,2,1,1), this is a one, sorry about that, it looks like a 7, (0,0,0,1,2), (1,-1,0,0,2), and (2,1,6,0,1).*0506

*x1, x2, x3, x4, x5 = (0,0,0,0,0).*0538

*This is our homogeneous system. We are looking for this right here.*0552

*We are looking for all vectors, x1, x2, x... all vectors whose components x1, x2, x3, x4, x5... that is what we are looking for.*0558

*It is a solution space. So, we are going to try to find the solution, and when we do, we are going to try to write it as a linear combination of certain number of vectors.*0566

*Those certain number of vectors are going to be our basis for that solution space.*0574

*Okay. So, let us go ahead and just resubmit it to reduced row echelon form.*0578

*What we end up getting is the following... (1,0,2,1,0), (0,1,2,0 -- oops, I forgot a number here.*0586

*(1,0,2... let me go back... what I am doing here is... I am going to take the augmented matrix.*0608

*I am going to take this matrix and I am going to augment it with this one, so let me actually rewrite the whole thing.*0627

*It is (1,1,4,1,2,0), (0,1,2,1,1,0), (0,0,0,1,2,0), (1,-1,0,0,2,0), (2,1,6,0,1,0). Okay.*0632

*So, this is the matrix that we want to submit to reduced row echelon form.*0655

*I apologize, I ended up doing just the matrix a.*0659

*When we submit it to reduced row echelon form using our software, we get the following.*0663

*(1,0,2,0,1,0), (0,1,2,0,-1,0), (0,0,0,1,2,0), (0,0,0,0,0,0), and of course the last row is also (0,0,0,0,0,0)... not of course... there is no way of knowing ahead of time.*0669

*What we end up with is something like this. So, now let us check to see which columns actually do not have leading entries.*0695

*This first column does, this second column does, this third column does not... so, third.*0702

*The fourth column has a leading entry, it is right there... the fifth column does not... okay.*0711

*So the third and the fifth columns, they are arbitrary constants.*0719

*In other words, these of course correspond to x1, x2, x3, x4, and x5.*0723

*So, x3 and x5, I can let them be anything that I want.*0732

*I am going to give them arbitrary parameters. So, let me go ahead and write x3... I am going to call x3 R.*0739

*It just stands for any number.*0748

*And... x5 is going to be S, any parameter. Columns with non-leading entries.*0752

*Now, when I solve for, like for example if I take x1, it is going to end up being the following.*0759

*x1 = -2R - S, and here is why... that row... well, this is x1 + 2x3 + x5 = 0.*0767

*So, x1 = -2x3 - x5.*0785

*Well, -2x3 - x5 is -2R - S; that is where this x1 comes from.*0793

*I am just solving now for the x1, x2 and x4. That is where I am going to get the following equations.*0800

*Then, I am going to rewrite these in such a way that I can turn them into a vector that I can read right off.*0809

*So, x2 here is equal to -2R + S, because here, this is 2R over here, this is -S, it becomes +S when I move it over.*0815

*x4, our last one, is equal to -2S, and here is the reason why... I am sorry, this is x4 not x5; x5 is S.*0828

*x4 + 2S = 0, therefore x4 = -2S.*0843

*So, this is our solution. Notice, an infinite number of solutions. Now we are going to rewrite this in order... x1, 2, 3, 4, 5, as opposed to how I wrote it here, which was just reading it off, to make it as simple as possible.*0851

*So, let me move forward. What I end up with is x1 = -2R - S, x2 = -2R + S, x3 = R, x4 = -2S, and x5 = S.*0866

*So, now x1, 2, 3, 4, 5... this is the vector that I wanted. This is the arrangement.*0896

*Well, take a look. R and S are my parameters. This is equivalent to my following.*0901

*I can write this as this vector, which is just x1 x2 in vector form... is equal to... I pull out an R and this becomes (-2, -2, 1, 0, 0).*0908

*That is what this column right here is... -2R, -2R, R, 0, 0.*0925

*Then this one right here, I pull out an S... +S × (-1,1,0,-2,1). There you go.*0935

*This particular homogeneous system has the following solution, as it is some arbitrary number × this vector + some arbitrary number × this vector.*0950

*Well, these are just constants. Any sort of number that I can imagine from the real numbers... any number at all, so this is a linear combination.*0960

*Therefore, that vector and that vector form a basis for our solution space.*0970

*Therefore, we have our basis, the set (-2,-2,1,0,0) and (-1,1,0,-2,1).*0977

*This is the basis of the null space. Well, the dimension is, well how many are there... there are 2.*0997

*So, the null space has a dimension 2. The nullity is 2.*1005
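You can check this basis directly: multiplying the original matrix by each basis vector must give the zero vector. A minimal Python sketch (matvec is my own helper name, not from the lecture):

```python
def matvec(M, v):
    # Multiply the matrix M (a list of rows) by the vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1,  1, 4, 1, 2],
     [0,  1, 2, 1, 1],
     [0,  0, 0, 1, 2],
     [1, -1, 0, 0, 2],
     [2,  1, 6, 0, 1]]

b1 = [-2, -2, 1, 0, 0]   # the basis vector multiplying R
b2 = [-1, 1, 0, -2, 1]   # the basis vector multiplying S

print(matvec(A, b1))  # [0, 0, 0, 0, 0]
print(matvec(A, b2))  # [0, 0, 0, 0, 0]
```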

*So, notice what it is that we actually had. We had the system of 1, 2, 3, 4, 5... 1, 2, 3, 4, 5, we had this 5 by 5 system.*1016

*So, R5 all the way around. Well, within that 5 dimensional space, 2 of those dimensions are occupied by solutions to the homogeneous system.*1026

*That is what is going on here. If you think about it, what you have is any time you have 2 vectors, you have essentially a plane, what we call a hyper-plane.*1040

*Because, you know, we are talking about, you know, a 5-dimensional space.*1051

*But, that is all that is going on here. So, we solve a homogeneous system, we reduce the solution to this, and because it is a linear combination of vectors, the 2 vectors that we get actually form a basis for our solution space.*1054

*We count how many vectors we get, and that is the dimension of our null space, which is a subspace of the whole space.*1068

*Let us do another example. Let us take the system... (1,0,2), (2,1,3), (3,1,2) × x1, x2, x3 = (0,0,0).*1081

*We set up the augmented matrix for the homogeneous system, which is (1,0,2,0), (2,1,3,0), (3,1,2,0); just to let you know, that last column is the augment.*1110

*We subject it to reduced row echelon form, and we get the following... We get (1,0,0,0), (0,1,0,0), (0,0,1,0).*1130

*There are no columns here that do not have leading entries. Basically what this is saying is that x1 is 0, x2 is 0, x3 is 0. That is our solution.*1144

*We have x1 = 0, x2 = 0, x3 = 0; all of these are equivalent to the vector x = the 0 vector.*1155

*When we have the 0 vector as the solution space, only the trivial solution, we have no basis for this system.*1168

*The nullity is 0. The dimension of the solution space is 0. The only solution to this is the 0 vector.*1176

*Okay. Now, let us go ahead and talk... take a little bit of a digression and talk about the relation between the homogeneous and non-homogeneous systems.*1186

*So, we have been dealing with homogeneous systems, but as we know there is also the associated non-homogeneous system, ax = b, for some vector b.*1197

*So, that is the non-homogeneous and this is the homogeneous version of it.*1206

*0 on the right, or some b on the right. There is a relationship that exists between the two. This relationship becomes profoundly important not only for linear algebra, but especially for differential equations.*1213

*Because, often times, we will have a particular solution to a differential equation, but we want to know some other solution.*1224

*But, maybe we cannot actually solve the equation. Believe it or not, the situation comes up all the time.*1232

*As it turns out, we do not have to solve the equation. We can solve a simpler equation. The homogeneous version, which is actually quite easy to solve for differential equations, and here is the relationship.*1237

*Okay. If some vector xb is a solution, some solution that I just happen to have, to the system ax = b, and x0 is a solution to the associated homogeneous system, okay?*1248

*Then, if I add these two, xb and x0, the sum of those two is also a solution.*1285

*Is also a solution to ax = b. So, if I happen to have the non-homogeneous system and if I happen to know some solution for it, and if I happen to know some solution to the homogeneous system, which is usually pretty easy to find, then I can add those 2 solutions.*1306

*That is a third solution, or that is a second solution to ax = b.*1322

*That is actually kind of extraordinary. There is no reason for it to actually be that way, and yet there it is.*1328

*What is more, every solution to the non-homogeneous system ax = b, every single solution, can be written as a particular solution, which we called xb, plus something from the null space.*1337

*Symbolically, x = xb + x0; what this says is that when I have the system ax = b, and I happen to know a solution to it, and I also happen to know the solutions to the homogeneous system, all the solutions.*1389

*In other words, if I happen to know the null space, every single solution to the non-homogeneous system consists of some particular solution to the non-homogeneous system plus something from the null space.*1406

*So, if I have, I mean obviously I am going to have maybe an infinite number of solutions to the null space, all I have to do is take any one of those solutions from the null space, add it to the particular solution that I have for the non-homogeneous, and I have my collection of every single solution to the non-homogeneous system.*1422

*That is amazing. What makes this most amazing is that usually when we are dealing with a non-homogeneous system, more often than not, you can usually just guess a solution.*1441

*We can check it, and if it works, that is a good particular solution.*1450

*Then, what we do is instead of solving that equation, we actually end up solving the homogeneous system, which is pretty easy to solve.*1453

*Then we just take any solution we want from the homogeneous system, add it to the solution that we guessed, and we have any solution that we want for the particular problem at hand.*1459

*That is extraordinary. It is a technique that you as engineers and physicists will use all the time.*1470

*Very, very important for differential equations. Okay.*1477
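None of this code appears in the lecture, but the relationship just described can be checked concretely. A minimal sketch with sympy, where the matrix, the guessed particular solution, and the homogeneous solution are all made-up examples:

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [2, 4, -2]])      # rank-deficient, so Ax = 0 has nonzero solutions
b = Matrix([3, 6])

x_b = Matrix([3, 0, 0])       # a guessed particular solution: A*x_b == b
x_0 = Matrix([2, 0, 2])       # a homogeneous solution: A*x_0 == 0

assert A * x_b == b
assert A * x_0 == Matrix([0, 0])
# the sum is again a solution of the non-homogeneous system
assert A * (x_b + x_0) == b
```

Any homogeneous solution works in place of x_0, which is exactly why knowing the null space gives you the whole solution set.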

*We will go ahead and stop it there today. Thank you for joining us at educator.com, we will see you next time.*1480

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to do the first part of a two-part lesson and we are going to discuss something called the rank of a matrix.*0004

*This is a profoundly, profoundly important concept.*0013

*Not that anything else that we have discussed or that we are going to be discussing is not important, it is, but this is where linear algebra starts to take on a very -- I do not know what the word for it is.*0017

*I know for me personally, this point when I studied rank that linear algebra became mysterious and at the same time extremely beautiful.*0029

*This is where the real beauty of mathematics was for me. This was the turning point. This particular topic.*0040

*I would definitely, definitely urge you to think very carefully about what is going on in these two lessons, and of course, using your textbook, maybe going through some of the proofs, which are not altogether that difficult.*0045

*To make sure that you really, really wrap your mind around what is happening with this notion of a rank, and with the notion of the column space and the row space, which we are going to define and discuss in a minute.*0057

*So, do not worry, you do not have to know what they are yet.*0070

*It speaks to the strangeness that I can just take some random rectangular array of numbers, 3 by 6, 5 by 7, 10 by 14, and just throw any kind of numbers in a rectangular array.*0073

*Just by virtue of arranging them that way, there is a relationship that exists between the columns and the rows of that matrix that no one would ever believe should actually exist... and yet it does.*0087

*That is what makes this beautiful.*0099

*So, anyway, let us dive in, and hopefully you will feel the same way. Okay.*0103

*So, we are going to start off with a definition, and let us go ahead and define our matrix first that we are going to be talking about.*0108

*So, we will let a equal the following matrix... a11... a12... a1N... and a21... a22... so on.*0116

*a31 all the way down to am1... and of course down here the last entry is going to be a... the subscript is mn.*0142

*Of course this is an m by n matrix. Okay.*0156

*Okay. So, this is an m by n matrix, and these entries of course are the entries of that matrix. Now, m by n. There are m rows.*0163

*So, 1, 2, 3, 4... m this way. n columns... that way.*0171

*Alright. If I take the rows of this matrix, I have m of them, right? So, let me write -- just write R1, which is equal to a11, a12, and so on, to a1n.*0179

*Let us say... let us just go ahead and write out row 2 also. a21, a22, all the way to a2N, so the first row, the second row, the third row.*0208

*If I treat them as just individual vectors, well, these rows... considered as vectors in RN.*0220

*The reason they are considered as vectors in RN is because they have n entries in them, right?*0236

*1, 2, 3, 4, 5... all the way to n. That is why they are vectors in RN.*0242

*A little bit better of an r here -- that one was a little bit odd.*0249

*Considered as vectors in RN... they span a subspace of RN called the row space.*0254

*So, let us stop and think about that again. If I take the rows of this m by n matrix, I have m rows.*0275

*Well, the rows have n entries in them, because they are n columns.*0282

*Well, if n entries, that means it is a vector in RN, right? Just like if a vector has 3 entries, it is a vector in R3.*0289

*Well, those rows, let us say if I have 6 of them, they actually span a subspace of RN.*0297

*That subspace, we call the row space... so, let us differentiate the rows are the individual vectors from the row space, which is the space that the rows actually span.*0307

*I can define something called the column space analogously.*0324

*So, the columns... I will call them c1, c2, so forth, and I will write them as column vectors, in fact... a11, a21, all the way down to am1.*0329

*And, c2, I will put the second one also... a21, a22, a23, all the way down to am2.*0351

*Wait, I think my a2... have my indices incorrect here.*0365

*This is going to be a12, a22, a32, there we go... am2... it is very confusing.*0374

*Okay. So, if I take columns, notice now the columns have m entries in them, because they have m rows. If I take them as columns, they have m entries.*0385

*Well, if they have m entries, then they are vectors in RM.*0396

*So, the columns, considered as vectors in RM span another subspace called exactly what you think, the column space... subspace of RM, called the column space.*0405

*Okay. So, once again, if I just take any random matrix, any random rectangular array... m by n... if I take the rows and treat them as vectors in RN, those vectors span a space, a sub-space of RN as it turns out.*0436

*So, they are not just a subset, they actually span a subspace.*0452

*We call that the row space. If I take the columns of that matrix, any random matrix, and if I... those are going to be vectors in m-space... RM.*0455

*They span a space, a subspace of RM and we call that the column space.*0469

*So, any matrix has 2 subspaces associated with it automatically, by virtue of just picking the numbers -- a row space and a column space.*0475

*We are going to investigate the structure of that row space and that column space.*0484

*Okay. Now, let me... quick theorem that we are going to use before our first example.*0489

*If the 2 matrices a and b are 2 m by n matrices, which are row equivalent, and again row equivalence just means I have converted one to the other.*0506

*Like, for example, when I convert to reduced row echelon, those 2 matrices that I get are row equivalent.*0527

*Then, the row spaces of a and b are equal.*0535

*In other words, excuse me, simply by converting it by a series of those operations that we do... converting a matrix to, let us say reduced row echelon.*0555

*The reduced row echelon matrix still has the same row space. I have not changed anything. I added one equation to the other, I have not changed anything at all. That is what this theorem is saying.*0565

*So, now we are actually going to use this theorem. So, let us do our first example. Let us do it in red, here... oops, there we go.*0575

*Okay. We will let s equal the set of vectors v1, v2, v3... and v4.*0595

*Here is what the vectors are: v1 is the vector (1,-2,0,3,-4).*0612

*Let me write that down here, actually.*0627

*v2... so I do not feel so crushed in... (3,2,8,1,4).*0631

*v3 is (2,3,7,2,3).*0642

*v4 is equal to (-1,2,0,4,-3).*0650

*So, these vectors are vectors in R5, right? because they have 1, 2, 3, 4, 5 elements in them.*0658

*I have four vectors in R5. Okay.*0664

*We want to find a basis for a span of s, so these four vectors, they span a space.*0671

*I do not know what space, but I know they span a space.*0684

*Well, we want to find a basis for that space... for that subspace, not just any old space. Okay.*0688

*Let us do the following. Let us write these vectors, 1, 2, 3, 4 of them... and they are vectors in R5, so let us write them as the rows of a matrix.*0699

*Let us just take these random vectors and put them in matrix form.*0714

*So, we will let a equal to... we want to take the first vector (1,-2,0,3,-4), and I just write it our as my first row. (1,-2,0,3,-4).*0720

*I take vector 2, and that is (3,2,8,1,4).*0734

*I take vector 3, which is (2,3,7,2,3).*0739

*And, I take (-1,2,0,4,-3).*0745

*So, all I have done is I have taken these vectors and I have arranged them as the rows of a matrix.*0750

*Well, if I subject this matrix to reduced row echelon form, the matrix... nothing augmented... just take the vectors, stick it in matrix form and convert it to reduced row echelon form. Here is what I get.*0757

*(1,0,2,0,1), (0,1,1,0,1), (0,0,0,1,-1), and (0,0,0,0,0).*0777

*This is my reduced row echelon form. Well, as it turns out, the non-zero rows of this reduced row echelon form, in other words row 1 row 2, and row 3, they are as vectors linearly independent.*0792

*So we have these vectors that span a space, which means that we need to find vectors that are linearly independent.*0811

*As it turns out, there is a particular theorem which I did not mention here, but it is something... a result that we are using.*0820

*When I take a matrix and I reduce it to reduced row echelon form, the non-zero rows are linearly independent as vectors.*0827

*Therefore, I can take this vector as my first one, I will just call it w1. w1 = (1,0,2,0,1).*0836

*w2 = (0,1,1,0,1).*0867

*w3 = (0,0,0,1,-1).*0874

*This is a basis for the span of s. I was given this set of vectors.*0884

*I arranged these vectors as rows of a matrix. I reduced that matrix to reduced row echelon form, and the non-zero rows I just read them straight off.*0894

*Those non-zero rows, they form a basis for the span of those vectors.*0904

*Well, how many vectors do we have? 1, 2, 3.*0911

*So, the dimension of that space, of that span, is 3... because we have 3 linearly independent vectors that span the same space.*0915

*Notice, these vectors, (1,0,2,0,1)... they are vectors in R5, they have 5 entries in them.*0931

*(0,1,1,0,1), (0,0,0,1,-1), these are not the same vectors as these, okay? They are not from the original set.*0937

*They are completely different, and yet they span the same space. That is what makes this kind of extraordinary... that you can take these original vectors, arrange them in a matrix, reduced row echelon, whatever non-zero rows are left over, they give you vectors that span the same space.*0946
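As a quick check (not part of the lecture itself), the example above can be reproduced with sympy's rref:

```python
from sympy import Matrix

# The four vectors from the example, arranged as the rows of a matrix
A = Matrix([[1, -2, 0, 3, -4],
            [3,  2, 8, 1,  4],
            [2,  3, 7, 2,  3],
            [-1, 2, 0, 4, -3]])

R, _ = A.rref()  # reduced row echelon form
# The nonzero rows of R form a basis for the span of the original vectors
basis = [list(R.row(i)) for i in range(R.rows) if any(R.row(i))]
```

The three nonzero rows come out exactly as in the lecture: (1,0,2,0,1), (0,1,1,0,1), and (0,0,0,1,-1).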

*Okay. So, now let us say we actually want to find a basis for the span of this space, but this time, we want vectors that are actually v1, v2, v3, v4.*0961

*We want vectors from this subset, whether it is v1 and v3, or v2, v3, v4, we do not know... I mean we do know, because we do know it is going to be at least 3 of them.*0985

*But, the idea is here we came up with 3 vectors that span this that are not from this original set, but there is actually a way to find a basis for this, consisting of vectors that are from this set.*0997

*Let us actually go ahead and do that. Before we do that, we are going to define this thing called the rank.*1011

*Well, we just calculated the dimension of that sub-space which was 3. Well, the dimension of the row space is called the row rank.*1020

*In a minute, we are going to talk about column rank, so I will just go ahead and write that too... the dimension of the column space is called column rank.*1038

*Okay. So, now let us do the same problem, but let us find a basis that actually consists of vectors from the original set.*1057

*This time, we take those vectors that we had... the v1, v2, v3, v4, and instead of writing them as rows, we are going to write them as columns and we are going to solve the associated homogeneous system for that.*1066

*Notice, for the last one, we just wrote them as rows and we converted that matrix to reduced row echelon form. Now we are going to write those as columns and we are going to augment it.*1081

*We are going to actually solve the associated system.*1092

*So, it turns out to be the following.*1096

*Yes... so, when I write v1, v2, v3, v4 as columns this time instead of rows, it looks like this.*1113

*(1,-2,0,3,-4), (3,2,8,1,4), (2,3,7,2,3), (-1,2,0,4,-3)... and the augmented system.*1122

*Now, we have done this before, actually. If you remember a couple of lessons back when we were looking for the basis of a particular subspace, this is what we did.*1150

*We solved the associated homogeneous system, and then when we converted it to reduced row echelon, which we will do in a minute, the columns that have leading entries, the corresponding vectors are a basis from the original set.*1156

*So, let us go ahead and do that.*1172

*We convert to reduced row echelon and we get the following... (1,0,0,0,0), (0,1,0,0,0), (0,0,1,0,0), (11/24,-49/24,7/3,0,0), (0,0,0,0,0).*1177

*Okay, here we go. Now we have this column, this column, and this column, the first, second and third columns have leading entries in them. *1206

*Because they have leading entries in them, the original vectors corresponding to them. *1216

*In other words, that vector, that vector and that vector... which are part of the original set, they form a basis for the span of that original set of four vectors, right?*1220

*We had four vectors, but we do not need four vectors, we need just three, and that is what this procedure does.*1232

*It allows us to find a basis consisting of vectors from the original set.*1238

*Whereas the thing we did before allowed us to find a basis that had nothing to do with the set. Notice, we have three of them. It is not a coincidence.*1242

*So, in this particular case, we could take v1, v2, v3, the original, that is a basis for a span of s.*1251

*In other words, the span of s which is the row space.*1267

*Originally, we wrote them as rows... row vectors.*1275

*Here, we wrote them as columns in order to solve it in a certain way so that we end up with vectors from the original set.*1280

*It is still a row space, or the span of s, if you will.*1287

*Okay... and again, the row rank, well the row rank is a number of linearly independent vectors in the basis, or the number of vectors in the basis. *1293

*They are linearly independent by definition. The basis is 3.*1307

*Okay. So, now, let us write this out.*1315

*So, given a set of vectors, what it is that we just did.*1321

*Given a set of vectors, s, v1, v2, v3, and so on... to vk.*1330

*One. If we want a basis for the span of s consisting -- that is fine -- consisting of vectors from s, from the original set, then, set up the vectors as columns.*1350

*Augment with the 0 vector, 0's, okay.*1402

*Convert to reduced row echelon, and then the vectors corresponding to columns with leading entries form the basis you are looking for.*1411

*The basis of span s. That was the second example that we did.*1442

*Two. Well, if we want a basis for span s consisting of vectors not necessarily in s...*1449

*Vectors completely different from that, but that still form a basis for that space.*1480

*Then, set up the vectors in s as rows of a matrix.*1491

*Convert to reduced row echelon. *1506

*Then, the non-zero rows, the actual non-zero rows of the reduced row echelon matrix form a basis which span s.*1513

*So, there you go. If you are given a set of vectors, and if you want to find a basis for the span of those vectors and if you want this particular basis to actually consist of vectors from the original set, then you set those vectors up as columns.*1544

*You solve the associated homogeneous system. You just add a zero, you augment the matrix, and then once you get the columns that have leading entries in them -- excuse me.*1561

*Let us say it is the first, third and fifth column, you reduce row echelon.*1572

*Then you go back to the original matrix and you pick the first, the third, and the fifth column. That is your basis.*1576

*If you want the basis for the span that has nothing to do with the original set of vectors, set the vectors up as rows... reduced row echelon form... no worries about augmentation.*1582

*The non-zero rows give you the basis for the same space, and you will always end up with the same number. *1591

*If it is 3 for one, it will be 3 for the other; if it is 2 for one, it will be 2 for the other.*1599
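The two procedures just summarized can be sketched in a few lines of sympy (a sketch, not from the lecture; note that the zero augmentation column is omitted in the code, since an all-zero column never produces a leading entry and so never changes the pivots):

```python
from sympy import Matrix

vectors = [[1, -2, 0, 3, -4],
           [3,  2, 8, 1,  4],
           [2,  3, 7, 2,  3],
           [-1, 2, 0, 4, -3]]

# Procedure 1: set the vectors up as columns; the leading-entry (pivot)
# columns of the rref point back to original vectors that form a basis
_, pivots = Matrix(vectors).T.rref()
basis_from_set = [vectors[j] for j in pivots]

# Procedure 2: set the vectors up as rows; the nonzero rows of the rref
# form a basis of new vectors for the same span
R, _ = Matrix(vectors).rref()
basis_arbitrary = [list(R.row(i)) for i in range(R.rows) if any(R.row(i))]
```

Both bases have the same number of vectors, as the lecture points out: here the pivots are the first three columns, so the first three original vectors form the basis from the set, and the row method gives three nonzero rows.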

*Okay. Let us do another example here. We will let -- this time, we will just throw out the matrix itself as opposed to given the vectors, we will just give you the matrix.*1604

*Often, it is done that way. You remember the definition at the beginning of this lesson was some row space and some column space. That is what we do. We just sort of often given a random matrix, and we have to treat the rows and columns as vectors.*1616

*So we will let a, let me go back to blue here, let a equal (1,2,-1), (1,9,-1), (-3,8,3), and (-2,3,2). That is a.*1631

*We want to find a basis for the row space of this matrix.*1655

*So for the row space of a, first part, we want this basis to consist of vectors which are not rows of a.*1667

*Part b, we want it to consist of vectors, the basis consisting of vectors which are rows of a.*1691

*Okay. So, we have a row space, so find the basis for the row space.*1711

*Well ,the row space is just these treated as vectors, and they are vectors in R3... 1, 2, 3... 1, 2, 3... 1, 2, 3... 1, 2, 3.*1715

*We have four vectors in R3. So, let us set them up.*1725

*Let us do part a first. Now, they want us to find a basis for this row space consisting of vectors, which are not actually these vectors at all.*1732

*These rows... So, for that, we need to set these vectors up as columns and solve the associated homogeneous system.*1744

*So, let us go ahead and do that.*1753

*Not -- no sorry -- we are consisting of vectors that are not rows of a. Okay.*1761

*Since we are doing it that are not rows of a, we are actually just going to set them up as rows. They are already set up as rows. My apologies.*1770

*It is hard to keep these straight sometimes. So, not rows of a. We set them up as this.*1779

*So, we are just going to go ahead and solve this matrix. Convert to reduced row echelon form.*1784

*So, let us rewrite it... (1,2,-1), so we write them as rows... yes. (1,2,-1).*1791

*(1,9,-1), (-3,8,3), and (-2,3,2).*1798

*Okay. We convert to reduced row echelon form, and we end up with the following: (1,0,-1), (0,1,0), (0,0,0), (0,0,0).*1808

*Okay. That, oops I wanted blue. So, that is a non-zero row, that is a non-zero row.*1825

*Therefore, I can take my basis as the set (1,0,-1)... that and that.*1839

*(1,0,-1)... and (0,1,0)... these two vectors form a basis for the row space of this matrix.*1859

*In other words, there are four vectors, 1, 2, 3, 4 vectors in R3. Three entries.*1875

*Well, they span a space, it is called the row space.*1882

*By simply leaving them as rows or arranging these as rows, row echelon, I get 2 non-zero vectors.*1887

*These two non-zero vectors, they span the same space.*1893

*So, dimension is, in other words, the row rank is equal to 2, because I have 2 vectors.*1897

*Okay. Now let us do part b. We want to find a basis for this row space that consists of vectors which are rows of a.*1909

*So, if they are rows of a, that means I have to solve a homogeneous system.*1915

*So, I will set up these rows as actual columns.*1922

*Okay. So, that is going to look like this... (1,2,-1), (1,9,-1), (-3,8,3), (-2,3,2)... and setting up a homogeneous system... (0,0,0).*1928

*Now, I need to convert this to reduced row echelon form, and when I do that, I get the following.*1951

*I get (1,0,0), (0,1,0), (-5,2,0), (-3,1,0), (0,0,0).*1958

*Go back to red... this column has a leading entry, this column has a leading entry... there they are.*1973

*Therefore, my basis consists of the original vectors corresponding to those two columns.*1980

*Now, my basis is the vector (1,2,-1) and the vector (1,9,-1).*1988

*This basis spans the same space as the other basis that I just found.*2001

*The rank is, well, 2.*2006

*It is pretty amazing, is it not?*2013

*Now, think about this for a second. I take some random vectors, and I arrange them in rows, and I convert to reduced row echelon form and I get a dimension of 2... I get a row rank of 2... two non-zero vectors.*2019

*Then, I write them as columns, and I solve the homogeneous system and I end up with, even though I end up with no non-zero columns, I mean I end up with two columns with leading entries.*2035

*I still end up with 2... that is kind of extraordinary. At least it is to me.*2050

*That you can just arrange these things as columns or rows and still end up essentially in the same place.*2054
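That "same place" can be verified directly; a quick sympy sketch (not part of the lecture) using the matrix from this example:

```python
from sympy import Matrix

A = Matrix([[1, 2, -1],
            [1, 9, -1],
            [-3, 8, 3],
            [-2, 3, 2]])

# Technique 1: rows as rows; count the nonzero rows of the rref
R, _ = A.rref()
rank_via_rows = sum(1 for i in range(R.rows) if any(R.row(i)))

# Technique 2: rows as columns; count the leading-entry (pivot) columns
_, pivots = A.T.rref()
rank_via_columns = len(pivots)
```

Both counts come out to 2, matching the row rank found in the lecture.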

*Now, this space, this row space, this space that is spanned by the original vectors that we had, this is a perfectly good basis for it, and the basis that we got otherwise is also a perfectly good basis for it.*2063

*Notice, both of them. They have 2 vectors in them. Okay.*2074

*So, row rank, row space. Profoundly important concepts.*2082

*We will continue this discussion in the next lesson. Thank you for joining us here at Educator.com. Take care.*2097

*Welcome back to Educator.com, and welcome back to linear algebra. *0000

*This lesson, we are going to continue the discussion of row rank and column rank that we started in the last lesson.*0004

*So, this is going to be the rank of a matrix, part 2.*0010

*Let us just go ahead and jump right in -- go ahead and switch over to a blue ink here.*0015

*Recall from a previous lesson the following matrix.*0022

*So, we have a matrix a... it is (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).*0032

*Okay. So, this is a 4 by 5.*0058

*A 4 by 5 matrix. Okay. Now, let us consider just the columns of this matrix.*0064

*I will call it the set c. So, we have the (1,3,2,-1), we have (-2,2,3,2), we have (0,8,7,0), we have (3,1,2,4), (-4,4,3,-3).*0070

*What we have is 1, 2, 3, 4, 5 vectors in R4... 5 vectors in R4. Okay.*0099

*Now, we said that the rows of a matrix form the span as we treat it as vectors, individual vectors... they span a space called the row space.*0112

*Well, similarly, the columns of the matrix, they span a space like we define in the previous lesson. They span a space, a subspace called... the column space.*0124

*Now, what we want to do is find a basis for the column space.*0137

*Consisting of arbitrary vectors... they do not necessarily have to be from this set.*0156

*We want to find a basis for the span of this set, but I do not necessarily want them to be from this set.*0162

*So, find a basis for the column space consisting of arbitrary vectors.*0168

*Now, if you remember from our last lesson, when we have a set of vectors, and we want to find a basis for the span of that set of vectors, but we do not care if the vectors in that basis come from the original set... we set up those vectors as rows.*0182

*Then, we do reduced row echelon form, and then the number of non-zero rows, those actually form a basis.*0202

*So, let us do that. Here, the column... the columns are this way.*0206

*We want to find a basis for the column space arbitrary vectors, so I am going to write the columns as rows, because that is the procedure.*0214

*So, I am going to write (1,3,2,1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).*0225

*I am going to convert that to reduced row echelon form, and I end up with (1,0,0,11/24), (0,1,0,-49/24), (0,0,1,7/3), and 0's everywhere else.*0248

*My 3 non-zero rows are these. They form a basis for the span of the columns, the original matrix.*0277

*So, I can choose the set (1,0,0,11/24)... that is one vector.*0292

*Notice, in the matrix I had written them as rows, but now I am just writing them as columns because I just tend to prefer writing them this way.*0310

*(0,1,0,-49/24), and (0,0,1,7/3), if I am not mistaken, that is correct.*0320

*Yes. This set forms a basis for the column space.*0332

*Column rank, three. There are three vectors in there.*0347

*Okay. Now, we want to find a basis for the original set of vectors consisting of vectors from that actual set... either all of them, or few of them.*0356

*So, when we do that, we set them up as columns, and we solve the associated homogeneous system.*0371

*So, here is what we are going to set up... (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3), and of course the associated system goes that way.*0377

*We convert to reduced row echelon form, we end up with the following.*0406

*We end up with (1,0,0,0), (0,1,0,0), (2,1,0,0), (0,0,1,0), (1,1,-1,0), and 0's in the final column.*0412

*Let us go to blue. Leading entry, leading entry, leading entry. In other words, the first, the second, and the fourth column.*0431

*Therefore, the first, second and fourth column form a basis. Therefore, I can take the vectors (1,3,2,-1), here.*0440

*I can take the vector (-2,2,3,2).*0460

*The fourth column, (3,1,2,4)... this set forms a basis.*0467

*A good basis for the column space.*0475

*Column rank equals 3, because there are 3 vectors that go into the basis. Again, the rank is the dimension of that space.*0485

*Okay. Now, let us recap what we did. Just now, and from the previous lesson. Here is what we did.*0501

*We had a, okay? I will write it one more time. I know it is getting a little tedious, but I suppose it is always good to see it... (1,3,2,-1), (-2,2,3,2), (0,8,7,0), (3,1,2,4), (-4,4,3,-3).*0516

*We had this original matrix a. Okay, it is a 4 by 5 matrix.*0542

*The column... the row space consists of 4 vectors in R5.*0549

*The column space consists of 5 vectors in R4. Okay. *0555

*Using two different techniques, we found a basis for the row space, alright? that was in the previous lesson.*0566

*For the row space, we dealt with the rows using two different techniques. One, we set them up as rows, and we got a basis consisting of arbitrary vectors.*0600

*Then, we set up these rows as columns, we solve the associated system and we got a basis consisting of vectors from the original set.*0611

*So, 2 different techniques, we ended up with a row rank equal to 3. *0619

*Okay. Now the columns, like we said, the columns form a set of 1,2,3,4,5 vectors in R4.*0628

*Well, again, this was for rows, now for columns.*0640

*The problem we just did, using 2 different techniques, we found a basis in the column space.*0650

*Column rank is equal to 3. Let me stop this for a second.*0679

*Random matrix... random matrix a... the rows consist of 4 vectors in R5. Using the two techniques, we found a basis, and the bases consist of 3 vectors apiece. Row rank was 3.*0686

*The columns, the columns are 5 vectors in R4. R5 and R4 have nothing to do with each other, they are completely different spaces.*0707

*I mean, their underlying structure might be the same, but they are completely different spaces. One has 4 elements, the vectors in the other space have 5 elements in them.*0715

*Using two different techniques, we found a basis for the column space. Column rank ends up being 3. Okay.*0723

*This 3, this is not a coincidence... not a coincidence.*0730

*As it turns out, for any random matrix, m by n, row rank equals the column rank.*0742

*So, let me put this in perspective for you. I took a rectangular array, random, in this case 4 by 5.*0760

*It could be 7 by 8... it could be 4 by 13... if I treat the rows as vectors, and if I treat the columns as vectors, and if I calculate... *0766

*If I find a basis for the span of the collection of vectors that make up the rows, and a basis for the span of the collection of vectors that make up the columns... those two collections have nothing to do with each other.*0777

*Yet, they end up with the same number of vectors.*0793

*Well, the column rank is the row rank, now we call it the rank.*0796

*So, because it is the case, because the column space and the row space end up having the same number of vectors in their bases, we just call it the rank.*0800

*So, we no longer refer to it as the row rank of a matrix, or the column rank of a matrix, we call it the rank of a matrix.*0813

*Now, I want you to stop and think about how extraordinary this is.*0820

*A collection, a rectangular array of numbers... let us say 3 by 17.*0823

*You have some vectors in R3, and you have vectors in R17. They have absolutely nothing to do with each other, and yet a basis for this space that these vectors span... they end up with the same number of vectors.*0830

*There is no reason in the world for believing that that should be the case. There is no reason in the world for believing that that should be the case, and yet there it is.*0846

*Simply by virtue of a rectangular arrangement of numbers. That is extraordinary beyond belief, and we have not even gotten to the best part yet.*0852

*Now, we are just going to call it the rank from now on. So, I do not necessarily have to find the row rank and the column rank of a matrix, I can just take my pick.*0861

*So, let us just stick with rows. I go with rows. You are welcome to go with columns if you want.*0870

*Okay. So, as a recap... our procedure for computing the rank of a matrix a.*0874

*Okay. 1. Transform the matrix a to reduced row echelon matrix b.*0900

*2. The number of non-zero rows is the rank, that is it. Nice and easy.*0911

*Now, recall from a previous lesson. We defined something called the nullity.*0928

*That was the dimension of the null space. In other words, it is the dimension of the solution space for the homogeneous system ax = 0.*0945

*Okay? Theorem. Profoundly important result, insanely beautiful result. We have to know this.*0975

*If you do not walk away with anything else from linear algebra, know this theorem, because I promise you, if you can drop this theorem in one of your classes in graduate school, you will make one hell of an impression on your professors.*0989

*They probably do not even know this themselves, some of them... but beautiful, beautiful theorem.*1000

*The rank of a matrix a plus the nullity of a matrix a is equal to n.*1007

*So, think about what this means. If I have an m by n matrix, a 5 by 6 matrix... 5 by 6... 5 rows, 6 columns.*1018

*n is 6. The rank of that matrix plus the nullity of that matrix equals 6.*1028

*If I know that I have a matrix that is n = 6, and I find the nullity, I know what the rank is automatically, by virtue of this equation.*1038

*If I know what the rank is, I know what the nullity is. If I know what the rank and nullity is, I know what space I am dealing with.*1046

*If I have a rank of 5, and if I have a nullity of 3, then I know that I am dealing with an 8-dimensional space. Amazing, amazing, amazing theorem. Comes up in a lot of places.*1053

*Okay. Let us do some examples here.*1064

*Let us go... okay... simply by virtue of a random arrangement of numbers in a rectangular array that we call a matrix.*1069

*(1,1,4,1,2), (0,1,2,1,1), (0,0,0,1,2), (1,-1,0,0,2), (2,1,6,0,1)... okay.*1083

*Reduced row echelon. We have this random matrix, it is 1, 2, 3, 4, 5... 1, 2, 3, 4, 5... this is a 5 by 5 matrix, so here n = 5.*1106

*Okay. Reduced row echelon form, you get (1,0,2,0,1), (0,1,2,0,-1), (0,0,0,1,2), and we get (0,0,0,0,0), (0,0,0,0,0)...*1118

*We have 1, 2, 3 non-zero rows. Rank = 3.*1140

*Well, if rank = 3 and n = 5, I know that my solution space to the associated homogeneous system that goes with this matrix... I know that it has to have a dimension 2, because rank + nullity = n.*1149

*3 + 2 = 5. That is extraordinary. In fact, this is from a previous example.*1164

*If you go back to a previous lesson where we actually calculated the solution space, you will find that there were 2 vectors.*1173

*So, 2 vectors, dimension 2; here the rank is 3, and that confirms the fact that 3 + 2 = 5.*1181
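This rank and nullity count is easy to confirm mechanically. A minimal sketch in Python using sympy (my addition -- the lecture itself uses no software), with the 5 by 5 matrix from the example above:

```python
from sympy import Matrix

# The 5 by 5 matrix from the example above (n = 5 columns).
A = Matrix([
    [1,  1, 4, 1, 2],
    [0,  1, 2, 1, 1],
    [0,  0, 0, 1, 2],
    [1, -1, 0, 0, 2],
    [2,  1, 6, 0, 1],
])

rank = A.rank()               # number of non-zero rows in the RREF
nullity = len(A.nullspace())  # dimension of the homogeneous solution space

# rank + nullity = n
print(rank, nullity, rank + nullity)  # -> 3 2 5
```

The two vectors returned by `A.nullspace()` are exactly the two solution-space vectors mentioned in the earlier lesson.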

*Okay. Now, let us throw out a theorem here, that has to do with rank and singularity.*1189

*Actually, you know what, let us define it here... let me go to blue... rank and singularity, and if you remember, singularity has to do with determinants.*1200

*So, a non-singular matrix is one whose determinant... well, a non-singular matrix is one that actually has an inverse, that is the actual definition of non-singularity, something that has an inverse, and that corresponds to a determinant not being equal to 0.*1220

*And... you remember that list of non-singular equivalences? We are actually going to recap it at the end of this lesson and add a few more things to it.*1235

*So, rank and singularity... an n by n matrix is non-singular... it means it has an inverse... if and only if rank = n.*1241

*So, if I calculate the rank and the rank equals n, that means it is not singular. That means it has an inverse. That means its determinant is non-zero.*1260

*Okay. Let us do some quick examples of this one. We will let a equal (1,2,0,0,1,3,2,1,3).*1269

*We convert to reduced row echelon. We get (1,0,0,0,1,0,0,0,1).*1284

*Okay. There is 1, there is 2, there is 3 non-zero vectors in that reduced row echelon. Rank = 3.*1293

*Well, we have a 3 by 3, the rank = 3, therefore that implies that this is non-singular, and it implies that the solution space... okay... has only the trivial solution.*1305

*Again, this goes back to that list of equivalences. One thing implies a whole bunch of other things.*1330

*Okay. Another example. Let us let matrix b equal (1,2,0), (1,1,-3), (1,3,3).*1338

*Let us convert to reduced row echelon form. We end up with (1,0,-6), (0,1,3), we get (0,0,0).*1352

*We have that, we have that... we have 2. So, rank is equal to 2.*1364

*Well, the n is 3, the rank is 2. It is less than 3, it is not equal to 3, therefore... so 2 less than 3 implies that b is singular.*1370

*It does not have an inverse. It implies that there does exist a non-trivial solution for the homogeneous system.*1386
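Both of these examples can be checked the same way. A minimal sketch in Python using sympy (my addition, not from the lecture): rank equal to n goes together with a non-zero determinant, and rank less than n goes together with a zero determinant and a non-trivial nullspace.

```python
from sympy import Matrix

a = Matrix([[1, 2, 0],
            [0, 1, 3],
            [2, 1, 3]])   # rank 3 = n, so non-singular

b = Matrix([[1, 2, 0],
            [1, 1, -3],
            [1, 3, 3]])   # rank 2 < n, so singular

print(a.rank(), a.det())  # -> 3 12   (rank n, determinant non-zero)
print(b.rank(), b.det())  # -> 2 0    (rank < n, determinant zero)
```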

*Okay. One more theorem here, that is very, very nice.*1402

*We will not necessarily do an example of this, but it is good to know.*1413

*The non-homogeneous system, ax = b, has a solution if and only if the rank of the matrix a is equal to the rank of the matrix a augmented by b.*1419

*So, if I take a, take the rank, and if I take a, make the augmented matrix, and then calculate the rank of that augmented matrix, if those are equal... then I know that the actual system has a solution.*1441

*Now, of course we have techniques for finding this solution, you know, and that is important, but sometimes it is nice just to know that it does have a solution.*1454
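As a quick sketch of that rank test (Python with sympy; the two right-hand sides here are hypothetical examples of mine, not from the lecture):

```python
from sympy import Matrix

def has_solution(A, b):
    # ax = b is consistent iff rank(a) == rank of the augmented matrix [a | b]
    return A.rank() == A.row_join(b).rank()

A = Matrix([[1, 2],
            [2, 4]])                        # a singular coefficient matrix
print(has_solution(A, Matrix([3, 6])))      # b in the column space -> True
print(has_solution(A, Matrix([3, 7])))      # inconsistent system   -> False
```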

*Okay. Now, let us talk about our list of non-singular equivalences, and let us add to that list.*1464

*So, list of non-singular equivalences. This is for an n by n... you remember, because an n by n matrix is the only one for which a determinant is actually defined. Okay.*1471

*All of the following are equivalent. In other words, one is the same as the other.*1489

*Each one implies each and every other one.*1496

*One, well, a is non-singular.*1502

*Two, ax = 0, the homogeneous system has only the trivial solution... and you remember the trivial solution is just 0, all 0's.*1511

*Three, a is row-equivalent to i_{n}, the identity matrix. That is the one with all 1's on the main diagonal. Everything else is 0.*1532

*Four, ax = b, the associated non-homogeneous system has a unique solution... only 1.*1552

*Five, the determinant of a is non-zero.*1569

*Six, a has rank n.*1578

*Seven, a has nullity 0.*1585

*Eight, rows of a form a linearly independent set of vectors in RN.*1593

*Nine, the columns do the same thing. The columns of a form a linearly independent -- I will just abbreviate it as LI -- set of vectors in RN.*1625
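Several items on this list can be checked mechanically for a given n by n matrix. A minimal sketch in Python with sympy (my addition), using the non-singular 3 by 3 matrix from the earlier example:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 0],
            [0, 1, 3],
            [2, 1, 3]])
n = A.rows

checks = [
    A.det() != 0,              # (5) determinant non-zero
    A.rank() == n,             # (6) rank n
    len(A.nullspace()) == 0,   # (7) nullity 0: only the trivial solution
    A.rref()[0] == eye(n),     # (3) row-equivalent to the identity matrix
]
print(all(checks))  # -> True: the equivalences hold (or fail) together
```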

*So, I can make all of these statements if I have some random a matrix, which is n by n, let us say 5 by 5.*1646

*If I know that it is... let us say I know... I calculate its rank and its rank ends up being n. I know that all of these other things are true.*1656

*A is non-singular, that means that it has an inverse. I know that the associated homogeneous system has the trivial solution only.*1665

*I know that I can convert a to the identity matrix, in this case i5... I know that the associated non-homogeneous system for any particular b has one and only one solution.*1671

*I know that the determinant is not 0.*1683

*I know that the nullity is 0... the solution space contains only the zero vector, which is the same as... yeah, only the trivial solution.*1686

*I know that the rows of a form a linearly independent set of vectors in RN.*1694

*I know that the columns of a form a linearly independent set of vectors in RN.*1699

*So, again, we have a matrix... the rows are a set of vectors, and they behave a certain way, they span a space. The dimension of that space is the row rank. *1708

*The columns of that matrix span a space. The dimension of that subspace is called the column rank.*1720

*The row rank and the column rank end up being the same, no matter what rectangular array we have. We call that the rank.*1731

*The rank + the nullity, which is the dimension of the solution space, of the associated homogeneous system is always equal to n.*1740

*That is amazing. That is beautiful, and it is going to have even further consequences as we see in our subsequent lessons.*1754

*Thank you for joining us here at Educator.com, thank you for joining us for linear algebra, we will see you next time, bye-bye.*1760

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to be talking about something... continue our discussion, of course, about the structure of a vector space.*0005

*We have been talking about bases, linear independence, span, things like that.*0011

*Today we are going to be talking about something called coordinates and a change of basis.*0016

*So, up to now, we have been talking about a random basis, a set of vectors that actually spans a given space.*0021

*Either the entire space, or a subspace of that space.*0031

*Well, we have not really cared about the order of those vectors -- you know -- we just say v1, v2, v3, the basis will do.*0036

*In this particular lesson, we are going to start talking about the particular order of a basis.*0042

*So, if I put vector 1 in front of vector 2, and if I switch the order, it actually changes the basis and it is going to change something called the coordinates of the particular vector that the basis is actually representing.*0048

*So, let us start with a couple of definitions, and we will jump right on in.*0061

*Now, let us say we have v1... let us make these concepts a little bit more clear so we can see it notationally.*0069

*So, let us say v1 is a set of vectors, v1, v2, and v3.*0080

*So, we have a basis which has 3 vectors, so we are talking about a 3-dimensional vector space.*0088

*If we also have, let us say, a separate basis which is almost the same, it is the same vectors, except now, I am going to put v2 first, and then v3, and then v1.*0096

*Even though these two bases consist of the same vectors, they are not in the same order.*0108

*Turns out, they are not really the same, and you will see why in a minute when we do what we do.*0114

*So, not the same. Now, let us let v... v1, v2, all the way to vN... since there are n vectors, we are talking about an n-dimensional vector space, because again, that is what a basis is. *0120

*The number of vectors in the basis gives you the dimension of that particular space.*0139

*So, let this basis be an ordered basis for an n-dimensional vector space, v.*0145

*Okay. Then of course, as we know, because it is a basis, every vector in v, every v in v, symbol like that, can be written -- excuse me -- as... so v is equal to some constants... c1 × v1 + c2 × v2 + all the way + cN × vN.*0171

*Now, some of these constants might be 0 -- in fact, for the zero vector, all of them are.*0205

*So, again, this is just the definition of a basis. It is a linear combination. A basis allows you to actually write any vector in a vector space as a linear combination of those basis vectors.*0208

*Nothing particularly new here. Now, let us take a look at c1, c2, c3, c4, all the way to cN.*0220

*If we take just the constants and write them as a vector we get this thing.*0230

*I will do the right side of the equation first, and then I will put the symbol on the left hand side.*0260

*So if I take just the constants, so if I have a vector v, and I can express it as some linear combination of the vectors in our basis, and if I just pull these constants out and I write them as a vector.*0265

*So, c1 -- oops, let me do this in red, actually -- c1, c2, all the way to cN.*0275

*So, I am writing it as a -- oops, we do not want these stray lines here.*0287

*I will write it as a column vector. Basically, if I take these, if I have some vector in n-space... well, this vector of the constants that make up the linear combination representing v... it is symbolized as that way.*0298

*We have the vector symbol, and we have a bracket around it, and we put a little b... the b represents the basis, okay?*0320

*We call this... called... this is called the coordinate vector.*0327

*This is called the coordinate vector of v, with respect to basis b.*0342

*This is unique. So, b... so the coordinate vector of any vector with respect to a given basis is unique.*0357

*Let us stop and think about what this means. We know that if we have a given particular vector space, let us say R3, we know that a basis for R3 has to have 3 vectors in it, because that is the definition of dimension.*0374

*The number of vectors in a basis for that space.*0386

*We also know that there are infinitely many bases; it does not have to be one or the other.*0390

*As it turns out, any vector in a vector space is going to be written as a linear combination of the vectors in that basis.*0394

*Well, the constants that make up that linear combination, I can arrange them as a vector, and I call those the coordinates with respect to that basis of that particular vector that I am dealing with.*0402

*So, needless to say, if I choose one basis, the coordinates are going to be one thing. If I choose another basis, the coordinates are going to be entirely different.*0414

*It is kind of interesting when you think about this. If I pick some random point in a vector space, as it turns out, its identity, its intrinsic identity is actually... It has nothing to do with its coordinates.*0423

*The coordinates are something that we attach to it so that we can actually deal with it. It all depends on the basis that we choose.*0438

*That is kind of extraordinary. You know, we are still thinking of a point in space like (5,6,7) as if it is, intrinsically, (5,6,7).*0445

*In a minute you will see that those numbers 5, 6, 7 only have meaning with respect to a given basis.*0454

*In R3, it is the i,j,k, vectors. It is a very convenient basis because they happen to be unit length, each of the vectors, they happen to be mutually orthogonal, which we will talk about more in a minute.*0462

*But any basis will do, actually. As it turns out, that number (5,6,7), it is specific only to the natural basis.*0473

*It does not really tell me something about the point itself. That is actually kind of interesting to think that it is only so that we can handle that point mathematically as an object, that we have to assign some sort of a value to it.*0482

*We assign a value with respect to an arbitrary choice of basis. Arbitrary in a sense that no one basis is better than another.*0494

*You will have bases that are more convenient than others, but the problem might call for a basis that is completely different from the one you are used to. Again, that is kind of extraordinary.*0504

*Let us do an example.*0514

*Okay. Let us see -- let us go back to blue.*0520

*We will let s equal the set (1,1,0,0), that is the first vector in the set... then we have (2,0,1,0), that is the second vector in the set, (0,1,2,-1), third vector, and we have (0,1,-1,0). Okay.*0528

*Let s be a basis, an ordered basis for R4.*0555

*Again, we have 4 numbers, we have 4 vectors, so it is R4. Now, let us choose a random vector v, let v in R4 be the vector (-1,2,-6,5).*0564

*Okay. We want to find the coordinate vector of this vector with respect to this basis.*0589

*So, let us stop and think about this for a second. I have some vector, you know, that I just represented as (-1,2,-6,5).*0605

*But, I have a different basis than I am normally accustomed to. So, I want to find the coordinates of this vector with respect to this basis. Okay.*0615

*Okay. Let us see what we are going to do. Well, here is what we want.*0625

*We want constants, c1, c2, c3, c4, such that c1 × the first vector (1,1,0,0) + c2 × the second vector (2,0,1,0), + c3 × (0,1,2,-1), + c4 × (0,1,-1,0) is equal to our vector v, which is (-1,2,-6,5).*0630

*This is what we want. The idea is we take these basis vectors, we write the vector that we are looking for, it is a linear combination of these things. *0667

*Now, we have to solve this. Well, this is just a linear system, so we set it up as a linear system, as a 4 by 5 augmented matrix.*0676

*It is going to be... (1,1,0,0), we just take these as columns, (2,0,1,0), (0,1,2,-1), and then we take the last one, (0,1,-1,0).*0686

*And... we augment this with (-1,2,-6,5). We are just solving a × x = b.*0703

*In this particular case, x are the constants. That is what we are looking for. We subject this to reduced row echelon... well, subject it to Gauss Jordan elimination to get the reduced row echelon form.*0712

*We end up with the following. (1,0,0,0), (0,1,0,0), (0,0,1,0) -- that is not a 6, that is a 0 -- and (0,0,0,1), and we end up with... (23,-12,-5,-16).*0724

*Therefore, our coordinate vector for v with respect to the basis that we were given is equal to -- nope, cannot have that -- let us make sure these are clear.*0752

*We have (23,-12,-5,-16). That is our answer, with respect to this basis, the vector is (23,-12,-5,-16).*0774
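That computation is exactly a solve of S c = v, where the columns of S are the basis vectors of s. A minimal sketch in Python with sympy (my addition, not from the lecture):

```python
from sympy import Matrix

# Columns are the four basis vectors of s.
S = Matrix([[1, 2,  0,  0],
            [1, 0,  1,  1],
            [0, 1,  2, -1],
            [0, 0, -1,  0]])
v = Matrix([-1, 2, -6, 5])

coords = S.LUsolve(v)   # the coordinate vector of v with respect to s
print(list(coords))     # -> [23, -12, -5, -16]
```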

*These numbers up here, (-1,2,-6,5), this vector was given to us because that is the standard basis.*0797

*In R4, it is the... imagine i,j,k, with one extra vector... basically it is something in the x direction, something in the y direction, something in the z direction, and something in the L direction.*0805

*Again, we are talking about a 4-dimensional space. We cannot see it, but we can still treat it mathematically.*0817

*Mutually orthogonal vectors. That is why this and this are different. We are talking about the same point, but, in order for us to identify that point, to give it a label, to give it a name, we need to choose a basis.*0823

*We need to choose a point of reference, a frame of reference. That is what all of modern science is based on. All of measurement is based on.*0843

*We need something from which to measure something. Our frame of reference, well here it is the standard basis. The basis of mutually orthogonal unit vectors.*0851

*Here, it is a completely different basis. Well, this one basis is not necessarily better than this one.*0864

*We are just accustomed to this one. We think that that is the one, that this vector is actually (-1,2,-6, 5). It is not.*0869

*This (-1,2,-6,5), actually has nothing to do intrinsically with that point. It has to do with our imposing a label on that point so that we can deal with it mathematically.*0877

*This set of coordinates is just as good as this set of coordinates. This basis is just as good as the natural basis.*0892

*That is what you have to... so now, we are getting into the idea of linear algebra we want to sort of disabuse ourselves of the things that we have become accustomed to.*0899

*That, just because we have become accustomed to them, it does not mean that they are necessary, or necessarily better than anything that we might develop for these mathematical objects.*0909

*Okay. Let us actually demonstrate this mathematically. This whole idea of the standard basis. Okay.*0918

*Let s, this time we will let s equal the standard basis, remember? e1, e2, e3, and e4.*0929

*Which is equal to... (1,0,0,0), that is e1. (0,1,0,0), that is e2. *0945

*Again, for e_{i}, the i = 1, 2, 3, 4 tells you which entry is 1; all of the other entries of that vector are 0.*0956

*So, for example, e3, all of the entries are going to be 0 except the third entry which is going to be 1... (0,0,1,0).*0965

*You notice all of these vectors have a length of 1, and if you actually took the dot product of this with this, you would get 0.*0974

*So, they are length 1, which is very convenient, and they are also mutually orthogonal, perpendicular... and (0,0,0,1).*0980

*So, this is our set. Now we are using this basis. Well, we are going to let v equal the same thing... (-1,2,-6,5).*0994

*Okay. So, we set up the same system. We want constants c1e1 + c2e2 + c3e3 + c4e4... and these are vectors, I should actually notate them as such -- excuse me.*1007

*Such that they equal v. Well, again, this is just a system. Well, we take these vectors in the basis, set them up as a matrix, augment them with v, and we solve it.*1029

*So, we have (1,0,0,0), (0,1,0,0), e3 is (0,0,1,0), and (0,0,0,1).*1041

*We augment it with (-1,2,-6,5).*1050

*Now, we take a look at this, we want to subject it to Gauss Jordan elimination to take it to reduced row echelon form.*1055

*Well, it is already in reduced row echelon form. So, as it turns out, with respect to this basis, s, which is the natural basis... the coordinate vector is the vector itself, (-1,2,-6,5), which is what we said before.*1060

*The natural basis is the basis that we use all the time to represent a point. That is why... so... a particular vector does not own this set of numbers. *1080

*So this point that is represented by (-1,2,-6,5)... it is only a representation of that point. It is not as if this (-1,2,-6,5) actually belongs to that point. It is not an intrinsic property, in other words.*1094

*It is simply based on the basis that we chose for our frame of reference.*1109

*Those of you in engineering and physics, you are going to be changing frames of reference all the time, and you are always going to be choosing a different basis.*1114

*So, your coordinates are going to change. The relationship of the points themselves that you deal with, the vectors that you deal with do not change, but the coordinates are simply representations of those points - they are not intrinsic properties of those points.*1123

*Very curious, isn't it? Okay.*1138

*In other words, any basis will do. Any basis that is convenient.*1143

*Alright. Let us take one more look here. Let us do one more example, this time with the space of polynomials.*1148

*Okay. This time we will let our vector space v equal p1. It is the space of polynomials of degrees < or = 1.*1164

*So, for example, t + 6, 5t - 7, things like that... a degree less than or equal to 1... even a constant like 8 works, because its degree is 0.*1187

*We will let s be one of our basis and consist of t and 1, and we will let t be another basis, and it will be t + 1, and t - 1.*1198

*We will let our v random be 5t - 2, so first thing we want to do is we want to find... or the first issue is find the coordinate vector of v with respect to the basis s.*1213

*Okay -- let us go to blue here -- well, we want to solve the following.*1236

*We want to go c1 × t + c2 × 1 = 5t - 2.*1240

*That is what we are doing, it is a linear combination. Constants × the individual members of the basis, and we set it equal to the vector.*1253

*Well, when we see c1t + c2 × 1 = 5t - 2, this is just c1t + c2 = 5t - 2. *1260

*Well, c1t, this is an equality, so what is on the left has to equal what is on the right.*1269

*So, c1t is equal to 5t, that means c1 = 5, and c2 = -2. Well, there you go.*1276

*With respect to this basis, the coordinate vector is (5, -2). Well, 5 and -2, that is exactly what these numbers are here... 5 and -2.*1289

*So, you see that this basis... t and 1... this is the natural basis for the space of polynomials of degree < or = 1.*1300

*If you were talking about the space of polynomials of degree < or = 2, your natural basis would be t^{2}, t and 1.*1310

*If you were talking about degree < or = 3, you would have t^{3}, t^{2}, t, and 1. This is the natural basis, the basis that we have become accustomed to talking about.*1320

*However, we have a different basis that can still represent -- you know -- this particular polynomial. This particular point in the space of polynomials... 5t - 2.*1333

*Let us calculate... now we want to find the coordinates of v with respect to the basis t.*1347

*So, we end up doing the same thing. We are going to go c1 × t + 1 + c2 × t - 1, and it is equal to 5t - 2.*1358

*Linear combination, set it equal to the vector. We go ahead and we solve for this. We get c1t + c1 + c2t - c2 = 5t - 2.*1375

*I collect terms... t × (c1 + c2) + (c1 - c2) = 5t - 2.*1387

*Okay. Let me rewrite that on this page. t × (c1 + c2) + (c1 - c2) = 5t - 2.*1402

*Well, t, t, c1 + c2, so I get c1 + c2 = 5, and I get c1 - c2 = -2, right?*1418

*I can just go ahead and add this directly. So, I end up with 2c1 = 3, c1 = 3/2, and when I put that into any one of the other equations, I end up with c2 = 7/2.*1434

*Therefore, with respect to that vector... 5t - 2, let me write it up here again... this was our original random vector... 5t - 2.*1452

*The coordinates of that vector with respect to the other basis that we chose, is equal to 3/2 and 7/2.*1465
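Matching coefficients like this is a small linear solve. A minimal sketch in Python with sympy (my addition, not from the lecture), recovering the coordinates of 5t - 2 with respect to the basis {t + 1, t - 1}:

```python
from sympy import symbols, solve, Rational

c1, c2 = symbols('c1 c2')

# Basis {t, 1}: c1*t + c2 = 5t - 2 gives c1 = 5, c2 = -2 by inspection.

# Basis {t + 1, t - 1}: match coefficients of
# c1*(t + 1) + c2*(t - 1) = (c1 + c2)*t + (c1 - c2) against 5t - 2.
sol = solve([c1 + c2 - 5, (c1 - c2) - (-2)], (c1, c2))
print(sol)  # -> {c1: 3/2, c2: 7/2}
```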

*This is extraordinary. All of our lives from early pre-algebra into algebra, algebra 1, algebra 2, a little bit of geometry, trigonometry, calculus... we think that a polynomial like 5t - 2 actually is 5t - 2.*1479

*Well, 5t - 2 is our way of dealing with that particular polynomial with respect to the standard basis. *1499

*As it turns out, this 5t - 2 can be written in a completely different way with respect to another basis.*1507

*I can write it as (3/2)(t + 1) + (7/2)(t - 1), with respect to the other basis that I gave for that space of polynomials.*1515

*No one basis is better than another. This 5t - 2 is not... the polynomial itself is something that exists in a space... but in order for us to deal with something that exists, we have to put a label on it.*1523

*We need a frame of reference for it. That is what is going on with linear algebra. This is the difference between mathematics and science.*1540

*Science actually labels things and deals with them in a given frame of reference, but what mathematics tries to do is... these things exist, but we need to understand that the labels that we give them are not intrinsic to those objects.*1548

*They are simply our way of dealing with them because at some point we have to deal with them in a certain way, and we deal with things from a frame of reference.*1562

*From a point of reference. For example, a measurement does not mean anything by itself.*1570

*If I said something is 5 feet long, well, it is based on a certain standard. It is based on a point of reference, and a certain definition of a distance.*1574

*As it turns out, those things are actually arbitrary. It has nothing to do with the relationship between the two points that I am measuring the distance between. Those are deeper mathematical properties.*1583

*Linear algebra is sort of an introduction to that kind of thinking.*1594

*So, there we go. Today we dealt with coordinates and ordered bases, and notice that we can actually deal with 2 different bases to talk about the same mathematical object.*1599

*Next lesson, we will actually start talking about change of basis and how we go from one to the other.*1613

*Thank you for joining us at Educator.com today, we will see you next time for some more linear algebra.*1620

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*In the previous lesson, we talked about the coordinates of a particular vector and we realized that if we had two different bases that the coordinate vector with respect to each of those bases is going to be different.*0004

*So, as it turns out, it is not as though it has to be this one or that one.*0018

*One basis is as good as another. We are going to continue that discussion today, deal with coordinates some more and we are going to talk about something called a transition matrix.*0022

*Where, if we are given the coordinates... if we are given both bases and if we are given the coordinates with respect to one basis, can we actually transform that and is there a matrix that actually does that.*0032

*The answer is yes, there is a matrix. It is called the transition matrix from one basis to another, and it ends up being a profoundly important matrix.*0045

*So, let us just dive right in.*0053

*The first thing I want to talk about is just two brief properties of the coordinates that we mentioned.*0058

*Their properties are exactly the same as those of vectors, so it is going to be nothing new.*0066

*It is just that the notation is, of course, slightly different, because we have that little subscript s and t underneath the coordinate vector.*0070

*So, let us just write it out and start with that.*0076

*So, we have v + w, so if I add two vectors and take the coordinate with respect to a certain basis, well, I can treat that, I can just sort of separate them.*0080

*That is just going to be the coordinate vector with respect to s for v, + the coordinate vector with respect to s for w.*0098

*Again, it is something that you already know. The sum of two vectors is nothing new here.*0106

*If I have a vector v and I multiply by a constant, and if I have the coordinate vector with respect to a certain basis s, well, I can just go ahead and pull that constant out and multiply by the coordinate vector for that vector first, and then multiply by the constant.*0112

*So, just to have these properties to describe what it is that we are going to do in a minute.*0130
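These two properties are just the statement that the coordinate map is linear, and they are easy to watch in action. A minimal sketch in Python with sympy (the 2-dimensional basis here is a hypothetical one of my choosing, not from the lecture):

```python
from sympy import Matrix

S = Matrix([[1, 1],
            [0, 1]])        # columns: a hypothetical ordered basis for R2

def coords(x):
    # The coordinate vector of x with respect to s: solve S c = x.
    return S.LUsolve(x)

v = Matrix([3, 1])
w = Matrix([2, 5])

print(coords(v + w) == coords(v) + coords(w))  # -> True
print(coords(4 * v) == 4 * coords(v))          # -> True
```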

*Okay. I am actually going to go through something that I would not normally go through.*0137

*It is the derivation of where this thing called a transition matrix comes from, simply because I want you to see it.*0141

*It is not going to be particularly notationally intensive, but there are going to be indices, you know, there are some numbers and letters, things floating around.*0148

*So, it is really, really important to pay attention to where things are and what each number is doing.*0155

*Okay. Let us say we have two bases for a vector space that we call v, of course.*0161

*The first basis is going to be s, it is going to consist of vectors v1, v2, and so on all the way to vN.*0180

*And... we have t, which is another basis.*0195

*We will call these w... w1, w2, all the way to wN, and again they are bases for the same vector space, so they have the same number of vectors in them. That is the dimension of the vector space.*0200

*Now, choose some v in v, some random vector in the vector space.*0215

*Well, we can write this particular v with respect to this basis, let us choose this basis, t.*0228

*So, we can write... we can say that v = c1 × w1 + c2 × w2... we have done this a thousand times... + cN × wN.*0235

*Okay, just a linear combination of the vectors in this basis t. Well, once I actually solve for these constants, c1, c2 through cN... what I end up with is the coordinate vector v with respect to the basis t.*0252

*That is what this t down here means, which is c1, c2, all the way to cN. Okay.*0268

*Now. Here is where it gets kind of interesting. So, let us watch very, very carefully. Let me put a little arrow right here to show what we are doing.*0279

*If I take v, and if I want to find the coordinate vector with respect to the basis s, I am just going to take this thing that I wrote, which is the ... so the left side, I put it in this notation... sub s.*0290

*Well, the left side is equal to the right side. I just happen to have written the right side with respect to this basis.*0310

*I am just going to write... I am basically just copying it.*0317

*c1w1 + c2w2 + ... + cNwN, with respect to s. All I have done is take this thing and subject it to this notation. Everything should be okay.*0324

*Well, now I am going to use these properties. So, that is equal to c1 × w1, with respect to s + c2 × w2, with respect to s + so on + cN × wN, with respect to s.*0340

*Okay. Take a look at what we have done. A random vector v with respect to a basis t, and then, we want to find the coordinate vectors for... with respect to the basis s.*0368

*So, I have just taken this definition, and subjected it to the notation for the coordinate vector for s.*0384

*Then I use these properties up here, which you might call the linearity properties of these coordinate vectors, and just rewrite it.*0390

*Well, let us just see what this actually says... c1 × w1 with respect to s, w2 with respect to s... wN with respect to s.*0400

*Let us move forward, that is just this.*0409

*It says that the coordinate vector of v with respect to s is equal to the matrix whose columns are w1 with respect to s, w2 with respect to s, and so on and so forth.*0414

*I just set up, well let me actually finish up writing it and then I will tell you what it is we are doing here.*0434

*wN with respect to s, × c1, c2, all the way to cN.*0445

*So, the equation that I wrote on the previous slide is just this equation in matrix form.*0457

*What I am doing is I am taking each w_{i} in the basis t, expressing that vector with respect to the basis s, and whatever I get I am putting in as the columns of my matrix.*0461

*Well, what this ends up being... this matrix that I get by doing that is precisely... well let me rewrite it.*0483

*So, we have the equation in front of us... p with subscript s ← t... notice the arrow is going from right to left, not, as usual, from left to right... × this thing, which is just the coordinate vector of v with respect to t.*0496

*Okay, so what we have done, this is our ultimate goal. It is okay if you do not completely understand what it is that we did.*0520

*We will go through the procedure for how to find this matrix. This matrix right here, which is called the transition matrix from t to s. *0525

*There is a reason why I wrote it this way with the arrow going backwards from right to left. I will tell you what it is in a second.*0537

*It says that if I have a vector, and if I can find the coordinate vector with respect to a basis t, but I want to convert that to a coordinate vector with respect to the other basis that I have, s, I can multiply the coordinate vector with the one basis on the left by some matrix.*0543

*The transition matrix that takes it from t to s. This is why it is written this way. Notice this vector space -- I am sorry -- this coordinate vector with respect to s is on the left.*0566

*Here, the notation for the transition matrix has the s on the left, has the t on the right, because you are multiplying it by the coordinate vector for the basis t on the right.*0576

*It is just a way of... again, it is a notational device to remind us that we are going from the t basis to the s basis.*0588

*It is written this way simply because of how we wrote the equation. We wrote the coordinate vector with respect to s on the left side of the equality sign. That is why it is written this way.*0597

*Now, here is how we did it. We have the basis t which consists of vector w1, w2, and w3 and so on.*0609

*We take each of those vectors, we express them as coordinate vectors with respect to s, and we do that by solving this system, just like we did for the previous lesson.*0618

*Then, the coordinate vectors that we get, we just put them in columns and the final matrix that we get when we just put in all of the columns, that is our transition matrix.*0630

*Okay. Let us just do an example and I think it will all make sense.*0645

*Let us move forward here. This is going to be a bit of a long example notationally, but it should be reasonably straight forward.*0650

*Okay, now, let s = the set (2,0,1), (1,2,0), (1,1,1).*0659

*Okay. That is one basis for R3. Three vectors, three entries, it is R3.*0680

*T, let it be another set, let it be (6,3,3), (4,-1,3), (5,5,2).*0695

*So, we have two different bases. Okay, what we want to do is a, we want to compute the transition matrix.*0714

*What matrix will allow us to convert from t basis to s basis? The transition matrix from t to s, that is the first thing we want to do.*0725

*The second thing we want to do is we want to verify the equation that we just wrote. *0735

*That the coordinate with respect to basis s is equal to this transition matrix, multiplied by the coordinate for v with respect to t.*0740

*Okay. So, let us see what we have got here. Alright, so let us do the first thing first.*0755

*Let us go ahead and compute this transition matrix. So, we said that in order to compute the transition matrix, we have to take... so we are going from t to s. *0768

*That means we take the vectors in the basis t, (6,3,3) (4,-1,3), (5,5,2), and we express each of these vectors with respect to the basis s.*0779

*Again, these are just vectors in R3. They are random vectors, but they do form a basis and that forms a basis.*0791

*So, we want to change, we want to be able to write these vectors as a linear combination, each of these as a linear combination of these 3. That is what we are doing.*0796

*So, let us write that down. So, we have... we will take this one first, right?*0808

*Let us actually label these. No, that is okay, we do not need to label them.*0815

*So, we want (6,3,3) to equal some constant a1 × (2,0,1) + a2 × (1,2,0).*0824

*Actually, let us not, let us choose a different letter here. Let us choose b1, and we will make this c1 × (1,1,1).*0845

*So, this is one of the things that we want. We can solve this system. We can just solve this column, this column, this... let me write it on the other side.*0857

*We are accustomed to seeing it on the right, let us go ahead and be consistent... (6,3,3). That is one thing.*0865

*Okay. The other thing we want to do, is we want to express this one. The second vector in the basis t.*0876

*Again, t to s, so we want to take the vectors in t, the second vector expressed as a linear combination of these three.*0882

*So, this time we have a different set of constants, we will call them a2 × (2,0,1) + b2 × (1,2,0) + c2 × (1,1,1) = (4,-1,3).*0890

*Now we want to express the third vector in t as a linear combination of these.*0908

*So, we will take a3 ×, well, (2,0,1) + b3 × (1,2,0) + c3 × (1,1,1).*0914

*That is going to equal (5,5,2). So, we solve this system, we solve this system, we solve this system.*0932

*Well, this system, for each of these, the left hand side, these columns are the same, (2,0,1), (1,2,0), (1,1,1).*0941

*So, we can take all three of these and do them simultaneously. Here is what it looks like.*0948

*All we are doing is taking (2,0,1), (1,2,0), and (1,1,1) and then we are augmenting the (6,3,3).*0959

*(2,0,1), (1,2,0), (1,1,1), augmenting it with (4,-1,3).*0965

*And this one... (2,0,1), (1,2,0), (1,1,1)... augmenting it with (5,5,2).*0970

*Well, we can do all of the augmentations simultaneously. We can just add three columns and then do our matrix in reduced row echelon form. Here is what it looks like.*0974

*So, we get (2,0,1), (1,2,0), (1,1,1), and then we have our augmented... we have (6,3,3), (4,-1,3), and we have (5,5,2). Okay.*0985

*When we subject this to reduced row echelon form, let me go horizontally actually, we end up with the following.*1007

*We end up with (1,0,0), (0,1,0), (0,0,1), and we end up with (2,1,1), (2,-1,1), (1,2,1).*1016

*So, this, right here, in the red -- oh, I did not get red, oops -- right here, that is our transition matrix.*1035

*It is the columns of the vectors in t, expressed as... these are the coordinates of those vectors with respect to the s basis.*1046

*That is what we did, just like the previous lesson, so our transition matrix from t to s = (2,1,1), (2,-1,1), (1,2,1).*1058
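If you would like to check this computation by machine, here is a short Python sketch (my addition, not part of the lesson): it solves the three augmented systems with exact fractions and assembles the resulting coordinate vectors as the columns of the transition matrix from t to s.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with exact fractions."""
    n = len(A)
    # Build the augmented matrix with Fraction entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Normalize the pivot row, then clear the column everywhere else.
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# The vectors of s, written as the columns of a matrix.
S = [[2, 1, 1],
     [0, 2, 1],
     [1, 0, 1]]
t = [(6, 3, 3), (4, -1, 3), (5, 5, 2)]

# Coordinate vectors of the t-basis vectors with respect to s...
cols = [solve(S, w) for w in t]
# ...placed side by side as columns of the transition matrix.
P = [[cols[j][i] for j in range(3)] for i in range(3)]
print(P)  # rows of the transition matrix from t to s
```

The printed rows correspond to the matrix with columns (2,1,1), (2,-1,1), (1,2,1) found in the lesson.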

*There you go. That is the first part. Okay.*1074

*Now, we want to confirm that that equation is actually true. In other words, we want to confirm this equation.*1077

*That the coordinate vector of some random vector v with respect to s is equal to this transition matrix that we just found × the coordinate vector with respect to the basis t.*1095

*Okay. Well, let us let v... well, let us choose a random vector. We will let v = (4,-9,5).*1105

*Okay, so now the first thing that we want to do is... again we are verifying so we are doing a left hand side, we are going to do a right-hand side.*1120

*We are verifying this. We need to check to see if this is actually equal. So, we need to do this side, and we need to do this side.*1128

*Okay. First of all, let us find the coordinate of this vector with respect to t, that is this right here. Okay.*1136

*Well, we need to set up the following. c1w1 + c2w2 + c3w3 = our (4,-9,5).*1148

*Well, let us take our columns, which are our basis t, so we get the following. We get (6,3,3), (4,-1,3), (5,5,2).*1171

*It is going to be (4,-9,5).*1187

*Convert to reduced row echelon form, we get (1,0,0), (0,1,0), (0,0,1), and we get (1,2,-2). Okay.*1194

*So, the coordinates of v with respect to the basis t is equal to (1,2,-2). That is part of the right hand side.*1209

*Well, we have the transition matrix, that is this, so let us circle what we have. We have that. That is our coordinate with respect to t.*1223

*We have our transition matrix, so we have the right-hand side. Now, we need to find the left hand side, do a multiplication, and see if they are actually equal to confirm that equation.*1232

*Okay. Now, let us move to the next page, so we want to find the coordinates of v with respect to s. Well, with respect to s, we set up the columns from the vectors in the basis s.*1241

*So, we get (2,0,1), (1,2,0), (1,1,1), and we are solving for (4,-9,5).*1261

*Reduced row echelon, when you do that, you end up with... I will actually write that out... and I put (4,-5,1)... so now we have the left hand side.*1276

*Now, what we want to do is we want to check, is (4,-5,1)... does it equal that transition matrix × the coordinate vector with respect to... yes, as it turns out, when I do the multiplication on the right hand side, I end up with (4,-5,1).*1293

*So, yes. It is verified.*1329
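The same verification can be done in a few lines of Python (my sketch, with the lesson's numbers hard-coded): multiply the transition matrix by the coordinate vector with respect to t and compare against the coordinate vector with respect to s.

```python
# Transition matrix from t to s, as rows (columns are (2,1,1), (2,-1,1), (1,2,1)).
P = [[2, 2, 1],
     [1, -1, 2],
     [1, 1, 1]]
v_t = [1, 2, -2]   # coordinates of v = (4, -9, 5) with respect to t
v_s = [4, -5, 1]   # coordinates of v with respect to s

# Matrix-vector product: each entry is a row of P dotted with v_t.
product = [sum(P[i][j] * v_t[j] for j in range(3)) for i in range(3)]
print(product)  # → [4, -5, 1], matching v_s
```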

*So again, our equation is this. If I have some coordinate vector with respect to a basis t, and I want to find the coordinates with respect to another basis s, I multiply on the left with something called the transition matrix.*1333

*That will give me the coordinates with respect to s, and the columns of that transition matrix are the individual basis vectors for the basis t expressed as coordinate vectors with respect to the basis, s.*1353

*That is what this notation tells me. Okay.*1372

*Now, that is exactly what we did. We were given two bases, t and s, we took the basis, the vectors in the basis from t, we expressed them as coordinate vectors with respect to the basis s, and that which we got, we set up as columns in a matrix.*1379

*That matrix that we get is the transition matrix. That allows us to go from 1 basis to another, given one coordinate, or another.*1405

*Okay. Let us see. Let us continue with a theorem here.*1417

*s = v1, v2... vN. And, t = w1, w2... wN. Okay.*1431

*Let s and t be two bases for an n-dimensional vector space. Okay.*1456

*If p, from t to s, is the transition matrix... transition matrix from t to s... then, the inverse of that transition matrix from t to s is the transition matrix from s to t.*1471

*So, if I have 2 bases, and if I calculate a transition matrix from t to s, I can take that matrix, I can take the inverse of that matrix, and that is going to be the transition matrix from s to t.*1526

*So, I do not have to calculate it separately. I can if I want to, but really all I have to do is take the inverse of the matrix that I found.*1539

*That is the relationship between these two. Okay.*1546

*Also, the transition matrix which we found is non-singular.*1552

*Of course, invertible. Non-singular means invertible. Okay.*1566

*Okay. Let us see what we have got here.*1576

*So, let us continue with our previous example. Let us recall the conditions... we said that s is equal to (2,0,1), (1,2,0), (1,1,1).*1580

*And... t is equal to (6,3,3), (4,-1,3), and (5,5,2).*1605

*Okay. What we want to do is we want to compute the transition matrix from s to t, directly.*1618

*So, we can do it directly, and the other way we can do it is to take the inverse of the transition matrix from t to s that we already found. That is going to be the second part of this.*1632

*So, the first part a will be computed directly, and the second part, we want to show that this thing from s to t is actually equal to the inverse of the matrix from t to s.*1647

*Make sure you look at these very, very carefully to make sure you actually know which direction we are going in.*1661

*Well, in order to calculate it directly, we take the... so this, we are going from s to t, alright?*1666

*So, let us go with vectors in s... so we are going to write (6,3,3), so in other words, we are going to express... so this is from s to t.*1680

*So, we want to take the vectors in s and express them as a linear combination of these vectors.*1700

*These vectors are the ones that actually form the matrix over here... (6,3,3), (4,-1,3), (5,5,2), and we augment with the (2,0,1), (1,2,0), (1,1,1).*1706

*Again, we are going from s to t. We want to express the vectors in s as linear combinations of these. That is why s is on the augmented side, and t is over on this side. Okay?*1725

*These are three linear equations. This, this augment, that, that augment, this, this augment. Okay?*1738

*When we subject to reduce row echelon form, we end up with (1,0,0), (0,1,0), (0,0,1).*1749

*We end up with -- nope, we do not end up with stray lines -- we have (3/2,-1/2,-1,1/2,-1/2,0,-5/2,3/2,2)... let me make these as clear as possible.*1760

*Therefore, q, the transition matrix from s to t, is equal to (3/2,-1/2,-1,1/2,-1/2,0,-5/2,3/2,2).*1793

*Okay. So, that is the direct computation of q, from s to t.*1819

*Now, let us go back to blue. Now, let us calculate the inverse to show that that equals the inverse of that. Yes.*1827

*Okay. So, now, let us see. We want to take, in order to find... so we have... let us recall what the transition matrix from t to s was.*1844

*We had (2,1,1), (2,-1,1), (1,2,1), okay? That was our transition matrix.*1859

*Now, you recall, when you actually, in order to find the inverse of the matrix, you set up this system... (2,1,1), (2,-1,-1), (1,2,1).*1881

*Then, you put the identity matrix here, (0,1,0), (1,0,0)... yes.*1898

*Then, of course, if you do reduced row echelon form, this right side ends up being the inverse of that.*1910

*In this particular case, we do not need to do that. If we say that one thing is the inverse of another, all that I have to really do is multiply them and see if I end up with the identity matrix.*1915

*So, part b, in order to confirm that that is the case, all I have to do is I have to take the transition matrix q, from s to t, and I multiply it by the matrix from t to s, to see if I get the identity matrix.*1926

*While I do that, I have of course my (3/2,-1/2,-1,1/2,-1/2,0,-5/2,3/2,2).*1949

*Multiply that by our transition matrix (2,1,1), (2,-1,1), (1,2,1), and I can only hope I have not messed up my minus signs or anything like that.*1970

*As it turns out, when I do this, I get (1,0,0), (0,1,0), (0,0,1), which is the identity matrix - n-dimensional, the 3 by 3 identity matrix, so this is not n, this is 3.*1982

*That confirms that q, s to t = inverse of t to s.*2000

*When I find a transition matrix from t to s, if I want the transition matrix from s to t, all I do is take the inverse. That is what we have done here.*2014
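As a machine check of this theorem on the lesson's numbers (my own sketch, using exact fractions), multiplying q from s to t by the matrix from t to s should give the identity:

```python
from fractions import Fraction as F

P = [[2, 2, 1],     # transition matrix from t to s (rows)
     [1, -1, 2],
     [1, 1, 1]]
Q = [[F(3, 2), F(1, 2), F(-5, 2)],   # transition matrix from s to t (rows)
     [F(-1, 2), F(-1, 2), F(3, 2)],
     [F(-1), F(0), F(2)]]

# Matrix product Q x P, entry by entry.
QP = [[sum(Q[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(QP)  # → the 3 by 3 identity matrix
```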

*Thank you for joining us Educator.com to discuss transition matrices, we will see you next time.*2023

*Welcome back to Educator.com, welcome back to linear algebra.*0000

*In the previous lesson, we talked about transition matrices for a particular space, where we have 2 or more bases for that space.*0004

*There we said that one basis is as good as another. As it turns out that is true.*0015

*One basis is not necessarily better than another intrinsically, however as it turns out, there are certain bases... one particular basis in particular that is better computationally.*0019

*It just makes our life a lot easier. That is called an orthonormal basis.*0029

*So, that is what we are going to talk about today. We are going to introduce the concept, and then we will talk about how to take a given basis and turn it into an orthonormal basis by something called the Gram Schmidt orthonormalization process.*0034

*It can be a little bit computationally intensive, and notationally intensive, but there is nothing particularly strange about it.*0047

*It is all things you have seen. Essentially it is all just arithmetic, and a little bit of the dot product.*0055

*So, let us just jump in and see what we can do.*0059

*Okay. Let us talk about the standard basis in R2 and R3, the basis that we are accustomed to.*0063

*If I take R2, I can have a basis (1,0), that is one vector, and the other vector is (0,1)... okay?*0071

*Also known as i and j, also known as e1 and e2, again just different symbols for the same thing.*0084

*R3 is the same thing. R3... we have... (1,0,0), (0,1,0), and (0,0,1), as a basis.*0095

*Three vectors, three dimensional vector space, you also know them as i,j, k, and we have also referred to them as e1, e2, e3.*0108

*Now, what is interesting about these particular bases, notice, let us just deal with R3... vector 1 and vector 2 are orthogonal, meaning that their dot product is 0, perpendicular.*0122

*1 and 3 are orthogonal, 2 and 3 are orthogonal. Not only that, they are not just orthogonal, mutually orthogonal, but each of these has a length of 1.*0136

*So, this is what we call orthonormal, that the vectors are mutually orthogonal, and they have a length of 1.*0146

*As it turns out, this so called natural basis works out really, really, well computationally.*0154

*We want to find a procedure... is there a way where given some random basis, or several random bases, can we choose among them and turn that basis into something that is orthonormal, where all of the vectors have a length of 1 and all of the vectors are mutually orthogonal.*0160

*As it turns out, there is. The Gram Schmidt orthonormalization process. A beautiful process, and we will go through it in just a minute.*0175

*Let us just start off with some formal definitions first.*0183

*So, we have a set s, which is the vectors u1, u2... all the way to uN is orthogonal, if any two vectors in s are orthogonal.*0188

*What that means mathematically is that u_{i} · u_{j} = 0, so, for example, u_{1} · u_{3} = 0.*0222

*The dot product of those two vectors is equal to 0. That is the definition of orthogonality. Perpendicularity if you will.*0231

*Now, the set is orthonormal if each vector also has a length norm of 1.*0239

*Mathematically, that means u_{i} dotted with itself gives me 1.*0262

*Let us just do an example. Let us take u_{1} = the vector (1,0,2), u_{2} = the vector (-2,0,1), and u_{3} = the vector (0,1,0).*0269

*Okay. So, as it turns out, the set u1, u2, u3, well if I do the dot product of u1, u2... u1, u3... u2, u3, I get 0 for the dot product.*0293

*So, this set is orthogonal.*0311

*Now, let us calculate some norms. Well, the norm of u_{1} is equal to this squared + this squared + this squared under the radical sign... it is sqrt(5).*0316

*The norm for u2, that is our symbol for norm, is equal to this squared + this squared + this squared... also sqrt(5).*0334

*And... the norm for u3 is 1. So, this one is already a unit vector, these two are not.*0348

*So, since I have the norm, how do I create a vector that is length 1. I take the vector and I multiply it by the reciprocal of its norm, or I divide it by its norm, essentially.*0353

*So, we get the following. If I have the set u1/sqrt(5), u2/sqrt(5), and u3... this set is orthonormal.*0369
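As a quick numerical check of this example (a short Python sketch of my own, not part of the lesson): confirm the three vectors are mutually orthogonal, then divide each by its norm and confirm every result has length 1.

```python
from math import isclose, sqrt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Divide a vector by its norm to give it length 1."""
    n = sqrt(dot(v, v))
    return [x / n for x in v]

u1, u2, u3 = (1, 0, 2), (-2, 0, 1), (0, 1, 0)

# The set is orthogonal: every pairwise dot product is 0.
assert dot(u1, u2) == dot(u1, u3) == dot(u2, u3) == 0

# Dividing each vector by its norm makes the set orthonormal.
basis = [normalize(v) for v in (u1, u2, u3)]
for v in basis:
    assert isclose(dot(v, v), 1.0)   # each vector now has length 1
print("orthonormal")
```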

*Again, radical sign in the denominator, it is not a problem, it is perfectly good math. In fact, I think it is better math than simplification.*0388

*What they call rationalizing the denominator, or something like that... I like to see where my numbers are.*0399

*If there is a radical 5 in the denominator, I want it to be there. I do not want it to be u2 × sqrt(5)/5, that makes no sense to me, personally, but it is up to you though.*0405

*Okay. Quick little theorem here.*0416

*If s is an orthogonal set, or orthonormal, then s is linearly independent.*0428

*So, if I have a set that I do not know is a basis, I know this theorem says that they are linearly independent.*0452

*So, the particular space that they span, it is a basis for that space.*0460

*Okay. Let us go ahead and do this. Like we said before, using an orthonormal basis can actually improve the computation.*0466

*It just makes the computational effort a little bit less painful.*0476

*So, now if we have s, u1, u2, all the way to u_{n}...*0481

*If this set is a basis for some vector space v and u is some random vector in v, then we know that we can express u as a linear combination... c1u1 + c2u2 + ... + cNuN.*0498

*u's, v's, w's, all kinds of stuff going on. Okay, so what we did before was we just solved this linear system.*0528

*We found c1, c2, all the way to cN, you now, Gauss-Jordan elimination, reduced row echelon form.*0536

*Well, here is what is interesting. If s is orthonormal, well still if it is orthonormal, when it is orthogonal it is still a basis so you still get this property.*0545

* You know -- u is still that, but there is a really, really simple way to find the c_{i} without solving the linear system.*0562

*What you end up with is the following. Each c_{i} is equal to the vector u dotted with u_{i}.*0570

*For example, if I wanted to find the second coefficient, I would just take the vector u and I would dot it with the second vector in the basis.*0582

*That is fantastic. It is just a simple dot product. For vectors that are -- you know -- maybe 2-space, 3-space, 4-space, maybe even 5-space, there is no system to solve, you do not have to worry about it.*0592

*You can just do the multiplication and the addition in your head. The dot product is really, really easy to find.*0600

*So, let us do an example of this. We will let s equal... okay... it is going to be a little strange because we are going to have some fractions here... *0608

*(2/3, -2/3, 1/3), (2/3, 1/3, -2/3), (1/3, 2/3, 2/3), so this is our set s.*0625

*Okay, well, we want to be able to write some random vector v, which is equal to let us say (3,4,5) as a linear combo of the vectors in s.*0650

*So, we want c1, let us call it v1, the vectors in s, let us just call them v1, v2, v3... + c2v2 + c3v3.*0675

*So, we want to find c1, c2, c3. Well, as it turns out that basis, even though it looks a little strange, it actually ends up being orthonormal.*0691

*The length of each of those vectors is 1, and the mutual dot product of each of those is 0.*0700

*So, it is an orthonormal set. Therefore, c1 = v · v1.*0708

*Well, v is just 3, 4, 5, okay? I am going to dot it with v1, which was 2/3, -2/3, 1/3, and I get 1.*0718

*When I do c2, well that is just equal to v · v2. When I do that, I get 0.*0737

*If I do c3, that is just v, again, 3, 4, 5, dotted with v3, which was v3 up there in s, and this ends up being 7.*0746

*Therefore, v is equal to, well, 1 × v1 + 0 × v2 + 7v3.*0759

*There we go. I did not have to solve the linear system. I just did simple dot product. Simple multiplication and addition. Very, very nice.*0771
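Here is that shortcut as a small Python sketch (my addition): with an orthonormal basis, each coefficient really is just a single dot product.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The orthonormal basis s from the example (entries are thirds).
v1 = (2/3, -2/3, 1/3)
v2 = (2/3, 1/3, -2/3)
v3 = (1/3, 2/3, 2/3)
v = (3, 4, 5)

# For an orthonormal basis, each coefficient is c_i = v . v_i.
c1, c2, c3 = dot(v, v1), dot(v, v2), dot(v, v3)
print(round(c1), round(c2), round(c3))  # → 1 0 7
```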

*One of the benefits of having an orthonormal basis. There are many benefits, trust me on this one.*0781

*Okay. So, now let us come down to the procedure of actually constructing our orthonormal basis.*0786

*So we are going to go through this very, very carefully. There is going to be a lot of notation, but again, the notation is not strange. You just have to make sure to know what is where.*0792

*The indices are going to be very, very important... so, take a look at it here, take some time to actually stare at the Gram Schmidt orthogonalization process in your textbook.*0802

*That is really the best way to sort of wrap your mind around it. Of course, doing examples, which we will do when you do problems, but just staring at something is really a fantastic way to understand what it is that is going on.*0815

*In mathematics, it is the details, the indices, the order of things. Okay.*0826

*Let us see. So, let us write it out as a theorem first.*0835

*You know, let me... let me go to a black ink... theorem...*0841

*Let w be a non-zero subspace. So again, we can speak of a basis for a subspace or for the entire space, it does not really matter.*0857

*In this case we are just going to express this theorem as a subspace.*0870

*Again, the whole space itself is a subspace of itself, so this theorem is perfectly valid in this case.*0875

*... be a subspace of RN with basis s = u1, u2... uN. N vectors, N space.*0883

*Then, there exists an orthonormal basis t, which is equal to... let us call it w1, w2, all the way to wN for w.*0906

*So, this theorem says that if I am given a basis for this subspace or a space itself, I can find... there exists an orthonormal basis.*0934

*Well, not only does one exist, as it turns out, this particular procedure constructs it for us. So, the proof of this theorem is the construction of that orthonormal basis itself.*0945

*That is the Gram Schmidt orthonormalization process. They call it the orthogonalization process, which is really what we are doing.*0955

*We are finding orthogonal vectors, but we know how to change vectors that are orthogonal to vectors that are orthonormal.*0961

*We just divide by their norms... nice and simple. Okay.*0967

*So, let us list the procedure so that we have some protocol to follow.*0973

*Procedure... okay... procedure for constructing an orthonormal basis t, which we said is going to be w1, w2... all the way to wN.*0979

*Constructing an orthonormal basis t... from basis s = u1, u2, all the way to uN.*1014

*I am going to change something here. I am going to not use w, I think I am going to use v instead.*1038

*I used u for s, so I am going to go back and choose... let me call them v so that we stay reasonably consistent... to vN.*1047

*Okay. So, we are given a basis s, we want to construct an orthonormal basis t from s, here is how we do it.*1058

*First things first. We let v1, the first vector in our basis t, our orthonormal, we let it equal u1.*1068

*We just take it straight from there. The first vector is the first vector.*1076

*Okay. Two. This is where the notation gets kind of intensive. *1082

*v_{i} = u_{i} - (u_{i} · v_{1})/(v_{1} · v_{1}) × the vector v_{1} - (u_{i} · v_{2})/(v_{2} · v_{2}) × the vector v_{2}... and so on.*1086

*Until we get to (u_{i} · v_{i-1})/(v_{i-1} · v_{i-1}) × v_{i-1}. Okay.*1134

*Do not worry about this, it will... when we do the example, this will make sense. Again, this is just mathematical formalism to make sure that everything is complete.*1158

*When we do the example, you will see what all of these i's and i-1's and v's mean.*1165

*Three. t, when we have collected our v_{1}, v_{2}, and so on from the first two steps... is orthogonal.*1173

*We have created an orthogonal set.*1194

*Now, we want to take -- that is fine, we will go ahead and we will take every v_{i}, and we are going to divide it by its norm.*1199

*So, for each of these v_{1}, v_{2}, in this set which is orthogonal, we are going to divide each of these vectors by the norm of that vector.*1231

*Then, of course, what you get is the final set t, which is v_{1}/norm(v_{1}), v_{2}/norm(v_{2}), and so on and so forth, all the way to v_{n}, not v_{i}... v_{n}/norm(v_{n}).*1242
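Steps one and two of the procedure can be sketched in a few lines of Python (my own illustration, not part of the lesson; normalization is left as the separate final step, just as in the procedure):

```python
from fractions import Fraction

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def orthogonalize(us):
    """Steps 1 and 2: turn a basis u1...un into an orthogonal set v1...vn."""
    vs = []
    for u in us:
        u = [Fraction(x) for x in u]
        # Subtract the projection of u onto each v found so far:
        # u - (u . v)/(v . v) * v, for v = v1, ..., v_{i-1}.
        for v in vs:
            coeff = dot(u, v) / dot(v, v)
            u = [ui - coeff * vi for ui, vi in zip(u, v)]
        vs.append(u)
    return vs

# The basis used in the example that follows.
vs = orthogonalize([(1, 1, 1), (-1, 0, -1), (-1, 2, 3)])
print(vs)  # → (1,1,1), (-1/3,2/3,-1/3), (-2,0,2) as exact fractions
```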

*This set is orthonormal. Let us just do an example and it will all make sense.*1287

*So, let us start here. Let us do our example in blue.*1299

*We will let s = u1, u2, u3 = (1, 1, 1), (-1, 0, -1), (-1, 2, 3).*1308

*This is our set. We want to transform s into an orthonormal basis, for R3. This is a basis for R3.*1342

*These are linearly independent. They span R3. Three vectors. We want to change this into an orthonormal basis.*1353

*We want each of the vectors in our basis to be orthogonal, mutually orthogonal, and we want them to have a length of 1. So, we will run through our procedure.*1361

*Okay. First thing we do, let v1 = u1. So the first thing I am going to do is I am going to choose my vector (1,1,1).*1371

*That is my first vector in my orthogonal set. Nice, we got that out of the way. Okay.*1380

*Two. Go back to the previous slide and check to see that number two thing with all of the indices going on. Here is what it looks like based on this number of vectors.*1387

*v_{2} = u_{2} - (u_{2} · v_{1}), which is this thing, /(v_{1} · v_{1}) × the vector v_{1}.*1400

*That is equal to... well, u_{2} is (-1, 0, -1).*1430

*Now, when I take u_{2}, which is (-1, 0, -1), and I dot it with v_{1}, which is (1, 1, 1), I get -2.*1440

*When I take v_{1} dotted with v_{1}, I get three. So it is -2/3 × v_{1}, which is the (1,1,1).*1453

*When I do that, I get -1/3, 2/3, -1/3... okay.*1471

*The next one, v3... I have v_{1}, I have v_{2}, which is here. I need v_{3}, right? Because I need 3 vectors.*1489

*So, v_{3} = well, it is equal to -- go back to that number 2 in our procedure -- it is equal to u_{3} - (u_{3} · v_{1})/(v_{1} · v_{1}) × v_{1} - (u_{3} · v_{2})/(v_{2} · v_{2}), all × v_{2}.*1498

*Well, if you remember that last entry in that number 2, it said v_{i-1}.*1543

*Well, since we are calculating v_{3}, 3 - 1 is 2, so that is it. We stop here. We do not have to go any more.*1547

*That is what that symbolism meant, it tells us how many of these we get.*1557

*If we are calculating v_{4}, well, 4 - 1 is... so that is v_{i}, i is 4. It is 4 - 1, that means we go all the way up to this last entry, which is 3. So we would have three of these.*1563

*That is all this means. That is all that symbolism meant. Just follow the symbolism, and everything will be just fine.*1578

*Okay. This actually ends up equaling... well, u_{3} is (-1,2,3).*1584

*When I do u_{3} · v_{1}, which is (-1,2,3) · (1,1,1), over v_{1} · v_{1}, which is (1,1,1) · (1,1,1), I am going to end up with 4/3 × (1,1,1)... and then minus, when I do (u_{3} · v_{2})/(v_{2} · v_{2}).*1597

*I am going to get 2/6 × v_{2}, with v_{2} taken as (-1, 2, -1)... okay?*1627

*I am going to end up with (-2,0,2). So, let me go to red.*1644

* (1,1,1), that... and that... so let us write out what we have got.*1657

*Our t, our orthogonal set, is this set... interesting... (1, 1, 1), (-1/3, 2/3, -1/3), and (-2, 0, 2).*1678

*This set is orthogonal. Now, let us take a look at v_{2} real quickly here.*1706

*v_{2} = (-1/3, 2/3, -1/3)... well, let me pull out the 1/3... that is equal to 1/3 × (-1, 2, -1).*1721

*These vectors, if I just take the vector (-1, 2, -1), and if I take the vector 1/3 × that, which is this vector, they are vectors going in the same direction.*1741

*They are just different lengths of each other. So, because they are vectors going in the same direction, I do not need the fractional version of it. I can just drop the denominator from that, because again, we are going to be normalizing this.*1754

*We are going to be reducing it to a length of 1, so it does not matter whether I take this vector or this vector.*1766

*They are just different vectors in the same direction. Does that make sense?*1772

*So, I can rewrite my t, my orthogonal set, as (1,1,1), (-1,2,-1), and (-2,0,2).*1776

*They are still orthogonal. This, the dot product of this and this is going to be 0. They dot product of this and this is going to be 0, it does not matter. They are in the same direction.*1795

*They are just different lengths, we are going to be normalizing it anyway. So, we want to make life easier for ourselves, so that is our orthogonal set.*1806

*Now, what we want to do is calculate the norms.*1820

*So, the norm of v_{1} = sqrt(3). The norm of v_{2} = sqrt(6), and the norm of v_{3}, right here, is equal to sqrt(8).*1827

*Therefore, our final orthonormal... we have 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)... -1/sqrt(6), 2/sqrt(6), -1/sqrt(6) and... -2/sqrt(8), 0, 2/sqrt(8)...*1855

*Again, I may have missed some minus signs or plus signs... it is procedure that is important.*1897

*This is orthonormal. In this case, the length matters. In the previous, they were orthogonal and so we dropped the denominator from the vectors that we found because again, they are vectors in the same direction.*1907

*The same direction, they are still going to be orthogonal. So, I can make it easy for myself by not having to deal with fractions. But, in this case, we are normalizing it. We are taking that orthogonal set and we are dividing each of those vectors by its own norm to create vectors.*1925

*Each of these has a length of 1. Any two of these are mutually orthogonal. Their dot product equals 0. This is an orthonormal basis for R3.*1940
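If you want to double-check the final answer numerically, here is a short Python sketch (my addition) confirming that the three vectors really are unit length and mutually orthogonal:

```python
from math import isclose, sqrt

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The orthonormal basis produced by the example.
w1 = [1/sqrt(3), 1/sqrt(3), 1/sqrt(3)]
w2 = [-1/sqrt(6), 2/sqrt(6), -1/sqrt(6)]
w3 = [-2/sqrt(8), 0.0, 2/sqrt(8)]

# Each vector has length 1...
for w in (w1, w2, w3):
    assert isclose(dot(w, w), 1.0)
# ...and any two are orthogonal.
assert isclose(dot(w1, w2), 0.0, abs_tol=1e-12)
assert isclose(dot(w1, w3), 0.0, abs_tol=1e-12)
assert isclose(dot(w2, w3), 0.0, abs_tol=1e-12)
print("orthonormal basis confirmed")
```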

*It is just as good of a basis as our standard basis, the i, j, k. Computations are easy, and who knows, there might be some problem where this basis is actually the best basis to work with.*1953

*Again, it just depends on frames of reference.*1964

*Okay. Thank you for joining us at Educator.com.*1968

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to be talking about orthogonal complements.*0004

*So, rather than doing a preamble discussion of what it is, let us just jump into some definitions and it should make sense once we actually set it out in a definition form.*0008

*Okay. So, let us start with our definition. It is a little long, but nothing strange.*0019

*Let w be a subspace of RN, so we are definitely talking about N-space here. Okay.*0032

*A vector u which is a member of RN is said to be orthogonal to w, so notice orthogonal to w as a subspace. *0049

*Orthogonal to an entire subspace, if it is orthogonal to every vector in that subspace.*0070

*Okay. The set of all vectors in RN that are orthogonal to all vectors in w is called the orthogonal complement of w.*0094

*And... is symbolized by w with a little perpendicular mark on the top, and they call it w perp.*0142

*The top right. Okay. So, let us look at this definition again.*0157

*So, w is a subspace of RN, okay? So it could be dimension 1, 2, 3, all the way up to N, because RN is a subspace of itself.*0161

*A vector u in RN is said to be orthogonal to that subspace if its orthogonal to every vector in that subspace.*0170

*So, the set of all vectors that are orthogonal, we call it the orthogonal complement of w.*0180

*It is symbolized by that w with a little perpendicular mark on the top right, called w perp.*0187

*Let us give a little bit of a picture so that we see what it is we are looking at.*0194

*So, let us just deal in R3, and let us say that, so let me draw a plane.*0199

*As you know, a plane is 2-dimensional so it is in R3, and then let me just draw some random vectors in this plane. Something like that.*0205

*Well, if I have some vector like that, which is perpendicular to this plane, so this plane... let us call that w.*0214

*So, that is some subspace of R3, and again, let me make sure that I write it down... so we are dealing with R3.*0226

*This two dimensional plane is a subspace of R3, and every single vector in there is of course... well, it is a vector in the plane.*0231

*Then, if I take this vector here, well every single vector that is perpendicular to it is going to be parallel to this vector, right?*0241

*So, when we speak about parallel vectors, we really only speak about 1 vector.*0248

*So, as it turns out, if this is w, this vector right here and all of the scalar multiples of it, like shortened versions of it, long versions of it, this is your w perp.*0253

*Because, this vector, any vector in here, is going to end up being perpendicular to every one of these vectors. This is the orthogonal complement of that.*0268
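This picture is easy to check numerically. A minimal sketch with NumPy, using a hypothetical plane in R3 spanned by two vectors of my own choosing (not from the lecture); the cross product gives a vector spanning its orthogonal complement:

```python
import numpy as np

# a hypothetical 2-dimensional subspace w of R3, spanned by two vectors
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, -1.0])

# the cross product is perpendicular to both a and b, so it spans w perp
n = np.cross(a, b)

# n is orthogonal to every vector in w, i.e. to every combination c1*a + c2*b
print(np.dot(n, a), np.dot(n, b))   # 0.0 0.0
print(np.dot(n, 3 * a - 5 * b))     # 0.0
```

Any scalar multiple of n is likewise orthogonal to the whole plane, which is exactly the "all scalar multiples" description of w perp above.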

*So, it helps to use this picture working in R3, and working with either dimension 1 or 2, because we can actually picture it.*0279

*For something like R4 or R5, I mean I can go ahead and tell you what it is that you will be dealing with.*0287

*So let us say in R4 you have a subspace that is 2-dimensional, that is some kind of plane so to speak in R4.*0294

*Well, the orthogonal complement of that is going to be every vector that is perpendicular to that subspace, and as it turns out, it is also going to be 2-dimensional.*0302

*The idea is we have this subspace and we have a bunch of vectors that are orthogonal to every vector in that subspace.*0314

*The set of all of those vectors that are orthogonal are called the orthogonal complement. That is all that it means.*0324

*Okay. Let us actually do a little bit of an example here.*0331

*So, let us say... well actually, you know what, let us just jump into a theorem and we will get into an example in a minute.*0338

*So, theorem... let w be a subspace of RN... okay.*0348

*Then, w perp is a subspace of RN.*0371

*So, if w is a subspace, w perp, its orthogonal complement, is also a subspace.*0383

*We do not have to go through that procedure of checking whether it is a subspace.*0388

*And... it is interesting... that the intersection of w and w perp is the 0-vector.*0393

*So, again, they are subspaces so they have to include the 0 vector, both of them, but that is the only thing common between the two subspaces of w and w perp, its orthogonal complement. *0402

*The only thing they have in common. They actually pass through the origin.*0413

*Okay. So, now let us do our example.*0419

*Let us see. We will let w be a subspace of, this time we will work in R4, with basis w1, w2.*0425

*So, w1, w2, these two vectors form a basis for our subspace w.*0447

*And... w1 is going to be 1, 1, 0, 1, and I have just written this vector in horizontal form, without the commas... it does not really matter.*0453

*w2 is going to be the vector 0, -1, 1, 1.*0466

*So, you have 2 vectors, they form a basis for the subspace w in R4.*0474

*Now, our task is find a basis for the orthogonal complement, w perp. Find a basis for the subspace of all of the vectors that are orthogonal to all of the vectors in w, that has these 2 vectors as a basis.*0479

*Okay, well, so, let us just take some random... okay... so we will let u, let us choose u equal to some random vector in R4.*0498

*a, b, c, d, we want to be as general as possible... a, b, c, d.*0512

*Well, so we are looking for the following. We want... actually, let me see, let this be -- I am sorry -- let this be a random vector in the orthogonal complement.*0518

*Okay. So, we are just going to look for some random vector, see if we can find values for a, b, c, d.*0532

*We are going to take a vector in the orthogonal complement, and we know that this is going to be true. *0538

*We know that because w perp and w are orthogonal to each other, we know that u · w1 = 0.*0543

*We know that u · w2... let me make this dot a little more clear, we do not want that.... is equal to 0, right?*0555

*So, because they are orthogonal complements, we know that they are orthogonal, which means that their dot product is equal to 0.*0570

*Well, these are just a couple of equations, so let us actually do this.*0578

*So, if I do u · w1, I get a + b + 0 + d = 0.*0582

*Then, if I do u · w2, I get 0 - b + c + d = 0.*0595

*When we solve this using the techniques that we have at our disposal... I am going to go ahead and do it over here.*0608

*So, this is just a homogeneous system, you set up the coefficient matrix, reduced row echelon form, the columns that have... that do not have a leading entry, those are your free variables... r, s, t, u, v, whatever you want.*0618

*Then you solve for the other variables that do have leading entries.*0629

*When you do this, you end up with the following. You get a, b, c, and d... the vector takes on the form r × (-1, 1, 1, 0) + s × (-2, 1, 0, 1).*0633

*So, those two vectors form a basis for the orthogonal complement w perp.*0655

*Therefore, we will set it up as -- set notation, let me just write it out again -- (-1, 1, 1, 0)... comma, (-2, 1, 0, 1)... is a basis for w perp.*0663
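As a quick check of this answer (not part of the lecture), we can verify with NumPy that each basis vector of w perp really is orthogonal to both w1 and w2:

```python
import numpy as np

# the basis of the subspace w in R4
w1 = np.array([1, 1, 0, 1])
w2 = np.array([0, -1, 1, 1])

# the two basis vectors found for the orthogonal complement w perp
u1 = np.array([-1, 1, 1, 0])
u2 = np.array([-2, 1, 0, 1])

# every dot product should come out 0
for u in (u1, u2):
    print(np.dot(u, w1), np.dot(u, w2))   # 0 0 each time
```

Since u1 and u2 are orthogonal to a basis of w, they are orthogonal to every linear combination of w1 and w2, hence to all of w.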

*So, that is it. We started with a basis of two vectors in R4.*0682

*Then, just by virtue of the fact that we know that the orthogonal complement is going to be orthogonal to every single vector in this, so it is certainly going to be orthogonal to these two... I pick a random vector in this orthogonal complement.*0689

*I write my equation... orthogonal just means the dot product equals 0, get a homogeneous system.*0702

*I solve the homogeneous system and I set it up a way where I can basically read off the basis for my solution space of this homogeneous system, which is the basis for, in this particular case, based on this problem, the orthogonal complement.*0708

*Nice, straightforward, nothing we have not done. We have seen dot products, we have seen homogeneous systems, we have seen bases... nothing here is new.*0725

*Now we are just applying it to this new idea of 2 subspaces being orthogonal to each other. Being perpendicular to each other.*0733

*Of course, perpendicularity, as you know from your geometric intuition, only makes sense in R2 and R3, which is why we do not use the word perpendicular... we use the word orthogonal, but it is the same thing in some sense.*0742

*So, you might have a 6-dimensional subspace being orthogonal to a 3-dimensional subspace "whatever that means".*0750

*Well, geometrically, pictorially, we do not know what that means. We cannot actually picture that. We have no way of representing it geometrically.*0760

*But, we know what it means algebraically. The dot product of two vectors in those spaces is equal to 0.*0767

*Okay. One of the things that I would like you to notice when we had R4, you notice that our w had dimension 2.*0774

*Its basis had 2 vectors, dimension 2... and you noticed when we had w perp, the orthogonal complement, we ended up with 2 vectors as a basis, also in dimension 2.*0782

*Notice that the dimension of the subspace w + the dimension of its orthogonal complement added up to 4, the actual dimension of the space. That is not a coincidence.*0793

*So, let us write down a theorem... Let w be a subspace of RN.*0808

*Then RN, the actual space itself, is made up of w direct sum w perp. So, let me talk about this thing.*0826

*This little plus sign with a circle around it, it is called a direct sum, and I will speak about it in just a minute.*0836

*Okay. Essentially what this means is... we will have to speak a little bit more about it, but one of the things that it means is that w intersect w perp, the only thing they have in common is like we said before... the 0 vector.*0846

*These are both subspaces, so they have to have at least the 0 vector in common. They do not share anything else in common.*0860

*Okay. Yet another theorem, and I will talk about the sum in just a moment, but going back to the problem that we just did, this basically says that if I take the subspace w and its orthogonal complement, and if I somehow combine them -- which we will talk about it in a minute -- we will actually end up getting this space itself, the 4-dimensional space.*0870

*So if I had a 6-dimensional space and I know that I am dealing with a subspace of 2-dimensions, w, I know that the orthogonal complement is going to have dimension 4 because 2 + 4 has to equal 6, or 6 - 2 = 4, however you want to look at it. *0890

*Okay. Let us do another theorem here. Just a little bit of an informational theorem, which will make sense.*0906

*If w is a subspace of RN, then (w perp) perp = w.*0915

*This just says that if you take the orthogonal complement of some subspace and you take the orthogonal complement of that, you are going to end up getting the original subspace.*0932

*Nothing new about that, I mean like a function... if you take the inverse of a function and then you take the inverse of the inverse, you get the function back. That is all it is. Very, very intuitive.*0940

*Okay. Now, let us discuss this symbol some more. This + symbol.*0950

*So, when we write... this direct sum symbol -- I am sorry -- when we write w + w perp, these are subspaces, okay? *0956

*We do not... this is a symbol for the addition of subspaces, we are not actually doing the operation of addition.*0970

*What this means... so, this symbolizes the addition of subspaces. This whole thing is a space.*0976

*What this means is that something... it means that if I have some w1 -- no, let me make it a little bit more general, there are going to be too many w's floating around.*0987

*So, if I have a, this direct sum symbol, plus b, okay?*1003

*It is the space made up of vectors v, such that v is equal to some a + b, where the vector a comes from the space a and the vector b comes from the space b.*1009

*So, this symbol, this direct sum symbol... it means if I take some vector in the subspace a... and a vector in the subspace b, and I actually add those vectors like I normally would, I am going to get some vector.*1040

*Well, that vector belongs to this space. When I see this symbol, I am talking about a space. In some sense what I have done is I have taken 1 whole space and I have attached another space right to it.*1055

*In the case of the example that we did, we had a 2-dimensional subspace, we added a 2-dimensional orthogonal complement to it, and what I got was the entire space R4.*1070

*That is what is happening here. That is what this direct sum symbol means. It symbolizes the addition of spaces, the putting together of spaces.*1078

*But these are spaces, spaces that contain individual vectors.*1088

*Okay. Let us see. Let us do a little bit further here. Let us take R4, expand upon this...*1097

*Let us let w = ... well not equal, let us say it has a basis.*1112

*We will let w have the basis vectors (1,0,0,0) and (0,1,0,0).*1123

*So, let us say that w is the subspace that has these 2 vectors as a basis.*1136

*So, it is a 2-dimensional subspace, and we will let w perp have basis (0,0,1,0)... (0,0,0,1)... okay, as a basis.*1141

*Now, if I take w, the direct sum w perp, well, that is equal to R4... right? *1165

*So, a vector in R4... let us say for example... which is let us just say some random vector (4,3,-2,6), which is a vector in R4, it can be written as... well, it can be written as a vector from this subspace + a vector from this subspace.*1180

*Just like what we defined, that is what a direct sum means. This w + the w perp, means take a vector from here, add it to a vector from here, and you have a vector in the sum, which happens to be R4.*1207

*We can write it as (4,3,0,0)... this vector right here is in the space w.*1220

*We can add it to the vector (0,0,-2,6), which is a vector in w perp.*1231

*What is nice about this representation, this direct sum representation is that -- let us see -- this representation is unique.*1240

*So, when I write a particular vector as a direct sum of 2 individual subspaces, the way that I write it is unique. There is no other way of writing it.*1262
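For this particular pair of coordinate-axis bases, the unique decomposition just splits the coordinates; a small sketch (my own check, not from the lecture):

```python
import numpy as np

v = np.array([4, 3, -2, 6])

# with basis {e1, e2} for w and {e3, e4} for w perp,
# the unique direct-sum decomposition keeps the first two
# components in w and the last two in w perp
w_part = np.array([v[0], v[1], 0, 0])   # (4, 3, 0, 0), a vector in w
u_part = np.array([0, 0, v[2], v[3]])   # (0, 0, -2, 6), a vector in w perp

print(w_part + u_part)   # [ 4  3 -2  6]
```

Uniqueness is visible here: any other split of (4, 3, -2, 6) into a vector from w plus a vector from w perp would have to agree component by component with this one.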

*Okay. So that gives us a nice basic idea of orthogonal complements to work with.*1273

*We will continue on next time with more about orthogonal complements.*1277

*Thank you for joining us at Educator.com and we will see you for the next instalment of linear algebra. Take care.*1280

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*In our last lesson, we introduced the notion of an orthogonal complement, and this time we are going to continue talking about orthogonal complements.*0004

*We are going to be talking about these 4 fundamental subspaces that are actually associated with any random matrix, and we are going to talk about the relationships that exist between these spaces.*0013

*Then we are going to talk about something called a projection. The projection is a profoundly, profoundly important concept.*0023

*It shows up in almost every area of physics and engineering and mathematics in ways that you would not believe.*0033

*As it turns out, those of you who are engineers and physicists... one of the tools in your tool box that is going to be almost the primary tool for many years to come is going to be the idea of something called Fourier series.*0042

*If you have not been introduced to it yet, you will more than likely be introduced to it sometime this year... and Fourier series actually is an application of projection.*0052

*Essentially what you are doing is you are taking a function and what you are doing is you are projecting that function onto -- how shall I say this -- you are projecting it onto an infinite dimensional vector space on the individual axes which are the trigonometric functions.*0061

*Let us say, for example, I have a function 5x. I can actually project that function onto the cos(x) axis, onto the sin(x) axis, onto the cos(2x) axis, onto the sin(2x) axis, so on and so forth.*0080

*I can actually represent that function in terms of sine and cosine functions.*0096

*Now, you will learn it algebraically, but really what you are doing is you are actually doing a projection.*0100

*You are projecting a function onto other functions, and it is really quite extraordinary.*0105

*When you see it that way, the entire theory of Fourier series becomes open to you, and more than that, the entire theory of orthogonal polynomials becomes open to you. *0111

*That is going to connect to our topic that we discuss in the next lesson, which is Eigenvectors and Eigenvalues.*0120

*So, linear algebra really brings together all areas of mathematics. Very, very central. Okay. Let us get started.*0126

*Let us see. So, let us go ahead and talk about our four fundamental vector spaces associated with a matrix a.*0135

*So, if a is an m by n matrix, then there are 4 spaces associated with a.*0143

*You actually know of all of these spaces. We have talked about them individually, now we are going to bring them together.*0173

*One is the null space of a, which is, if you remember, is the solution space for the equation ax = 0.*0178

*Let me put that in parentheses here. It is the solution space, the set of all vectors x, such that the matrix a × x = 0. Just a homogeneous system.*0189

*Two, we have something called the row space of a. Well, if you remember, for the row space of a, if I take the rows of a, just some random m by n matrix, they actually form a series of vectors... m vectors in RN.*0200

*The space that is spanned by those vectors, that is the row space.*0221

*Then we have the null space of a transpose. So, if I take a and just flip it along its main diagonal and then I solve this equation for this set of vectors, x such that a transpose × x = 0, I get its null space.*0229

*It is also a subspace, and the row space is a subspace. All of these are subspaces.*0246

*Oops -- it would be nice if I could actually count properly... 1, 2, 3.*0251

*Now, our fourth space is going to be, well, you can imagine... it is going to be the column space. *0257

*Again, the column space if I take the individual columns of the matrix, they are vectors in RM, and they form a space... the span of those vectors form a space.*0264

*Now, they do not all have to be linearly independent. I can take those vectors, remember from an old discussion and I can find a basis... so the basis might be fewer vectors but they still span the same space.*0278

*Okay. So, let us start with a theorem here, which is an incredibly beautiful theorem.*0290

*As you can figure it out, linear algebra is full of unbelievably beautiful theorems... beautiful and very, very practical.*0299

*If a is m by n, then, this is kind of extraordinary, the null space of a is the orthogonal complement of the row space of a.*0306

*That is kind of amazing. Think about what that means for a second.*0337

*If I just have this rectangular array of numbers, 5 by 6 and I just throw some numbers in there... when I solve the equation ax = 0, the homogeneous system associated with that matrix, I am going to get a subspace, the null space.*0344

*As it turns out, if I take that matrix and I turn it into reduced row echelon form, the non-zero rows form a basis for the row space. That is how we found the basis for the row space.*0359

*Those two subspaces, they are orthogonal to each other. That is extraordinary. There is no reason to believe why that should be the case, and yet there it is.*0370

*Part b: the null space of a transpose is the orthogonal complement of the column space.*0380

*It is the orthogonal complement of the column space of a.*0399

*So, that is the relationship. The null space of a given matrix a, its null space and its column space are orthogonal complements. *0407

*If I take the transpose of a, the null space of the transpose and the column space of the original a, which ends up being the row space of a transpose because I have transposed it... those two are orthogonal complements.*0416

*Let us do an example and see if this... if we can make sense of some of it just by seeing some numbers here.*0433

*So, let us go... let us let a equal, it is going to be a big matrix here, and again we do not worry about big matrices because we have our math software.*0441

*1, -2, 1, 0, 2... now I am not going to go through all of the steps.*0450

*You know, this reduced row echelon, solving homogeneous systems, all of this stuff, I am going to give you the final results.*0457

*At this point, I would like to think that you are reasonably comfortable either with the computational procedure manually, or you are using mathematical software yourself. I just do this in math software, myself.*0463

*1, -1, 4, 1, 3... -1, 3, 2, 1, -1... 2, -3, 5, 1, 5... Okay.*0473

*So, this is our matrix a. Our task is to find the four fundamental spaces associated with this matrix and confirm the theorem.*0488

*So, this random rectangular array of numbers, something really, really amazing emerges from this. There are spaces that are deeply, deeply interconnected.*0516

*So, let us see what happens here. Okay. When we take a, so the first thing we want to do is we want to find the row space of a.*0527

*Let us go ahead and do that. So, row space of a.*0536

*When I take a, and I reduce it to reduced row echelon form, I get the following matrix, 1, 0, 7, 2, 4, 0, 1, 3, 1, 1... and I get 0's in the other 2 rows.*0540

*Well, what that means, basically what that tells me is that my row space, if I take these -- let me go red -- if I take that vector and that vector, they form a basis for the row space.*0560

*So, my row space... basis for the row space and I am going to write these as column vectors, is 1, 0, 7, 2, 4... and 0, 1, 3, 1, 1...*0577

*So, the dimension is 2. There you go. Also, I have my row space is 2-dimensional. Okay.*0600

*Now, I need to find the null space. Well, I can almost guess this. The theorem says that the row space and the null space are going to be orthogonal complements.*0610

*Well, I know that the direct sum of some subspace and its orthogonal complement gives me the actual space itself.*0620

*In this case, I am talking about 1, 2, 3, 4, 5... I am talking about R5.*0630

*Well, if I already have a dimension 2, I know that the dimension of my orthogonal complement is going to be 3 and so I am hoping that when I actually do the calculation I end up with 3 vectors.*0637

*Let us see what happens. The null space, well the null space is the set of all vectors x such that a(x) = 0.*0648

*I solve a homogeneous system and I get my basis... I am not going to actually show this one.*0662

*So, my basis for null of a... I tend to symbolize it like that... is equal... set notation... I have the vector (-7, -3, 1, 0, 0).*0669

*I end up with (-2, -1, 0, 1, 0) and the presumption here is that you are comfortable doing this, the reduced row echelon, solving the homogeneous system, putting it into a form that you can actually read off your vectors.*0686

*(-4, -1, 0, 0, 1). Well, there you go. We have a dimension equals 3.*0703

*So, the dimension 2 + the dimension 3 = 5. The row space was a series of vectors in R5, so our dimensions match.*0711

*Now the question is, I need to basically check that this is the case... I need to check that each of these vectors is orthogonal to the 2 vectors that I found.*0721

*As it turns out, they are orthogonal. When you actually take the dot product of each of these with each of the other ones, you are going to end up with 0.*0731

*So, this confirms our theorem. The first part of the theorem. The row space of a and the null space of a.*0739
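We can confirm this first part numerically with the lecture's matrix. A sketch that computes the null space via an SVD (the right singular vectors belonging to zero singular values span null(A)) and checks orthogonality against every row:

```python
import numpy as np

A = np.array([[ 1, -2, 1, 0,  2],
              [ 1, -1, 4, 1,  3],
              [-1,  3, 2, 1, -1],
              [ 2, -3, 5, 1,  5]], dtype=float)

# right singular vectors with (numerically) zero singular values span null(A)
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:]                    # rows of N form a basis for null(A)

print(rank, N.shape[0])          # 2 3  -> dimensions add up to 5
print(np.allclose(A @ N.T, 0))   # True: every null vector is orthogonal
                                 # to every row, hence to the row space
```

The row space has dimension 2 and the null space has dimension 3, so 2 + 3 = 5, matching R5 as the theorem requires.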

*Okay. So, now let us take our column space. So, we are going to take a transpose.*0748

*Now, let me actually write out a transpose, I would like you to see it. It is (1, 1, -1, 2)... this is going to be R4, okay?*0760

*Now, the column space of a... what I have done here is I have actually transposed a. I have turned the rows into columns and the columns into rows.*0776

*So, now the columns of a are written as rows. That is why I am doing it this way. Okay?*0783

*(-2, -1, 3, -3)... (1, 4, 2, 5)... (0, 1, 1, 1)... and I have (2, 3, -1, 5). Okay.*0790

*So, I have 5 vectors in R4. So, here we are talking about R4.*0814

*Alright. Now, when I subject this to reduced row echelon form, I am going to end up with some non-zero rows.*0823

*That is going to be a basis for my column space.*0833

*I get 1, 0, -2, 1, 0, 1, 1, 1, and 0's everywhere else, 0, 0, 0, 0... 0, 0, 0, 0.*0838

*These first 2 actually form a basis for my column space.*0850

*So, let me write that down... the basis for my column space equals the set of vectors (1, 0, -2, 1) and (0, 1, 1, 1)... I think it is always best to write them in vertical form.*0856

*There you go. That forms a basis for our column space.*0869

*Well, the dimension is 2. You know your space is R4... 4 - 2 is 2.*0890

*We are going to expect that our homogeneous system, our null space of a transpose, is going to be 2-dimensional. We should have 2 vectors.*0897

*Well, let us confirm. As it turns out, when we solve a transpose × x = the 0 vector... basis for null -- love this stuff, it is great -- a transpose equals... again, I am just going to give the final answer... 2, -1, 1, 0.*0905

*That is one vector, and the second vector is (-1, -1, 0, 1)... these are basis vectors for our subspace.*0938

*Sure enough, we end up with a dimension 2. So, our dimensions match. Now we just need to check that any vector in here and any vector in what we just got, the column space, are orthogonal.*0945

*It turns out that they are. If you do the dot product of those, you are going to end up with 0.*0956

*So, sure enough, once again, row space a, okay? is going to be the orthogonal complement of the null space of a.*0961

*The column space of a is the orthogonal complement of the null space of a transpose.*0982

*Simply by virtue of a rectangular array of numbers, you have this relationship where these spaces are deeply interconnected.*0995

*It is really rather extraordinary... and they add up to the actual dimension of the space. Okay.*1001
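The second part of the theorem can be checked the same way, using the basis vectors read off above (again a numerical check of my own, not part of the lecture):

```python
import numpy as np

A = np.array([[ 1, -2, 1, 0,  2],
              [ 1, -1, 4, 1,  3],
              [-1,  3, 2, 1, -1],
              [ 2, -3, 5, 1,  5]], dtype=float)

# basis for the column space of A, and basis for null(A transpose)
col_basis = [np.array([1, 0, -2, 1]), np.array([0, 1, 1, 1])]
null_At   = [np.array([2, -1, 1, 0]), np.array([-1, -1, 0, 1])]

for y in null_At:
    print(np.allclose(A.T @ y, 0))                 # True: y is in null(A^T)
    print([int(np.dot(y, c)) for c in col_basis])  # [0, 0]: y is orthogonal
                                                   # to the column space
```

Dimensions 2 + 2 = 4, matching R4, and every dot product is 0, so the column space and null(A transpose) really are orthogonal complements.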

*So, let us talk about projections and some applications. So, projections are very, very, very important.*1011

*Recall, if you will, that if w is a subspace -- I am just going to write ss for subspace -- of RN, then w direct sum w perp is equal to RN.*1019

*We sort of have been hammering that point. That is some subspace and some orthogonal complement, the dimensions add up to n.*1042

*When you add them you actually get this space, RN. Okay.*1048

*That was one of the proofs that we discussed... one of the theorems that we had in the last lesson.*1053

*Now, we did not go through a proof of that, and I certainly urge you, with each of these theorems, to at least look at the proofs, because a lot of the proofs are constructive in nature and they will give you a clue as to why things are the way that they are.*1059

*So, in the proof of the theorem, it is shown that if w, if that subspace has an orthonormal basis, remember an orthonormal basis is well, all of the vector are of length 1, and they are all mutually orthogonal.*1070

*We had that Gram Schmidt orthonormalization process where we take a basis and we can actually turn it into an orthonormal basis by first making it orthogonal and then dividing by the norms of each of those vectors.*1104

*So, if it has an orthonormal basis, let us say w1, w2, all the way to wk... we do not know how many dimensions it is.*1116

*v is any vector in RN, then there exists a unique... there exists unique vectors w from the subspace w and u from the subspace w perp, such that the vector v can be written as w + u.*1132

*Well, we know this already. Essentially what we are saying is that if we take any vector in RN, I can represent it uniquely as some vector from the subspace w + some vector in its orthogonal complement.*1171

*Okay. Here is the interesting part. Also... we will write it as an also... this particular w, let me actually circle it in blue, there is a way to find it.*1185

*Here is how we find it... w = the vector v · w1 × w1 + the vector v · w2 × w2 + ... + the vector v · wk × wk, as many terms as there are vectors in the basis.*1196

*This is called the projection. This is called the projection of v onto the vector space w.*1230

*It is symbolized as proj... as a subscript we write the w... that is the subspace w... and this is the vector v. Okay.*1250
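The formula translates directly into code. A minimal sketch (the function name `proj` is my own; it assumes the basis handed to it really is orthonormal):

```python
import numpy as np

def proj(v, ortho_basis):
    """Projection of v onto the subspace spanned by an orthonormal basis:
    w = (v . w1) w1 + (v . w2) w2 + ... + (v . wk) wk."""
    v = np.asarray(v, dtype=float)
    return sum(np.dot(v, wi) * np.asarray(wi, dtype=float)
               for wi in ortho_basis)

# sanity check: projecting onto the xy-plane of R3 just drops the z-component
w = proj([4, 2, 7], [[1, 0, 0], [0, 1, 0]])
print(w)                        # [4. 2. 0.]

# and u = v - w lands in the orthogonal complement (the z-axis here)
u = np.array([4, 2, 7]) - w
print(u)                        # [0. 0. 7.]
```

This matches the picture below: w is the shadow of v on the plane, and u is the perpendicular piece left over.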

*We definitely need to investigate what it is that this looks like. When we do a projection -- let me draw this out so that you see what this looks like.*1262

*So, we are going to be working in R3. So, let me draw a plane here.*1270

*Let me draw a vector... this is going to be our w vector, then this is going to be our u vector.*1287

*Let me make v... let me make it blue. So, v, once again, let us remind ourselves... v is any vector; in this particular case, it will just be a vector in RN.*1300

*You know what, since we are dealing in 3, let me be specific. R3.*1315

*w is our vector in the subspace, which is a 2-dimensional subspace of R3, so this plane here... that is the subspace w, and u is in the subspace w perp.*1323

*So, here is what we are doing. Well, we said that this particular... so v can be written as something from w + something from w perp, because we know that RN is equal to the direct sum of w and w perp.*1347

*So some vector from w, some vector from here, so that is a vector in w, that is a vector in w perp.*1368

*Well, when we add them together, we get v. This is just standard vector addition.*1374

*Here is what is really interesting. If we have a basis for this subspace, if we actually project v, project means shine a light on v so that you have a shadow, a shadow of v on this subspace... that is what the projection means.*1380

*That is where you get that v · w1 × w1 + v · w2... when you do that, what we just wrote down for the projection, you actually end up finding w.*1397

*Okay. Now, we had also written since v is equal to w + u... as it turns out if I wanted to find u, well, just move that over.*1412

*u equals v - w. That is it. This is really, really great. So, let us do a problem and I think all of this will start to make sense.*1425

*So, let us go... example here... we will let w be a subspace of R3 with an orthonormal basis... I often just write ortho for orthonormal basis.*1437

*Again, orthonormal bases, they tend to have fractions in them because they are of length 1.*1459

*We do not always want to use (0,0,1), we want something to be reasonably exciting.*1465

*Let us go... oops -- these lines are making me crazy -- 2/sqrt(14), 3/sqrt(14), 1/sqrt(14)... one vector.*1473

*The other vector is 1/sqrt(2),0,-1/sqrt(2)... so these are orthonormal.*1493

*It is an orthonormal basis for the subspace of R3... there are 2 vectors in it, so our subspace w has dimension 2, which means that our orthogonal complement w perp has dimension 1. 2 + 1 has to equal 3.*1501

*Okay. We will also let v, the vector in R3 equal to some random 4, 2, 7... this is what we want to find.*1516

*We want to find the projection of v onto w, and we want to find the vector u.*1529

*So, we want to find the projection of v into this subspace, and we want to find the vector u that is orthogonal to every vector in w.*1549

*In other words, we want to find w perp. Okay. So, how can we do that. Switch the page here...*1572

*Well we know that from our formula, from our theorem, that our w is equal to the projection onto w of v, which is what we wanted.*1584

*That is going to equal v · w1, one of the vectors in the basis × one of the vectors in the basis... plus v · w2, the second vector in the basis × that vector in the basis.*1597

*Again, I am going to let you work these out. So take v · w1, you are going to get a scalar, multiply it by w1, it is going to give you a vector.*1617

*You are going to add to that v · w2, which is a scalar × w2, which is a vector. When you add two vectors together you are going to get a vector.*1626

*So, you will end up with something like this. 21/sqrt(14) × 2/sqrt(14), 3/sqrt(14), 1/sqrt(14), + -3/sqrt(2) ×, well 1/sqrt(2), 0, -1/sqrt(2). That is what this is.*1634

*Then when you put those together, you are going to end up with 21/14, 63/14, 42/14, if I have done my arithmetic correctly.*1666

*So, the projection of v onto the subspace w is this vector right here.*1684

*That is what that means. If I take v and if I take the shadow of v on that subspace, I am going to get a vector.*1692

*It is nice. Okay. Now, well we know that v is equal to w, this is w by the way. That is the projection.*1701

*plus u, well we have v, we just found w, so now we want to find u.*1714

*Well, u is just equal to v - w, when I take v - w, that is equal to, well, (4,2,7) - what I just found... 21/14, 63/14, 42/14.*1720

*I am going to get 35/14, - 35/14, and 56/14. So, my vector v in R3 is a sum of that vector + that vector.*1746
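We can replay this arithmetic numerically (a check of my own; multiplying by 14 at the end is just to compare against the fractions above):

```python
import numpy as np

v  = np.array([4.0, 2.0, 7.0])
w1 = np.array([2.0, 3.0, 1.0]) / np.sqrt(14)
w2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)

# the projection formula: w = (v . w1) w1 + (v . w2) w2
w = np.dot(v, w1) * w1 + np.dot(v, w2) * w2
u = v - w

print(np.round(w * 14))        # 21, 63, 42  -> w = (21/14, 63/14, 42/14)
print(np.round(u * 14))        # 35, -35, 56 -> u = (35/14, -35/14, 56/14)
print(np.allclose(w + u, v))   # True: v = w + u
```

The dot products v · w1 = 21/sqrt(14) and v · w2 = -3/sqrt(2) are exactly the scalars computed in the lecture.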

*It is pretty extraordinary, yeah? Again, this idea of a projection. All you are doing is you are taking a random vector and onto another space you are just shining the light. you are just taking the shadow.*1772

*The shadow means you are taking the perpendicular... you are dropping a perpendicular from the end of that vector onto there, and this vector that you get, whatever it is, that is the projection.*1787

*That is all the projection means. Perpendicular.*1800

*As we know, the perpendicular from a point down to something else is the shortest distance from that object to that something else.*1805

*So, let us draw the picture one more time, so that we are clear about what it is we are doing.*1815

*We had w, we had u, I will put v here, that is v, this is -- oops, we wanted this in red.*1825

*This is u, this is w, vector v is equal to w + u, u is equal to the vector v - the vector w. That is what this picture says.*1841

*So, the distance from v, from the vector v to the subspace w... this is a capital W... is, well, the distance from v to the subspace w, the perpendicular distance, well it is equal to the norm of u.*1863

*Well, the norm of u equals the norm of the projection of v onto the subspace w, which is equal to the norm of v - ... no, I am sorry, that is not correct, getting a little ahead of myself here.*1898

*Vector u, the norm of u is the norm of this thing, which is v - the projection of v onto the subspace w.*1923

*In 3-space, it makes sense because you are used to seeing 3-space and distance.*1938

*Well, the distance from this point to this point is just the distance of the vector u, which you can calculate here. That is just the norm, and you know you found w by that formula that we just used which is the projection of v onto the subspace w.*1944

*This is the subspace w, so we project it on here, we get w, here is what may not make sense. What if you are dealing with a 14 dimensional space?*1960

*Let us say that your vector in R-14, you project it onto a subspace which is say 3 dimensional.*1970

*How do you talk about a distance in that case? Well, again, distance is just an algebraic property.*1977

*So, in some sense, you have this distance of a vector in a 14-dimensional space to its 3-dimensional subspace.*1982

*There is a distance "defined," and that distance is precisely the norm of the difference between that vector in 14-dimensional space and its projection onto the 3-dimensional subspace.*1994

*Again, this is the power of mathematics. We are not limited by reality. We are not limited by our senses. We are, in fact, not limited at all as long as the math supports it. As long as the algebra is correct. We are not limited by time or space.*2007

*Okay. Thank you very much for joining us here at Educator.com to finish up our discussion of orthogonal complements. We will see you next time.*2023

*Welcome back to educator.com and welcome back to linear algebra.*0000

*Today, we are going to start on a new topic. A very, very, very important topic.*0004

*Probably the single most important topic, both in terms of the underlying structure of linear mappings and matrices, and also of profoundly practical importance in all areas of science and math... quantum mechanics, engineering, all areas of physics, all areas of mathematics.*0008

*We are going to be discussing Eigenvalues and Eigenvectors. So, let us jump on in and see if we can make sense of this.*0030

*Okay. Recall if you will, so if... a is n by n... and for the discussion of Eigenvalues and Eigenvectors, we are always going to be talking about matrices that are n by n.*0039

*So, we are no longer going to be talking about 5 by 6, 3 by 9, it is always going to be 3 by 3, 4 by 4, 2 by 2... things like that.*0056

*Okay. So, if a is n by n, we know that the function L, which is a mapping from RN to RN defined by the multiplication of some vector x by that matrix... we know that it is a linear mapping.*0063

*So, this we know. That when we are given a n by n matrix, and we use that matrix to multiply on the left of some vector in RN, we know that what we get is a linear mapping. Okay.*0107

*What we want to do, so we wish to discuss this situation where the vector x and anything I do to x which is multiply it by the matrix on the left are parallel to each other... by parallel, what we are really saying is that they are scalar multiples of each other.*0122

*In other words, when a × x is just a scalar multiple of x.*0155

*In other words, I do not just map it to a completely different vector all together, all I do is take the vector x and I either expand it or contract it or leave it the same length.*0175

*So, I am keeping it in its own space. I do not jump to another space. That is really what is going on here with this idea of Eigenvalue and Eigenvector. *0184

*It has to do with starting in a space, taking a vector in that space, multiplying it by a matrix, and instead of twisting it and turning that vector, turn it into something else... just -- you know -- dilating it, making it bigger or smaller.*0193

*That is all we are doing. Still staying in that space. Staying parallel. Okay.*0206

*Now, let us see what we have got. Let us start with a definition. Let a be an n by n matrix, the real number Λ. *0213

*It is always symbolized with a Λ... this is traditional.*0234

*It is called an Eigenvalue of a if there exists a non-zero vector x such that the matrix a × x just gives me some scalar Λ × x.*0240

*So, again, all this is saying is that we are starting with a vector x, if I multiply it by a matrix, it is the same as multiplying that vector by some scalar multiple.*0272

*Instead of twisting it and turning it, all I have done is expand it, contract it, or left it the same. Okay.*0285

*Now, every non-zero vector satisfying this relation is called an Eigenvector... whoa, that was interesting... an Eigenvector of a associated with the Eigenvalue Λ.*0292

*Okay. So, our central equation here is this one, our definition. It basically says, again, if I take a vector x in a subspace, and if I multiply it by a matrix, an n by n matrix, I am going to be transforming that vector... turning it into something else.*0346

*If, all I do to it is expand or contract that vector x, and there is actually some number by which I expand or contract it. *0362

*That is called an Eigenvalue, and every vector that satisfies this condition... meaning every vector that when multiplied by the matrix only ends up being expanded or contracted or left the same.*0371

*That is called an Eigenvector associated with the Eigenvalue, associated with the matrix a.*0384

*Very, very important relation. Again we are staying in this space. We are not doing anything to it. We are just moving along that space in a parallel fashion.*0391

*Okay. One thing we definitely want to note here is that the 0 vector cannot be an Eigenvector, but 0, the real number can be an Eigenvalue.*0401

*So, once again, the 0 vector cannot be an Eigenvector. We just exclude that possibility, but the number 0 can be an Eigenvalue. Okay. That is the only caveat with respect to this.*0427

*Quick example... let us say that a is the matrix (0,1/2,1/2,0). Okay.*0445

*Well, if we take a × the vector -- let us just say (1,1) -- well that is equal to (0,1/2,1/2,0) × (1, 1), that is what this is.*0458

*That is equal to 0 × 1 + 1/2 × 1, which is 1/2, and then 1/2 × 1 + 0 × 1, which is 1/2... all of that is equal to 1/2 × (1,1).*0473

*Notice what I have done here. a × the vector (1,1) is equal to 1/2 × (1,1).*0486

*My Eigenvalue Λ = 1/2, because that is all a did... just simply by virtue of this multiplication... all I did was shrink it by 1/2.*0496

*Λ = 1/2, and the vector (1,1) happens to be one of the Eigenvectors. It is an Eigenvector... not the only Eigenvector.*0509
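That check, a × x = Λ × x, takes only a few lines to verify. A minimal Python sketch of this exact 2 by 2 example (plain lists, no libraries):

```python
from fractions import Fraction as F

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[F(0), F(1, 2)],
     [F(1, 2), F(0)]]
x = [F(1), F(1)]

# a × (1,1) = 1/2 × (1,1): Eigenvalue 1/2, Eigenvector (1,1)
assert matvec(A, x) == [F(1, 2) * xi for xi in x]
```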

*Oftentimes, for a given Eigenvalue you have an infinite number of Eigenvectors. We will show you why in a minute.*0521

*Again, ax does nothing but expand or contract a vector. Okay. Now, a given Λ can have many Eigenvectors.*0528

*Often, we are only interested in 1... we do not necessarily need to list them all... so 1 will do.*0546

*So a given Λ can have many Eigenvectors associated with it, and here is why.*0557

*Well, if I take a × some number R × x, if I just take any vector x and I multiply it by any number, that is an infinite number of vectors that I can get.*0566

*Then, if I multiply that by a, we can reverse this... we can do R × a × x is equal to r × Λ x... because a(x) is equal to Λx, right? Λ is an Eigenvalue.*0580

*Well, that is equal to Λ × r(x). Notice what I have got. a × R(x) = Λ × R(x).*0595

*If I have a given vector x, any scalar multiple of x is also an Eigenvector associated with that Eigenvalue.*0603

*Okay. Let us do an example again. This time, we will let a equal (1, 1, -2, 4). Okay.*0614

*This time we want to actually find the Eigenvalues and associated Eigenvectors of a.*0635

*So, a given matrix can have Eigenvalues and Eigenvectors associated with it. Okay. Now, what do we want? We want real numbers Λ and all vectors x, which I will write in component form... x1, x2, such that, well, ax = Λx.*0659

*Well, a is (1,1,-2,4)... x is (x1,x2) = Λ × x1... Λ's x's, all these symbols everywhere... x2... okay.*0694

*When we actually multiply this out, we get the following system: x1 + x2 = Λx1, and we get -2x1 + 4x2 = Λx2... let us fiddle around with this a little bit.*0714

*Let me bring this over here, and this over here... and set it equal to 0, so I am going to write the equivalent version. It is going to be (Λ - 1) × x1, right? I have Λx1 - x1... I can pull out the x1 and I get (Λ - 1) × x1 - x2 = 0.*0742

*I also get 2x1, moved it over to that side... + Λx2 - 4x2, which is (Λ - 4) × x2 = 0. Right? Okay.*0771

*Now, take a look at this linear system right here. It is a homogeneous system, okay? 2 by 2.*0794

*Let me go back to my blue ink. This system has a non-trivial solution... remember the list of non-singular equivalences? It has a non-trivial solution, if and only if the determinant of the coefficient matrix is equal to 0.*0802

*So, if I have -- no, this one I definitely want to write as clear as possible... start again -- coefficient matrix is (Λ - 1, - 1, 2, Λ - 4)... the determinant = 0. *0820

*This homogeneous system has a non-trivial solution when the determinant is 0. Well, the determinant is this. The determinant of a 2 by 2 is this × this - that × that.*0843

*So, I end up with (Λ - 1) × (Λ - 4) + 2 = 0. I get Λ^{2} - 5Λ + 4 + 2. I get Λ^{2} - 5Λ + 6 = 0. All I am doing is following the math... that is all I am doing.*0856

*Let me rewrite this, there are too many lines here... + 6 = 0... rewrite it again... go to red... Λ^{2} - 5Λ + 6 = 0.*0891

*This factors into (Λ - 2)(Λ - 3) = 0, which implies that Λ1 = 2, Λ2 = 3. These are my Eigenvalues associated with that matrix, and all I did was solve this homogeneous system, right? Okay.*0910
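Since the characteristic equation here is just a quadratic, the Eigenvalues fall straight out of the quadratic formula. A small Python sketch of that step (not part of the lesson, just a check):

```python
import math

# the determinant of the coefficient matrix gave us Λ^2 - 5Λ + 6 = 0
b, c = -5.0, 6.0
disc = b * b - 4 * c              # discriminant: 25 - 24 = 1
lam1 = (-b - math.sqrt(disc)) / 2
lam2 = (-b + math.sqrt(disc)) / 2

assert (lam1, lam2) == (2.0, 3.0)  # the two Eigenvalues
```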

*Now we want to find the Eigenvectors associated with the Eigenvalues. Well, I have 2 Eigenvalues, so I am going to be solving 2 systems to find the associated Eigenvectors.*0934

*Let me show you what I just did here. I started with ax = Λ × x. Bring this over here and set it equal to 0.*0948

*Λ... let us make the Λ look like a Λ and the x look like an x... Λx - a × x is equal to 0. So, let me put the 0 vector over here... so I am working on the right because that is our habit.*0964

*Let me factor out the x... well, Λ ×... we are talking about matrices here, so since this is a matrix a, Λ is a scalar... I just multiply that scalar by the identity matrix.*0985

*Remember what the identity matrix is... it is just that matrix with 1's all along the diagonals, because I need matrix subtraction to be defined.*0997

*This is the equation that I solve. So, for every Λ that I get... I put it into this equation which is the thing that I had in the previous page. *1010

*I put it into this equation, I solve the homogeneous system, I get my Eigenvectors for that Eigenvalue, and then I do the same for 3.*1016

*So, now let us actually go through the process. Okay. This, if you recall, was this: (Λ - 1) × x1 - x2 = 0, and 2x1 + (Λ - 4) × x2 = 0.*1030

*So, if I were going to take my Λ = 2 Eigenvalue, I would put this 2 in here, and solve the associated homogeneous system.*1058

*So, I would get 2 - 1 is 1. So, I would get x1 - x2 = 0, and I would get 2x1 + ... Λ is 2... 2 - 4 is -2, so it is 2x1 - 2x2 = 0.*1070

*Well, that says x1 is equal to x2, which means that I can choose x2 to be anything that I want, so let us just call it R.*1098

*Therefore, any vector of the form (R,R), is an Eigenvector for this Eigenvalue 2.*1111

*Okay. Alright. What this means is if I take a, and if I take any vector of the form (R,R)... (1,1), (2,2), (3,3), (4,4)... all I end up doing is I end up multiplying it... that is what this is telling me.*1125

*All vectors of this form, that have the same entry... when I multiply by the matrix a, all I do is end up doubling its length. That is what this is telling me. Only vectors of this form are associated with this Eigenvalue.*1144

*Now, let us do the Λ = 3. Well, Λ = 3... we end up putting it back into those original equations, so that is (3 - 1) × x1 - x2 = 0.*1160

*We have 2x1 + (3 - 4)x2 = 0, because 3 is our Eigenvalue. We end up with 2x1 - x2 = 0... and 2x1 - x2 = 0 again... This tells us that 2x1 = x2... x1 = x2/2.*1179

*Therefore, our vector x is, well, if x2 is equal to R, then x1 = R/2.*1212

*So, every vector of the form (R/2,R), like for example (1,2), (2,4), (4,8), (9,18)... those are the Eigenvectors associated with the Eigenvalue 3... a × (9,18) is actually going to end up equaling 3 × (9,18).*1224

*That means that if I take the matrix a which was given, and if I take some vector like (9,18), which is of the form (R/2,R), all I am going to do is multiply that vector by a factor of 3. That is what is happening here.*1243
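Both eigenpairs of this matrix can be verified directly. A quick Python sketch, using (1,1) for Λ = 2 and (1,2) for Λ = 3 as the particular choices of R:

```python
def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1],
     [-2, 4]]

# Λ = 2 with Eigenvectors (R, R): multiplication just doubles the vector
assert matvec(A, [1, 1]) == [2, 2]

# Λ = 3 with Eigenvectors (R/2, R): multiplication triples the vector
assert matvec(A, [1, 2]) == [3, 6]
```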

*Okay. Let us move on, we are going to have a little bit of a definition here. We just did this, so now we are going to actually... this equation that we came up with that we solved to get the Eigenvalue, we are going to give it a special name.*1265

*So, definition. We will let a = [a_{ij}] be an n by n matrix... the determinant of Λ × the identity matrix - a is equal to the following determinant, in symbolic form: the first row is Λ - a_{11}, -a_{12}, ... down to -a_{1n}.*1283

*Then, of course, the second row is -a_{21}, Λ - a_{22}, ... and so on, down to the last row: -a_{n1}, -a_{n2}, ... Λ - a_{nn}.*1344

*This determinant is called the characteristic polynomial... characteristic polynomial of a.*1367

*Now, when I set that characteristic polynomial, in other words the determinant of Λ × in - a, when I set it equal to 0, it is called the characteristic poly... it is called the characteristic equation -- I am sorry.*1388

*That is the polynomial... it is the characteristic equation... is the characteristic equation of a.*1410

*Okay. Let us do an example. Let a = (1, -2, 1), (1, 0, -1), (4, 4, -5).*1421

*Okay, so, we want to find the determinant of Λ × in - a, which is... so you see what this looks like, let me actually do this... this is 3 by 3.*1446

*So, it is going to be Λ × i_{3} minus the matrix (1, -2, 1), (1, 0, -1), (4, 4, -5)...*1468

*We are going to take the determinant of this thing... which means I have (Λ, 0, 0), (0, Λ, 0), (0, 0, Λ) minus (1, -2, 1), (1, 0, -1), (4, 4, -5).*1491

*I end up with: Λ - 1... minus -2 is 2... and -1 for the first row; then -1, Λ - 0 is Λ, and minus -1 is 1 for the second row; then -4, -4, Λ + 5 for the third.*1520

*Then I take the determinant of that, and when I actually end up doing that and going through it, I end up with Λ^{3} + 4Λ^{2} - 3Λ - 6.*1552
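A convenient way to double-check a computed characteristic polynomial is to evaluate det(Λ × i_{3} - a) at a few sample values of Λ and compare against the expanded polynomial. A hand-rolled Python sketch (just a sanity check, not part of the lesson):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, -2, 1],
     [1, 0, -1],
     [4, 4, -5]]

def char_poly_at(lam):
    """Evaluate det(lam*I - A) for a given lam."""
    m = [[lam * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    return det3(m)

# agrees with Λ^3 + 4Λ^2 - 3Λ - 6 at several sample points
for lam in (-3, -1, 0, 2, 5):
    assert char_poly_at(lam) == lam**3 + 4 * lam**2 - 3 * lam - 6
```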

*This is a characteristic polynomial. If I want to find the Eigenvalues, I have to find the roots of this characteristic polynomial. I set it equal to 0, that will give me the Eigenvalues.*1572

*When I get the Eigenvalues, I put it back into this form and I solve the homogeneous system to get the Eigenvectors. We will do more of that in just a minute.*1584

*Okay. So, let us close off this section with just a theorem.*1594

*An n by n matrix is singular... does not have an inverse if and only if 0 is an Eigenvalue of a.*1604

*In other words, if 0 was not an Eigenvalue of matrix a, that matrix is non-singular. It has an inverse, so this is one item that we are going to add to our list of non-singular equivalences.*1628

*We had 9 of them, now we are going to have 10. We are going to add a 10th item.*1640

*Okay. That 10th item added to the list of non-singular equivalences... 0 is not an Eigenvalue of a... that is the same as saying that a is non-singular.*1646

*It is the same as saying that the determinant exists... all of those things that -- you know -- we have for those... for that list. So, this is the 10th equivalence.*1665

*Another theorem. The Eigenvalues of a are the real roots of the characteristic polynomial... okay. So, you might have a fifth-degree polynomial... it has 5 roots.*1681

*Well, there is no guarantee that all of those 5 roots are going to be real. Some of them might become complex... if they are complex, they are going to come in complex conjugate pairs.*1717

*So, if you know one of them is complex, you know 2 of them are complex. That means only 3 of them can be real.*1725

*If you have 3 of them that are complex, that means the 4th is also complex. That means only 1 of them is going to be real. Recall from college algebra... polynomial equations, solutions to polynomial equations, roots: where the graph hits the x axis.*1731

*So, when you have a characteristic polynomial, it is the real values that are the Eigenvalues of that associated matrix. Okay. *1748

*Let us try something here. We will try an example. We will let a = (2, 2, 3), (1, 2, 1), (2, -2, 1)... this is our matrix a.*1756

*Our characteristic polynomial when we set it up... again, this is something that you can do on the mathematical software... our characteristic polynomial... is Λ^{3} - 5Λ^{2} + 2Λ + 8.*1781

*When we actually factor this out, we end up with Λ1 = 2... Λ2 = 4. Λ3 = -1. The degree of the polynomial is 3, which means we have 3 roots. We found those 3 roots... (2,4,-1), they are all real.*1804

*All of these are Eigenvalues. Okay. Now, let us find the Eigenvectors associated with these Eigenvalues. Let us actually find a specific Eigenvector, not like we did last time where we found a general Eigenvector.*1832

*Okay. In so doing, we are going to solve, of course... we are going to solve this... (Λ × i_{3} - a) × x = 0.*1845

*This is the equation -- okay, this is not going to work... too many lines all over the place... this is too strange, let us try this again -- Λ × i3 - a... x = 0, the vector.*1860

*We are going to solve this equation, homogeneous system in order to find the associated Eigenvector.*1882

*So, for Λ = 2... I get the following augmented system... (0, -2, -3, 0), (-1, 0, -1, 0), (-2, 2, 1, 0)... I am hoping to god my arithmetic is correct here... when I subject it to reduced row echelon form... I want you to see at least one of them.*1889

*We end up with (1, 0, 1, 0), (0, 1, 3/2, 0), (0, 0, 0, 0). So, leading entries here and here. Leading entry not there. Therefore, I can take x3 = R, x2 = -3/2 × R, and x1 = -R.*1928

*Well, I can set R to anything, so why do I not just take R = 1. So, a particular Eigenvector... a specific Eigenvector is (-1, -3/2, 1). This is an Eigenvector associated with the Eigenvalue 2.*1962

*There are an infinite number of them, just different values of R. That is all it is.*1985

*Okay. When my Eigenvalue is -1, I get some matrix, I subject it to reduced row echelon form, and I get a vector of the form (-R, 0, R).*1991

*Well, let us just take specific values. Let us just take R = 1, so (-1, 0, 1). This Eigenvector is associated with this Eigenvalue, Λ2.*2010

*Now, we will do Λ = 4... Λ3 = 4 -- get some notation here, make sure that I am correct -- Λ1, Λ2, and Λ3 = 4... yes, we are correct. *2024

*For that Eigenvalue, we end up with the general form (4R, 5/2 × R, R), which gives us (4, 5/2, 1).*2040
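All three eigenpairs we just found can be confirmed with exact fraction arithmetic. A sketch using the specific Eigenvectors chosen above:

```python
from fractions import Fraction as F

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 2, 3],
     [1, 2, 1],
     [2, -2, 1]]

pairs = [
    (2, [F(-1), F(-3, 2), F(1)]),   # Λ1 = 2
    (-1, [F(-1), F(0), F(1)]),      # Λ2 = -1
    (4, [F(4), F(5, 2), F(1)]),     # Λ3 = 4
]

# a × v = Λ × v for each eigenpair
for lam, v in pairs:
    assert matvec(A, v) == [lam * vi for vi in v]
```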

*So, given a certain matrix, we can find its Eigenvalues, we can solve the homogeneous system to find its Eigenvectors, so we had a nice structure developing here for a particular matrix.*2060

*So, let us do a quick recap. We have the definition of Eigenvalue, Eigenvector... If I have a matrix a, and a vector that I multiply it by, and what I end up with is some scalar multiple of that vector, well, the scalar multiple is called an Eigenvalue.*2073

*The vectors that actually satisfy this condition are called Eigenvectors associated with that Eigenvalue.*2101

*I solve this for 0. I move this over to that side, and I end up with 0 = Λx - ax. Let me just bring this 0 over here, so I end up with (Λ × i_{n} - a) × x = 0.*2111

*Okay. For this to have a non-trivial solution, well, the determinant of this thing, Λ × i_{n} - a, has to equal 0.*2139

*So I take the determinant of that... this matrix that I get, set it equal to 0, that gives me the Eigenvalues. Okay?*2167

*That is the characteristic polynomial. This is the characteristic equation, and for each Λ_{i}... for each Eigenvalue that I get, for each real root of the characteristic polynomial...*2177

*We put each Λ_{i} back into this equation, and we solve that homogeneous system and find our basis... our vectors that satisfy that.*2195

*We find the associated Eigenvectors by solving (Λ_{i} × i_{n} - a) × x = 0.*2208

*So, we have a matrix a. We set this up, we take the parameter Λ × the identity matrix... we subtract from it the a matrix.*2233

*Now, you can do it either way. You can go a - Λ, it does not matter. I did Λ - a because I like Λ to be positive, that is just a personal choice of mine.*2246

*You end up with this equation. Well, you take the determinant of the matrix that you get... this thing Λ × in - a, you set it equal to 0, you find the roots... those are the Eigenvalues. *2256

*When you take each of those Eigenvalues and put it in turn back into this equation, solve the homogeneous system to get the associated Eigenvector with that respective Eigenvalue.*2269

*So, that takes us through the basic structure of Eigenvalues and Eigenvectors. In our next lesson we are going to continue on and dig a little deeper into the structure of these things.*2280

*Thank you for joining us at Educator.com, we will see you next time.*2289

*Welcome back to Educator.com and welcome back to linear algebra.*0000

*In our previous lesson, we introduced the notion of Eigenvector and Eigenvalue.*0004

*Again, these are profoundly, profoundly important concepts throughout mathematics and science.*0009

*Today, we are going to dig a little bit deeper and we are going to introduce the notion of similar matrices and the idea of diagonalization.*0015

*So, let us jump right on in. Let us start with a definition... Let me go to blue ink here.*0025

*Okay. A matrix b is said to be similar to matrix a if there is a non-singular matrix p such that b = p inverse × a × p... let us see what this definition says.*0036

*If I have some matrix a, and I find some other matrix p, and if I multiply on the left of a by p inverse, and on the right by p, so if I take p inverse a × p, the matrix that I get, b, I say that b is similar to a.*0083

*So, there is a relationship that exists between, if I can actually sandwich this matrix a between some matrix p and the inverse of p.*0102

*In the course of this lesson, we are going to talk to you actually about how to find this matrix p, and about what this matrix b looks like. Really, really quite beautiful.*0109

*Okay. A quick example just so you see what this looks like in real life. So, if I let the matrix a equal (1,1,-2,4), just a little 2 by 2... and if I say I had p which is (1,1,1,2), well, if I calculate the inverse of p, that is going to equal (2,-1,-1,1)... okay?*0120

*Now, as it turns out, if I take b, if I actually do p inverse × a × p so I multiply that by that, then by that, I end up with the following... I end up with the matrix (2,0,0,3).*0149
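The product p inverse × a × p is easy to carry out by hand or in a few lines of Python. A small sketch of this exact 2 by 2 example:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 1], [-2, 4]]
P = [[1, 1], [1, 2]]
P_inv = [[2, -1], [-1, 1]]   # det P = 1, so the inverse has integer entries

B = matmul(matmul(P_inv, A), P)

# b = p⁻¹ap is diagonal, with the Eigenvalues 2 and 3 of a on the diagonal
assert B == [[2, 0], [0, 3]]
```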

*Now, what you might want to do is take a look at the previous lesson. This matrix a, you remember, and I will discuss this in a minute... just to let you know what is coming up, what is coming ahead, this matrix a, we have dealt with this matrix before in the previous lesson.*0170

*We found the Eigenvalues for it. The Eigenvalues were 2 and 3. Well, notice what we did.*0183

*We found this matrix p... and we took the inverse of that, we multiplied p inverse a p, and we end up with a diagonal matrix, where the entries on the diagonal are exactly the Eigenvalues of a. That is what is going to end up being so beautiful.*0189

*Here is what is even better. If you remember the two Eigenvectors that we found for the two Eigenvalues 2 and 3 were exactly of the form (1,1,1,2). *0203

*So, this matrix p is going to end up being made up of the actual Eigenvectors for the Eigenvalues. We will just... a little preview of what is coming.*0212

*Okay. Just some quick properties of similarity. So, the first property is a is similar to a, of course... intuitively clear.*0223

*If b is similar to a, then a is similar to b. That just means that... we can always multiply on the left by p, here, and p × p inverse, this goes away, and we multiply on the right by p inverse so this is just the same.*0244

*And three... if a is similar to b, and b is similar to c, then, by transitivity, a is similar to c... which also means that c is similar to a, by property 2.*0262

*So, standard properties... we will be using those in a second. Another definition. We say matrix a is diagonalizable... did I spell that correct?... diagonalizable... if it is similar to a diagonal matrix.*0293

*In this case, we say a can be diagonalized -- put a comma there, in this case we say... a can be diagonalized.*0344

*Okay. Now, let us see what we have got. Alright. Profoundly, profoundly, profoundly important theorem.*0367

*An n by n matrix is diagonalizable if, and only if, it has n linearly independent Eigenvectors.*0387

*In this case, a is similar to a diagonal matrix d, where d is equal to p inverse × a × p, and the diagonal elements of d are the Eigenvalues of a.*0423

*It is like we did before. It is similar to the diagonal matrix d, and the entries on that diagonal are precisely the Eigenvalues of a.*0468

*Well, p is the matrix whose columns respectively... it means respective to the individual Eigenvalues... so Eigenvalue Λ1 gives column 1, Λ2 gives column 2, Λ3 gives column 3... so on and so forth, respectively.*0481

*Each of these columns, respectively, are the n linearly independent Eigenvectors of a. *0512

*So, again, this is what we did in our example. So now we state it as a theorem... an n by n matrix is diagonalizable if and only if it has n linearly independent Eigenvectors.*0529

*So, if I have a 3 by 3, I need 3 Eigenvectors. If I have a 5 by 5, I need 5 linearly independent Eigenvectors.*0542

*In this case, a is similar to a diagonal matrix d, whose diagonal elements are precisely the Eigenvalues of a, and the matrix p in this relation here is the matrix made up of the columns that are the n linearly independent Eigenvectors of a.*0548

*So, from a, I can derive the matrix d, I can derive the matrix p, and the relationship is precisely this.*0570

*Let us just do an example here. Okay. We will let a equal the matrix (1,2,3)... (0,1,0)... (2,1,2).*0577

*I am not going to go through the entire process, again I use mathematical software to do this... to find the Eigenvalues and to find the Eigenvectors.*0594

*Here is how it works out. As it turns out, one of the Eigenvalues is 4. 4 generates the following Eigenvector, when I solve the homogeneous system: I get (1,0,1).*0602

*A second Eigenvalue is -1. Also real. It generates the Eigenvalue... the Eigenvector (-3,0,2). The third Eigenvalue is equal to 1, all distinct, all real, and it generates (1, -6, 4), when I solve the homogeneous system.*0619

*Therefore, p = (1,0,1), (-3,0,2), (1,-6,4).*0651

*If I want to find p inverse, which I can, it is not a problem -- you know what, I will go ahead and write it out here... it is not going to be too big of an issue -- I have columns (2/5, -1/5, 0), (7/15, 1/10, -1/6), (3/5, 1/5, 0), and of course my diagonal matrix d is going to end up being (4,0,0), (0,-1,0), (0,0,1).*0667

*If I were to confirm... yes, I would find out that d does in fact equal p inverse × a × p. Excuse me... found the Eigenvalues... found the associated Eigenvector...put those Eigenvectors... these are linearly independent, put them as columns in p... if I take p, multiply and find p inverse, if I multiply a by p inverse on the left, p on the right, I end up with a diagonal matrix where the entries on the main diagonal are precisely the Eigenvalues (4,-1,1), (4,-1,1).*0714
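One way to check this diagonalization without computing p inverse at all: d = p⁻¹ × a × p is equivalent to a × p = p × d, since p is non-singular. A sketch for this example:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3],
     [0, 1, 0],
     [2, 1, 2]]

# columns of P are the Eigenvectors for Λ = 4, -1, 1, respectively
P = [[1, -3, 1],
     [0, 0, -6],
     [1, 2, 4]]

D = [[4, 0, 0],
     [0, -1, 0],
     [0, 0, 1]]

# a·p = p·d  ⇔  d = p⁻¹·a·p
assert matmul(A, P) == matmul(P, D)
```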

*So, 4 is the first, that is why its Eigenvector is the first column, that is what we meant by respectively.*0753

*Okay. Now, if all the roots of the characteristic polynomial, which is what we solve to find the Eigenvalues, of a are real and distinct... if the roots of the characteristic polynomial are real and distinct... in other words if the Eigenvalues of the matrix are real and distinct, then a is diagonalizable. Always.*0761

*Note how we wrote this. If all of the roots of the characteristic polynomial are real and distinct, then a is diagonalizable. There is no if and only if here.*0812

*That does not mean that if a is diagonalizable, that the roots of the characteristic polynomial, the Eigenvalues are real and distinct.*0822

*It is possible for a to be diagonalizable and have roots that are not distinct. You might have an Eigenvalue, you might have a 4 by 4, and you might have an Eigenvalue 4, and then (1,1,1). That 1 might be an Eigenvalue 3 times over, but it will still be diagonalizable.*0829

*So, again. If then does not mean that it works the same backwards. It is not the same. It is not the same as if and only... if and only if means it goes both ways.*0846

*So, if the roots of the polynomial are real and distinct, it is diagonalizable. If it is diagonalizable, the Eigenvalues may or may not be real and distinct.*0855

*Okay. Now, the characteristic polynomial... so the characteristic poly for non-distinct Eigenvalues looks like this.*0866

*Well, we know we are dealing with some characteristic polynomial... Λ^{n} + something × Λ^{n-1}, and so on down the powers of Λ.*0890

*Well, every time we find a value, we can factor... that is what the fundamental theorem of algebra says... every polynomial can be factored into linear factors.*0899

*For the non-distinct, we end up with something like this... (Λ - Λ_{i}) to the k_{i} power.*0910

*So, if some root ends up showing up 5 times, that means that I have 5 factors for that root... Λ - 1, Λ - 1, Λ - 1, Λ - 1, Λ - 1... well, that factor is (Λ - Λ_{i}) raised to this power. Okay.*0917

*This is Λ1, this is Λ2... and in the general case, Λ_{i}... Okay.*0939

*Well, this k_{i} is called the multiplicity of the Eigenvalue Λ_{i}.*0974

*It can be shown that if the Eigenvalues of a are all real and distinct... we already dealt with that case... we know a is diagonalizable if they are all real and distinct.*0977

*So, we can also show, now that we have introduced this idea of multiplicity... so if our characteristic polynomial has multiple roots, so if I have a fourth degree equation, one of the roots is 4, the other root is 3, and the other roots are 2 and 2.*1019

*Well, 4 has a multiplicity 1, 3 has a multiplicity 1, 2, because it shows up twice, has a multiplicity 2.*1032

*It can be shown that if the Eigenvalues of a are all real, then a can still be diagonalized if and only if for each Λ_{i} of multiplicity k_{i}, we can find k_{i} linearly independent Eigenvectors.*1038

*This means that the null space of (Λ_{i} × i_{n} - a) × x = 0... that equation that we solved to find the Eigenvectors... has dimension k_{i}.*1088

*In other words, if I have a root of the characteristic polynomial, an Eigenvalue that has a multiplicity 3... let us say its 1 shows up 3 times.*1118

*Well, when I put it into the equation, this homogeneous equation, if I can actually find 3 linearly independent vectors... if I can find that the dimension of that null space is 3, I can diagonalize that matrix.*1128

*If not, I cannot diagonalize that matrix. Okay.*1145

*So, let us see what we have got. We will let a = (0,0,1), (0,1,2), (0,0,1).*1150

*That is interesting... okay.*1169

*When we take the characteristic polynomial of this, we end up with the following: Λ × (Λ - 1)^{2}... so we have Λ1 = 0, Λ2 = 1, and Λ3 is also equal to 1, so the Eigenvalue 1 has a multiplicity of 2.*1173

*Well, let us just deal with this multiplicity of 1 Eigenvalue. When we solved the homogeneous system, we end up with the following. We end up finding that the Eigenvector x is this... (0,r,0).*1195

*Well, this is only 1 vector. Here we have an Eigenvalue of multiplicity 2. Well, in order for this matrix to be diagonalizable, when I solve that homogeneous system to find the actual Eigenvectors, I need 2 linearly independent vectors, not just the 1. This is only one vector, so this is not diagonalizable.*1214

*Now, mind you, it still has Eigenvalues, and it still has an associated Eigenvector, but it is not diagonalizable. I cannot find some matrix that satisfies that other property.*1240

*Okay. So, now, let us try this one. Let us let a equal to (0,0,0), (0,1,0), (1,0,1).*1255

*Well, as it turns out, this characteristic polynomial is also Λ × (Λ - 1)^{2}.*1272

*So, again we have Λ1 = 0, Λ2 = 1, Λ3 = 1, so our Eigenvalue 1 has a multiplicity of 2, so we want to find 2 Eigenvectors.*1280

*We want to find the dimension of this null space associated with this Eigenvalue if it has a dimension 2, we are good. We can actually diagonalize this.*1294

*Let us see what we have. Let us see. When we solve the associated homogeneous system, we end up with (1,0,0,0), (0,0,0,0), (-1,0,0,0).*1302

*When we subject this to reduced row echelon form, it becomes (1,0,0,0), (0,0,0,0), (0,0,0,0)... so we have... there we go... we have x1 = 0, while x2 and x3 are free parameters: x2 = r, x3 = s, and we can rewrite the solution as r × (0,1,0) + s × (0,0,1), so there you go.*1326

*We have two Eigenvectors, a basis... our basis for that null space has 2 Eigenvectors. It is of dimension 2. It matches the multiplicity of the Eigenvalue, therefore this can be diagonalized. *1375
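This dimension test is easy to check mechanically. Below is a small Python sketch (my own illustration, not from the lecture): it computes the dimension of the Eigenspace for the Eigenvalue 1 — the nullity of (Λ × i_{n} - a) — for both matrices in this lesson, using exact fractions to avoid rounding.

```python
from fractions import Fraction

def rank(M):
    # Rank via Gaussian elimination with exact fractions.
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def eigenspace_dim(A, lam):
    # Dimension of the null space of (lam*I - A): n minus its rank.
    n = len(A)
    shifted = [[(lam if i == j else 0) - A[i][j] for j in range(n)]
               for i in range(n)]
    return n - rank(shifted)

A1 = [[0, 0, 1], [0, 1, 2], [0, 0, 1]]  # eigenvalue 1 has multiplicity 2
A2 = [[0, 0, 0], [0, 1, 0], [1, 0, 1]]  # same characteristic polynomial
print(eigenspace_dim(A1, 1))  # 1 < 2, so A1 cannot be diagonalized
print(eigenspace_dim(A2, 1))  # 2 = 2, so A2 passes the test
```

Exact Fraction arithmetic matters here: with floats, an entry that should be exactly zero can survive as roundoff and inflate the rank.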

*So, let us go ahead and actually finish the diagonalization process... when I go back and solve for the Eigenvalue Λ1 = 0, for that Eigenvalue I get the following vector: (r, 0, -r) in general, and a specific one would be -- let us see -- (1,0,-1).*1392

*Therefore, our matrix p has these Eigenvectors as its columns: (1,0,-1) as column 1, (0,1,0) as column 2, and (0,0,1) as column 3. This is our matrix p.*1417

*It is of course diagonalizable. Matrix d is going to end up being (0,0,0), (0,1,0), (0,0,1); these entries along the main diagonal are the Eigenvalues of our matrix. As for our matrix p, we can find the inverse for it, and when we multiply we will find that d, in fact, equals p inverse × a × p.*1428
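Since d = p⁻¹ × a × p is the same statement as a × p = p × d, we can verify this diagonalization without ever computing the inverse. A minimal Python sketch (illustrative; matmul is a hand-rolled helper, with the matrices from this example):

```python
def matmul(A, B):
    # Entry (i, j) is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 0, 0], [0, 1, 0], [1, 0, 1]]
# Columns of P are the eigenvectors (1,0,-1), (0,1,0), (0,0,1).
P = [[1, 0, 0], [0, 1, 0], [-1, 0, 1]]
# D carries the eigenvalues 0, 1, 1 on its main diagonal.
D = [[0, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A, P) == matmul(P, D))  # True, so D = P^-1 A P
```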

*So, now, the procedure for diagonalizing a matrix a... this is going to be our recap. The first thing we want to do is form the characteristic polynomial, which is symbolized with f(Λ) = the determinant of (Λ × i_{n} - a)... Λ is our variable... Λ times the identity matrix, minus a.*1458

*That is our polynomial. Then, we find the roots of the characteristic polynomial... okay?*1502

*If not all real, if they are not all real, then you can stop... it cannot be diagonalized.*1517

*Okay. Three. For each Eigenvalue Λ_{i} of multiplicity k_{i}... find a basis for the null space of (Λ_{i} × i_{n} - a) × x = 0.*1537

*So, for each Eigenvalue Λ_{i} of multiplicity k_{i}, find a basis for the null space... also called the Eigenspace.*1581

*If the dimension of the null space that we just found is less than k_{i}, then you can stop. a cannot be diagonalized.*1594

*If so, well, then, let p be the matrix whose columns are the n linearly independent Eigenvectors found above... then p inverse × a × p = d, where d has the Λ_{i} along the diagonal. The diagonal entries are the Eigenvalues.*1622

*Okay. So, let us recap. We want to form the characteristic polynomial once we are given a matrix a, n by n.*1697

*Once we find the characteristic polynomial, we want to find its roots. The roots have to all be real.*1704

*If they are not all real, then you can stop. You cannot diagonalize the matrix.*1713

*If they are real, well, for each Eigenvalue Λ_{i} of multiplicity k_{i}, you want to find a basis for the null space of that equation... we solve the homogeneous system.*1718

*Well, if the dimension of that null space is equal to k_{i}, we can continue. If not, we can stop; we cannot diagonalize the matrix.*1735

*But, if we can... for each distinct Eigenvalue we are going to have at least 1 Eigenvector. We will take those n linearly independent Eigenvectors that we just found and arrange them as columns, respectively.*1742

*So Λ1, column 1, Λ2, column 2... that is going to be our matrix p. When we take p inverse × a × p, that actually is equal to our diagonal matrix d, the entries of which are the respective Eigenvalues of our original matrix a.*1757

*So, we will be dealing more with Eigenvalues and Eigenvectors in our next lesson, so we are not quite finished with this, it gets a little bit deeper. We will have a little more exposure with it.*1780

*So, until then, thank you for joining us at Educator.com, and we will see you next time.*1791

*Welcome back to linear algebra, so we have talked about linear systems, we have talked about matrix addition, we have talked about scalar multiplication, things like transpose, diagonal matrices.*0000

*Now we are going to talk about the dot product and matrix multiplication. Matrix multiplication is not numerical multiplication... yes, it does involve standard multiplying of numbers, but it is handled differently.*0011

*And one of the things that you are going to notice about this is that matrix multiplication does not commute.*0028

*In other words, I know that 5 times 6 = 6 times 5, I can do the multiplication in any order and it ends up being 30.*0035

*However if I take the matrix A and multiply by a matrix B, it's actually not the same as the matrix B multiplied by A.*0041

*It might be, but there is no guarantee that it will be, and in fact most of the time it won't be, so that's the one thing that's actually different about matrices as opposed to numbers.*0049

*Let's just jump in, get started and see what we can do, okay.*0058

*Let's go ahead and start with a definition, and this is going to be our definition of the dot product, which is going to be very important; it shows up in all areas of science. So we will let...*0065

*... equal, A1, A2, A3, let me erase this A3 and just use ... all the way to AN.*0083

*And B = B1, B2..... BN. Okay, so let me just explain what this notation means here.*0102

*Whenever we see, normally, a lowercase letter a, b, c, d, x... we will often use x with an arrow on top... that means it's a vector, and a vector is just a list of numbers.*0116

*A1, A2 all the way to AN in this particular case we are talking about an N vector, which means it has N entries, so 5 vector would have 5 entries.*0131

*An example might be, let's say, the vector V = (1, 3, 7, 6). That's all this means: this is the vector, and these are the components of that vector.*0140

*It's composed of (1, 3, 7, 6), it's a four vector, because it has four entries in it, that's all this notation means, this is just a generalized version of it.*0153

*Okay, so let A, the vector A, = A1 to AN, and let the vector B = B1 through BN. Now we define something called the dot product as the following: A.B.*0163

*The dot product of the two vectors is equal to A1 times B1 + A2 times B2 + ... + A_{n}B_{n}, and I am going to write this in Σ notation.*0180

*Σ notation I'll explain in just a minute, if you guys haven't seen it. I am sure you have, but I know that you don't deal with it all too often.*0196

*Okay, so if the vector A is composed of A_{1} through A_{N}, and B is the list B_{1} through B_{N}, the dot product A.B = the products of the corresponding entries added together.*0205

*When I add these together, I end up with a number, so the dot product of two vectors gives me a scalar; it gives me a number, so I just add them all up.*0222

*This Σ notation is the capital Greek letter S; it stands for sum, and it says take the sum of the products of the Ith entry of A and the Ith entry of B.*0233

*Multiply them together and add them: A1B1 for I = 1, and then go to the next one, I = 2, + A2B2, and then go to I = 3, + A3B3.*0246

*This is just a shorthand notation for this. We won't deal with Σ notation all that much; outside of our definitions, whatever we do, I'll usually write this out explicitly.*0260

*I just want you to be aware that in your book, you'll probably see this; you'll definitely see it in the future.*0270

*That's all this means, it's a short hand notation for a very long sum, so don't let the symbolism intimidate you, scare you, confuse you, anything like that, it's very simple.*0275

*Okay, let's just do an example of a dot product and everything should make sense, so example; we will let...*0285

*... Vector A = (1, 2, -3 and 4), so this is a four vector, and will let B = (-2, 3, 2, 1), notice I wrote one of them in row form, one of them in column form.*0298

*This is also a four vector because we have four entries, I wrote it this way because in a minute when we talk about matrix multiplication, it's going to make sense, it will make sense why it is that I wrote it this way, but just for now understand that there is no real difference between these two.*0319

*I could have written this as a column, I could have written this as a row, it's just a question of corresponding entries.*0335

*But I did it like this because in a minute, when we do matrix multiplication, symbolically it's going to help make sense when you move your fingers across a row and down a column, to just sort of keep things straight, because with matrix multiplication there is a lot of arithmetic involved.*0340

*Okay, so our dot product A.B here... A.B... we just go back to our definition; it says take corresponding entries and just multiply them together, and that's all you have got to do.*0356

*I take A1 times B1, so which is 1 times -2, which is -2 + 2 times 3, which is 6 + -3 times 2, which is -6...*0370

*... + 4 times 1, which is 4.*0392

*Well, that equals... the 6 and the -6 cancel, and -2 + 4 gives me a 2, so I have a 4 vector times a 4 vector, and I end up with the number 2.*0396

*The dot product of two vectors is a scalar, and all I am doing is multiplying corresponding entries and adding them all up; that's it: simple arithmetic, nice and easy, no worries.*0408
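As a quick illustration (not part of the lecture's materials), the whole computation fits in a few lines of Python:

```python
def dot(a, b):
    # Multiply corresponding entries and add up the total.
    assert len(a) == len(b), "dot product needs vectors of equal length"
    return sum(x * y for x, y in zip(a, b))

a = [1, 2, -3, 4]
b = [-2, 3, 2, 1]
print(dot(a, b))  # -2 + 6 - 6 + 4 = 2
```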

*Let's go ahead and move forward now on to matrix multiplication, okay.*0423

*Let me go ahead and write down the definition of matrix multiplication, and then we will do some examples.*0429

*We will let A = the matrix A_{IJ}...*0439

*... which is M by P, so this is an M by P matrix.*0448

*Let B be the matrix B_{IJ}, which is...*0458

*... P by N, so A is M by P and B is P by N. Notice that the number of columns of the matrix A is equal to the number of rows of the matrix B; that's going to be very important.*0468

*Then AB is the...*0482

*... M by N matrix.*0493

*C = C_{IJ}, such that the IJth entry of C is equal to Row_{I} of A...*0498

*In other words, the Ith row of A dotted with the Jth column of B.*0517

*Let's take a look at the definition again: A is a matrix A_{IJ}, and it is M by P; B is a matrix B_{IJ}, and it is P by N.*0526

*When I multiply those two matrices, essentially what happens is that the column count of the first matrix, the one on the left, cancels with the row count of the matrix on the right, and what you end up with is a matrix which is M by N.*0536

*And that matrix is such that the IJth entry = the Ith row of A dotted with the Jth column of B; that's why this P and this P have to be the same.*0554

*In order to multiply two matrices, let's write this one out specifically, okay.*0573

*In order to multiply two matrices....*0581

*... The number of columns of the first...*0596

*... Must equal...*0603

*... The number of rows of the second, and that's what this says: M by P, P by N... the number of columns of the first has to equal the number of rows of the second.*0611

*That's the only way that matrix multiplication is defined, and what we mean when we say "is defined" is that if they are not the same, you can't do the multiplication.*0623

*That's what defined means, it's the only way you can do it if that's the case, okay.*0632

*Let's see what we have got. So, for example, if I have a 2 by 3 matrix and I want to multiply it by a 3 by 2 matrix, yes, I can do that, because the number of columns of the first one is equal to the number of rows of the second one, and essentially they go away.*0639

*What I am left with is the final matrix which is 2 by 2, this is kind of interesting.*0661

*Now notice if I reverse them and if I did a 3 by 2 matrix, and if I multiply that by a 3 by 2 matrix, I am sorry 2 by 3...*0666

*... Now, it is defined, number of columns of the first equals the number of rows of the second, so now I end up with a 3 by 3 matrix, okay.*0686

*These are all defined, that will work.*0698

*Let's see: 2 by 3, 3 by 2... 3 by 2, 2 by 3. So notice what's happened here; take a quick look: I have a 2 by 3 times a 3 by 2, which gives me a 2 by 2.*0703

*If I switch these, a 3 by 2 times a 2 by 3, I get a 3 by 3... and a 3 by 3 and a 2 by 2 are not the same.*0714

*In general, not only do the dimensions not match... even when they do, it won't work: AB is not equal to BA. That's the take-home lesson for this: matrix multiplication does not commute.*0725

*AB does not equal BA, and we will actually do an example later on where we can actually do AB and BA, but they end up being completely different matrices.*0737
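The dimension bookkeeping alone already shows most of this. Here is a small sketch in Python (illustrative; shapes are (rows, columns) pairs):

```python
def product_shape(shape_a, shape_b):
    # Shape of A*B if the product is defined, otherwise None.
    m, p = shape_a
    q, n = shape_b
    if p != q:
        return None  # columns of A must equal rows of B
    return (m, n)

print(product_shape((2, 3), (3, 2)))  # (2, 2)
print(product_shape((3, 2), (2, 3)))  # (3, 3)
print(product_shape((2, 4), (3, 2)))  # None: not defined
```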

*Okay, let's do some examples. Let's let A = (1, 2, -1, 3, 1, 4)... when doing matrix multiplication, go very slowly and systematically.*0747

*There is a lot of arithmetic, lots of room for mistakes... and B = (-2, 5, 4, -3, 2, 1), okay.*0767

*We said that the IJth entry = the Ith row of A dotted with the Jth column of B. Well, what are we looking at here? A... let me use black... this is a 2 by 3 matrix, and this is a 3 by 2.*0779

*Yes, it is defined because this 3 and this 3 are the same, so we should end up with a 2 by 2 matrix, okay.*0799

*Let's go ahead and put a little placeholder here for our 2 by 2 matrix. Now, for our...*0810

*... this first entry, first row, first column: it's going to equal the first row of A dotted with the first column of B, so it's going to be that row and that column, so I take 1 times -2.*0820

*Let me actually write it over here... so now we are doing the A11 entry, this one right up here. The A11 entry equals 1 times -2, which is -2; 2 times 4, which is 8; -1 times 2, which is -2.*0838

*-2 - 2 + 8, answer should be 4, so 4 goes there.*0861

*Now let's do this entry which is the first row, second column, well the first row, second column means I take the dot product of the first row of A and the second column.*0870

*A12 Entry = 1 times 5, which is 5, 2 times -3, which is -6, -1 times 1, which is -1.*0882

*That means, let's try this again without these little extra lines, 1 times 5 is 5, 2 times -3 is -6, -1 times 1, -1.*0895

*5 - 6 is -1, minus 1 more is -2, so this becomes -2. Now we are going to go to the second row, first column, which means we do second row, first column.*0909

*This is A21, 3 times -2 is -6, 1 times 4 is 4, 4 times 2 is 8, so 8 + 4 is 12 - 6 is 6, so this entry is 6.*0922

*And now we have our last entry which is the 2,2, so the 2,2 entry, second row, second column, which means we dot product the second row with the second column, second row of A, second column of B.*0941

*3 times 5 is 15, 1 times -3 is -3, 4 times, oops, that's nice.*0955

*4 times, is that a -1, I don’t even know, no that's 1...*0972

*4 times 1 is 4, so we get 15 + 4, which is 19; 19 - 3 is 16, so this entry is 16.*0983

*So the product AB = (4, -2, 6, 16). A 2 by 3 matrix multiplied by a 3 by 2 matrix gives us a 2 by 2 matrix, and we get that by: this row, this column; this row, this column; this row, this column; this row, this column.*0996
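The row-times-column recipe we just walked through can be written as a short function. A sketch in plain Python (illustrative), with the same A and B as this example:

```python
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "columns of A must equal rows of B"
    # Entry (i, j) is the dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

A = [[1, 2, -1], [3, 1, 4]]      # 2 by 3
B = [[-2, 5], [4, -3], [2, 1]]   # 3 by 2
print(matmul(A, B))  # [[4, -2], [6, 16]]
```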

*That's all you are doing, rows and columns, now you know why I arranged it, remember a little bit back when we did dot product, I arranged it, the first one horizontally and the other one vertically.*1021

*This is the reason why, because when we multiply, we are doing this times that, this times that, this times that, we can move one this way, one this way, it seems sort of, it's a way to keep things separate, as one hand, one finger moves across a row.*1031

*The other finger should move down a column, as opposed to going this way or this way, okay.*1047

*Let's do another example here, we will...*1058

*... Okay, let A equal... this is going to be a 3 by 3, so it's (1, -2, 3, 4, 2, 1, 0, 1, -2).*1066

*And B is equal to, let's make it a 3 by 2, so this is a 3 by 3, and then we have (1, 4, 3, -1, -2, and 2) , so this is a 3 by 2, so AB.*1084

*A times B is defined, so AB is defined, and... well, the two inner 3's go away, so we are left with a 3 by 2. AB is a 3 by 2 matrix.*1105

*Okay, well, let's just multiply it out. This time we are not going to write everything out; we are just going to do the multiplication and keep straight the final numbers.*1121

*We know that we are looking for a 3 by 2, so let's just start putting in entries, well the first entry; first row first column is going to be first row first column.*1131

*1 times 1 is 1; -2 times 3 is -6, so that's 1 - 6, which is going to be -5; and then 3 times -2 is -6, so that is going to be -11.*1142

*This is going to be -11, there. And now we are going to do the second entry, okay... first row, second column, which means the first row of A dotted with the second column of B.*1161

*One times 4 is 4, -2 times -1 is 2, 4 + 2 is 6, and then 3 times 2 is 6, that becomes 12.*1172

*When we continue in this way, we end up with 8, we end up with 16, we end up with 7, we end up with -5. That's our AB.*1185

*Okay, now let's try something.*1196

*Let's let A = (1, 2, -1, 3), and we will let B = (2, 1, 0, 1). In this case, because this is 2 by 2, and because this is 2 by 2, both AB and BA are defined.*1203

*I can do the multiplication, well let's do the multiplication and see if AB = BA.*1224

*There are two ways that you can demonstrate non-commutativity: if the dimensions don't match when you switch them, or if the multiplication is defined and doable this way and that way.*1229

*Then you might end up with different matrices, again proving that it doesn't commute, alright.*1246

*Let's see what we have got. When we do AB, okay, we end up with the following: we end up with (2, 3, -2, 2), and when we do BA... we said it is defined.*1252

*We end up with (1, 7, -1, 3).*1266

*AB and BA are not the same, AB is not equal to BA, matrix multiplication does not commute.*1271
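The same check in code, with a hand-rolled 2 by 2 multiplication (illustrative only):

```python
def matmul(A, B):
    # Row of A dotted with column of B for each entry.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [-1, 3]]
B = [[2, 1], [0, 1]]
print(matmul(A, B))                  # [[2, 3], [-2, 2]]
print(matmul(B, A))                  # [[1, 7], [-1, 3]]
print(matmul(A, B) == matmul(B, A))  # False: AB != BA
```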

*Okay, so now let's talk about matrices and linear systems, so we introduced linear systems in our first lesson, we talked about matrices in our second, and we have just introduced matrix multiplication.*1283

*Now let's combine them together to see if we can take a matrix and represent it as a linear system, or a linear system and represent it in matrix form.*1297

*Let me go back to blue here. We will let, excuse me, A = A_{11}, A_{12}, A_{13}, A_{21}, A_{22}, A_{23}, A_{31}, A_{32}, A_{33}.*1307

*And we will let X, with the little arrow, be our vector... let's call it X_{1}, X_{2}, X_{3}. This is the vector formulation, and this is the component form; it's just a 3 vector.*1332

*Okay, so this is a, we can do this in red, this is a 3 by 3 matrix, and this is a 3 by 1, right, so if I multiply this matrix by that vector X, well it's just a 3 by 3 times a 3 by 1.*1346

*Well those are the same, so I end up with a 3 by 1, it is defined and it's going to equal some vector b, which is going to be a 3 by 1 vector, just something with 3 entries in it.*1366

*And let's let B therefore equal... we will call it B_{1}, B_{2}, B_{3}, so again we have a matrix.*1383

*We have this 3 vector; I can multiply them because the matrix multiplication is defined. The answer is going to be a 3 vector, so we will call that 3 vector B, and we will call its components B_{1}, B_{2}, B_{3}.*1394

*Okay, well, let's actually do the multiplication here... so A times X... I am sorry, AX.*1407

*When I do this multiplication: this row, this column; this row, this column; this row, this column.*1415

*Here is what I get: A_{11} times X_{1} + A_{12} times X_{2} + A_{13} times X_{3}. That's what I get; that's the multiplication.*1423

*A_{11} X_{1} + A_{12} X_{2} + A_{13} X_{3}, and then I do the second row with that column again: I get A_{21} X_{1} + A_{22} X_{2} + A_{23} X_{3}.*1435

*And then I will do the third row, which is A_{31} X_{1} + A_{32} X_{2} + A_{33} X_{3}. That's going to be my matrix.*1455

*That's my actual matrix multiplication; well, I know that equals this B, so I write B_{1}, B_{2}, and B_{3}.*1469

*Well, this thing = this thing, this thing = equals this thing, this thing = this thing, that's what this says, this is just a 3 by 1 in its most expanded form.*1481

*That's the A times the X; this thing is the B; they are equal. And so now I am just going to set corresponding entries equal to each other... this whole thing is equal to that.*1491

*I write A_{11} X_{1} + A_{12} X_{2} + A_{13} X_{3} = B_{1}.*1500

*A_{21} X_{1} + A_{22} X_{2} + A_{23} X_{3} = B_{2}... and I am sorry that I have got extra little lines here that are showing up.*1514

*I'll try to write a little bit slower: A_{31} X_{1} + A_{32} X_{2} + A_{33} X_{3} = B_{3}.*1528

*Well take a look at this, this is just a linear system, that's it, it's just a linear system, you have seen this before.*1544

*This is three equations in three variables...*1554

*... X_{1}, X_{2}, X_{3}; X_{1}, X_{2}, X_{3}; X_{1}, X_{2}, X_{3}... these A_{11}'s, A_{12}'s... all of these are coefficients, and these B's are the values on the right-hand side.*1561

*You can actually write a linear system as a matrix, so it looks like A_{11}, A_{12}, A_{13}... this is the matrix of coefficients for the linear system.*1581

*A_{21}, A_{22}, A_{23}, A_{31}, A_{32}, A_{33}... and then you multiply it by the variables, which are X_{1}, X_{2}, X_{3}, and it equals B_{1}, B_{2}, B_{3}.*1602

*We can take a linear system and represent it in matrix form; we take the matrix of coefficients, so this is the coefficient matrix.*1626

*M by N... in this case it is 3 by 3, but it can be anything. This is the matrix of variables, the variable matrix, and it's always going to be some N vector.*1640

*And this is just... you might call it the solution matrix, but it's not really the solution matrix: the solutions are what you get once you find X1, X2, X3... those are going to be your solutions. So, you know what, let's not even give this a name.*1657

*It's the B that makes up the right side of the equality in the linear system. Okay, now, given this, as it turns out...*1678

*We can form a special matrix...*1692

*... If we attach...*1701

*... B_{1}, B_{2}, B_{3} to the coefficient matrix...*1709

*... Okay, now we are going to write this out one more time, so I am going to take this thing, and I am going to add another column to this matrix.*1723

*I end up with (A_{11}, A_{12}, A_{13}), B_{1}; (A_{21}, A_{22}, A_{23}), B_{2}; (A_{31}, A_{32}, A_{33}), B_{3}.*1732

*And sometimes we put a little dotted line there, just to let you know that this is the augment... and this is called an augmented matrix.*1752

*All I have done is I have augmented my coefficient matrix with my solutions on the right for the linear system, and we do separate it.*1759

*Sometimes you can see a solid, I tend to put a solid line, that's just my personal preference, some people don't put anything at all, again it's, it's completely up to you.*1770

*Therefore any linear system can be represented in matrix form and vice versa, any matrix with more than one column can be thought of as forming a linear system.*1780

*Let's see what we have here. Example: as a system, we have -2X + 0Y + Z = 5, and then we have 2X + 3Y - 4Z = 7.*1796

*We have 3X + 2Y + 2Z = 3, so this is our linear system and now let's break it up and in matrix form, so we want to write it this way, AX = B, this is the matrix representation.*1821

*The matrix A times a vector X gives us a vector B. So the matrix A is going to be the coefficients; it's going to be, excuse me, the -2, the 0, the 1, the 2, the 3, the -4, the 3, the 2, the 2.*1842

*(-2, 0, 1, 2, 3, -4, 3, 2, 2) that’s our coefficient matrix, X let me do this in red, oops.*1861

*X is going to be our variables, our variables happen to be X, Y and Z, that's going to be X, Y, and Z, and B, let me go back to red, B vector is going to be (5, 7, 3).*1881

*That's it, you can represent, now we will do the augmented matrix, which means take the coefficient matrix and add this to it, so we end up with A augment with B, symbolized like that.*1903

*It is equal to (-2, 0, 1, 5) and I'll go ahead and do a solid line, because I like solid lines.*1918

*(2, 3, -4, 7), (3, 2, 2, 3)... you have your coefficient matrix, and you have your augmented matrix that represents the linear system that was originally given to you like that.*1928
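Building the augment in code is a one-liner; a small Python sketch (illustrative) using this example's system:

```python
# Coefficient matrix and right-hand side of the system
#   -2x + 0y +  z = 5
#    2x + 3y - 4z = 7
#    3x + 2y + 2z = 3
A = [[-2, 0, 1], [2, 3, -4], [3, 2, 2]]
b = [5, 7, 3]

# Augment: append each right-hand-side entry to its row of coefficients.
augmented = [row + [rhs] for row, rhs in zip(A, b)]
print(augmented)  # [[-2, 0, 1, 5], [2, 3, -4, 7], [3, 2, 2, 3]]
```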

*Now, let's see, now let's go the other way, let's say we have a matrix, (2, -1, 3, 4, 3, 0, 2, 5) let's say you are given this particular matrix, this particular matrix actually can represent a linear system.*1946

*We could take a linear system, represent it in matrix form, which we just did, we can take a matrix and represent it as a linear system, if we need to.*1977

*This ends up being... so let's say that this is an augmented matrix. That means columns 1, 2, 3 are our 3 variables; that's what the columns represent, the variables... and the rows are the equations.*1986

*We have 2X - 1Y + 3Z = 4, and then 3X + 0Y + 2Z = 5.*2001

*Linear system, matrix form, matrix represent a linear system.*2023

*Excuse me... there you go. Okay, now let's talk about what we did today and recap our lesson.*2030

*We talked about the dot product of two vectors and a vector is just an N by 1 matrix, either as a column or row, it doesn't really matter.*2042

*What you do is you multiply the corresponding entries in the two vectors and you add up the total.*2051

*The dot product gives you a single number, a scalar; it's also called the scalar product. So, dot product, scalar product... as you go on in mathematics, you will actually refer to it as a scalar product, not necessarily a dot product.*2056

*After that we talked about matrix multiplication where we actually invoke the dot product, so with matrix multiplication you can only multiply two matrices if the number of columns in the first matches the number of rows in the second.*2070

*Matrix multiplication does not commute, in other words A times B does not equal B times A in general.*2084

*It might happen accidentally, but it's not true in general.*2091

*The IJth entry in the product is the dot product of the Ith row of the first and the Jth column of the next.*2095

*Okay, now matrix representations of linear systems, any linear systems of equations can be represented as an augmented matrix, you take the matrix of coefficients and you add the column of solutions.*2107

*Any matrix with more than one column can represent a linear system of equations, that last column is going to be your solutions, that's the augment.*2122

*Okay, so let's do one more example here. We will let A = (3, 5, 2, 4, 9, 2)... excuse me.*2134

*And B = (1, 0, 1, 6), (2, 1, 3, 7), so here we have a 3 by 2 matrix and here we have a 2 by 4 matrix.*2153

*Yes, the 2's on the inside are the same; they end up cancelling, and it's going to end up giving us a 3 by 4 matrix... we are left with the two outside numbers.*2172

*We are going to be looking for a matrix which is 3 by 4. This is kind of interesting if you think about it: 3 by 2, 2 by 4... now you get a 3 by 4, something that's bigger than both in some sense, okay.*2182

*AB equal to, well we take the first row and first column, 3 times 1 + 5 times 2, 3 times 1 is 3, 5 times 2 is 10, you end up with 13, first row, second column.*2195

*Well, you take the first row dotted with the second column: 3 times 0 is 0, 5 times 1 is 5, so you end up with 5. Then you keep going.*2214

*You end up with (18, 53), (10, 4, 14, 40), (13, 2, 15, 68), so our product AB = this matrix.*2224

*Notice, A times B is defined. If I did B times A... well, B times A is a 2 by 4 times a 3 by 2.*2243

*This 4 and this 3 aren't equal, so BA is not even defined. We can't even do the multiplication, let alone find out whether it equals AB or not, which in general it doesn't... so in this case it's not even defined.*2256

*It only works when A and B are such that A is on the left of B and B is on the right of A, and we will often say in linear algebra: multiply by this on the left, multiply by this on the right.*2267

*We don't do that with numbers, we just say multiply the numbers, okay now let's let the variable that's the X, the vector = X, Y and Z and let's let the vector Z = (4, 2, 9).*2279

*Now we want to express...*2300

*.. Well actually we don't want it, let's go ahead...*2308

*... We want to express AX = Z as a linear system and as an augmenting matrix, both, so we have a matrix A, that's this one; we have a matrix X, we have a matrix Z.*2314

*We want to express AX = Z as a linear system and an augmented matrix. Okay... well, wait a minute; let's check that the multiplication actually works.*2332

*We can't even do this... A is 3 by 2, so this can't be (X, Y, Z); this is going to have to be (X, Y), my apologies.*2346

*(X, Y), there we go, because this is 3 by 2 and this is 2 by 1... yes, we want the multiplication to be defined, so we end up with (3, 5, 2, 4, 9, 2)...*2356

*... Times XY = (4, 2, 9) so you end up with, well 3X + 5Y = 4.*2375

*3X + 5Y = 4, because you are doing the first row, first entry, first row first column, 2X + 4Y, 2X + 4Y = 2.*2392

*And 9X + 2Y... 9X + 2Y = 9. All we have done is: this times that equals that, this times that equals that, this times that equals that.*2409

*And we end up with our linear system. Now we want to convert that to an augmented matrix; well, we take the coefficient matrix: (3, 5), (2, 4), (9, 2)...*2422

*That's our coefficient matrix, right, and we augment it with the right-hand side, 4, 2, 9... (4, 2, 9).*2438

*That's all we have done: AB, A times B, is defined, and we can do the multiplication.*2451

*Given X and given Z... this is a two vector, this is a three vector... we can take AX = Z and represent it as a linear system.*2459

*We express it this way, we do the matrix multiplication, we set corresponding things equal to each other, and we have actually converted this to a linear system.*2469

*This and this are equivalent, we can take this linear system and express it completely just as a matrix, an augmented matrix by adding the solutions as the augment on the right, and we end up with that.*2477

*Okay, thank you for joining us today for linear algebra, and our discussion of dot products and matrix multiplication on linear systems.*2493

*Thank you for joining us at educator.com, we will see you next time, bye, bye.*2499

*Welcome back to Educator.com, welcome back to linear algebra.*0000

*In our previous lesson, we discussed Eigenvalues, Eigenvectors, and we talked about that diagonalization process where once we find the specific Eigenvalues from the characteristic polynomial that we get from the determinant, setting it equal to 0, once we find the Eigenvalues we put those Eigenvalues back into the arrangement of the matrix.*0004

*Then we solve that matrix in order to find the particular Eigenvectors for that Eigenvalue, and the space that is spanned by the Eigenvectors happens to be called an Eigenspace.*0026

*In the previous lessons, we dealt with some random matrices... they were not particularly special in any sense.*0042

*Today, we are going to tighten up just a little bit, we are going to continue to talk about Eigenvalues and Eigenvectors, but we are going to talk about the diagonalization of symmetric matrices.*0048

*As it turns out, symmetric matrices turn up all over the place in science and mathematics, so, let us jump in.*0057

*We will start with a - you know - recollection of what it is that symmetric matrices are. Then we will start with our definitions and theorems and continue on like we always do.*0065

*Let us see here. Okay. Let us try a blue ink today. So, recall that a matrix is symmetric if a = a transpose.*0074

*So, a symmetric matrix... is when a is equal to a transpose, or when the a transpose is equal to a.*0089

*So, it essentially means that everything that is on the off diagonals is reflected along the main diagonal as if that is a mirror.*0108

*Just a quick little example, something like (1,2,3,3)... that is... so let us say this is matrix a. If I were to transpose it, which means shift it along its main diagonal, well, (1,2,3,3)... this is equal to a transpose... it is the same thing. (1,2,3,3), (1,2,3,3), this is a symmetric matrix.*0118
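The symmetry test can be sketched numerically. A minimal NumPy illustration (the 2 by 2 matrix here is a generic symmetric example, not necessarily the one written on the board):

```python
import numpy as np

# An illustrative 2x2 symmetric matrix: the off-diagonal entries
# mirror each other across the main diagonal.
a = np.array([[1, 2],
              [2, 3]])

# A matrix is symmetric exactly when it equals its own transpose.
print(np.array_equal(a, a.T))  # prints True
```

The same one-line check `np.array_equal(a, a.T)` works for a matrix of any size.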

*Okay. Now, we will start off with a very, very interesting theorem. So, you recall, you can take this matrix, set up the Eigenvalue equation with the Λs, form the characteristic polynomial, and solve the polynomial for its roots.*0145

*The real roots of that equation are going to be the Eigenvalues of this particular matrix. Well, as it turns out, all the roots of f(Λ), the characteristic polynomial of a symmetric matrix a, are real numbers.*0164

*So, as it turns out, if our matrix happens to be symmetric, we know automatically from this theorem that all of the roots are going to be real.*0195

*So, there is always going to be a real Eigenvalue. Now, we will throw out another theorem, which will help us. *0205

*If a is a symmetric matrix, then the Eigenvectors belonging to distinct Eigenvalues -- because, you know, sometimes Eigenvalues can repeat --*0222

*those Eigenvectors are orthogonal. That is interesting... and orthogonal, as you remember, means the dot product is equal to 0, or perpendicular.*0250

*Okay. Once again, if a is a symmetric matrix, then the Eigenvectors belonging to distinct Eigenvalues are orthogonal.*0263

*Let us say we have a particular matrix, a 2 by 2 and let us say the Eigenvalues that I get are 3 and -4. Well, when I calculate the Eigenvectors for 3 and -4, as it turns out, those vectors that I get will be orthogonal. Their dot product will always equal 0.*0270

*So, let us do a quick example of this. We will let a equal (1,0,0), (0,1,1), (0,1,1), and if you take a quick look at it, you will realize that this is a symmetric matrix. Look along the main diagonal.*0288

*If I flip it along the main diagonal, as if that is a mirror, (0,0), (0,0), (1,1).*0310

*When I subject this to mathematical software... again, when you are first dealing with Eigenvectors and Eigenvalues, I imagine your professor or teacher is going to have you work by hand, simply to get you used to working with the equation.*0319

*Just to give you an idea of what it is that you are working with, some mathematical object. But, once you are reasonably familiar, you are going to be using mathematical software to extract these Eigenvalues and Eigenvectors, because sometimes the process just takes too long by hand.*0330

*So, what we get is... well, Λ1, -- let me start over here -- the first Eigenvalue is equal to 1, and that yields the Eigenvector (1,0,0).*0344

*Λ2, the second Eigenvalue is 0, 0 is a real value, and it yields the Eigenvector, -- tuh, Eigenvalue, Eigenvector, Eigenspace, yeah... I know -- Okay.*0357

*That gives me the vector (0,-1,1)... Λ3, the third Eigenvalue is 2 for this matrix, and it yields the Eigenvector (0,1,1).*0372

*If you were to check the dot product of this and this, this and this, this and this, the mutual dot products, they all equal 0. So, as it turns out, this theorem is confirmed.*0385

*The Eigenvectors corresponding to distinct Eigenvalues are mutually orthogonal. Okay.*0396
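Both theorems can be confirmed numerically. A sketch with NumPy, assuming the symmetric matrix whose rows are (1,0,0), (0,1,1), (0,1,1), which is the matrix consistent with the Eigenvalues 1, 0, 2 and the Eigenvectors quoted above:

```python
import numpy as np

# Symmetric matrix consistent with the eigendata in the example.
a = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# np.linalg.eigh is specialized for symmetric matrices: it returns real
# eigenvalues (ascending) and orthonormal eigenvectors as columns.
vals, vecs = np.linalg.eigh(a)

print(np.allclose(np.sort(vals), [0.0, 1.0, 2.0]))  # all roots are real: 0, 1, 2
print(np.allclose(vecs.T @ vecs, np.eye(3)))        # eigenvectors mutually orthogonal
```

Both checks print True: the roots are real, and the pairwise dot products of eigenvectors for distinct eigenvalues vanish.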

*Now, let us move onto another definition. Okay. A non-singular -- excuse me -- matrix a, remember non-singular means invertible, so it has an inverse... is called orthogonal if a inverse is equal to a transpose.*0405

*We use the word orthogonal in two different ways. We apply it to two vectors when their dot product is 0, but in this case we call a matrix orthogonal if the inverse of the matrix happens to equal the transpose of the matrix.*0435

*An equivalent statement to that... I will put equivalent, is a transpose a is equal to the identity matrix.*0452

*Well, just look at what happens here. If I take a, this says a inverse is equal to a transpose. Well, if I multiply by the matrix a on both sides on the right, a transpose a, that is this one, a inverse a is just the identity matrix, so these are two equivalent statements.*0465

*I personally prefer this definition right here. So, a non-singular matrix is called orthogonal, so it is an orthogonal matrix if the inverse and the transpose happen to be the same thing. That is a very, very special kind of matrix.*0481

*So, let us do a quick example of this. If I take the Eigenvectors that I got from the example that I just did, so the Eigenvectors that I just got were (1,0,0), (0,-1,1), and (0,1,1), okay? These are for the respective Eigenvalues 1, 0, 2.*0496

*First thing I am going to do, I am actually going to normalize these. So normalization, it just means taking them and dividing by the length of the vector. So, this vector actually... let me use red here for normalization.*0524

*This one stays (1,0,0), so let me put... normalized... for this one, well, (-1)^2 + 1^2 = 2, so the length of this vector is sqrt(2), and it becomes (0,-1/sqrt(2),1/sqrt(2)).*0536

*This one is the same thing. We have (0,1/sqrt(2),1/sqrt(2)), now if I take these vectors and set them up as columns in a matrix, and this is just something random that I did. I happened to have these available, so let us call this p.*0561

*p is equal to the matrix (1,0,0), (0,-1/sqrt(2),1/sqrt(2)), (0,1/sqrt(2),1/sqrt(2)).*0578

*This matrix p, if I were to calculate its inverse, and if I were to calculate its transpose, they are the same.*0593

*p inverse equals p transpose. This is an orthogonal matrix.*0602

*So again, we are using orthogonal in two different ways. They are related, but not identical. We call two vectors orthogonal when their dot product is 0; we call a matrix orthogonal if its inverse and its transpose are the same thing.*0610

*Now, let us go back to blue ink here, and state another theorem.*0629

*An n by n matrix is orthogonal if, and only if, the columns or rows, so I will put rows in parentheses form an ortho-normal set of vectors in RN.*0640

*Okay. An n by n matrix is orthogonal if and only if the columns form an orthonormal set of vectors in RN.*0679

*So, if I have a matrix, and let us just take the columns... if the columns form an ortho-normal set, meaning that the length of... column 1 is a vector, column 2 is a vector, column 3 is a vector... if the length of those three is 1, that is the normal part, and if they are mutually orthogonal, well, this thing that we did right here... these columns we normalized it.*0687

*So, by normalizing it, we made the length 1 and these are mutually orthogonal, so this is an orthogonal matrix. If we did not know it already by finding the inverse and the transpose.*0712

*If I just happen to look at this and realize that, whoa, these are all normalized and these are mutually orthogonal. Then, I can automatically say that this is an orthogonal matrix, and I would not have to calculate anything. That is what this theorem is used for.*0723

*Okay, so now let us talk about a very, very, very important theorem. Certainly one of the top 5 in this entire course.*0737

*It is quite an extraordinary theorem when you see the statement of it and when we talk about it a little bit. Let me do it in red here.*0745

*So -- excuse me -- if a is a symmetric n by n matrix. Then there exists an orthogonal matrix p, such that p inverse × a × p is equal to some diagonal matrix d, a diagonal matrix, with the Eigenvalues of a along the main diagonal.*0751

*Okay, so not only is a symmetric matrix always diagonalizable, but I can actually diagonalize it with a matrix that is orthogonal, where the columns and the rows are of length 1 and they are mutually orthogonal. Their dot product equals 0.*0834

*That is really, really extraordinary, so let us state this again. If a is a symmetric n by n matrix, then there exists an orthogonal matrix p such that p inverse × a × p gives me some diagonal matrix.*0851

*The entries along the main diagonal are precisely the Eigenvalues of a. That is what this equation tells me, that there is this relationship.*0866

*If I have a matrix a, I can actually take the Eigenvalues of a, I will bring them along the main diagonal and I can find a matrix p, such that when I take p inverse, when I sandwich a between p inverse and p, I actually produce that diagonal by composing the multiplication of this matrix and this matrix and this matrix. That is extraordinary, absolutely extraordinary.*0875
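This spectral theorem can be watched in action with NumPy. A sketch, using a symmetric example matrix; `eigh` hands back exactly such an orthogonal p, so p inverse is just p transpose:

```python
import numpy as np

# A symmetric matrix (the earlier 3x3 example).
a = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# For a symmetric matrix, eigh returns an ORTHOGONAL eigenvector matrix p.
vals, p = np.linalg.eigh(a)

d = p.T @ a @ p                       # p^{-1} = p^T, so this is p^{-1} a p
print(np.allclose(d, np.diag(vals)))  # diagonal, with the eigenvalues of a
```

The sandwich p transpose times a times p really does come out diagonal, with the Eigenvalues of a on the main diagonal.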

*So, let us see what happens when we are faced with an Eigenvalue which is repeated.*0900

*Remember, sometimes you can have an Eigenvalue, your characteristic polynomial can have repeated roots... so, let us say you have a 3 by 3, and you have Eigenvalues (1,1,2); well, the 1 has a multiplicity of 2, because it shows up twice.*0905

*Okay, let us see how we deal with that. Let us go back to a blue ink here... oops.*0920

*If we are faced with an Eigenvalue of multiplicity k, then, when we find a basis for the null space associated with this Eigenvalue, in other words finding a basis for the Eigenspace, finding the Eigenvectors, that is all this means because that is what you are doing... you put the Eigenvalue back in that equation, you solve the homogeneous equation and you get a basis for the null space, which is the Eigenvectors associated with this Eigenvalue.*0936

*We use the Gram Schmidt ortho-normalization process to create an orthonormal basis for that Eigenspace.*1012

*So if I have an Eigenvalue which repeats itself, and once I find a basis for that Eigenspace, for that particular Eigenvalue, I can ortho-normalize and actually create vectors that are, well, orthonormal, and that will be my one set. Then I move on to my next Eigenvalue.*1049

*If my matrix is symmetric, I am guaranteed that the distinct Eigenvalues will give me things that are going to be mutually orthonormal.*1068
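The Gram-Schmidt step described above can be sketched in a few lines of NumPy (a minimal implementation for illustration; the input vectors are the repeated-eigenvalue basis from the worked example below):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (a minimal sketch)."""
    basis = []
    for u in vectors:
        v = np.asarray(u, dtype=float)
        for b in basis:
            v = v - (v @ b) * b      # subtract the projection onto each earlier vector
        basis.append(v / np.linalg.norm(v))
    return basis

# Basis of the eigenspace for a repeated eigenvalue (the example coming up).
v1, v2 = gram_schmidt([[-1, 1, 0], [-1, 0, 1]])
print(np.isclose(v1 @ v2, 0.0))            # mutually orthogonal
print(np.isclose(np.linalg.norm(v2), 1.0)) # each of length 1
```

Running this on (-1,1,0) and (-1,0,1) reproduces the orthonormal pair (-1/√2, 1/√2, 0) and (-1/√6, -1/√6, 2/√6) computed by hand in the example.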

*Let us do a problem, and I think everything will fall into place very, very nicely.*1080

*So, example... we will let a = (0,2,2), (2,0,2), and (2,2,0)... 2... 2... 0...*1087

*Let us confirm that this is symmetric. Yes. 2, 2, 2, 2, 2, 2, absolutely. Main diagonal is the mirror. If you flip it you end up with the same thing.*1103

*Okay. Let us do the characteristic polynomial. Let us actually do this one a little bit in detail. It equals the determinant of Λ - 0, -2, -2... -2, Λ - 0, -2... -2, -2, Λ - 0... Λ's along the diagonal and negatives everywhere else.*1114

*We want the determinant of this. When we take the determinant of this, we actually end up with the following in factored form... (Λ + 2)^2 × (Λ - 4).*1139

*So, I have solved for this polynomial and I have turned it into something factored. So, I get -- let me put it over here -- Λ1 = -2.*1149

*Λ2 is also equal to -2; that is what this exponent 2 here means. Okay. That means this Eigenvalue Λ = -2 has a multiplicity of 2, it shows up twice.*1160

*Of course, our third Λ, the third Eigenvalue, is going to equal 4. So, now let us go ahead and solve this homogeneous system.*1170

*Well, I take -2, I stick it into here, and I solve the homogeneous system. So, I end up with the following.*1182

*I end up -- let me actually write... let me do this... no, it is okay -- so for Λ = -2, we get the following system... we get -2, -2, -2, 0.*1193

*It is this thing, and then the 0's over here, -2, -2, -2, 0. -2, -2, -2, 0.*1213

*Well, when we subject that to reduced row echelon form, we end up with 1, 1, 1, 0 in the first row, and 0's everywhere else.*1225

*So, this column, this column... so, we get -- let me do it this way -- x3, let us set it equal to s; this does not have a leading entry, so it is a free parameter.*1239

*x2 also does not have a leading entry. Remember, this does not have to be in diagonal form; x1's column is the only one with a leading entry.*1252

*So, set that equal to r, and x1 is equal to, well, -r, -s.*1260

*This is equivalent to the following... r × -1, 1, 0 + s × -1,0,1.*1271

*Okay. So, these 2 vectors right here form a basis for our Eigenspace. They are our Eigenvectors for this, for these Eigenvalues.*1291

*Well, what is the next step? We found the basis, so now we want to go ahead and we want to ortho-normalize them.*1305

*We want to make them orthogonal, and then we want to normalize them so they are orthonormal. So, we go through the Gram Schmidt process.*1316

*So, let me rewrite the vectors. I have (-1,1,0) and (-1,0,1) -- so that we have them in front of us -- this is a basis for the Eigenspace associated with Λ = -2. Okay.*1323

*So, we know that our first v1, this is going to be the first vector... we can actually take this one. So, I am going to let v1 = -1, 1, 0.*1355

*That is going to be our standard. We are going to orthogonalize everything with respect to that one.*1368

*Well, v2 is equal to... this is u1, this is u2... u2 - [(u2 · v1)/(v1 · v1)] × v1.*1373

*This is the definition of the ortho-normalization process, the Gram Schmidt process. You take the second vector, and you subtract... you work forward.*1399

*I will not recall the entire formula here, but you can go back and take a look at it where we did a couple of examples of that orthogonalization.*1409

*When you put all of these in, u2 is this one, v1 is this one, and you do the multiplication, you end up with the following... (-1/2, -1/2, 1).*1416

*Okay. Now, you remember I do not need the fractions here because a vector in this direction is... well, it is in the same direction, so the length of these individual values does not really matter.*1433

*So, I am just going to take (-1, -1, 2). Okay. So, now, (-1, 1, 0)... and (-1, -1, 2) -- I am not taking fractions here; what I am doing is actually multiplying everything by 2.*1447

*I can multiply a vector by anything because all it does is extend the vector or shorten the vector, it is still in the same direction, and it is the direction that I am interested in.*1473

*So, when I multiply by 2, this 1 ends up being a 2 here. Okay.*1484

*Now, this is orthogonal. I want to normalize them.*1493

*When I normalize them, I get the following -- nope, we are not going to have these random lines everywhere --... (-1/sqrt(2), 1/sqrt(2), 0)... and for the other one, 2 squared is 4, plus 1 plus 1 gives 6, so the length is sqrt(6)... this is going to be (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6)).*1504

*This is orthonormal. So, with respect to that Eigenvalue -2, we have created an orthonormal basis for its Eigenspace. So this is going to be one column, this is going to be a second column, now let us go ahead and do the next Eigenvalue -- where are we... here we are.*1537

*Our other Eigenvalue was Λ = 4, so for Λ = 4, we put it back into that determinant equation -- remember, the (ΛI - a) arrangement -- and we end up with the following. We get 4, -2, -2, 0... -2, 4, -2, 0... -2, -2, 4, 0.*1563

*When we subject this to reduced row echelon we get 1, 0, -1, 0. We get 0, 1, -1, 0, 0 here... and 0's everywhere else.*1587

*Okay. That is a leading entry. That is a leading entry. Therefore, that is not a leading entry, so we can let that one be x3 = r. Any parameter.*1603

*Well, that means x2 - r = 0, so x2 = r, as well... and here it is x1 - r = 0, so x1 also equals r.*1616

*Therefore, this is equivalent to r × 1, 1, 1. Okay.*1630

*So, this right here is an Eigenvector for Λ = 4. It is one vector, it is a one dimensional Eigenspace. It spans the Eigenspace.*1639

*Now, we want to normalize this. So, when we normalize this, it is sqrt(3)... I will put normalize -- let me make some more room here, I am going to use up a lot of room for not a lot of... let me go this way -- normalize.*1653

*We end up with 1/sqrt(3), 1/sqrt(3), and 1/sqrt(3). So, now, we are almost there. Our matrix p that we were looking for. It is going to be precisely the vectors that we found. This, and the other two normalized vectors which we just created.*1680

*So, we get p = -1/sqrt(2), 1/sqrt(2), 0, -1/sqrt(6), -1/sqrt(6), 2/sqrt(6), 1/sqrt(3), 1/sqrt(3), 1/sqrt(3).*1708

*This matrix with these three columns is our p... if I did my calculations correctly... if I take the inverse of this matrix, multiply by my original matrix, and then multiply by this matrix, I end up with this d, which is -2, -2, 4.*1734

*The Eigenvalue's along the main diagonal, 0's everywhere else, and if you actually check this out, it will confirm that this is the case.*1758
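The whole worked example can be confirmed numerically. A NumPy sketch, with p assembled from the three orthonormalized Eigenvectors just constructed:

```python
import numpy as np

# The symmetric matrix from the example.
a = np.array([[0.0, 2.0, 2.0],
              [2.0, 0.0, 2.0],
              [2.0, 2.0, 0.0]])

r2, r6, r3 = np.sqrt(2), np.sqrt(6), np.sqrt(3)
# Columns: two orthonormal eigenvectors for lambda = -2, then one for lambda = 4.
p = np.array([[-1/r2, -1/r6, 1/r3],
              [ 1/r2, -1/r6, 1/r3],
              [  0.0,  2/r6, 1/r3]])

d = np.linalg.inv(p) @ a @ p
print(np.allclose(d, np.diag([-2.0, -2.0, 4.0])))  # prints True
```

Since p is orthogonal, replacing `np.linalg.inv(p)` with `p.T` gives the same diagonal matrix.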

*When I have a symmetric n by n matrix, I run through the process of diagonalization, but not only do I just diagonalize it, I can orthogonally diagonalize it by using this orthogonal matrix, whose columns are orthonormal: they are of length 1 and mutually orthogonal to each other. Their dot product = 0.*1770

*I multiply p inverse ap, I get my diagonal matrix, which is the Eigenvalues along the main diagonal. Notice the repeats... -2, -2, 4, so I have an Eigenspace of 2 dimensions and an Eigenspace of 1 dimension, which matches perfectly because my original matrix acts on the 3-dimensional space R3.*1790

*Thank you for joining us for the diagonalization of symmetric matrices, we will see you next time. Bye-bye.*1810

*Welcome back to educator.com and welcome back to linear algebra.*0000

*In our last lesson, we talked about the diagonalization of symmetric matrices, and that sort of closed out and rounded out the general discussion of Eigenvalues, Eigenvectors, Eigenspaces, things like that.*0004

*The Eigenvalue and Eigenvector problem is a profoundly important problem in all areas of mathematics and science, especially in the area of differential equations and partial differential equations in particular. It shows up in all kinds of interesting guises.*0015

*Of course, differential equations and partial differential equations are pretty much the, well, it is what science is all about, essentially, because all phenomena are described via differential and partial differential equations.*0029

*So this Eigenvector, Eigenvalue problem will show up profoundly often.*0044

*Today we are going to talk about linear mappings, and the matrices associated with linear mappings. Mostly just the linear mappings. We will get to the matrices in the next couple of lessons.*0050

*But, so you remember some lessons back, we actually talked about linear mappings, but we mostly talked about them from... like a 3-dimensional space to a 4-dimensional space.*0059

*RN to RM, some kind of space that we are familiar with for the most part. But, you also remember we have used examples where a space of continuous functions is a vector space, the space of polynomials of a given degree is a vector space...*0070

*So, these vector spaces, they do not, the points in the vector spaces do not have to necessarily be points the way that we are used to thinking about them, they can be any kind of mathematical object if they satisfy the properties of a vector space.*0085

*Well, vector spaces are nice, and we like to have that structure to deal with, but really what is interesting... when linear algebra becomes interesting is when you discuss mappings, and in particular linear mappings between vector spaces.*0099

*Now, we are going to speak more abstractly about vector spaces in linear mappings, as opposed to specifically from RN to RM.*0113

*Much of this is going to be review, which is very, very important for what it is we are going to be doing next. So, let us get started.*0121

*Okay. Let us start off like we always do with a definition. Let us go to black. I have not used black ink in a while.*0130

*Definition: let v and w be vector spaces. A linear mapping, which is also called a linear transformation, is L from v into w, and this into is very important for our definition.*0140

*We will talk more later about why we choose into here rather than another word... L is a function assigning a unique vector, which we signify as L(u), in w to each vector u in v, such that the following hold... (a) if I have two vectors u and v, both of them in v, and if I add them and then apply the linear mapping, that is the same as applying the linear mapping to each of them separately and then adding them: L(u + v) = L(u) + L(v), for u and v in v.*0173

*The second thing is... if I take some vector u and I multiply it by a constant and then apply the function, it is the same as if I were to take the vector alone, apply the linear function, and then multiply by the constant: L(ku) = k L(u).*0247

*For u in v... and k is any real number. Okay. Let us stop and take a look at this really, really carefully here.*0269

*This -- let me use a red -- this plus sign here on the left, this is addition in the space v. Let me draw these out so you see them.*0286

*That is v, this is w, my linear map is going to take something from here, do something to it, and it is going to land in another space, okay?*0297

*So, this addition is -- here let me... v... w... -- this addition on the left is addition in this space; the vectors u and v are here... I add them and then I apply L.*0306

*This addition over here, this is addition in this space. They do not have to be the same. It is very, very important to realize that. This is strictly symbolic.*0321

*As you go on in mathematics everything is going to become more symbolic and not necessarily have the meanings that you are used to seeing them with.*0331

*Yes, it is addition, but it does not necessarily mean the same addition.*0338

*So, for example, I can have R3, where I add vectors, the addition of vectors is totally different. A vector plus a vector is... yes, we add individual components, but we are really adding a vector and a vector, two objects.*0344

*This might be the real number system where I am actually adding a number and a number. Those are not the same things, because numbers are not vectors.*0357

*So, we symbolize it this way as long as you understand over on the left it is the addition in the departure space, over here on the right it is addition in the arrival space.*0364

*Okay, so, let us talk about what it is this means... if I have a vector u and if I have a vector v, I can add them in my v space, I stay in my v space and I get this vector u + v. It is another vector.*0377

*It is a closure property, it is a vector space. Then, if I apply the linear mapping L to it, I end up with some L(u+v). That is what this symbolizes. I do something to it, and I end up somewhere.*0393

*In order for this to be linear, it says that I can add them and then apply L, or I can apply L to u to get L(u), and I can apply L to v separately to get L(v), and now when I add these, I end up with the same thing.*0412

*Again, this is extraordinary. This is what makes a linear mapping. Okay? It has nothing to do with the idea of a line. That is just a name that we use to call it that. We could have called it anything else.*0431

*But, it actually preserves a structure moving from one space to another space. There is no reason why that should happen, and yet there it is. We give it a special name. Really, extraordinarily beautiful.*0442

*Okay. If the linear mapping happens to be from a space onto itself, or a space onto a copy of itself... in other words R3 to R3, R5 to R5, the space of polynomials to the space of polynomials, we have a special name for it... we call it an operator.*0456

*Let me separate these words here... We call it a linear operator.*0480

*Operator theory is an entirely different branch of mathematics unto itself. Operators are the most important of the linear mappings, or the most ubiquitous.*0494

*Okay. Let us recall a couple of examples from linear operators -- of linear maps, I am sorry -- recall our previous examples of linear maps.*0508

*We had something called a projection... the projection was a map from, let us say, R3 to, let us say R2.*0530

*Defined by L of the vector (x,y,z) = (x,y)... I take a 3-vector and I end up spitting out a 2-vector.*0544

*I just take the first 2 (x,y), it is called a projection. We call it a projection because we are using our intuition to name this. *0555

*It is as if we shine a light on a 3-dimensional object, the shadow is a 2-dimensional object. All shadows are two dimensional. That is what a projection is. I am projecting the object onto a certain plane. I am projecting a 3-dimensional object onto its 2-dimensional shadow, creating the shadow. That is a linear map.*0564

*Dilation. This is a linear map. This is actually a linear operator. R3 to R3, and it is defined by L of some vector u is equal to R × u. I am basically just multiplying it by some real number, where R is bigger than 1.*0583

*Dilation means to make bigger. So, I expand the vector.*0603

*A contraction. Contraction is the same thing, so I will just put ditto marks here... and it is defined by the same thing, except now R is going to be > 0 and < 1. So, I take something, a vector, and I make it smaller. I contract it, I shrink it.*0609

*We have something called reflection. L is from R2 to R2... this is also a linear operator... I am mapping something in the plane to something in the plane. It is defined by L of the vector (x,y), or the point (x,y), = (x, -y). That is it. I am just reflecting it across the x axis.*0629

*There is also a reflection along the y axis if I want, where it is the x that becomes negative. Same thing.*0661

*The final one, rotation, which is probably the most important and the most complex... and most beautiful as it turns out... of the linear maps... also a linear operator. R2 to R2, or R3 to R3. We can rotate in 3-dimensions.*0668

*We can actually rotate in any number of dimensions. Again, mathematics is not constrained by the realities of physical space. That is what makes mathematics beautiful.*0684

*These things that exist and are real in real space, they exist and are real in any number of dimensions... defined by L(u), if I take a vector, and if I multiply it by the following matrix, cos(Θ) - sin(Θ), sin(Θ), cos(Θ), × the vector u.*0694

*If I take a vector u, and I multiply it on the left by this matrix, cos(Θ), -sin(Θ), sin(Θ), cos(Θ), this two by two matrix... I actually rotate this vector by the angle Θ. That is what I am doing.*0723
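The rotation can be sketched directly from the matrix given above (a minimal NumPy illustration; the 90-degree test vector is just an example):

```python
import numpy as np

def rotate(u, theta):
    """Rotate the 2-vector u counterclockwise through the angle theta."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return r @ u

# Rotating (1, 0) by 90 degrees should land on (0, 1).
print(np.allclose(rotate(np.array([1.0, 0.0]), np.pi / 2), [0.0, 1.0]))  # prints True
```

Note that the rotation matrix is itself orthogonal: its columns are orthonormal, which is why rotations preserve lengths and angles.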

*Every time you turn on your computer screen, every time you play a video game, it is these linear maps that are actually making your video game and your computer screen possible. That is what is happening on the screen.*0735

*We are just taking images, and we are projecting them, we are dilating them, we are contracting them, we are reflecting them, we are rotating them... at very high speeds of course... but this is all that is happening. It is all just linear algebra taking place in your computer, on your screen.*0747

*Okay. So, in order to verify that a function is a linear mapping, we have to check the function against the definition. That means we have to check part a and part b. Okay, so let us do an example.*0762
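The definition-checking routine can be spot-checked numerically. A sketch using the projection map from earlier (the random test vectors are illustrative; numerical agreement is evidence, while a proof checks the definition algebraically):

```python
import numpy as np

def L(v):
    """The projection R^3 -> R^2 from earlier: keep (x, y), drop z."""
    return v[:2]

rng = np.random.default_rng(0)
u, v, k = rng.standard_normal(3), rng.standard_normal(3), 2.5

# Check both parts of the definition on sample vectors.
print(np.allclose(L(u + v), L(u) + L(v)))  # part (a): additivity
print(np.allclose(L(k * u), k * L(u)))     # part (b): homogeneity
```

Both checks print True for the projection; a nonlinear map such as squaring each component would fail them.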

*Let us see, let v be an n-dimensional vector space... does not say anything about the nature of this space, just says n-dimensional... We do not know what the objects are... space... vector space... n-dimensional vector space.*0783

*Let s = v1, v2, all the way to vN be a basis for v.*0810

*Okay. We know that for a vector v in the vector space v, because this is a basis, we can write it as a series of constants... c1 × v1, just a linear combination of the elements of the basis. That is the whole idea of a basis: linearly independent, and it spans the entire space.*0832

*Every vector in that space can be written as a linear combination. A unique linear combination, in fact... c1v1 + c2v2 + ... + cNvN.*0851

*Okay. So, let us just throw that out there. Now, let us define our linear map, which takes v and maps it to the space RN, to N-space.*0871

*So some N-dimensional vector space v, and it is going to map to our N-dimensional Euclidian space, RN, and defined by L(v), whatever the vector is, I end up taking its -- I end up with the coordinate vector.*0885

*So, I have some vector v in some random vector space that has this basis. Well, anything in v can be written this way.*0908

*Therefore, what L does is it takes this v and maps it to the RN space, to the list of coordinates, which is just the constants that make up the representation in the basis.*0918

*So, I am taking the vector v, and I am spitting out the coordinate vector of v, with respect to this basis s. *0932

*Okay. Is this linear map -- I am sorry -- is this map linear? Is L linear? We do not know if it is linear, we just know that it is a mapping from one vector space to another.*0949

*Well, let us check a. For a, we need to check whether L(u + v), the sum of two vectors in v, equals L(u) + L(v).*0953

*Well, let us do it. L(u + v). Well, we just use our definition, we just plug the definition in.*0970

*That is equal to the coordinate vector of u + v. Well, we already know that coordinates themselves behave linearly. So, this is equal to the coordinate of u with respect to s + the coordinate of v with respect to s, but that is just L(u) + L(v).*0981

*So, I have shown that this equals that. So, part a is taken care of. So, now let us do part b.*1007

*We need to show that L(k × u)... does it equal k × L(u)?*1018

*Well, L(k × vector u) = k × u, that is the coordinate vector of ku, but the coordinate vector of k × u with respect to s is equal to k × the coordinate vector of u with respect to s.*1029

*That is equal to k × L(u), because that is the definition. So, we have shown that L of ku equals k × L(u), so yes, b is also taken care of.*1050

*So, this map, that takes any vector from a vector space with a given basis and spits out... does something to it and spits out the coordinate vectors... the coordinate vector with respect to the basis s, which is just the coefficients that make it up, this is a linear map. That is all we are doing, we are just checking the definition.*1068

*Let us throw out a nice little theorem here. Let L, from v to w, be a linear map... a linear transformation. Then, (a): L of the 0 vector in v is equal to the 0 vector in w.*1097

*So, we put this v and w to remind us that we are talking about different spaces. If I take the 0 vector in v, and if I apply L to it, it maps to the 0 vector in my arrival space. That is kind of extraordinary actually.*1126

*And (b), which will make sense... it is just the inverse of addition. It says L(u - v) = L(u) - L(v), so we are just extending this to subtraction.*1143

*Okay. Now, let us have another theorem, which will come in handy. Let L be a mapping from v to w, and we will let it be a linear mapping of an n-dimensional vector space into w.*1162

*Also, let s = set v1, v2, just like before, all the way to vN, be a basis for v. So, I have a vector space v, I have a basis for v, and I have some function L which takes a vector in v and spits out something in w.*1208

*If u is in v, then L(u) is completely determined... I am going to be a little bit clearer here. Let me actually write out all of my letters. Completely determined by the set L(v1) L(v2), so on and so forth... L(vN).*1237

*I will tell you what this means. If I have a vector space v, and if I have a basis for that vector space... and if I take some random vector in v, and apply the linear transformation to it, I end up somewhere in w. Well, because I have a basis for v, I know exactly where I am going to end up in w. Because all I have to do is take these basis vectors, v1 to vN, apply L to them, and the L(v1), L(v2), L(v3) all the way to L(vN)... they actually end up becoming precisely the vectors, in some sense, that are needed to describe w, where I ended up. That is what linearity means.*1276

*For this particular theorem, I am not going to go ahead and give you a complete proof, but I am going to make it plausible for you here. So, let us take this vector v, in the vector space v... well, we know that we can write v as c1v1 + c2v2 + so on and so forth + cNvN.*1318

*Okay. Now, let us apply L to this. Well, L(v) is equal to L of this whole thing. c1v1 + c2v2 + so on and so forth + cNvN.*1342

*Well, L is a linear map. That is the hypothesis of the... that is the given part of the theorem. It is a linear map. Well a linear map, just pull out the linearity by definition. That equals c1 × L(v1) + c2 × L(v2) + so on and so forth + cN × L(vN).*1362

*So, again, if I have a basis for my departure space, and I take some random vector v in that departure space, I transform it, you know do some function to it, I already know what my answer is going to be... it is going to be precisely the coefficients c1, c2, all the way to cN multiplied by the transformation on the basis vectors.*1389

*All I have to do is operate on the basis vectors and I stick the coefficients that I got from my original v and I have got my answer. Where I ended up in my arrival space.*1414

*So, again, it is completely determined where I end up in my arrival space is completely determined by the linear transformation on the basis of the departure space.*1427
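The theorem above lends itself to a quick numerical check. Below is a minimal sketch, assuming NumPy; the map, its basis images, and the test vector are made-up values purely for illustration. Once the images of the standard basis vectors are fixed, the image of any vector follows from its coordinates alone.

```python
import numpy as np

# A hypothetical linear map L : R^3 -> R^2, specified only by its values
# on the standard basis e1, e2, e3 of the departure space.
L_e1 = np.array([1.0, 2.0])
L_e2 = np.array([0.0, 1.0])
L_e3 = np.array([3.0, -1.0])

def L(v):
    """Apply L to any v in R^3 using only the basis images:
    L(v) = c1*L(e1) + c2*L(e2) + c3*L(e3), where v = c1 e1 + c2 e2 + c3 e3."""
    c1, c2, c3 = v
    return c1 * L_e1 + c2 * L_e2 + c3 * L_e3

v = np.array([2.0, -1.0, 4.0])
print(L(v))  # completely determined by L(e1), L(e2), L(e3)
```

The point of the sketch is that nothing else about L needs to be known: the three basis images determine where every vector of the departure space lands.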

*Okay. Thank you for joining us at Educator.com for this particular review of linear mappings, we will see you next time.*1440

*Welcome back to educator.com, welcome back to linear algebra.*0000

*Today we are going to be talking about something called the kernel and the range of a linear map, so we talked about linear maps... we recalled some of the definitions, well, recalled the definition of a linear map... we did a couple of examples on how to check linearity.*0004

*Now we are going to talk about some specific... get a little bit deeper into the structure of a linear map, so let us just jump in and see what we can do.*0020

*Okay. Let us start off with a definition here. Okay... a linear map L from v to w is said to be 1 to 1, if for all v1 and v2 in v, v1 not equal to v2, implies that L(v1) does not equal L(v2)... excuse me.*0029

*Basically what this means is that each vector in v maps to a completely different element in w. Now, we have seen examples where... let us just take the function x^{2}, that you know of.*0099

*Well, I know that if I take 2 and I square it, I get 4. Well, if I take a different x, -2, and I square it, I also get 4. So, as it turns out, for that function, x^{2}, the 2 and the -2, they map to the same number... 4.*0116

*That is not 1 to 1. 1 to 1 means every different number maps to a completely different number, or maps to a completely different object in the arrival space.*0133

*So, let us draw what that means. Essentially what you have is... that is the departure space, and that is the arrival space, this is v, this is w, if I have v1, v2, v3... each one of these goes some place different.*0144

*They do not go to the same place... distinct, distinct, distinct, because these are distinct, that is all it is. This is just a formal way of saying it, and we call it 1 to 1... which makes sense... 1 to 1, as opposed to 2 to 1, like the x^{2} example.*0164

*Okay. An alternative definition here, if I want to, this is an implication in mathematics. This says that if this holds, that this implies this.*0180

*It means that if I know this, then this is true. Well, as it turns out, there is something called the contrapositive, where I... it is equivalent to saying, well, here let me write it out...*0191

*So, I will end up using both formulations when I do the examples. That is why I am going to give you this equivalent condition for what 1 to 1 means.*0203

*An equivalent condition for 1 to 1 is that L(v1) = L(v2), implies that v1 = v2.*0214

*This is sort of a reverse way of saying it. If I note that I have two values here, L(v1) = L(v2), I automatically know that v1 and v2 are the same thing.*0234

*This is our way of saying, again, that this thing... that two things do not map to one thing. Only one thing maps to one thing distinctly.*0246

*This one... the only reason we have two formulations of it is different problems... sometimes this formulation is easier to work with from a practical standpoint, vs. this one.*0256

*As far as intuition and understanding it, this first one is the one that makes sense to me personally. Two different things map to two different things. That is all this is saying.*0267

*Okay. Let us do an example here. A couple of examples, in fact. Example... okay.*0276

*Let L be a mapping from R2 to R2, so this is a linear operator... be defined by L of the vector (x,y) is equal to (x + y, x - y).*0285

*Okay. We will let v1 be x1, y1, we will let v2 be x2, y2... we want to show... we are going to use the second formulation... L(v1) = L(v2)... implies that v1 = v2.*0314

*So, we are trying to show that it is 1 to 1, and we are going to use this alternate condition.*0359

*Let us let this be true... so L(v1) = L(v2). That means x1 + y1, x1 - y1 = L(v2), which is x2 + y2, x2 - y2... not 1.*0364

*Well, these are equal to each other. That means I get this equation, x1 + y1 = x2 + y2, and from the second part, these are equal, so let me draw these are equal and these are equal.*0394

*So, x1 - y1 = x2 - y2. Alright.*0413

*The way I have arranged these, if I actually just add these equations straight down, I get 2x1, is equal to 2x2, which implies that x1 = x2.*0422

*When I put these back, I also get, y1 = y2. This means that v1, which is x1, y1, is equal to v2. *0437

*So, by starting with the supposition that this is the case, I have shown that this is the case, which is precisely what this implication means. Implication means that when this is true, it implies this.*0451

*Well, working this out mathematically, I start with this and I follow the train of logic, and if I end up with this, that means the implication is true.*0465

*This implication is the definition of 1 to 1, therefore yes. This map is 1 to 1. In other words, every single vector that I take, that I map, will always map to something different.*0474
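For maps between Euclidean spaces given by a matrix, the 1 to 1 check just carried out by hand can also be done numerically. A short sketch, assuming NumPy, for the map L(x, y) = (x + y, x − y) of this example:

```python
import numpy as np

# Matrix of the example map L(x, y) = (x + y, x - y).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# Such a map is 1 to 1 exactly when only the zero vector maps to zero,
# i.e. when the matrix has full column rank (here, nonzero determinant).
print(np.linalg.matrix_rank(A))  # full rank, so L is 1 to 1
print(np.linalg.det(A))          # nonzero determinant
```

The rank test is the same conclusion the algebra above reached: the only solution of L(v1) = L(v2) is v1 = v2.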

*Okay. Let us do a second example here. Example 2. L will be R3 to R2, so it is a linear map, not a linear operator.*0491

*It is defined by L(x,y,z) = (x,y). This is our projection mapping. Okay, I will take some random (x,y,z)... instead of variables we will actually use numbers.*0511

*Let us let v1 = (2,4,5), and we will let our second vector = (2,4,-7).*0531

*Well, now let us note that v1 is not equal to v2. These two are not equal to each other.*0544

*However, let us see if this implies... question, does it imply that L(v1) does not equal L(v2).*0551

*Well, L(v1) is (2,4)... if I take (2,4,5), I take the first two entries... and the question... does it equal (2,4), which is L(v2)?*0564

*Yes. I take that one and that one, v2... (2,4), (2,4) = (2,4)... so therefore, this implication is not true.*0580

*I started off with 2 different vectors, yet I ended up mapping to the same vector in R2. In other words what happened was these 2 spaces, okay, I had 2 separate vectors in my departure space.*0591

*I had this vector (2,4), they both mapped to the same thing. That is not 1 to 1. This is 2 to 1. So, no, not 1 to 1.*0604
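The failure of 1 to 1 in Example 2 is easy to reproduce in code. A small sketch, assuming NumPy; v1 and v2 are the two vectors from the example:

```python
import numpy as np

# The projection map from Example 2: L(x, y, z) = (x, y).
def L(v):
    return v[:2]  # keep the first two components, drop z

v1 = np.array([2, 4, 5])
v2 = np.array([2, 4, -7])

print(np.array_equal(v1, v2))        # False: distinct inputs in R^3
print(np.array_equal(L(v1), L(v2)))  # True: same image in R^2, so not 1 to 1
```

Two distinct vectors with the same image is exactly the 2 to 1 behavior described above.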

*Okay. Now, we can go ahead and go through this process to check 1 to 1, but as it turns out, we often would like simpler ways to decide whether a certain linear mapping or a certain mapping is 1 to 1.*0617

*As it turns out, there is an easier way, so let us introduce another definition. This time I am going to do it in red. This is a profoundly important definition.*0632

*Let L be a mapping from v to w... you have actually seen a variant of this definition under a different name, and you will recognize it immediately when I write it down... be a linear map.*0645

*Okay. The kernel of L is the subset of v, the departure space, consisting of all vectors v... let us actually use a vector symbol for this... all vectors v, such that L(v) = the 0 vector in w.*0665

*So, the kernel of a linear map is the set of all those vector in v, that map to 0 in the arrival space.*0722

*Let us draw a picture of this. Very important. That is the departure space v, this is the arrival space w, if I have a series of vectors, I will just mark them as x's and I will put the 0 vector here.*0732

*Let us say I have 3 vectors in v that map to 0, those three vectors, that is my kernel of my linear map. It is the set of vectors, the collection of vectors that end up under the transformation mapping to 0.*0750

*Null space. You should think about something called the null space. It is essentially the same thing here that we are talking about.*0769

*So, where are we now? Okay. So, in this particular case, this vector, this vector, this vector would be the kernel of this particular map, whatever it is, L.*0775

*Okay. Note that 0 in v is always in the kernel of L, right? Because under a linear map, the 0 vector in the departure space maps to the 0 vector in the arrival space, so I know that at least 0 is in our kernel.*0788

*I might have more vectors in there, but at least I know the 0 is in there.*0810

*Okay. Let us do an example. L(x,y,z,w) = (x + y, z + w), this is a mapping from R4 to R2.*0816

*We want all vectors in R4 that map to (0,0). Okay? We want all vectors v in R4 such that L(v) equals the 0 vector.*0836

*In other words, we want it to equal (0,0). Okay, well, when we take a look at this thing right here, x + y = 0, z + w = 0.*0854

*Well, you get x = -y, z = -w, so as it turns out, all vectors of the following form, if I let w = r, and if I let y = s, something like that, well, what you get is the following.*0880

*So, these are my two equations, so I end up with (-r, r, -s, s). So, here I let y = ... it looks like r, and it looks like I let w = s.*0903

*Yes, I let y = r, w = s, therefore z = -s, and x = -r. So, that is what you get. *0928

*Every vector of this form, so you might have (-1,1,0,0), (-2,2,0,0)... every vector of this form is in the kernel of this particular linear map.*0937

*So, there is an infinite number of these. So, the kernel has an infinite number of members in here.*0951

*Now, come to some interesting theorems here. If the linear mapping from v to w is a linear map, then the kernel of L is a subspace.*0961

*So before, we said it is a subset. But it is a very special kind of subset. The kernel is actually a subspace of our departure space v. So, extraordinary.*0989

*Let us look at the example that we just did, we have this linear mapping, we found the kernel... the kernel is all vectors of this form... well, this is the same as r × (-1,1,0,0) + s × (0,0,-1,1).*1002

*Therefore, these little triangles mean therefore, (-1,1,0,0), that vector, which what is wrong with these writings... I think I am writing too fast, I think that is what is happening here.*1032

*So, (-1,1,0,0) and (0 ... this is not going to work... (0,0,-1,1) is a basis for the kernel of L.*1050

*So here, we found the kernel, all vectors of this form, we were able to break it up into a... two sets of vectors here.*1073

*Well, since we discovered this theorem says that it is not only a subset, it is actually a subspace... well, subspaces have bases, right?*1083

*Well, this actually is a basis for the kernel and the dimension of the kernel here is dimension 2, because I have 2 vectors in my basis. That is the whole idea of dimension.*1090
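The kernel computation above can be double-checked numerically. A sketch, assuming NumPy, using the matrix of L(x, y, z, w) = (x + y, z + w) and the two basis vectors just found:

```python
import numpy as np

# Matrix of L(x, y, z, w) = (x + y, z + w), a map from R^4 to R^2.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

# The basis for the kernel found in the lesson.
k1 = np.array([-1.0, 1.0,  0.0, 0.0])
k2 = np.array([ 0.0, 0.0, -1.0, 1.0])

# Both basis vectors map to the zero vector of R^2...
print(A @ k1, A @ k2)  # [0. 0.] [0. 0.]

# ...and they are linearly independent, so the kernel has dimension 2.
print(np.linalg.matrix_rank(np.vstack([k1, k2])))  # 2
```

Mapping to zero plus linear independence is precisely what makes these two vectors a basis for the kernel.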

*Now, let us see what else we have got. If L, which maps from RN to RM, is linear.*1106

*And if it is defined by matrix multiplication, then, the kernel of L is just the null space.*1129

*So if I have a linear map, where I am saying that the mapping if I have some vector... that I take that vector and I multiply it by a matrix on the left, well, the kernel of that linear map is all of the vectors which map to 0.*1151

*So, the kernel is just the null space of that matrix. I mean, this is the whole definition, it is this homogeneous system... the matrix A times x is equal to 0.*1165

*The theorem says a linear mapping is 1 to 1 if and only if the kernel of L is equal to the 0 vector... let me redo this last part... if and only if the kernel of L equals the 0 vector in v.*1183

*If the only vector in my departure space that maps to 0 in the arrival space is the 0 vector, that tells me that - excuse me - that the linear map is 1 to 1. That means that every element v in the departure space maps to a different element of w.*1214

*All I need to do is make sure that the 0 vector is the only vector in the kernel.*1235

*In other words, it is of dimension 0. Okay. We have got a corollary to that.*1242

*Actually, you know, the corollary is not altogether that... it is important but we will deal with it again, so I do not really want to mention it here. I have changed my mind.*1262
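For matrix-defined maps, the theorem just stated gives a one-line test. A sketch, assuming NumPy (`is_one_to_one` is just an illustrative helper name): the kernel of x → Ax is {0} exactly when A has full column rank.

```python
import numpy as np

def is_one_to_one(A):
    # kernel of x -> A x is {0} iff A has full column rank
    return np.linalg.matrix_rank(A) == A.shape[1]

# Projection R^3 -> R^2: the whole z-axis maps to 0, so not 1 to 1.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# The earlier map (x + y, x - y): kernel is {0}, so 1 to 1.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])

print(is_one_to_one(P))  # False
print(is_one_to_one(B))  # True
```

This is much quicker than checking the L(v1) = L(v2) implication directly, which is exactly why the kernel criterion is so useful.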

*Now, let me introduce our last definition before we close it out.*1271

*If L from v to w is linear, if the mapping is linear, then the range of L is the set of all vectors in w that are images under L of vectors in v.*1280

*Okay, let us just show what that means. This is our departure space, our arrival space, this is w, this is v. Let us say I have v1, v2, v3, v4, and v5.*1342

*Let us say v1 maps to w1, let us say v2 also maps to w1, let us say v3 maps to w2, and let us say v4 maps to w3, and v5 maps to w3.*1360

*The range is w1, w2, w3. It is all of the vectors in w that come from some vector in v, under L.*1380

*Now, that does not mean that every single vector... we will talk more about this actually next lesson, where I will introduce the distinction between into and onto.*1394

*So, this is not saying that every single vector in w is the image of some vector that is mapped under L.*1408

*It says that all of the vectors in w that actually come from some vector in v, that is the range. So, the range is a subset of w.*1417

*You are going to see in a second, my last theorem before we closed out this lesson, it is the range is actually a subspace of w.*1428

*So, again, the range is exactly what you have known it to be all of these years.*1437

*Normally, we speak of the domain and the range, we speak about the whole space. That is not the case here.*1444

*The range is only those things in the arrival space that are actually represented, mapped, from some vector in v.*1450

*It is not all of the space, the arrival space could be all of the arrival space, but it is not necessarily that way.*1457

*Okay. So, let us do something like, actually let me do another picture just for the hell of it, so that you see.*1465

*So, we might have... so this is v... and this is w... so the kernel might be some small little subset of that, that is a subset of v, also happens to be a subspace.*1477

*Well the range might be, some subset of w. All of these vectors in here come from some vector in here.*1490

*Okay, so it is not the entire space, and it is also a subspace. Okay. That is going to be our final theorem before we close out this lesson.*1501

*If L, which maps v to w, the vector spaces, is linear, then range of L is a subspace... subspace of w.*1513

*So, the kernel is a subspace of the departure space, the range is a subspace of the arrival space. *1536

*We are going to close it out here, but I do want to say a couple of words before we actually go to the next lesson where we are going to talk about some relationships between the kernel and the range.*1547

*I am going to ask you to recall something that we discussed called the rank nullity theorem. We said that the rank of a matrix + the dimension of the null space, which we called the nullity, is equal to n, the number of columns of the matrix.*1555

*Recall that, and in the next lesson we are going to talk about the dimension of the kernel, the dimension of the range space, and the dimension of the departure space.*1574

*It is really extraordinarily beautiful relationship that exists. Certainly one of the prettiest that I personally have ever seen.*1585

*So, with that, thank you for joining us here at educator.com, we will see you next time.*1592

*Hello and welcome back to Educator.com and welcome back to linear algebra.*0000

*Today we are going to continue our discussion of the kernel and range of a linear map of a linear transformation.*0004

*From the previous lesson, we left off defining what the range of a linear map is.*0011

*Real quickly though, let me go back and discuss what the kernel of a linear map is.*0017

*Basically, the kernel of a linear map, from a vector space v to a vector space w is all those vectors in v that map to the 0 vector. That is it.*0022

*So, if I have one vector that goes to 0, that is the kernel. If I have 5 vectors that map to 0, those 5 vectors, they form the kernel. If I have an infinite number of vectors that form... that all map to the same thing, the 0 vector in w, that is what the kernel is.*0031

*Recall that the kernel is not only a subset of the vector space v, but it is also a sub-space, so it is a very special kind of thing.*0049

*As a subspace, you can find a basis for it. Okay. Now, we defined the range also. So, the range is all those vectors in w, the arrival space that are images of some vector in v.*0056

*So, if there is something in v that maps to w, all of those w that are represented... that is the range.*0071

*That does not mean it is all of w... it can be all of w, which we actually give a special name and we will talk about that in a second, but it is just those vectors in w that are mapped from vectors in v, under the linear transformation L.*0080

*Okay. Now, let us go ahead and get started with our first theorem concerning the range. *0096

*Well, just like the kernel is a subspace of the departure space, the range happens to be a subspace of the arrival space.*0102

*So, our first theorem says... the range of L is a subspace of w, for L being a linear mapping from v to w.*0111

*So, again, kernel is a subspace in v, range is a subspace of w.*0146

*Okay. Let us do an example here, concerning ranges and kernels and things like that. Ranges actually.*0152

*So, we will say that L is a mapping from R3 to R3 itself, which again, when the dimension is the same, or when a space maps to itself, we call it a linear operator, but it is still just a linear map - let it be defined by L of some vector x is equal to a matrix product... (1,0,1), (1,1,2), (2,1,3) × x, which happens to be x1, x2, x3 in component form.*0161

*So if I have a vector in x, the transformation, the linear transformation is multiplication by a matrix on the left.*0196

*Okay. Our question is... is L onto. Okay, so, this onto thing... remember we said that the range of a linear map is those vectors in the arrival space, w, that are the image of some vector v from the departure space.*0204

*Well, if every vector in w is the image of some vector in v, that means if every single vector in w is represented, that is what we mean it is onto.*0224

*That means the linear map literally maps onto the entire space w, as opposed to the range which is just a subspace of it, a part of it. That is all onto means, all of w is represented.*0235

*Okay. So, well, let us take a random vector in w, in this case the arrival space R3... and we will just call it w, with components a, b, and c.*0248

*It is just some random vector in the arrival space. Okay.*0275

*Now, the question is, can we find some vector in the departure space that is the pre-image of this w in the arrival space? That is the whole idea. So, we speak about the image, we speak about the pre-image.*0281

*So, I am starting from the perspective of w, some random vector in w... can I find... if I take every vector w... can I find something in v that actually maps to that w. That is what we want to know. Is every vector in w represented? Okay.*0299

*So, the question we want to answer is can we find (x,y,z), also in R3 because R3 is the departure space, such that (1,0,1), (1,1,2), (2,1,3) × (x,y,z) equals our (a,b,c), which is our random vector in w. That is what we want to find.*0314

*We want to find x,y,z... x, y and, z, such that this is represented. What values of a,b,c, will make this possible. Well, we go ahead and we form the augmented system, (1,0,1), (1,1,2), (2,1,3), and we augment it with a.*0349

*We augment it with a... b... c... okay, that is our augment, and then we subject it to Gauss-Jordan elimination to take it to reduced row echelon form, and when we do that we end up with the following: (1,0,1), (0,1,1), (0,0,0).*0372

*Over here we end up with a... b - a... c - a - b, so let us talk about what this means. Notice this last row here is all 0's, and this is c - a - b over here.*0398

*The only way that this is a consistent system, the only way that this has a solution, is if c - a - b = 0.*0411

*So, the only way that some random vector, when I take a random vector in w, and I subject it to the conditions of this linear map, x, y, and z, the relationship between a, b, and c, has to be that c - a - b = 0.*0425

*What this means is that this is very specific. I cannot just take any random numbers. I cannot just take (5,7,18).*0440

*The relationship among these 3 numbers for the vector in w... a, b, c, has to be such that c - a - b = 0, which means that not every vector in w is represented. So, this is not onto.*0450

*Okay. I hope that makes sense. Again, I took a random vector, I need to be able to solve this system and have every single vector be possible, but this system tells me that it has a solution only if c - a - b = 0.*0462

*Those are very specific numbers that actually do that. Yes, there may be an infinite number of them, but they do not represent all of the vectors in w, therefore this linear map is not onto.*0480
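The conclusion of part a can be confirmed by a rank computation. A sketch, assuming NumPy: L is onto R^3 exactly when its matrix has rank 3, and any (a, b, c) violating c − a − b = 0 has no pre-image.

```python
import numpy as np

# The matrix of the example operator on R^3.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])

# Rank 2 < 3, so the columns do not span R^3: L is not onto.
print(np.linalg.matrix_rank(A))  # 2

# A concrete witness: (0, 0, 1) has c - a - b = 1, not 0.  The closest
# point of the range (via least squares) is not (0, 0, 1) itself.
x, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
print(np.allclose(A @ x, [0.0, 0.0, 1.0]))  # False: no exact solution
```

The least-squares residual is a numerical restatement of the consistency condition worked out by hand above.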

*Okay. Now, let us do something else. Continue the example... part b.*0490

*Now, the question is, find a basis for the range of L. Find a basis for the range of L.*0501

*So, we know it is a subspace, so we know the basis for it. So, let us go ahead and see what this range is.*0513

*In other words, let us take L of some random x,y,z, which is in v, the departure space... well, L is of course, that matrix, (1,0,1), (1,1,2), (2,1,3), × x,y,z, and when I do this matrix multiplication, I end up with the following.*0520

*The vector that I get is x + z, x + y + 2z, and the third entry is going to be 2x + y + 3z, and I just got that from basic matrix multiplication... this times that + this times that + this times that, and then go to the second row... this times that, this times that...*0546

*Remember? Matrix multiplication. We did it really, really, really early on. Okay.*0570

*So, this thing, I can actually pull out... it becomes the following. It is x × (1,1,2), I just take the coefficients of the x's, +y × (0,1,1)... + z × (1,2,3).*0575

*Therefore, that vector, that vector, and that vector... let me actually write them as a set, but we have not found the basis yet. This just gives me the span. Okay.*0603

*So, the vectors (1,1,2), (0,1,1), and (1,2,3)... they span the range of L.*0616

*So remember, a series of vectors, a set of vectors that spans a subspace or a space, in order for it to be a basis, it has to be a linearly independent set.*0632

*So, once we have our span, we need to check these three vectors to make sure that they are linearly independent, and we do that by taking this matrix... augmenting it with the 0 vector, turning it into reduced row echelon form... and then the vectors corresponding to the columns with leading entries actually form a basis for the space.*0644

*So, let us take this, the matrix again, so we do (1,1,2), (0,1,1), (1,2,3), we augment with (0,0,0), we turn it into reduced row echelon form, and we end up with the following.*0668

*We end up with (1,0,0), (0,1,0), both of those columns have leading entries, we end up with (1,1,0), no leading entry there, and of course (0,0,0).*0684

*So, our first column and our second column have leading entries which means the vectors corresponding to the first and second column, namely that and that, they form a basis for this space.*0693

*That means that this was actually a linear combination of these two. This set is not linearly independent, but these two alone are linearly independent. That is what this process did.*0707

*So, now, I am going to take the first two vectors, (1,1,2), the first two columns... and (0,1,1), this is a basis for the range. Notice, the basis for the range has 2 vectors in it.*0717

*A basis, the number of vectors in the basis is the dimension of that subspace. So, the range has a dimension of 2.*0744

*However, our w, our arrival space, was R3. It has dimension 3. Since this dimension is 2, it is not the entire space. So, this confirms what we found in part a.*0751

*It confirms the fact that this linear map is not onto, and in fact this procedure is probably the best thing to do... find the spanning set for the range, and then reduce it to a basis, and then just count the number of vectors in the basis.*0765

*If the dimension of the range is less than the dimension of the arrival space, well, it is not onto. If it is, it is onto. *0779
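The span-then-reduce procedure just described can be mirrored numerically. A sketch, assuming NumPy, with the three spanning vectors from the example:

```python
import numpy as np

# The columns that span the range of L in the example.
c1 = np.array([1.0, 1.0, 2.0])
c2 = np.array([0.0, 1.0, 1.0])
c3 = np.array([1.0, 2.0, 3.0])

# The span has dimension rank = 2, so the three vectors are dependent.
M = np.column_stack([c1, c2, c3])
print(np.linalg.matrix_rank(M))  # 2

# Indeed c3 = c1 + c2, so {c1, c2} alone is a basis for the range,
# and dim(range) = 2 < 3 = dim(R^3): the map is not onto.
print(np.allclose(c3, c1 + c2))  # True
print(np.linalg.matrix_rank(np.column_stack([c1, c2])))  # 2
```

Counting the rank of the spanning set is the computational shortcut for the onto test described above.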

*Okay. Let us continue with this and do a little bit more, extract a little bit more information here. Let us find the kernel of this linear map, find the kernel of L. Okay.*0790

*So, now, -- let me erase this -- now what we want, we want to take... well, the kernel is again, all the... we want to find all of the vectors that map to 0 in the arrival space, which means we want to solve the homogeneous system.*0803

*We want to find all the vectors x that map to 0 in w. Well, that is just... take the matrix... let me do it as rows... (1,0,1), (1,1,2), (2,1,3) × (x,y,z) is equal to (0,0,0). Okay.*0820

*Then, what we end up doing is... well, this, when we form the augmented matrix of this matrix + that, which is just the one that we just did, we end up with the following... (x,y,z), the vectors in v that satisfy this take the form (-r,-r,r), which is the same as r × (-1,-1,1).*0848

*This is one vector, therefore, the vector (-1,-1,1), this vector is a basis for the kernel. The kernel is a subspace of the departure space. This is a basis for the kernel. It is one dimensional.*0871

*Okay. Part d: is L 1 to 1? Do you remember what 1 to 1 means? It means any two different vectors in the departure space map to two different vectors in the arrival space... 1 to 1.*0896

*Okay. Well, let us see. The dimension of the kernel of L = 1, which is what we just got up here. That implies that it is not 1 to 1.*0918

*The reason is... in order to have 1 to 1, the dimension of the kernel needs to be 0. It means this map, the only thing that should be in the kernel is the 0 vector in the departure space. That means 0 maps only to 0.*0932

*That is the only thing that maps to 0. Everything else maps to something else. If that is the case, when only the 0 is in the kernel, 0 in the departure space, we can say -- that is one of the theorems we had in the last lesson -- we can say that this linear map is 1 to 1.*0949

*We know that it is not onto. Now we also know that it is not 1 to 1. Okay.*0965

*So, this is not 1 to 1. Now, I would like you to notice something. Let me put this in blue.*0972

*We had our departure space v as R3, 3-dimensional. The dimension of the range of the linear map is equal to 2. The dimension of the kernel of this linear map was equal to 1. 2 + 1 = 3. This is always true. This is not a coincidence.*0983

*So, let us express this as a theorem. Profound, profound theorem.*1011

*Okay. Let L be a mapping from v to w, let it be a linear mapping -- because we are talking about linear maps after all -- let it be a linear map of an n-dimensional vector space v into -- notice we did not say onto -- an m-dimensional vector space w.*1025

*Then, the dimension of the kernel of L + the dimension of the range of L is equal to the dimension of v. The departure space. Let us stop and think about that for a second.*1066

*If we have a linear map, and let us say my departure space is 5-dimensional, well, I know that the relationship that exists between the kernel of that linear map and the range of that linear map is that their sum of the dimensions is always going to equal the dimension of the departure space.*1086

*So, if I have a 5-dimensional departure space, let us say R5 -- excuse me -- and I happen to know that my kernel has a dimension 2, I know that my range has a dimension 3. I know that I am already dealing with a linear map that is neither 1 to 1 nor onto.*1109

*This is kind of extraordinary. Now, recall from a previous discussion when we were talking about matrices, and how matrix has those fundamental spaces.*1125

*It has the column space, it has the row space, it has the null space, which is the -- you know -- the space of all of the vectors that map to 0, which is exactly what the kernel is.*1136

*So, we can also express this in the analogous form, you have already seen this theorem before... the nullity of a matrix plus the rank of the matrix, which is the dimension of the row space, or column space, is equal to n for an m x n matrix.*1149

*Of course, we have already said that when we are dealing with the Euclidian space RN and RM, every linear map is representable by a matrix. So, we already did the matrix version of this. *1177

*Now we are doing the general linear mapping. We do not have to necessarily be talking about Euclidian space R2 to R3, it can be any n-dimensional space. For example, a space of polynomials of degree < or = 5. Boom. There you have your particular, you know, finite dimensional vector space.*1191

*Again, given a linear map, any linear map, the dimension of the kernel of that linear map, plus the dimension of the range of that linear map, is going to equal the dimension of the departure space. Deep theorem, profound theorem.*1211

*Okay. Now, let me see here, I wanted to talk a little bit about the nature of 1 to 1 and onto, just to give you a pictorial representation of what it is that really means, and then state a theorem for linear operators, when the arrival space and the departure space happen to be of the same dimensions.*1224

*So, let us draw some pictures and as you will see I am actually going to draw them... the sizes that I draw them are going to be significant.*1250

*If I have v, and if I have w, so this is v, this is w... departure space, arrival space... if I have a 1 to 1 map, a 1 to 1 map means that everything over here... everything in v maps to a different element in w.*1259

*It does not map to everything in w, but it maps to something different in w. However, everything in v is represented, and it goes to something over here.*1287

*I drew it like this to let you know that there is this size issue. In some sense, w is bigger than v, and I use the term bigger in quotes.*1296

*Now, let us do an onto map. So, this is 1 to 1, but it is not onto. Let me actually write that... "not onto".*1305

*Now we will do the other version. We will do something that is onto, but not 1 to 1.*1318

*So, let us make this v, and let us make this w. So now, an onto map means everything, every single vector in w comes from some vector in v.*1323

*But that does not mean that every vector in v maps to something different in w. Every single one in here... so now, everything in w is represented, so this is onto, not 1 to 1.*1336

*Okay. It could also be that -- you know -- two different vectors here actually map to the same vector here.*1356

*The point is that every single vector in w is the image, under the linear map, of something from v.*1364

*But, it does not mean that it is everything from v... something from v.*1372

*Okay. Now, 1 to 1 and onto, now let me state my theorem and we will draw our picture.*1378

*Theorem. Let L be mapping from v to w... be a linear map and let the dimension of v equal the dimension of w.*1386

*So, it is a mapping -- not necessarily the same space, but they have the same dimension -- so, the most general idea, not just R3 to R3, or R4 to R4, but some finite dimensional vector space that has dimension 5, say, mapping to a different set of objects that happen to have the same dimension. They do not have to be the same objects.*1409

*But, the dimensions of the two spaces are the same... then we can conclude the following: L... -- well, let me write it as an if, then statement -- if L is 1 to 1, then L is onto, and vice versa... If L is onto, then L is 1 to 1.*1432

*So, if I am dealing with a map from one vector space to another, where the dimensions of the two spaces, the departure and the arrival are the same ... if I know that it is 1 to 1, I know that it is onto... if I know that it is onto, I know that it is 1 to 1.*1463

*This makes sense intuitively if you think about the fact that we want every vector in v to map to a vector in w, with all of w covered. We call this a 1 to 1 correspondence between the two sets.*1476

*The fact that they are the same dimension, it intuitively makes sense. In some sense, those spaces are the same size, if you will.*1489

*Again, we use the term size in quotes, so it looks something like this... everything in v is represented, and it maps to everything... something in v, everything in v maps to something in w, and all of w is represented.*1496

*In some sense, I am saying that they are equal, that they are the same size.*1519

*Again, we use those terms sort of loosely, but it sort of gives you a way of thinking about what it means... a smaller space going into a bigger space, 1 to 1... a bigger space going onto a smaller space, onto.*1525

*This whole idea applies if they happen to be of the same dimension, the same size so to speak.*1537

*Then a 1 to 1 map implies that it is onto, and onto implies that it is 1 to 1.*1542
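To make the theorem concrete: once bases are chosen, a linear map between two spaces of the same dimension n becomes an n by n matrix, and "1 to 1" and "onto" both reduce to the matrix having full rank. Here is a small sketch in Python with NumPy; the matrix below is just an illustration, not from the lesson:

```python
import numpy as np

# After choosing bases, a linear map between two 3-dimensional spaces
# is represented by a 3x3 matrix. For a square matrix, "1 to 1"
# (null space is just the zero vector) and "onto" (the columns span
# the whole arrival space) are each equivalent to full rank.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

rank = np.linalg.matrix_rank(A)
one_to_one = rank == A.shape[1]   # no two inputs share an image
onto       = rank == A.shape[0]   # every output is hit

print(bool(one_to_one), bool(onto))  # True True -- they agree, as the theorem says
```

Because the matrix is square, the two booleans can never disagree, which is exactly the content of the theorem for equal dimensions.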

*Okay. Thank you for joining us at Educator.com, we will see you next time.*1549

*Welcome back to Educator.com, welcome back to linear algebra.*0000

*Today we are going to talk about the matrix of a linear map.*0004

*Okay. Let us just jump right in. We have already seen that when you have a linear map from RN to RM, let us say from R3 to R5... that that linear map is always representable by some matrix... a 5 by 3 matrix in this case. Always.*0009

*So, today, we want to generalize that result and deal with linear maps in general, not necessarily from one Euclidean space to another Euclidean space, but any vector space at all.*0030

*So, let us go ahead and start with a theorem. Some people might call this... most of you are familiar, of course, with the fundamental theorem of calculus. There's also something called the fundamental theorem of algebra that concerns the roots of a polynomial equation.*0042

*In some sense, if you want, this theorem that I am about to write can be considered the fundamental theorem of linear algebra.*0058

*It sort of ties everything together. If you want to call it that. Some people do, some people don't, it certainly is not historically referred to that way the way the others are... but this sort of brings the entire course that we have done... everything has sort of come to this one point.*0065

*Let us go ahead and write it down very carefully, and talk about it, do some examples. *0085

*Here - okay - the statement of this theorem is a bit long, but there is nothing strange about it. Let L from v to w be a linear map from an n-dimensional vector space into an m-dimensional vector space.*0094

*Again, we are talking about finite dimensional vector spaces, always. We are not talking about infinite dimensional vector spaces.*0132

*There is a branch of mathematics that does deal with that called functional analysis, but we are concerned with finite.*0137

*N - dimensional vector space, sorry about that, okay... we will let s, which equals the set v1, v2, so on, to vN, be a basis for v.*0152

*And, t which equals w1, w2, so on and so forth, on to wM... be a basis for w, the arrival space.*0177

*I really love referring to them as departure and arrival space. It is a lot more clear that way.*0195

*Then, the m by n matrix a, whose jth column is the coordinate vector of L(vj) with respect to t... and I will explain what all of this means in just a minute, do not worry about the notation... is the matrix associated with the linear map and has the following property.*0202

*L(x) with respect to t is equal to a × x with respect to s; in symbols, [L(x)]t = a[x]s. Okay. So, let me read through this and talk a little bit about what it means.*0274

*So, L is a linear map from a finite dimensional vector space, excuse me, to another finite dimensional vector space. The dimensions do not necessarily have to be the same, it is a linear map.*0293

*Okay. S is a basis for v, the departure space. T is a basis for the arrival space. Okay. Then, there is a matrix associated with this linear map of 2 arbitrary vector spaces.*0302

*There is a matrix associated with this, and the columns of the matrix happen to be, so for example if I want the first column of this particular matrix, I actually perform L the linear map, I perform it on the basis vectors for the departure space.*0319

*Then once I find those values, I find their coordinate vectors with respect to the basis t.*0338

*You remember coordinates... and once I put them in the columns, that is the matrix that is associated with this linear map. The same way that this matrix was associated with the linear map from one Euclidean space to another, R4 to R7.*0345

*You know, giving you a 7 by 4 matrix. Well, remember we talked about coordinates. You know a vector can be represented by a linear combination of the elements of the basis.*0361

*So, if we have a basis for the elements of a particular space, all we have to do is... the coordinates are just the particular constants that make up that linear combination.*0372

*In some sense, we are sort of associating a 5-dimensional random vector space with R5. We are giving it numbers, that is... we are labeling it. That is what we are doing, and it has an interesting property.*0380

*That if I take some random vector in the departure space, and I perform some operation on it, and then I find its coordinate vector with respect to the basis t, it is the same as if I take that x before I do anything to it. Find its coordinate vector with respect to s in the departure space, and then multiply it by this particular matrix. I get the same answer.*0394

*So, let us just do some examples and I think it will make a lot more sense. So, let us see, but before I do... let me write out a procedure explicitly for how to compute the matrix of the linear map.*0420

*Okay. So, let us do this in red. Oops - there it is. Wow, that was interesting. That is a strange line. Alright.*0440

*So, procedure for computing the matrix of L from v to w, the matrix of a linear map with s and t as respective bases.*0458

*So s is a basis for v, t is a basis for w, and it is the same as the theorem up here. S we will represent as v1, v2, all the way to vN... t we will represent with w1, w2, all the way through wM.*0495

*Okay. So, the first thing you do is compute L(vj), in other words, take all of the vectors in the basis for the departure space, and perform the particular linear operation on them. Just perform the function and see what you get.*0514

*Step 2, now, once you have those, you want to find the coordinate vector with respect to t, what that means is if you remember right, and if you do not you can review the previous lesson where we talked about coordinate vectors.*0539

*We did a fair number of examples if I remember right - express the L(vj) that you got from the first step as a linear combination of the vectors w1, w2, to wM. The vectors in the t basis... w1, w2... wM, and we will be doing this in a minute, so do not worry about the procedure if you do not remember it.*0558

*Three, we take the coordinate vector we got from step 2, and we set it as the jth column of the matrix.*0592

*So, we do it for each basis vector of the basis of the departure space, and... so if we have n vectors, we will have n columns.*0608

*That will be our matrix, and we are done. Okay.*0617
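The three steps of the procedure can be sketched in code. This is a hypothetical helper, assuming Python with NumPy; the names matrix_of_map, S, and T are my own illustrations, not from the lesson:

```python
import numpy as np

def matrix_of_map(L, S, T):
    """Matrix of the linear map L with respect to bases S (departure) and T (arrival)."""
    T_matrix = np.column_stack(T)                  # arrival basis vectors as columns
    columns = []
    for v in S:
        image = L(v)                               # step 1: compute L(v_j)
        coords = np.linalg.solve(T_matrix, image)  # step 2: [L(v_j)] with respect to T
        columns.append(coords)                     # step 3: becomes the j-th column
    return np.column_stack(columns)

# A small demonstration: L(x, y) = (x + y, x - y) with the natural basis on both sides.
L = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
S = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
T = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(matrix_of_map(L, S, T))
# [[ 1.  1.]
#  [ 1. -1.]]
```

Solving T_matrix × coords = L(vj) is the same "express as a linear combination" computation done by hand in step 2.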

*Let us just... before I do that, I actually want to give you a little pictorial representation of what it is that is actually going on here. So, if I take x... let me show you what it is that I am talking about. Let me go back to blue.*0622

*What that property means. What it really means... [L(x)]t = a[x]s. This was the last thing that we wrote in the theorem, and we said that the coordinate vector, under the transformation, of some random vector in the departure space is equal to the matrix that we end up computing times the coordinate vector with respect to the basis s of the departure space. Here is what this means.*0649

*It means if I take some random x in the departure space, I can perform L on it and I get L(x), of course. Well, and then of course from there I can go ahead and find the coordinate vector, which is the L(x) with respect to some basis t.*0680

*So, in other words, I go from my departure space to my arrival space and then I actually convert that to some coordinate, because I need to deal with some numbers.*0707

*Well, as it turns out, instead what I can do is I can go ahead and just take x in my departure space, find its coordinate with respect to the basis of the departure space, and then I can just multiply by a, the matrix a that I compute.*0714

*They end up actually being the same thing. I can either go directly, or I can go through the matrix. That is what... you will see this often in algebraic courses in mathematics... you will often see different paths to a particular place that you want to get to.*0732

*You can either do it directly from L, or you can do it through the matrix. And, it is nice to have these options, because sometimes this option might not be available, sometimes this might be the only one available. At least you have a path to get there.*0749

*That is the a × [x]s path. Those of you who have studied multi-variable calculus, or are doing so now, you are going to be discussing something called Green's Theorem and Stokes' Theorem, and possibly the generalized version of that.*0762

*I don't know, depending on the school that you are attending. But, essentially what those theorems do is they allow you to express an integral as a different kind of integral.*0780

*Instead of solving a line integral or a surface integral, you end up solving an area integral or a volume integral which you know how to do already from your basic calculus. It allows you a different path to the same place. That is what is going on.*0792

*That is essentially what the fundamental theorem of calculus is... an alternate path to the same place. This is just an algebraic version of it... that is what we want: alternate paths.*0804

*We want different paths just to get some place. Because often one path is better than the other. Easier than the other. So again, it means if I want to take a vector, I can find its coordinate in the arrival space by just doing it directly.*0813

*But if that path is not available and I have the matrix, I can just take its coordinate vector in the departure space, multiply it by the matrix, and I end up with the same answer. That is kind of extraordinary.*0829

*Again, it is all a property of this linear map, and the maintenance of the structure of one space to another. Okay, let us just jump in to the examples because I think that is going to make the most sense.*0842

*Example... so, our linear map is going to be from R3 to R2. In this case we are using Euclidean spaces, from 3 dimensions to 2 dimensions, and it is defined by L(x,y,z) = x + y, y - z. We take a 3 vector, we map it to a 2 vector. This one has three entries, that one has two entries. Okay.*0856

*Now, we have our two bases that we are given. So, basis is (1,0,0), no, that is not going to work. We have (1,0,0), (0,1,0), (0,0,1). The natural basis for R3.*0885

*And t is going to be... the natural basis for R2. This is not going to work, I need to get this a little more clear. I know you guys know what is going on, but I definitely want... okay.*0909

*So, the first thing we are going to do is we are going to calculate L(v1), which is L(1,0,0), which equals 1 + 0, 0 - 0, which equals (1,0). Boom. That is that one.*0930

*We do the same thing for L of... well, let me... I am just going to go straight into it. Let us do L(v2), which is... L(0,1,0). That is going to equal (1,1), and if I do L(0,0,1), I end up with (0,-1). Okay.*0953

*So, I found L of the basis vectors of the departure space. Now, I want to express these... these numbers with respect to the basis for the arrival space, namely with respect to t.*0981

*Well, since t is the natural basis, I do not have to do anything here, and again, when we are dealing with a natural basis, namely (1,0) and (0,1) for R2... we do not have to change anything.*0998

*So, as it turns out, L(v1) with respect to the basis t, this thing with respect to t happens to equal (1,0), and the same for the others.*1016

*L(v2), with respect to the basis t because it is the natural basis is equal to (1,1), and then L(v3) with respect to the natural basis for t is equal to (0,-1).*1039

*Now, I take these 3 and I arrange them in a column, therefore my matrix a is... I arrange them as columns... (1,0), (1,1), (0,-1), that is my matrix for my transformation.*1055

*My linear transformation is associated with this matrix. Very nice.*1071

*Okay. Now, let us do the second example, which is going to... we are going to change the basis. We are not going to use the natural basis, we are going to change it, and we are going to see how the matrix changes. Alright.*1080

*I am going to do this one in red... Okay. So, everything is the same as far as the linear map is concerned. The only difference now is... well, let me write it out again... so L is a mapping from R3 to R2, defined by L(x,y,z) = x + y and y - z.*1093

*Now, our basis s = (1,0,1), (0,1,1) and (1,1,1)... and our basis t is now going to equal (1,2) and (-1,1), so we have changed the bases.*1129

*Well, we will see what happens. So, let us calculate L(v1), okay, that is going to equal (1, -1), I will let you verify this.*1165

*L(v2) = (1,0) and L(v3) = (2,0). Okay.*1194

*To find L(vj) with respect to the basis t, here is what we have to do. We need to express each of these as a linear combination of the basis for the arrival space.*1210

*In other words, we need to express L(v1), which is equal to (1,-1) as a1 × (1,2) + a2... 2 constants × (-1,1).*1233

*(1,2) and (-1,1), that is the basis for our arrival space. So, we need to find constants a1 and a2 such that a linear combination of them, this linear combination of them equals this vector. That is the whole idea behind the coordinate vectors.*1254

*Okay, and I am going to write out all of this explicitly so we see what it is that we are looking at... L(v2), which is equal to (1,0), I want that to equal b1 × (1,2) + b2 × (-1,1)... and I want L(v3) which equals (2,0), I want that to equal c1 × (1,2) + c2 × (-1,1).*1271

*Okay. Well this is just a... we are going to solve an augmented system, except now we are looking for 3 solutions, just 1, 2, 3, so we are going to augment with 3 new columns.*1302

*So, here is what we form. We form (1,2), this is our matrix, (-1,1), and then we augment it with these 3 (1,-1), (1,0), (2,0).*1313

*I convert to reduced row echelon form, and I get the row (1, 0, 0, 1/3, 2/3), and I get the row (0, 1, -1, -2/3, -4/3). There we go.*1329

*Now I have expressed the L, I found the L, I converted each of these L into the coordinate vector... here, here, here, that is what these are. These are the coordinates of these 3 things that I found up here, with respect to the basis of the arrival space.*1350

*Now I just take these coordinate vectors as columns, and that is my matrix... mmm, these random lines are killing me... a has columns (0,-1), (1/3,-2/3), (2/3,-4/3). This is our matrix and notice this is not the same matrix that we had before.*1369

*Row by row, this is (0, 1/3, 2/3) and (-1, -2/3, -4/3). The previous a that we got with respect to the natural basis, if you remember, let me do this in black actually, the natural basis, the first example, we ended up with... this is not going to work, what is going on here with these lines, it is driving me crazy... the matrix with columns (1,0), (1,1), (0,-1).*1402

*This is the matrix, the same linear map... this is one matrix with respect to one basis, the same linear map. This matrix is different because we changed the basis. This is very, very important, and I will discuss this towards the end of the lesson.*1447
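The arithmetic of this second example is easy to check numerically. A sketch, assuming NumPy: solving T_matrix × coords = L(vj) for all three images at once is the same augmented-system computation done above.

```python
import numpy as np

# Columns of T_matrix are the arrival basis vectors (1,2) and (-1,1).
T_matrix = np.array([[1.0, -1.0],
                     [2.0,  1.0]])

# Columns of the right-hand side are L(v1) = (1,-1), L(v2) = (1,0), L(v3) = (2,0).
images = np.array([[ 1.0, 1.0, 2.0],
                   [-1.0, 0.0, 0.0]])

# Each column of A is a coordinate vector [L(vj)] with respect to t.
A = np.linalg.solve(T_matrix, images)
print(A)
# [[ 0.          0.33333333  0.66666667]
#  [-1.         -0.66666667 -1.33333333]]
```

The columns agree with the hand computation: (0,-1), (1/3,-2/3), and (2/3,-4/3).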

*I will actually digress into a bit of a philosophical discussion for what it is that is going on here.*1465

*Okay. So, we have this. So, we found our matrix with respect to that, now, let us confirm that property that we have.*1472

*So, we said that there is this property... let me go back to blue... L of some random vector in the departure space, the coordinate vector with respect to the basis t, is equal to this a × the x with respect to the basis s of the departure space.*1485

*Okay. So, a is the matrix that we just found, the one with (0,-1,1/3,-2/3,2/3,-4/3). Okay.*1514

*Let us just let x, pick a random... let us let it equal (3,7,5), random vector.*1527

*Okay. Well, L(x), or L(3,7,5) = 3 + 7, 7 - 5. It is equal to (10,2). Okay.*1538

*Let us circle that in red. Let us just set that aside. That is our transformation, (10,2). Okay.*1554

*Now, x... (3,7,5), with respect to s, the basis of the departure space, equals the following... we are looking for... so we want to express this with respect to the basis s, which was (1,0,1), (0,1,1), (1,1,1), if you want to flip back and take a look at that basis.*1560

*We are looking for a1, a2, a3, such that a1 × (1,0,1) + a2 × (0,1,1) + a3 × (1,1,1) = this (3,7,5). Okay.*1595

*As it turns out, this a1, a2, a3... when I actually perform this ax = b, set this up, augmented matrix, I solve it... I end up with the following... (-2,2,5).*1620

*This equals x, my (3,7,5) with respect to the basis s. Okay. So, we will set that off for a second. Now, let me take this [x]s, which is (-2,2,5), and let me multiply it by my... so let me basically perform the right side here... let me multiply it by the matrix that I just got.*1641

*When I do that, I get (0, 1/3, 2/3, -1, -2/3, -4/3) × what I just got, which is (-2,2,5).*1672

*When I perform that, I end up with (4, -6)... put a blue circle around that.*1692

*Okay. That means... so, if I take... this is the coordinate vector of the thing that I got, the (10,2), by getting it directly, when I solved the transformation. This is its coordinate vector with respect to the basis t of the arrival space.*1707

*It is telling me that that thing is equal to a × the x with respect to s... well, I found the x with respect to s, and I multiplied it by the matrix, and this is what I got.*1739

*Now... that means that this thing times the basis t should give me (10,2), right?*1748

*So, if I take 4 times the first basis vector for t, which is (1,2), minus 6 × the other basis vector (-1,1), if I actually perform this, I end up with (10,2).*1757

*This is the same as that. That confirms this property. That is what is going on here. Really, what is ultimately important here, in the first two examples, is the ability to compute the transformation matrix.*1775
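Both paths of this check can be run in a few lines. A sketch, assuming NumPy, using the bases and matrix from this example:

```python
import numpy as np

S = np.column_stack([[1.0, 0, 1], [0, 1.0, 1], [1, 1, 1.0]])  # departure basis as columns
T = np.column_stack([[1.0, 2], [-1.0, 1]])                    # arrival basis as columns
A = np.array([[ 0.0,  1/3,  2/3],
              [-1.0, -2/3, -4/3]])

x  = np.array([3.0, 7.0, 5.0])
Lx = np.array([x[0] + x[1], x[1] - x[2]])    # L(x) = (10, 2)

direct = np.linalg.solve(T, Lx)              # direct path: [L(x)] with respect to t
via_A  = A @ np.linalg.solve(S, x)           # matrix path: A times [x] with respect to s

print(direct, via_A)  # both come out to (4, -6)
```

Either route lands on the same coordinate vector, which is exactly the property [L(x)]t = a[x]s.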

*It allows me instead of doing the transformation directly, which may or may not be difficult, to go through and just do a matrix multiplication problem, which is usually very, very easy.*1792

*Okay. So, now, to our philosophical discussion. You notice that the same linear transformation gave rise to two different matrices.*1803

*Well, that is because we used two different bases. So, what you are seeing here is an example of something very, very, very profound. Not just mathematically, but very profound physically. Very profound with respect to the nature of reality.*1819

*Something exists. A linear map in this case. Something independent exists and it is independent of our representation.*1836

*In other words, in order for us to handle it, notice this linear map... we have to deal with coordinates. We have to choose a basis.*1846

*In one example, we chose the natural basis. In the second part of the example, we chose a different basis. Both of them are perfectly good bases, and the matrix that actually represents the map changes... the linear map does not change.*1853

*So, as it turns out, the linear map is that thing underneath which does exist, but in order to handle that linear map, we need to give it labels. We need to give it a frame of reference. We need to be able to "measure" it.*1867

*That is what science is all about... we are taking things that exist and we are assigning labels to them.*1879

*Well, let me take this a little bit deeper... all of your lives you have been told... so if you have some coordinate system... the standard Cartesian coordinate system, and if I tell you from this (0,0) point, if I move 3 spaces to the right, and then if I move 7 spaces up, that there is this point (3,7).*1885

*Well, (3,7) is just the label that we have attached to it. That point in that 2-dimensional vector space, Euclidean vector space, exists whether I call it (3,7) or not... if I change bases, it might be entirely different... it might be (4,-19), depending on the basis that I choose.*1905

*Again, these things exist independent of the labels that we assign to them. The representations that we use to handle them. We need to... we need to be able to represent them somehow so that we can handle them, but the representation is not the thing in itself.*1924

*It is very, very important to be able to distinguish between those two. Okay.*1939

*Is it reducing it to something that it is not? No. It is just a label, but it is important for us to actually recognize that, so when we speak about the point (3,7), we need to understand that we have given that point a representation so that we can handle it.*1948

*We can manipulate it mathematically so that we can assign it some sort of value in the real world.*1963

*But its existence is not contingent on that. It exists whether we handle it or not, that is what is amazing. So what is amazing about abstract mathematics is we can make statements about the existence of something without having to handle it.*1969

*It is the engineers and physicists that actually label them, give them frames of references, so that they can actually manipulate them. That is all that is going on here.*1982

*Okay, with that, I am going to close it out.*1993

*Thank you for joining us at Educator.com, and thank you for joining us for linear algebra. Take good care, bye-bye.*1997

*Welcome back to educator.com, and welcome back to linear algebra.*0000

*Today we are going to be discussing properties of matrix operations, so we have talked a little bit about matrices, before we talked about the dot product.*0004

*We introduced some linear systems with a couple of examples, we discovered that we can either have no solution, one solution or infinite number of solutions.*0012

*And now we are going to continue to develop some of the tools that we need in order to make linear algebra more understandable and more powerful, which is ultimately what we want, is we want a tool box from which we can pull and develop this really powerful mathematical technique.*0021

*Today we are going to be talking about the properties of the matrices, we are going to be talking about properties of additions, properties of matrix multiplication, scalar multiplication, other things like that.*0035

*Most of these properties are going to be familiar to you from what you have been doing with the real numbers ever since you were kids, addition, distributive property, associative property, things like that.*0046

*Most properties carry over, however not all properties carry over, and there are a few minor subtleties that we want to make sure we understand and pay attention to, so things don't get out of hand, so let's just jump right in.*0057

*Okay, the first thing we are going to be talking about are the properties of addition, properties of matrix addition, so let's go ahead and list them out....*0073

*... We will let A, B, C and D be matrices, more specifically we want them to be actually M by N matrices.*0086

*And again remember that we are using capital letters for matrices, and usually when we talk about vectors, which is an M by 1, or a 1 by N matrix, then we will usually use a lower case letter with an arrow on top of it, but capital letters for standard matrices.*0102

*Let A, B, C and D be M by N matrices, okay...*0117

*... A + B = B + A, this is the commutativity property, in other words it doesn't really matter which order I add them in, just like 5 + 4 is 4 + 5, matrices are the same thing.*0129

*And again we are presuming that these are defined, so if you have a 2 by 3, you need a 2 by 3, you can't add matrices of different dimensions.*0142

*B, our second property...*0152

*... Is the associativity property, so it says that if I add B and C, and then if I add A to it, I can actually do it in a different order, I can add A and B first and then add C to it, again very similar to what you understand with the real numbers, it's perfectly true for matrices as well.*0162

*Yeah, okay, I am going to introduce a new symbol here, it's a little bit of shorthand, it's a reverse E with an exclamation point, it means there is a unique.*0181

*that means there is 1, so there is a unique matrix...*0190

*... 0, and I am going to put a little bracket around it to make sure that... this is the symbol for the 0 matrix, which consists of all entries that are 0.*0197

*There exists a unique matrix 0, such that A + 0 matrix equals A, and this serves the same function in matrices as the 0 does in numbers, so in other words 5 + 0 is still 5.*0207

*This is called the additive identity, additive because we are dealing with addition, identity because it leaves A the same... A over here on the left side of the equality sign, A over here on the right side of the equality sign, so it is called the additive identity...*0225

*... Or more formally, with these three lines here, which mean "is equivalent to", we just call it the 0 matrix; it's probably more appropriate, and more common, to refer to it as just the 0 matrix or something like that instead of the additive identity, okay.*0245

*And our final property, okay, once again our little symbol, there exists a unique M by N matrix...*0261

*... We will call it D, such that, we are sure, let's put it over here, A + D = the 0 matrix....*0274

*We call D "-A", so we refer to D as -A, in other words it's just the reverse, so if I have the number 7, well, -7... 7 + -7 = 0.*0289

*It gives me the additive identity for the real numbers, the same way that if I have some matrix A, and I add -A to it, I end up with the 0 matrix, makes complete sense, and again we are adding corresponding entries, okay.*0303

*Lets do an example...*0318

*... Let's let A = (1, -2, -3, 0, 8 and 7), okay, and we will let -A, which is just the same as -1 times A, well, just reverse everything.*0325

*We have (-1, 2, 3) and then we have (0, -8, -7).*0343

*When we do A + -A, we add corresponding entries 1 + -1 = 0, -2 + 2 = 0, remember we have already taken care of this negative sign in the entries of the matrix, so I don't have to go -2 - -2, I have already taken care of it.*0354

*This -A is just a symbol, these are the entries, so we have -2 and 2 is 0, we have -3 and 3 is 0, and then we go on to form the 2 by 3 0 matrix, there you go.*0377
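The same sum is a one-liner with NumPy. A quick sketch; the second matrix B here is just a random extra I am adding to show commutativity as well:

```python
import numpy as np

A = np.array([[1, -2, -3],
              [0,  8,  7]])
B = np.random.randint(-9, 9, size=(2, 3))   # any other 2x3 matrix works

print((A + B == B + A).all())   # commutativity: True
print(A + (-A))                 # additive inverse: the 2x3 zero matrix
```

Entry-by-entry, 1 + (-1), -2 + 2, and so on all vanish, exactly as in the hand computation.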

*Now we are going to move on to the properties of multiplication, let me go back to my black ink here...*0396

*... Okay, once again A, B, C are matrices of appropriate size, and if you remember when we deal with matrix multiplication, appropriate size, when we are multiplying two matrices.*0410

*Let's say, for example, if I have a 3 by 2 matrix, I have to multiply it by a 2 by some-other-number matrix.*0435

*These inside dimensions have to match in order for the multiplication to be defined, and my final matrix is a 3 by whatever... the final matrix comes from the outside dimensions, so this is what we mean by appropriate size, and again we are presuming that we can do the multiplication, that the multiplication is defined in other words.*0449

*A, B and C are matrices of appropriate size, then we have A times BC quantity = AB quantity times C, again this is the associative property for multiplication now, I can multiply B times C then multiply by A.*0467

*It's the same thing as doing A times B first and then multiplying by C, I get the same answer, same as for real numbers.*0487

*B, A times quantity B + C, well it's exactly what you think it is, this is the distributive property for matrices, and it is equal to A times B + A times C; in other words, I am treating these just like numbers, except they are matrices.*0495

*I am going to reiterate that fact over, and over and over and over again, and later on when we talk about precise definitions of linearity, when we talk about linear maps in more of the abstract, it will actually make a lot more sense as it turns out.*0515

*Matrices and the numbers are actually examples of a deeper structure, something called a group, which those of you that go on to higher mathematics will discuss... a very beautiful area of mathematics.*0529

*And the nice thing is that you only have to prove it once for one mathematical object called a group, and then you just check to see whether the objects that you run across in your studies fulfill a certain handful of axioms; since you have already done all the work, you don't have to prove all the theorems for them again.*0540

*C, quantity A + B times C = AC + BC, that's just the distributive property on the other side, okay, let's do an example here, let's take....*0561

*A = (5, 2, 3, 2, -3, 4), we'll take B = (2, -1, 1, 0, 0, 2, 2, 2, 3, 0, 3, 0, -1, 3).*0582

*Okay, and let's take C, also a little bit of a big matrix here, (1, 0, 2, 2, -3, 0, 0, 0, 3 and 2, 1, 0).*0607

*Now I have to warn you from time to time when I write my matrices, I may actually skip these brackets over here, sometimes I'll just write them as a square array of numbers, it's a personal choice on my part.*0627

*As long as the numbers are associated, there should be no confusion, so in case you see that I have forgotten the brackets, it's a completely notational thing, you can actually deal mathematics anyway that you want to.*0638

*You can use symbols, you can use words, it's the underlying mathematics that's important, not the symbolic representation, of course presuming that the symbolic representation is understandable to somebody else.*0649

*Okay, so we have our three matrices here, and let's go ahead and move forward, so we are going to calculate A times BC, now I am presuming that you are reasonably familiar with matrix multiplication.*0660

*We will go ahead and just write it out; when we do B times C first, and then multiply by A on the left hand side, we end up with the following matrix: (43, 16, 56, 12, 30 and 8).*0675

*Now, when we do AB first, and then multiply by C on the right hand side, well as it turns out we get ( 43, 16, 56, 12, 30 and 8).*0696

*Now, since this is mostly just arithmetic, adding and multiplying numbers, I suggest that you actually go ahead and run through this to confirm that it is true; it is true.*0712
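Since the individual entries of B and C are hard to read off in this format, here is the same associativity check sketched with random matrices of compatible sizes instead; the property holds for any such choice:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))  # 2x3
B = rng.integers(-5, 5, size=(3, 4))  # 3x4: inner dimensions match
C = rng.integers(-5, 5, size=(4, 3))  # 4x3

left  = A @ (B @ C)   # B times C first, then A on the left
right = (A @ B) @ C   # A times B first, then C on the right

print(np.array_equal(left, right))  # True: associativity holds
```

With integer matrices the comparison is exact, so this is a clean check rather than an approximate one.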

*One thing that you should note here, notice we did BC here, and A on the left hand side.*0720

*Here we did AB and we did C on the right hand side.*0729

*If you notice when we were writing down the properties for matrix multiplication, we did not say A times B = B times A, the way it did for addition.*0733

*Addition of matrices commutes, A + B = B + A; however, multiplication of matrices, A times B, is not in general equal to B times A, and generally speaking, if we multiply an M by P matrix times a P by N matrix.*0743

*M by P, P by N, these have to be the same, what you end up with is an M by N matrix.*0763

*Well, if you reverse those two, you get a P by N matrix, and you try to multiply it by an M by P matrix; now N and M are not necessarily the same.*0770

*In this particular case, the multiplication is not even defined. Now, is it possible that if you switch two matrices you may actually end up getting something that is equal? Possibly, if you are dealing with a square matrix, but it is unlikely, and it is not the general case.*0785

*Commutativity of multiplication does not hold; that's a very unusual property.*0800

*This is one of the properties that's different for matrices than it is for numbers: 5 times 6 = 6 times 5, 5 + 6 = 6 + 5, not the case for matrices.*0805

*Remember, commutativity, I am sorry, multiplication of matrices does not commute, so multiplication of matrices...*0815

*... Does not commute, very important, it has profound consequences which we will deal with later on.*0831
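Since this fact keeps coming back, here is a quick sanity check, not part of the lecture itself: a small pure-Python sketch (the specific matrices A and B are my own illustrative choices) showing that AB and BA come out different.

```python
def matmul(A, B):
    # multiply an m-by-p matrix A by a p-by-n matrix B (matrices as lists of rows)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

AB = matmul(A, B)   # [[2, 1], [4, 3]]
BA = matmul(B, A)   # [[3, 4], [1, 2]]
# AB and BA disagree: matrix multiplication does not commute
```

Almost any pair of square matrices you try will exhibit the same failure; commuting pairs are the exception, not the rule.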

*Okay, let's move on to some other definitions and properties that are associated with multiplication, so let us define an identity matrix.*0840

*Earlier we had talked about a 0 matrix, which is just a matrix with all 0 entries, and we said that it was the additive identity, in other words if you add it to any other matrix, nothing changes, you just get the matrix back.*0855

*In this case an identity matrix is an N by N matrix, so it is square, same number of rows as columns, where all entries on the main diagonal are 1 and everything else is 0; an example of the symbolism is something like this.*0867

*The identity matrix of dimension 2 is a 2 by 2 matrix, (1, 0, 0, 1); this is the main diagonal right here, from top left to bottom right.*0884

*The other diagonal is the skew diagonal; this is the main diagonal. The identity matrix of dimension 4 is exactly what you think it is: (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1).*0893

*It's a 4 by 4; on the main diagonal, top left to bottom right, everything is a 1, so that's just the standard identity matrix, okay.*0912

*Now...*0924

*... Let A be a matrix....*0930

*... The identity matrix of dimension M times A, okay; well, let A be a matrix of dimension M by N, we should specify that actually, so we have a matrix that is M by N.*0937

*The identity matrix of dimension M multiplied by A on the left is the same as A, which is the same as A times the identity matrix of dimension N on the right, and again this has to do with how matrix multiplication is defined.*0952

*It needs to be defined, this identity matrix is M by M, A is M by N, therefore the M's cancel and you are left with an M by N matrix and the same thing here.*0968

*A is M by N, let me write it out, M by N; the identity matrix here is N by N, these N's go away, leaving you an M by N matrix, so when you are multiplying by identity matrices, commutativity does hold.*0980
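To make the I times A = A = A times I point concrete, here is a small sketch, again not from the lecture; the 2 by 3 matrix A is an arbitrary example of mine.

```python
def matmul(A, B):
    # multiply an m-by-p matrix by a p-by-n matrix (matrices as lists of rows)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def identity(n):
    # n-by-n identity matrix: 1's on the main diagonal, 0's everywhere else
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 3, 2],
     [2, -1, 3]]                  # a 2-by-3 matrix

left = matmul(identity(2), A)     # I_2 multiplies on the left
right = matmul(A, identity(3))    # I_3 multiplies on the right
# both products give back A unchanged
```

Notice that a non-square A needs two different identity matrices, a 2 by 2 on the left and a 3 by 3 on the right, exactly as the dimension bookkeeping in the lecture requires.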

*Next property, let's say we have the matrix A raised to some P power, P is an integer, 2, 3, 4, 5, 6, 7, natural numbers.*1000

*That's the same as just A times A times A, just P times...*1008

*... Same thing as for numbers: 4^{3} is 4 times 4 times 4, and the matrix A^{3} is just A times A times A.*1019

*Again provided that it's defined and in this case A is going to have to be a square matrix, okay.*1028

*Very important definition, just like the definition for numbers, any matrix to the 0 power is equal to the identity matrix, just like you know that any number by definition, in order to make the mathematics work, we define any number to the 0th power as 1.*1038

*10 to the 0th power is one, 30 to the 0th power is 1, π to the 0th power is 1, Okay.*1053

*A^{P} times A^{Q} works the exact same way: it's A^{P + Q}, you just add the exponents, so if you have A^{2} times A^{3}, that's going to be A^{5}, A times A times A times A times A.*1061

*Just like numbers, there is nothing new here, we are just dealing with matrices; and our last one is A to the P, raised to the Q, well, you know how to handle this with numbers.*1080

*You just multiply P and Q, same thing, A to the PQ; so if you have (A^{2})^{3}, that's going to be A^{6}, A^{2 times 3} = A^{6}.*1092

*There's nothing new here; you are just dealing with matrices instead of numbers, okay....*1104
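The exponent rules just stated can be checked numerically; this is a sketch of mine, not part of the lecture, and the particular 2 by 2 matrix A is an arbitrary choice.

```python
def matmul(A, B):
    # multiply two square matrices (lists of rows)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matpow(A, p):
    # A^p = A * A * ... * A (p factors); A^0 is the identity by definition
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(p):
        result = matmul(result, A)
    return result

A = [[1, 1],
     [0, 1]]
# the two exponent rules from the lecture, checked numerically:
p_plus_q = matmul(matpow(A, 2), matpow(A, 3))   # A^2 * A^3, should be A^5
p_times_q = matpow(matpow(A, 2), 3)             # (A^2)^3, should be A^6
```

Note that matpow only makes sense for a square A, which is exactly the restriction stated above.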

*... A couple of things to notate and be aware of: if I have (A times B) to the P power, now with numbers, you are used to seeing, let's say, (5 times 2)^{2}; well, let me write it out for you.*1118

*(5 times 2)^{2}, you can go 5^{2} times 2^{2}; in some sense you are sort of distributing this exponent 2 over both factors. With matrices it doesn't work that way.*1135

*(AB)^{P} does not equal A^{P} times B^{P}, and again this comes back to the commutativity property, so this is not generally true.*1149

*It's another property that's not true, when you multiply two matrices and you raise it to some exponential power, it is not the same as raising each one to the exponential power and multiplying them together, that doesn't work with matrices, and again it has some profound consequences for linear systems and for other deeper properties of linear algebraic systems, okay.*1159

*Now, you know that if A times B, if I take two numbers and I multiply them and I end up with 0, for A and B members of the real number line, this symbol ∈ means "is a member of", and this little R with two little bars here is the real numbers, just the numbers that you know: positive, negative, everything in between, rational, irrational.*1183

*Well, this implies, you know, that either A = 0 or B = 0; so again, I know from numbers that if I multiply two numbers together and I get 0, one of them has to be 0.*1206

*Let's do an example here to show you that it's not true for matrices: if I take A = (1, 2, 2, 4) and if I take B = (4, -6, -2 and 3), both of these 2 by 2, then AB...*1220

*... I get the 0 matrix, however notice neither A nor B is 0, so again for matrices, it doesn't hold.*1241

*You can multiply two matrices and get the zero matrix that doesn't mean that one of them is equal to 0, it's not the same, so keep track of these things that are actually different from the real numbers.*1249
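Here is the lecture's example checked in code; note that I have taken the upper-right entry of B to be -6, since that is the sign that actually makes the product vanish.

```python
def matmul(A, B):
    # multiply two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [2, 4]]
B = [[4, -6],
     [-2, 3]]

product = matmul(A, B)   # the 2-by-2 zero matrix, even though A and B are not zero
```

In the language of abstract algebra, A and B are zero divisors; nothing like this exists among the real numbers.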

*Keep track of this property right here, where two matrices can both be non-zero and yet they can multiply to a 0 matrix.*1259

*Another thing that's different is, well, you know the law of cancellation: if I have numbers with A times B = A times C, and A is not 0, I just know naturally that I can cancel the A.*1272

*Cancellation property, it implies that B = C, well let's take a matrix, let's do (1, 2, 2, 4) again, let's take B = (2, 1, 3, 2) and let's take a third matrix (-2, 7, 5 and -1).*1288

*Now, if I multiply...*1312

*... If I multiply AB, excuse me, and if I multiply AC, I get the same matrix; I get (8, 5, 16, 10).*1320

*Notice AB, AC but...*1334

*... B and C are not equal, so you can multiply a matrix A by a second matrix B, you can multiply that same matrix A by a third matrix C, you might end up with the same matrix, however that doesn't imply that the second matrix and the third matrix are the same.*1342

*Another property that's different with matrices that does not follow the real numbers, there is no cancellation property...*1362
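The lecture's three matrices can be checked in code: AB and AC agree even though B and C do not, so cancellation fails.

```python
def matmul(A, B):
    # multiply two matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2],
     [2, 4]]
B = [[2, 1],
     [3, 2]]
C = [[-2, 7],
     [5, -1]]

AB = matmul(A, B)   # [[8, 5], [16, 10]]
AC = matmul(A, C)   # the very same product...
# ...yet B != C, so we cannot "cancel" the A
```

The failures of cancellation and of the zero-product property are two sides of the same coin; both trace back to matrices like this A that have no multiplicative inverse.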

*... Let's move forward; now we are going to be talking about some properties of scalar multiplication. I'll just go ahead and list these out; we won't necessarily do examples of these because they are reasonably straightforward.*1375

*We are just talking about multiplying a matrix by a number, which means multiplying each entry of the matrix by that number; straight, simple arithmetic, no worries, okay.*1386

*We will let R and S, be real numbers and A and B matrices...*1401

*... First property: R times (S times A) is equal to (RS) times A; that is, if I have multiplied a matrix by some scalar S, and then I multiply by another scalar R, I can pretty much just regroup the multiplication.*1421

*I can multiply the two scalars together, and then multiply by the matrix, pretty intuitive for the most part, nothing strange happening here.*1438

*B, if I have two scalars that I add together, like (5 + 2), times some matrix A, that's equal to... the distributive property is at work here.*1449

*RA + SA; again nothing particularly strange happening here, we are just listing the properties so that we have them stated formally.*1460

*R times (A + B), If I add two matrices together and then multiply by a scalar, same thing, RA + RB, the scalar distributes over both matrices.*1473

*D,...*1490

*... D, if I have multiplied a scalar R times some matrix, and then I decide to multiply by another matrix, I can actually regroup those.*1499

*I can multiply the two matrices together and then multiply by the scalar, again these are just ways of manipulating things that we might run across.*1507

*You'll do them intuitively, you don’t necessarily have to refer to these, you already know that these are true, you just go ahead and use them.*1515

*Okay, so again we don't have to worry about examples here, they are just arithmetic.*1523

*Properties of the transpose, okay; transpose, very important concept, we introduced the transpose before. You remember the transpose is where you take the columns and the rows of a matrix and you switch them.*1531

*Just as a quick example, let's just say we have a matrix, (2, 3, 4), (3, 2, 1), something like that; we go ahead and we flip this along the main diagonal.*1545

*Everything that's a row becomes a column, everything that's a column becomes a row; so under transposition, this becomes (2, 3, 3, 2, 4, 1). What I have done is I have read down the columns, going from left to right.*1560

*(2, 3), that's the first row, (3, 2), that's the second row, (4, 1); all I have done is flip it along the main diagonal.*1581

*Okay, so now properties; let's say, we will let R be a member of the real numbers, let's use a little bit of formal mathematics here, and A and B...*1591

*... Are matrices, then we have, if I have already taken the transpose of A and then take the transpose again, so it's exactly what you think it is.*1608

*If I go this way, and then if I take the transpose again and I go back that way, I end up with my original matrix A, completely intuitive.*1621

*Okay, if I add A and B and then take the transpose, I can take the transpose of A and add it to the transpose of B, so A + B transpose, is the same as A transpose + B transpose.*1635

*This property is very interesting, something that you could probably expect intuitively, but as it turns out this property becomes the foundation for the definition of a linear function, which we will be more precise about later on.*1651

*There is actually no a priori reason for believing that they should be true, so this has to be proved; in fact, all of these properties need to be proven.*1665

*We won't worry about the proofs, they are all reasonably self-evident, but this one is kind of interesting in and of itself.*1674

*Okay, this next one is very curious, if I multiply two matrices, A time B, and then if I take the transpose, I don't get A transpose times B transpose, what I get is B transpose times A transpose.*1681

*This one is a very important property, I'll put a couple of stars by it; I mean, all of the properties are important, but this is the one that's most unusual and it probably trips up students the most.*1698

*When you multiply two matrices and then take the transpose, it's not the same as up here; notice that there, for (A + B) transpose, the order is retained, A transpose first, then B transpose, but here the order is reversed.*1709

*You have to take the transpose of B first and then multiply by the transpose of A; that's what makes the equality hold.*1722

*And then of course our final one: we have some scalar multiplied by a matrix A; if we take the transpose, that's just the same as the scalar times the transpose of A, so you take the transpose first and then you multiply, as opposed to here, where you multiply first, which is why we have the parentheses and then do the transpose, okay.*1733

*Most of you who are taking a look at this video, will probably have some sort of a book that you are referring to, I would certainly recommend taking a look at a proof of this property, property C, it's not a difficult proof, all you are doing is... *1757

*... You are literally just, you know, picking the entries of A, multiplying by the entries of B, and sort of doing it longhand, so it is somewhat tedious, if I could use that word, but it isn't difficult.*1773

*It's not completely self-evident, and it's a little bit of a surprise when you actually end up with this; that's the interesting thing about mathematics, things show up in the process of proof that your intuition would not lead you to believe are true, which is why we can't just rely on our intuition.*1784

*In mathematics, past a certain point, we have to rely on proof; it's important, and it will become very evident in linear algebra when we discover some very deep fundamental properties, not just of mathematical structures, but of nature itself, that one would not even believe could be true, and yet they are.*1800

*There are statements about the nature of reality that you would never even think, for a million years, might actually be true.*1818

*Okay, let's do a couple of examples, let's take a matrix A, and we will set it equal to (1, 3, 2, 2, -1, 3) and again forgive me for not putting the brackets around it, and I have a matrix B which is (0, 1, 2, 2, 3, -1).*1826

*Okay, now we want to ask ourselves, we are going to take A times B, we are going to take the transpose, we want to know, we want to confirm that it's actually equal to B transpose times A transpose.*1850

*What we are doing here is not a proof, what we are doing here is a confirmation, they are very different, confirmation just confirms something true for a specific case, a proof is true generally.*1866

*Okay, when we multiply AB, okay, I'll let you do the multiplication, this is a 2 by 3, this is a 3 by 2, so we are going to end up with a 2 by 2 matrix.*1877

*Again it's just arithmetic, so I'll let you guys take care of that: we get (12, 5, 7, -3), and then we will go ahead and subject this to transposition; what we end up with is (12, 7, 5, -3).*1889

*Notice that this is a square matrix; the entries along the main diagonal did not change, the off-diagonal entries just switched positions, the 7 and the 5 traded places, so that's the left hand side, (12, 7, 5, -3).*1909

*Now, if I take the transpose of B, well, B here is a 3 by 2, so I end up with a 2 by 3 matrix.*1923

*If I take A and take the transpose of A, this is a 2 by 3, I get a 3 by 2 matrix.*1940

*When I multiply B transpose times A transpose, 2 by 3 times the 3 by 2, 2 by 3 times 3 by 2, I get a 2 by 2, so it matches.*1949

*Now we want to see if the entries match; so as far as the definition is concerned, the multiplication is defined, and when we do it, sure enough we end up with (12, 7, 5, -3).*1961

*This confirms the fact that, let's go back up here, A times B, multiply first and then take the transpose, is the same as taking the transpose of B first and multiplying it on the left of A transpose.*1976

*Again, this is not random, the order must be maintained in matrix multiplication because matrix multiplication does not commute.*1991

*In fact it's often not even defined.*2000
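The confirmation just done by hand can also be run in code; this sketch (my own, using the lecture's two matrices) checks that (AB) transpose equals B transpose times A transpose.

```python
def matmul(A, B):
    # multiply an m-by-p matrix by a p-by-n matrix (lists of rows)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(M):
    # swap rows and columns
    return [list(col) for col in zip(*M)]

A = [[1, 3, 2],
     [2, -1, 3]]   # 2-by-3
B = [[0, 1],
     [2, 2],
     [3, -1]]      # 3-by-2

lhs = transpose(matmul(A, B))             # (AB)^T, a 2-by-2
rhs = matmul(transpose(B), transpose(A))  # B^T A^T, also a 2-by-2
# both equal [[12, 7], [5, -3]]; note the reversed order on the right hand side
```

Trying matmul(transpose(A), transpose(B)) instead would give a 3 by 3 matrix, a different shape entirely, which is one quick way to see why the order must reverse.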

*Okay, let's take a look at what we have done here today, we have talked about properties of matrix addition, so very similar to properties for real numbers, matrix A + B = B + A, matrix addition commutes.*2004

*(A + B) + C equals A + (B + C), matrix addition is associative; there is a unique M by N matrix 0 such that A + the 0 matrix equals the 0 matrix + A, and again we can write it either way because addition is commutative, and this is equal to...*2021

*... Oops, my apologies here, this should be A; so A + the 0 matrix leaves A unchanged, and this 0 is called the 0 matrix or the additive identity; we will call it the 0 matrix most of the time.*2042

*For every matrix A, there exists a unique matrix D such that when you add D to A, in other words A + D, you get the 0 matrix.*2058

*We will write D as -A, which is exactly what it is, 7, -7; matrix A, matrix -A, and we call it the additive inverse, or we will just refer to it as the negative of A.*2068

*Things like additive inverse, additive identity, these are formal terms that you will talk about, for those of you that go on to study abstract algebra: group theory, ring theory, field theory, things like that.*2081

*Beautiful areas of mathematics, by the way; my personal favorite.*2092

*Okay, we talked about properties of matrix multiplication: A times (B times C) equals (A times B) times C, this is the associativity property of multiplication; notice there is nothing here about commutativity.*2099

*Matrix, sorry to keep hammering the point, I know you are probably getting sick of it but matrix multiplication does not commute, it's amazing.*2114

*No matter how many times you actually sort of mention it, we are in such a habit of, you know, commuting our multiplication that we don't even think twice about it; we have to catch ourselves, that's why I keep mentioning it.*2121

*The distributive property is in effect: A times the quantity (B + C) is AB + AC, and the quantity (A + B) times C is AC + BC.*2132

*Now, when dealing with a square matrix, N by N, same number of rows and columns, we define any matrix to the 0 power to be equal to the identity matrix.*2145

*And again the identity matrix is just that square matrix, where everything on the main diagonal is a 1.*2154

*Think of it as the one matrix, multiplying everything by 1; A^{p}, where P is an integer, is just A multiplied by itself P times.*2160

*We have A^{p} times A^{q}, same as numbers, just multiply, I am sorry, raise it to the sum P + Q; and A to the P, raised to the Qth power, you just multiply P and Q.*2174

*Again, these are all analogous to everything that you deal with numbers.*2189

*Scalar multiplication, R and S are real numbers, A and B are matrices, R times SA = RS times A, R + S times A = RA + SA, distribution.*2197

*Same thing the other way around: R times (A + B), for matrices A and B, is RA + RB, and A times the quantity (RB) equals R times the quantity (AB).*2212

*We covered a lot of properties, but again most of them are the same, with only a few differences; okay, let's...*2225

*... Oh sorry, let's take a look at the properties of the transpose, probably the most important.*2235

*Okay, let R be a scalar, A and B matrices; A transpose, transposed again, just recovers A.*2240

*A + B quantity transpose is equal to A transpose + B transpose, order is retained.*2249

*A times B transpose = B transpose times A transpose, order is not retained.*2256

*Highlight that one.*2266

*Some number times A, then take the transpose, is equal to just taking the transpose of A and then multiplying by that number, okay.*2269

*Let's see, let's define...*2281

*(2, 1, 3, -1, 2, 4, 3, 1, 0), that's our matrix C...*2287

*... (1, 1, 2); okay, let me make my E a little bit more clear here, instead of just a couple of random lines.*2301

*(2, -1, 3, -3, 2, 1), and have that; we want to find (3C - 2E) transpose, just a combination of things: we want to multiply C by 3, we want to subtract twice E, and then we want to take the transpose of that matrix, if it's defined, okay.*2308

*Well, we know that if we have the transpose of some quantity, we can deal with it this way, we can take the transpose individually....*2337

*... And we know that when we have a number times a matrix transpose, we can go ahead and just take the transpose of the matrix first and then multiply by the number.*2353

*Transpose...*2369

*... Okay, when we go ahead and take C transpose and E transpose, okay, these are 3 by 3 and 3 by 3.*2373

*We are going to end up with a 3 by 3 transpose, so let's just go ahead and do it here; so C transpose, again we are flipping it along the main diagonal, so we will end up with (2, -1, 3), (1, 2, 1) and then (3, 4, 0), right.*2386

*And then if we multiply that by 3, we will multiply each of these entries by 3; we end up with (6, -3, 9), (3, 6, 3), and then 3 times 3 is 9, 3 times 4 is 12, 3 times 0 is 0.*2410

*Okay, so this is our...*2434

*... 3 C transpose, and now if we take E transpose, we will get (1, 2, -3); I have just taken the columns and switched them to rows, and I have (1, -1, 2) and I have (2, 3, 1), so I take the transpose.*2439

*Now I multiply that by -2, okay and I end up with...*2459

*... Let's go: -2 times 1 is -2, -2 times 2 is -4, -2 times -3 is 6, -2 times 1 is -2, -2 times -1 is 2 and -2 times 2 is -4.*2470

*Here we have -4 for this entry, then we have -6 for this entry and we have -2 for this entry.*2491

*Notice what I have done here, this is 3 C transpose - 2 E transpose; I could have just multiplied this by 2 and subtracted, but I went ahead and treated the whole coefficient as a -2 to get this.*2502

*Now, what I am doing is I am actually going to be adding these straight, okay.*2520

*Okay, and the final matrix that we end up with once we add those two matrices together, we end up with (4, -7, 15, 1, 8, -1, 5, 6 and -2).*2528

*This was (3C - 2E) transpose; we treated it as 3 C transpose + (-2) times E transpose. Again, you can multiply by 2 and then subtract, or you can multiply by -2, treated as the whole coefficient, and do the addition.*2548

*You are doing the same thing; ultimately it's just about what's comfortable for you.*2580
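The example just worked can be checked in code; this sketch of mine uses the lecture's C and E, and also confirms that transposing (3C - 2E) directly gives the same answer.

```python
def transpose(M):
    # swap rows and columns
    return [list(col) for col in zip(*M)]

def scale(r, M):
    # multiply every entry of M by the scalar r
    return [[r * x for x in row] for row in M]

def add(A, B):
    # entry-by-entry sum of two same-size matrices
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

C = [[2, 1, 3],
     [-1, 2, 4],
     [3, 1, 0]]
E = [[1, 1, 2],
     [2, -1, 3],
     [-3, 2, 1]]

result = add(scale(3, transpose(C)), scale(-2, transpose(E)))  # 3C^T + (-2)E^T
same = transpose(add(scale(3, C), scale(-2, E)))               # (3C - 2E)^T
```

The two routes agreeing is exactly properties (b) and (d) of the transpose working together.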

*Okay, and that takes care of the properties of matrix multiplication, addition and transpose and scalar multiplication.*2585

*Thank you for joining us at educator.com.*2594

*Welcome back to educator.com, today we are going to be continuing our study of linear algebra, and we are going to be discussing solutions of linear systems part 1.*0000

*This lesson is going to consist of a couple of very careful, systematic, slow examples of something called elimination, in order to put a matrix into something called reduced row echelon form; so let's get started.*0012

*When we talked about linear systems earlier, we talked about the three things that we can do to a system of linear equations or a matrix, which is of course just a representation of that linear system.*0033

*We can switch two rows, we can multiply one row by a constant, and we can add multiples of one row to another row; all three of these create an equivalent system, which is just our way of manipulating the linear system to make it easier for us to handle.*0046
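The three elementary row operations can be sketched as three small functions; this is my own illustration, not the lecture's, and the little 2 by 3 augmented matrix at the bottom is an arbitrary example.

```python
def swap_rows(M, i, j):
    # operation 1: interchange rows i and j
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale_row(M, i, c):
    # operation 2: multiply row i by a non-zero constant c
    M = [row[:] for row in M]
    M[i] = [c * x for x in M[i]]
    return M

def add_multiple(M, src, c, dst):
    # operation 3: add c times row src to row dst
    M = [row[:] for row in M]
    M[dst] = [x + c * y for x, y in zip(M[dst], M[src])]
    return M

M = [[0, 2, 4],
     [1, 3, 5]]
M = swap_rows(M, 0, 1)         # put a non-zero entry on top
M = scale_row(M, 1, 0.5)       # make the second row's leading entry a 1
M = add_multiple(M, 1, -3, 0)  # clear the entry above that leading 1
# M is now [[1, 0, -1], [0, 1, 2]]
```

Each function returns a fresh matrix rather than modifying in place, mirroring the idea that each operation produces a new, equivalent system.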

*Let's begin today's lesson by defining what we mean by reduced row echelon form, which is going to be the final form that we want to put an augmented matrix into.*0065

*Again an augmented matrix is just the system of linear equations, the coefficients of the linear systems, plus that extra column which is the solutions to that linear system.*0075

*We are going to manipulate that, we are going to put it into reduced row echelon form, so an M by N matrix is in reduced row echelon form.*0085

*We will often refer to it as just RRE, if the following holds: all rows that are entirely 0, if there are any, are at the bottom of the matrix.*0094

*As we read from left to right, the first non-zero entry in a row is a 1; we call that the leading entry, and it has to be a 1.*0108

*If rows I and I + 1, so for example rows 3 and 4, if they both have leading entries, then the leading entry of the I + 1 row is to the right of the leading entry of row I.*0120

*In other words if I have two leading entries in row three and row four, the one on row four is going to be to the right, it's not going to be right below it or to the left of it.*0135

*All of these will make sense in just a moment when we give some examples of reduced row echelon form.*0143

*Now, if a column actually contains the leading entry of some row, then all of the other entries in that column are 0. Note that a matrix in reduced row echelon form need not have any rows consisting of all 0's.*0149

*Reduced row echelon form doesn't mean that we have to have a row that's all 0's, it means that if it does, they need to be at the bottom of the matrix, okay let's look at some examples....*0166

*... So we have the matrix (1, 0, 0, 4, 0, 1, 0, 5, 0, 0, 1, 2), this is in reduced row echelon form.*0192

*Notice, this has a leading entry, it is a 1, everything else is a 0; this row also has a leading entry, but notice it's to the right, that's what we meant by "to the right".*0203

*It's to the right of that one, and every other entry in that column is 0, here this row also has a leading entry, it is a 1, it is to the left of these other two, and every other entry in there is a 0, and it doesn't matter what this is.*0217

*(4, 5, 2): this is not a leading entry, this is the leading entry of that row; these numbers are not irrelevant, but as far as the definition of reduced row echelon form is concerned, they are not important.*0234

*Let's take a look at another matrix (2, 1, 2, 0, 0, 2, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0) same thing.*0247

*We have a leading entry, 0's along the column; this one, notice this is a 2 here, and there's a (0, 0) here.*0264

*We haven't run across a leading entry yet, so it's okay that the 2 is here, it has nothing to do with the definition; but now we run into one, yes, a leading entry with (1, 0, 0).*0276

*It qualifies, it's to the right of this one, the next row, we have a (1, 0, 0,) and it's to the right of these other two, so this one is in reduced row echelon form.*0286

*For C, we will take (1, 0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1), and we have a couple of rows of all 0's.*0301

*We have this nice big 5 by 5 matrix; let's see, we have these 0 rows there at the bottom of the matrix, and that takes care of part A of the definition.*0319

*We have a leading entry in this row, it is a 1, that's nice; all of the other entries in its column are 0, and anything else in the row is irrelevant.*0332

*There is a leading entry in this row and it is to the right of that one and all of the other entries are 0.*0340

*No leading entry, no leading entry, no leading entry, yes it satisfies the definition for reduced row echelon, so these three matrices are examples of matrices that are in reduced row echelon form.*0349

*Now let's take a look at A prime...*0366

*(1, 2, 0, 4, 0, 0, 0, 0, 0, 0, 1, -3): this matrix has a leading entry, and the other two entries in its column are 0.*0373

*But you notice here, this zero row is the second row, and yet it's on top of a row that's not all 0's; so this one is not in reduced row echelon form, because this 0 row needs to be at the bottom.*0389

*Let's take B prime, (1, 0, 3, 4, 0, 2, -2, 5, 0, 0, 1, 2); okay, in this particular case we take a look at the 1, it's a leading entry.*0408

*The rest of the entries in its column are 0; here, the leading entry, the first number that's non-zero, is a 2, it's not a 1, and it has to be a 1 for it to be in reduced row echelon form.*0424

*You can certainly take this entire row and divide it by 2, to get 1, -1, and five halves, and then you will almost be in reduced row echelon form, but as it stands, this is not reduced row echelon form.*0440

*Let's take (1, 0, 3, 4, 0, 1, -2, 5), let's take (0, 1, 2, 2) and let's take all 0's.*0457

*Okay, so we have a leading entry, which is nice, all of the other entries are 0's, very good; we have a leading entry which is a 1, it is to the right of that one, the one above it is to the left of it, so far so good.*0472

*But this other entry here in this column is a 1, it's not a 0, it needs to be a 0, so as it stands this one is not reduced row echelon form.*0491
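The four conditions of the definition can be turned into a small checker; this is my own sketch, not part of the lecture, and it uses the lecture's example matrices as test cases.

```python
def is_rref(M):
    # check the reduced row echelon conditions for a matrix given as a list of rows
    last_lead = -1          # column of the previous row's leading 1
    seen_zero_row = False
    for row in M:
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            seen_zero_row = True      # all-zero rows are fine...
            continue
        if seen_zero_row:
            return False              # ...but only at the bottom (condition a)
        lead = nonzero[0]
        if row[lead] != 1:
            return False              # leading entry must be a 1 (condition b)
        if lead <= last_lead:
            return False              # leading 1's move strictly right (condition c)
        if any(other[lead] != 0 for other in M if other is not row):
            return False              # rest of a pivot column is 0 (condition d)
        last_lead = lead
    return True

# the first example is in RREF; A' and B' are not
print(is_rref([[1, 0, 0, 4], [0, 1, 0, 5], [0, 0, 1, 2]]))    # True
print(is_rref([[1, 2, 0, 4], [0, 0, 0, 0], [0, 0, 1, -3]]))   # False: zero row not at bottom
print(is_rref([[1, 0, 3, 4], [0, 2, -2, 5], [0, 0, 1, 2]]))   # False: a leading entry is 2
```

Writing the conditions as code is a good way to convince yourself that each of the four parts of the definition is doing real, independent work.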

*Okay...*0504

*... Very important theorem: every M by N matrix is row equivalent to a unique matrix in reduced row echelon form, which means that no matter what M by N matrix I give you, with the three operations of exchanging rows...*0512

*Multiplying a row by a non-zero constant, and adding a multiple of one row to another row; with those three operations applied over and over and over again, you can convert it to a reduced row echelon matrix.*0535

*That matrix is unique, that's what's really interesting; later on we will talk about something called Gaussian elimination, a slightly less detailed version of what it is that we are going to be doing today.*0551

*And it usually suffices for most purposes in linear algebra, but the matrices that you end up with are not unique.*0564

*What makes reduced row echelon fantastic is that, even if you and your friend and another friend do a whole bunch of different operations, in the end you'll end up with the same matrix, there is only one matrix that's in reduced row echelon form for any given matrix.*0571

*That's actually kind of extraordinary when you think about it, so now we are going to go through a very systematic and careful example.*0585

*We are not going to prove the uniqueness, but this example will actually show how you transform a particular matrix into reduced row echelon form.*0592

*Okay, so let's start off with our matrix, and again I am not going to be putting the brackets on there, simply because there is a whole bunch of extra writing that we want to avoid.*0603

*(0, 2, 3, -4 and 1), (0, 0, 2, 3, 4), (2, 2, -5, 2 and 4)...*0613

*Excuse me, my 4's sometimes look like 9's; and (2, 0, -6, 9 and 7). Okay, so this is the matrix we started off with, this is the one that we want to convert to reduced row echelon form; that's what we want, that's our goal.*0633

*And we will check to see that all the properties of RRE are satisfied, okay so step 1...*0651

*... You want to identify...*0662

*... The first column...*0671

*... Which is not all 0's; so in this case, reading from left to right, this column, (0, 0, 2, 2), is the first column that is not all 0's, okay.*0679

*Again, very systematic step 2, find...*0697

*... The first...*0705

*... Non-zero entry in that column...*0710

*... By the way, this column is called a pivotal column, and the first entry that you find, which is right there...*0719

*The first non-zero entry going down that column is actually called the pivot, so in our first series of steps, 2 is our pivot.*0734

*Okay...*0748

*What we are going to do here is we are going to take this row, the one with the pivot, and we are going to interchange it with the first row; we want the row that has the pivot in it to be on top.*0754

*Okay, so I'll write out step 3...*0770

*... Interchange if necessary, you don't necessarily have to do this, in this case we do have to, if necessary...*0785

*... The first row...*0799

*... With the row containing the pivot...*0804

*... When we do that, we end up with the following matrix: (2, 2, -5, 2, 4), (0, 0, 2, 3, 4), (0, 2, 3, -4, 1), (2, 0, -6, 9 and 7).*0810

*All we have done is we have exchanged the first and the third rows, that was the one with the pivot, we put it on top, okay.*0832

*Now we are going to divide this first row, so let's make step 4...*0840

*... Divide the first row, what is now the first row, not the original first row, so these steps refer to the matrix that I just completed, so divide the first row, by its first entry.*0849

*In other words, I am just trying to make this first entry a 1, so I am going to divide everything in this row by 2; I end up with (1, 1, -5 halves, 1, 2), and then I copy the others, (0, 0, 2, 3, 4), (0, 2, 3, -4, 1).*0865

*And (2, 0, -6, 9 and 7), so this is what I end up with, okay.*0890

*Next, actually, let me rewrite what I had before: (1, 1, -5 halves, 1, 2), (0, 0, 2, 3, 4), (0, 2, 3, -4, 1), (2, 0, -6, 9 and 7).*0903

*Okay....*0933

*... We are going to take our fifth step; remember what we wanted to do: for reduced row echelon form we want to get all of these entries below the leading 1 to be 0, so we are going to add a multiple of this row to that row to make this entry 0.*0936

*In other words, we are going to multiply the first row by -2, add it to the fourth row, and see what we get; so step 5...*0950

*... Add -2 times the first row...*0963

*... To the fourth row; and then when we do that, -2 times 1 is -2, and -2 + 2 is 0, so I'll leave everything else the same, nothing changes in the other rows.*0972

*(2, 3, 4, 0, 2, 3, -4, 1), this becomes 0, -2 times 1 is -2 + 0, we end up with -2...*0987

*... -2 times -5 halves is 5, 5 - 6, 5 - 6 is -1, -2 times 1 is -2, -2 + 9 is 7.*1005

*-2 times 2 is -4, -4 + 7 is 3.*1022

*Look what we have done, we have created a row with a leading entry of 1, everything else is 0, now I am going to put a little bracket here...*1029

*... My first row is done, now I am going to think about this matrix, I completely ignore this one, I never have to worry about the first row again.*1041

*Now step 6, well, this is the nice thing about it , we just repeat everything we just did, repeat steps 1 to 5 for this particular matrix here.*1049

*Okay, so what is it that we did, we found the first column that had the, that was not all 0's, this is all 0's so we go to this column, and in this column, we find the first entry that is non-zero, which is 2.*1064

*That's going to be our new pivot, okay, we take that pivot and we move it up to the top, and again we don't touch this one, we leave it alone so we exchange this row right here with this row.*1080

*Now we are talking about 1, 2, 3, so we are going to exchange rows 1 and 2, so now let's go ahead and write down that matrix.*1095

*Let me actually do a little arrow here, and I am going to say...*1104

*... Switch 2 and 3, I am sorry, so we are still talking about the entire matrix, so we are going to call this row 2, we are going to call this row 3, so we are going to switch 2 and 3.*1112

*And when we do that, the matrix we end up with is the following: we get of course the original (1, 1, -5 halves, 1, 2), we get (0, 2, 3 and -4, 1).*1124

*We get (0, 0, 2, 3, 4) we get (0, -2, -1, 7 and 3), okay.*1140

*Now, we want to take, remember what we did before, the next step, once we have this pivotal, the row with the pivot, I'll go ahead and circle the pivot, so we know what we did to it.*1151

*Now we want to go ahead and divide that by that number, everything in that row by that number to turn this into a 1.*1162

*We will divide...*1170

*... Row 2 by 2, which is the pivot; the matrix we get is (1, 1, -5 halves, 1, 2), nothing changes, (0, 1, 3 halves, -2 and 1 half)...*1176

*...Alright ...*1198

*... And we get the (0, 0, 2, 3, 4, 0, -2, -1, 7 and 3).*1202

*Now we have a leading entry of 1, and now what we need to do is get rid of...*1216

*... Did we make a mistake here?..*1230

*... See here, let us multiply...*1234

*... Two times the second row, added to this row to get rid of this and make it a 0...*1242

*... Let me see if I can squeeze this in here, (1, 1, -5 halves, 1 and 2, 0, 1, 3 halves, -2, 1 half, 0, 0, 2, 3, 4).*1254

*Half...*1275

*... Okay...*1278

*1...*1283

*... Let me see here...*1288

*... And let me go, so we will do negative, yes we will do +2 times this row + this row, so we will end up with 0 here.*1294

*+2 times 1 is 2, 2 - 2 is 0, +2 times 3 halves is 3, 3 - 1 will end up giving me a 2, +2 times -2 is -4.*1303

*-4, -4 + 7 is 3, and +2 times 1 half is 1, 1 + 3 is 4.*1320

*Okay, so I am going to go ahead and copy this onto the next page here...*1331

*(2, 3, 4, 2, 3, 4), so we have something to continue with, alright, so we have got (1, 1, -5 halves, 1, 2, 0, 1, 3 halves, -2, 1 half, 0, 0, 2, 3, 4).*1337

*We have (0, 0, 2, 3, 4), okay, let's see what is the best way to handle this one next.*1362

*Let's go ahead and continue with our process, so now we are dealing with, well...*1375

*... There are a couple of things that we can do here; you notice we have a leading entry, we have all 0's in this column, we have a leading entry here, 0's down here, but not a 0 up here.*1388

*We have a choice, we can go ahead and take care of this 0, and 0 this out now and then deal with what's left over, the ones that don't have leading entries or we can go ahead and continue the process and get leading entries all the way down and then go back and take care of all the other 0's.*1398

*I think I am going to go ahead and do that, it's a little bit more consistent with the process that we had done, so now this stays, this stays.*1417

*We will leave those two alone, now we are just dealing with this one, so again we repeat those steps that we did before: (0, 0), this is the first column with non-zero entries.*1425

*And the first entry is here, so now our pivot is here, and notice we don't have to do anything, we don't have to switch anything, it's already at the top, so since the pivot is there, we can go ahead and divide by 2...*1436

*... We will divide row 3 by 2 to make it a 1, which turns everything into (1, 1, -5 halves, 1 and 2, 0, 1, 3 halves, -2, 1 half, 0, 0, 1, 3 halves)*1462

*And...*1485

*... 4 divided by 2 is 2, and then we have (0, 0, 2, 3, 4), okay so now I am going to go...*1490

*... I am going to go this way just to create a little bit more room; what I am going to do here is take -2 times this row added to this row, in order to turn this into a 0, so we will do...*1509

*... -2 times the third row + the fourth row.*1527

*And what I end up with is (1, 1, -5 halves, 1, 2, 0, 1, 3 halves, -2, 1 half, 0, 0, 1, 3 halves, 2) and a fourth row of all 0's.*1537

*When you multiply this out so, -2 times that + that is 0, -2 times that + that, you end up with -3 + 3 is 0, -2 times that is -4 + 4, you end up with this matrix right here.*1558

*Again we are getting down to where we want it, we have all 0's down at the bottom, we have leading entries; now we just have to take care of these other entries, to make sure everything else is 0 in the columns that have leading entries of 1.*1575

*Okay, so let's see, let's do...*1588

*... Let's get rid of...*1598

*... Let's get rid of this 3 halves first, which means I am going to take -3 halves times this row added to this row, so let me write down what it is that I am doing.*1602

*We have -3 halves times the third row + the second, and when I do that, I end up with...*1611

*... (1, 1, -5 halves, 1, 2, 0, 1, 0, -17 fourths, -5 halves, 0, 0, 1, 3 halves, 2) and of course the bottom one is all 0's.*1626

*Now I am going to get rid of...*1648

*... This 5 halves, so I am going to multiply 5 halves times the third row...*1652

*... + the first row, that will give me (1, 1, 0, 19 fourths, 7, 0, 1, 0, -17 fourths, -5 halves, 0, 0)*1662

*(3 halves, 2) and of course the final 0's...*1687

*... Now, let's see what we have got, I am going to create a little bit more space here, this time, I'm going to, so I have taken care of this and this are fine, now I just have this one to take care of, so I am going to multiply...*1695

*-1, oops, -1 times the second row...*1713

*... + the first row and when I do that I get (1, 0,), let me do this one actually specifically -1 times 1 is -1 + 1 is 0.*1720

*-1 times 0, 0 + 0, 0, -1 times -17 fourths + 19 fourths, you end up with 19 fourths + 17 fourths, 36 fourths, which is 9.*1739

*And then the same thing here, -1 times -5 halves is 5 halves, 5 halves + 7, you end up with 19 halves; then 0, 1, 0, -17 fourths, -5 halves.*1754

*Here you get (0, 0, 1, 3 halves and 2, 0, 0, 0, 0, 0).*1772

*This, my friends is our final...*1783

*... Reduced row echelon form, my apologies for these little marks with the pen here; so the matrix that we started off with, through a series of operations, was converted to this unique reduced row echelon matrix, okay, this is unique.*1788

*It's the only matrix that you'll end up with; notice we have a leading entry 1, all 0's, a leading entry 1 that is to the right of this one, all 0's, a leading entry 1, it's to the right, all 0's.*1804

*No leading entry, so it doesn’t matter what these are...*1818

*... Okay...*1825
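The whole procedure we just walked through can be sketched in code. Below is a minimal reduced-row-echelon routine in Python, using exact fractions so no rounding creeps in; the function name `rref()` and the list-of-lists matrix format are just choices for this sketch, not anything from the lesson. Feeding it the matrix as it stood after the interchange in step 3 reproduces the unique reduced row echelon form derived above.

```python
# A minimal reduced-row-echelon sketch with exact arithmetic.
from fractions import Fraction

def rref(rows):
    """Return the reduced row echelon form of a matrix given as a list of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(n_cols):
        # Find the first row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, n_rows) if m[r][col] != 0), None)
        if pr is None:
            continue                                    # column of zeros; move right
        m[pivot_row], m[pr] = m[pr], m[pivot_row]       # interchange: pivot row on top
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]  # divide the row by its pivot
        for r in range(n_rows):                         # zero out the rest of the column
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == n_rows:
            break
    return m

# The matrix as it stood right after the interchange in step 3 above.
A = [[2, 2, -5, 2, 4],
     [0, 0, 2, 3, 4],
     [0, 2, 3, -4, 1],
     [2, 0, -6, 9, 7]]
R = rref(A)
# R is (1, 0, 0, 9, 19/2), (0, 1, 0, -17/4, -5/2), (0, 0, 1, 3/2, 2), (0, 0, 0, 0, 0)
```

Because every entry is a `Fraction`, the 19 halves and -17 fourths come out exactly, matching the hand computation.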

*... Let's go ahead and do one more example, we will move a little bit more quickly this time; again we are just doing the same operations that you are accustomed to....*1828

*... Let's take as our matrix (1, 2, 3, 9, 2, -1, 1, 8, 3, 0, -1, 3) okay.*1841

*Look for the first column, not all 0's, that's this one, the first non-zero entry here, our pivot is 1, we don't have to do anything to it, it's already at the top.*1860

*And we don't have to divide by it, it's already a 1, so all we have to do is turn this into a 0, and this into a 0, so I am going to do this in two steps, okay.*1868

*What we are going to be doing is multiplying -2 times the first row + the second, -2 times that plus that, to turn this into a 0.*1879

*And the second step we are going to do is -3 times the second row...*1892

*...-3 times the first row, of course the first row...*1900

*... Times the first row + the third, so -3 times that + that, so I am going to do it in two steps, I'm going to combine it and just give the final matrix that we get.*1908

*We should end up with (1, 2, 3, 9, 0, -5, -5, -10, 0, -6, -10, -24), so we have taken care of this one.*1918

*This is done, now we want to deal with that; so for our pivot here, we look for the first column with a non-zero entry, that's here.*1936

*We will look for the first entry from top to bottom, so our new pivot is -5, so what we want to do is we want to divide everything in here by that number to turn this into a 1.*1949

*We will go ahead and...*1961

*We have (1, 2, 3, 9), we have (0, 1, 1, 2), that is -5 divided by -5 is 1, -10 divided by -5 is 2, and we have (0, -6, -10, -24).*1969

*Okay, so now let me move up...*1989

*... Here, I want to get rid of this, make it a 0, I want to make 0's, everything underneath, so I am going to multiply 6 times...*1995

*... The second row + the third, so I end up with the matrix 1, oops...*2013

*... We don't want too many of these stray marks to confuse us, my apologies.*2023

*(1, 2, 3, 9) and the we are going to have (0, 1, 1, 2) and we are going to have (0, 0, -4, -12) okay.*2030

*We find a new pivot, so in this particular case, we have taken care of that, our new pivot is now -4, because these rows are already done.*2051

*We have leading entries of 1 and 1, we can take care of the others later; we want to work our way down from the top-left to the bottom-right.*2061

*We go ahead and divide this by -4, the whole thing by -4, so let me move on to the next page...*2069

*... And rewrite this...*2079

*... We started off with (1, 2, 3, 9, 0, 1, 1, 2, 0, 0, -4, -12) okay.*2083

*We divide the third by -4, to get (1, 2, 3, 9, 0, 1, 1, 2, 0, 0, 1 and 3).*2099

*Now we have our 1, we have our 1, we have our 1; we need to convert this to (0, 0) and we need to convert this to 0.*2118

*Lets go ahead and take...*2129

*Lets do this in a couple of steps, let's go...*2135

*… -1 times the third row + the second...*2139

*... And we will do -3 times the third row, -3 times the third row + the first...*2149

*... And when we do that we end up with (1, 2, 0, 0, 0, 1, 0, -1, 0, 0, 1, 3).*2161

*And now what we want to do is we want to get rid of this 2, so we are going to take -2, times the second.*2182

*We are going to add it to the first, and what we are going to end up with is (1, 0, 0, 2, 0, 1, 0, -1, 0, 0, 1, 3).*2195

*Now we check: leading entry, leading entry, each one to the right of the one above, these are all 0's, these are 0's, these are 0's, no leading entry problems here, this is irrelevant.*2212

*Our final...*2225

*... Reduced row echelon matrix is this matrix (1, 0, 0, 2, 0, 1, 0, -1, 0, 0, 1, 3), starting from our initial one with a series of those operations.*2229

*Find your pivotal column, find your pivot, move it to the top, divide by the pivot to make it a 1, and zero out all the entries underneath that 1.*2243

*Leave that row alone; move down to the next and do the same thing.*2254

*Find the column that's not all 0's, find the first entry that's not a 0, that's your pivot, move it to the top, divide by it to turn it into a 1.*2257

*Zero out everything below it, and do that all the way down in a stair-step fashion; echelon comes from a French word for a rung of a ladder, a stair step, which is why it's like that.*2267

*It moves in a stair-step pattern from top-left to bottom-right, and then once you get down to where you can't really go further, then you go back and you eliminate all the entries above the 1's, to get a unique matrix in reduced row echelon form.*2274

*Thank you for joining us today at educator.com, look forward to seeing you next time.*2290

*Welcome back to educator.com and thank you for joining us for linear algebra.*0000

*Today we are going to be discussing solutions of linear systems part-2, okay let's go ahead and get started.*0005

*We closed off the last session by converting the following matrix...*0013

*... And you will remember of course sometimes I actually leave off the little brackets on the side of the matrices, simply because it's a personal notational preference that's all, as long as you understand which grouping of numbers goes together okay.*0020

*We had (1, 2, 3, 9), (2, -1, 1, 8) and (3, 0, -1, 3), and we converted that to reduced row echelon form,*0034

*(1, 0, 0, 2), (0, 1, 0, -1) and (0, 0, 1, 3), and notice the reduced row echelon form has 1's as leading entries, and wherever a row has a leading 1, that is its leading entry.*0052

*Everything else in that column is a 0; this last column of course doesn't matter, because it's the augmented column here, so it is irrelevant as far as the definition of reduced row echelon is concerned, okay.*0072

*Now, we did that just for matrices, well we already know that a given matrix represents a linear system of equations, so this system looks like this in terms of the variables....*0087

*... X + 2Y +3Z = 9, notice X + 2Y + 3Z = 9, and then 2X - Y + Z = 8.*0104

*And then 3X + 0Y, you don't necessarily have to put this there, I like to put it there simply because for me it keeps thing in order, in line and it just keeps things consistent.*0123

*-Z = 3 and again what we have done is...*0136

*... Whenever we transform a matrix from the standard matrix to its reduced row echelon form, we form an equivalent system, so what we produce is the following, this time I am going to actually avoid the 0's.*0144

*We have X = 2, Y = -1 and Z = 3, so this is our solution.*0156

*Matrix represents the linear system, converted to reduced row echelon form, we get, that's one of the reasons why we like these 1's in these positions and 0's everywhere else.*0172

*It's because it just gives you the answer, X is one thing, Y is another and Z is another okay.*0182

*And in this particular case we have a solution, and it is a unique solution; for this particular system there is no other, it is unique, okay.*0189
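A quick way to convince yourself the solution is right is to substitute it back into the original system; here is a small sketch (the list names are just for this illustration).

```python
# Substitute (x, y, z) = (2, -1, 3) back into the original system.
coefficients = [[1, 2, 3],    # x + 2y + 3z = 9
                [2, -1, 1],   # 2x - y + z = 8
                [3, 0, -1]]   # 3x + 0y - z = 3
rhs = [9, 8, 3]
solution = [2, -1, 3]

# Each dot product should reproduce the corresponding right-hand side.
products = [sum(a * x for a, x in zip(row, solution)) for row in coefficients]
# products == [9, 8, 3]
```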

*Let's take a look at another example; let's actually go ahead and explicitly give the linear system this time, and then we will just deal with the matrix, so let me write this system up here.*0206

*We have X + Y + 2Z - 5W = 3, we have 2X + 5Y - Z -9W = -3, we have 2X again.*0220

*+ Y - Z + 3W = -11, and we have X - 3Y, excuse me, + 2Z + 7W = -5.*0245

*And again I think the biggest problem that you are going to run across with linear algebra, working with matrices, is just keeping everything in order; you have a bunch of letters floating around, you have a bunch of numbers floating around.*0264

*The biggest problem you are really going to run into is, believe it or not, just plain arithmetic, keeping everything straight.*0274

*Okay let's go ahead and just take the coefficients, make it a matrix augmented with this, you know series of numbers right here, turn it into reduced echelon form, and we will have our solution and see what it is.*0280

*Our matrix looks like this, we just take the coefficients, we have (1, 1, 2, -5) and we have 3, let me go ahead and put a little, just to show you that these are the coefficients of the variables and these are the solution set.*0292

*We have (2, 5, -1, -9), we have (2, 1, -1, 3) and we have (1, -3, 2 and 7) and then (-3, -11, -5), (-3, -11, -5).*0311

*This matrix is the one is the one that we want to convert into reduced row echelon form, okay.*0330

*We notice, remember our process, we go ahead and we find the first row that actually, or the first column that has non-zero entries, which is this one, and then we find the first row that has a non-zero entry, which is up here.*0339

*this becomes our pivot, once we find our pivot, we divide everything in that row by that number, to turn it into a 1.*0352

*But in this case it's already a 1, so it's not a problem; the next step is to get rid of this number, this number, this number, to turn everything else in that column into a 0, and of course we do that as follows.*0358

*We are going to multiply the first row by -2 and add it to the second row, then we are going to multiply the first row by -2 and add it to this row, and then the third thing we are going to do is multiply the first row by -1 and add it to this row.*0371

*And when we do that, we will have converted this column to 0's, and then we move over to the next column; if you want to review the process, you can go back to the previous lesson, where we did it very carefully.*0386

*Each individual pivot, each sub matrix and so forth; I'll go ahead and do just the first conversion, and once I have done these three, we end up with the following matrix, where of course the original first row doesn't change.*0398

*We have (1, 1, 2, -5, 3), we have (0, 3, -5, 1, -9), (0, -1, -5, 13, -17), and we have (0, -4, 0, 12, -8).*0412

*So these are all 0's, and I submit that this is the entire matrix after our first set of conversions.*0437

*Now we notice this, so we can leave that alone, now we just deal with this; this column is all 0's, so we move to this column, and in this particular column, 3 happens to be the first number that we run across going down.*0448

*This becomes our pivot, so our next step would be to divide this entire row by 3, to turn this into a 1, and then do the same process.*0463

*Multiply this by 1, add it to this row, multiply this row by 4, add it to this row, and so on until we get reduced row echelon form.*0471

*Now I am going to go ahead and skip the process, and I am just going to give you the reduced row echelon form, and then I am going to take a couple of minutes to talk about how it is that we do that and the use of mathematical software to make this much easier for you.*0480

*What we end up with is the following matrix, let me just draw a little arrow here, and put in RRE, our reduce row echelon form.*0493

*What you end up with is (1, 0, 0, 2, -5, 0, 1, 0, -3, 2), excuse me for these little marks that show up, I don't want to confuse the numbers here.*0502

*And then we have (0, 0, 1, -2, 3) and the final row is going to be a row of all 0's which is fine, it doesn't matter.*0522

*Here is our reduced row echelon; notice the leading 1's, and all the other entries in their columns are 0's, so this is what we end up with.*0533

*Now I want to take a couple of minutes to talk to you about the use of mathematical software.*0544

*The techniques for this, what's this called, Gauss-Jordan, Gaussian elimination, excuse me, turning it into reduced row echelon form, dealing with the matrices.*0550

*A lot of the techniques that we are going to be developing are going to be computationally intensive, so it's not a problem if you want to do them by hand, I think it’s certainly a good way to get comfortable with the material, but at some point when the matrices start to get big, even like a 3 by 3 or 4 by 4.*0560

*You definitely want to start using mathematical software to make your life easier.*0575

*As long as you understand the process originally, it's not a problem after that; now there are several math software packages that are available, for example one of them is Maple.*0579

*It's one of the oldest, it's the one that I use personally, it's the one that I was trained by, there are something called Mathcad, it's very popular in the engineering field.*0590

*There is something called Mathematica, also very powerful, and these are all symbolic manipulation software, not altogether different from what's available on a TI-89, so if you have a TI-89, you can also do matrices on there, because it handles things symbolically as opposed to numerically like older calculators used to do.*0601

*And there's also something specific called MATLAB, which stands for matrix laboratory; this is specifically for linear algebra, and it's also a very powerful programming language.*0621

*Any one of these is fine, very simple to use; you just plug in the matrix, and then you have some pull-down menus or some commands, and you say reduced row echelon, or find the inverse, find the transpose.*0633

*You can go from here to here, without having to go through all the intense computation, let the computer do that for you, it’s the math that's important, not necessarily the computation, especially since like I said before.*0645

*Arithmetic mistakes happen when you are dealing with 100's and 100's of numbers, they are going to happen, so I'll just throw that out there; feel free to make use of any of these software packages, and they are reasonably inexpensive.*0659

*Okay, so let's get back to our solution, so we have this final reduced row echelon form here, and what is actually, let me actually go to another page and rewrite it.*0673

*I have (1, 0, 0, 2, -5, 0, 1, 0, -3, 2, 0, 0, 1, -2, 3, 0, 0, 0, 0, 0) , by the way when you are doing entries for matrix, notice I tend to go in rows.*0687

*You can also go in columns; what you don't want to do is just sort of start going randomly, putting numbers here and there to fill them in.*0709

*Believe it or not, that actually creates problems, so be as systematic as possible in all things you do mathematically.*0715

*This is the matrix, it's equivalent to the following system, well this is X, so we have X + 2 and let's say that this variable is W, so we have X, we have Y, we have Z, we have W and we have the solutions right here.*0721

*We have X + 2W = -5, we have Y...*0737

*... -3W = 2, we have Z - 2W = 3; again it's the W column, and these are the solutions.*0747

*Now, notice what we have, this system right here, we have an X Y and Z, and in each case W shows up, so W becomes a free parameter.*0760

*That means I can choose any value I want for W, and then I can solve for X, Y and Z, so what we have here is...*0770

*... Infinite solutions...*0779

*... You can stop here if you want, let me go ahead and show you what this looks like explicitly in terms of X, Y, Z, in other words I am going to solve for X and Y and Z, because W is a free parameter.*0783

*I am going to give it, I am going to put it down here, I am just going to call it R, you can give it anything you want, okay.*0793

*X is going to equal -5...*0803

*... -2R, Y is going to equal 2 + 3R and Z = 3 + 2R.*0809

*And these are explicit representations, with each variable alone on one side and everything else on the other; the earlier form is an implicit relation, because you can always solve it for one of the variables, that's all implicit means.*0825

*These are your solutions, an infinite number of solutions; also notice something, you already noticed this of course: the columns that don't have leading entries, the columns with multiple entries, those are the columns that are going to give you your free parameters.*0839

*In this particular system, there is only one column like that, all of the other columns had leading entries, and they were all 0's, if you end up having two or three or four columns, you are going to end up having two or three or four different parameters.*0857
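Since W is free, every choice of the parameter r should satisfy all four original equations; the sketch below spot-checks a few values (the helper names are just for this example).

```python
# The explicit solution family derived above: x, y, z in terms of the free parameter r.
def solution(r):
    return (-5 - 2 * r, 2 + 3 * r, 3 + 2 * r, r)   # (x, y, z, w)

# Coefficients and right-hand sides of the original four equations.
coefficients = [[1, 1, 2, -5],
                [2, 5, -1, -9],
                [2, 1, -1, 3],
                [1, -3, 2, 7]]
rhs = [3, -3, -11, -5]

def satisfies(r):
    vec = solution(r)
    return all(sum(a * x for a, x in zip(row, vec)) == b
               for row, b in zip(coefficients, rhs))
```

Any value of r passes the check, which is exactly what "infinite solutions" means here.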

*Speaking of which, let's go ahead and take a look at 1, okay I'll just do the matrix, and then we will do the row, reduced row echelon form, save us some time.*0873

*We have the system (1, 2, 0, -3, 1, 0, 2); 1, 2, 3, 4, 5, 6, six variables, and this last column is always the solution.*0886

*Okay, so we have X, Y, Z, S, T, W, something like that, so this is six variables that we are dealing with here (1, 2, 1, -3, 1, 2, 3).*0898

*We have (1, 2, 0, -3, 2, 1, 4) and my apologies my 4's always look like 9, (3, 6, 1, -9, 4, 3, 9).*0911

*This is the particular system that we are dealing with...*0928

*... Okay we have four equations, 1, 2, 3, 4, and six unknowns, that's our augmented matrix, our solution set.*0933

*When we go ahead and convert this using our mathematical software, to reduced row echelon form, we get the following, and again remember reduced row echelon is unique, so the reduced row echelon that you get is going to be the same as everybody else's.*0941

*There's only one reduced row echelon form, so we get (1, 2, 0, -3, 0, -1, 0), we get (0, 0, 1, 0, 0, 2, 1).*0954

*(0, 0, 0, 0, 1, 1, 2) and we get a row of all 0's, so that’s our reduced row echelon form, let's take a look at the ones with leading entries.*0975

*We have this one, I have put an arrow for the ones that have leading entries, okay; these are going to be the variables that we actually end up solving for, these are not the free parameters.*0989

*Now, the columns, I'll circle them, that don't have leading entries, that are just kind of randomly arranged; again, this is reduced row echelon form, it's not a problem, it satisfies the definition.*1001

*These are going to be your free parameters, so column 1, column 3, column 5 have the leading entries, and then column 2, column 4 and column 6 do not.*1016

*Those are going to be your free variables, those can be anything, so let's go ahead and pick some variables, let's actually call them.*1027

*Well, since we are dealing with this many, let's just go X_1, X_2, X_3, X_4, X_5 and X_6, so once again the ones that are going to be free parameters are 2, 4, and 6, so let me circle those, 2, 4 and 6.*1035

*Okay, now i should probably move onto other page.*1056

*Now the linear system that the reduced row echelon form represents is the following: X_1 + 2X_2 - 3X_4 - X_6 = 0.*1064

*And then we have X_3 + 2X_6 = 1, and then we have X_5 + X_6 = 2.*1080

*Okay, now we assign the free parameters, once again, to the 2, the 4, and the 6, so we get something like this: X_2 = I'll just say R, it could be anything.*1099

*X_4 could be anything, so let's just call it S, and X_6, that can equal anything, we will call it T.*1113

*Now, X_1, just solve...*1121

*... That equation, just move this over there, this over there and this over there, and what you end up with is X_1 = T + 3S - 2R,*1127

*X_3, just move this over there, that becomes 1 minus, and we said that X_6 is T, so 1 - 2T, and we solve for X_5.*1142

*We just move this over to that side, so that's equal to 2 - X_6, which is 2 - T...*1158

*... this is the power of reduced row echelon, again use the mathematical software, and then just literally read the solution right off; this is the implicit expression of the solutions.*1170

*This is the explicit expression of the solutions; the reduced row echelon had columns with leading entries, and those are the variables that you solve for, that's these things at the end.*1183

*Those are conditional, the columns that had variable entries, the non-leading entries, those are the ones that we assigned free parameters to.*1193

*They can be anything you choose, so again we are dealing with infinite number of solutions here...*1201
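The same spot-check works here; note that solving the first equation for X_1 gives X_1 = -2X_2 + 3X_4 + X_6, that is, T + 3S - 2R. The helper names below are just for this sketch.

```python
# Explicit solution in terms of the free parameters r = x2, s = x4, t = x6.
def solution(r, s, t):
    x1 = t + 3 * s - 2 * r   # from x1 + 2*x2 - 3*x4 - x6 = 0
    x3 = 1 - 2 * t           # from x3 + 2*x6 = 1
    x5 = 2 - t               # from x5 + x6 = 2
    return (x1, r, x3, s, x5, t)

def satisfies(r, s, t):
    # Check all three equations encoded by the reduced row echelon form.
    x1, x2, x3, x4, x5, x6 = solution(r, s, t)
    return (x1 + 2 * x2 - 3 * x4 - x6 == 0
            and x3 + 2 * x6 == 1
            and x5 + x6 == 2)
```

Every choice of (r, s, t) passes, confirming three free parameters' worth of solutions.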

*... Okay...*1214

*.. Okay, let's do another one, we will do (1, 2, 3, 4, 5), (1, 3, 5, 7, 11), (1, 0, -1, -2, -6)...*1221

*.. Put it through our mathematical software, we get a reduced row echelon of (1, 0, -1, -2, 0), we get (0, 1, 2, 3, 0).*1239

*And then we get (0, 0, 0, 0, and 1), okay let's take a look at this third row, (0, 0, 0, 0, 1).*1253

*This basically tells me that, so we know that, this is our solution set, and these are our X_1, X_2, X_3, X_4.*1263

*This is telling me that....*1273

*... 0 times X_1 + 0 times X_2 + 0 times X_3 + 0 times X_4 is equal to 1; well, you know 0 times anything is equal to 0, so it's telling me that 0 = 1, and that is not true.*1277

*Again, no solution for this system, and reduced row echelon let us know; so again: unique solution, infinite number of solutions, no solutions, okay.*1287

*They also call this...*1299

*... An inconsistent system...*1303
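Once an augmented matrix is in reduced row echelon form, inconsistency is mechanical to detect: look for a row whose coefficient part is all 0's but whose right-hand entry is not. A small sketch (the function name is just a choice for this example):

```python
# Detect the 0 = 1 situation in a reduced row echelon augmented matrix.
def is_inconsistent(rows):
    return any(all(x == 0 for x in row[:-1]) and row[-1] != 0
               for row in rows)

R = [[1, 0, -1, -2, 0],   # the reduced row echelon form from above
     [0, 1, 2, 3, 0],
     [0, 0, 0, 0, 1]]     # this row says 0 = 1: inconsistent
```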

*... Okay...*1311

*... Want to talk about homogenous systems a little bit, so homogenous system...*1315

*... Homogeneous systems are linear systems of course...*1330

*... where each equation...*1342

*... Is equal to 0, that's it; everything on the right hand side of the equality sign is just a 0. Homogeneous systems are very important in mathematics, they play a very important role in the theory of differential equations.*1349

*And of course the theory of differential equations is highly applicable in all fields of engineering and physics, so homogeneous systems are a huge field of research.*1360

*An example would be something like 2X + 3Y - Z = 0, X - 2Y - 2Z = 0, notice everything is 0 on the right hand side, kind of makes it a lot easier to deal with, + 3Y - Z = 0.*1370

*That's it, this is just an example of a homogeneous system; it just means that everything on the right hand side is 0.*1393

*Now you notice that the homogeneous system always has the trivial solution, which means X, Y and Z , all the variables are 0, so that's called the trivial solution.*1400

*We are not too concerned with the trivial solution, X_i = 0 for all i; X_i meaning X_1, X_2, X_3, here we have them as X, Y and Z, but if you have more than four or five, you just refer to them as X_1, X_2, X_3, X_4, X_5, X_6, like we did in the previous example.*1412

*And they all equal 0 for all i; this upside down A is a symbol which means for all, again just a little bit of formal mathematics.*1431

*Okay, now we will go ahead and we will give you a theorem, which we won't prove, but which will come in handy, notice here we have three equations and we have four unknowns.*1441

*Oh no I am sorry, I can't even count now, no it's three equations and three unknowns, so now the theorem says...*1457

*... A homogeneous system of M...*1468

*... Equations and N unknowns...*1478

*... Always has a non-trivial solution...*1490

*... if M is less than N; in other words, M is the number of equations, so if the number of equations is less than the number of variables, a homogeneous system always has a non-trivial solution; that means there is always at least one solution that is not all 0's.*1504

*X = 0, Y = 0, Z = 0 and so on; so let's repeat that: a homogeneous system of M equations and N unknowns always has a non-trivial solution.*1524

*It doesn't matter whether there are infinitely many or just one; it just says at least one exists, if M is less than N, if the number of equations is less than the number of unknowns.*1534
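To make the theorem concrete, here is a tiny made-up homogeneous system with M = 2 equations and N = 3 unknowns (a hypothetical system, not one from the lesson); since M < N, a non-trivial solution has to exist, and one is easy to exhibit.

```python
# x + y + z = 0 and x - y = 0 (a hypothetical two-equation system).
# Setting x = y = 1 forces z = -2: a solution that is not all 0's.
coefficients = [[1, 1, 1],
                [1, -1, 0]]
candidate = (1, 1, -2)

residuals = [sum(a * x for a, x in zip(row, candidate)) for row in coefficients]
nontrivial = any(x != 0 for x in candidate)
# residuals == [0, 0] and nontrivial is True
```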

*Let's do an example.*1544

*Okay, so we have (1, 1, 1, 1, 0), (1, 0, 0, 1, 0), (1, 2, 1, 0, 0).*1548

*When we subject this to Gaussian elimination, which brings us to reduced row echelon form, we end up with the following: (1, 0, 0, 1, 0), (0, 1, 0, -1, 0) and (0, 0, 1, 1, 0).*1564

*... Again we are...*1588

*This is our solution column, this is the entire matrix; so here we are talking about three equations, 1, 2, 3, and we have 1, 2, 3, 4 unknowns, and this is of course the solution set on the right side of the equality sign.*1591

*We have more unknowns than equations, and sure enough the reduced row echelon shows us that yes, we have leading entry, leading entry, leading entry.*1608

*Let's call this X, let's call this Y, let's call this Z; this one can be the free parameter, we can call the variable W, so we can set W = S.*1617

*And then solve for this, this has a solution and in fact in this particular case, because we have a free parameter, we have an infinite number of solutions, okay.*1629

*Let's go ahead and list these out explicitly, what this says is X + W = 0, this says Y - W = 0.*1641

*This says Z + W = 0; so W is our free parameter, so we will set W = S, and we will solve for Z: Z = -W, which is -S.*1658

*Y = W, which is S and X = -W, which is -S, this is our explicit solution, this is our implicit solution, you might have a favorite, it's up to you.*1676

*Personally I actually prefer the implicit form, I like to do the math myself; when I see it like this, it's perfectly fine; again, just a personal preference...*1691

*... You might as well use our erase function here, so again this is implicit, this is explicit, and there you go.*1707
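The row reduction above can be sketched in a few lines of code. This is a minimal reduced-row-echelon routine (my own sketch, not the instructor's mathematical software), run on the 3-equation, 4-unknown homogeneous system from the example; exact fractions keep the arithmetic clean.

```python
from fractions import Fraction

def rref(rows):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # scale the pivot row so the leading entry is 1
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        # clear the pivot column in every other row
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# the homogeneous system from the lecture: 3 equations, 4 unknowns (x, y, z, w)
system = [[1, 1, 1, 1, 0],
          [1, 0, 0, 1, 0],
          [1, 2, 1, 0, 0]]

# the reduced form reads: x + w = 0, y - w = 0, z + w = 0, with w free
assert rref(system) == [[1, 0, 0, 1, 0],
                        [0, 1, 0, -1, 0],
                        [0, 0, 1, 1, 0]]
```

Reading the result off row by row gives exactly the implicit solution from the lecture, with W as the free parameter.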

*Solutions of a homogeneous system, Gaussian elimination, reduced row echelon: matrices represent linear systems, and linear systems are represented by matrices.*1722

*Thank you for joining us at educator.com, we will see you next time.*1730

*Welcome back to educator.com, this is linear algebra, and today we are going to be talking about finding the inverse of a matrix.*0000

*The inverse is kind of analogous to a reciprocal as far as the real numbers are concerned, but again, matrices only behave partly like real numbers.*0009

*A lot of the things carry over, as you remember, some of the properties like commutativity of addition, distributivity, things like that.*0020

*But certain properties don't carry over, so we can certainly think about it, you know analogously, if we want to, but we definitely want to understand that matrices and real numbers are very different mathematical objects, even though they do have certain things in common.*0026

*Okay, let's go ahead and get started, so finding the inverse of a matrix, let's start off with a definition so...*0039

*...An N by N matrix...*0052

*... A is called non-singular...*0057

*... Or invertible...*0069

*... If there exists -- and remember, this reverse E, it means there exists...*0076

*... An N by N...*0083

*.... Matrix, B such that A times B = B times A = the N by N identity matrix, which remember the identity matrix is that N by N matrix, where everything on the main diagonal is a 1.*0087

*The analogy as far as real numbers are concerned, it is kind of like taking the number 2 and multiplying it by 1 half.*0107

*You get 1, or 1 half times 2, you get 1, because the 2's cancel.*0113

*Well 2 and 1 half are in some sense inverses of each other, so when you multiply them, you get the identity for the numbers, the real numbers which is 1.*0117

*Well the analogous identity for matrices is the one along the main diagonal, so this is the definition in N by N matrix, A is called non-singular or invertible.*0126

*If there exist an N by N matrix B such that this holds, A times B = B times A, gives you the identity matrix.*0136

*Okay, so B is called the inverse...*0144

*... of A. Oh, and quickly: both of these terms are used interchangeably; sometimes I am going to use non-singular, sometimes I am going to use invertible.*0156

*To be perfectly honest with you, to this day for me, non-singular always takes me a couple of seconds to remember what it actually means; so when we say non-singular, we mean that it's invertible, that means the inverse actually exists.*0165

*You are going to run across in a minute.*0179

*Matrices that are singular, which means that they are non-invertible, which means that the inverse doesn't exist; so again you are welcome to use either one, we will be using both interchangeably and eventually, I think, you will just be comfortable with both, okay...*0181

*... And as we just said, if no such matrix exists...*0200

*... Oops...*0210

*... Then A is singular...*0214

*... Or, non-invertible, I think some of the confusion comes from the fact that sometimes we use non-singular, and then invertible, and singular, non-invertible.*0221

*Okay, so let's just take an example, a nice little 2 by 2, we have (2, 3, 2, 2).*0233

*This matrix right here, and again use mathematical software, it gives you the inverse, just like this, so B happens to be (-1, 3 halves, 1, -1), so there are two matrices A and B.*0240

*Well, when we actually multiply A times B and when we multiply B times A, and remember matrix multiplication does not commute, so they are not necessarily equal, but in this case AB does equal BA, and they both happen to equal the identity matrix, which is (1, 0, 0, 1).*0258

*Again a matrix with 1's along the main diagonal, okay if...*0280
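This particular pair is easy to check by hand, or in a few lines of Python; a small sketch with exact fractions, multiplying in both orders and confirming that each product is the identity.

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 3], [2, 2]]
B = [[-1, Fraction(3, 2)], [1, -1]]   # the claimed inverse from the lecture
I2 = [[1, 0], [0, 1]]

# AB = BA = I2, so B really is the inverse of A
assert matmul(A, B) == I2
assert matmul(B, A) == I2
```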

*... A matrix has an inverse...*0291

*... The inverse is unique, again you can't have two or three or four different inverses, you only have one.*0302

*We won't prove this; it is actually a rather quick proof, but we won't worry about that, we are concerned with using this idea as opposed to proving it.*0312

*Okay, let's talk a little bit about notation...*0321

*... We want to denote...*0328

*... The inverse of A as A with the little -1 as a superscript. NB, which means nota bene, which means notice this very carefully.*0336

*This is symbolic, okay...*0353

*What that means: this A^{-1}, it's a symbol; this A^{-1} does not mean 1 over A, this doesn't work for matrices, it's not defined, this is strictly a symbol that we use.*0360

*Sure, you are used to seeing numbers like 2^{-1}, which is equivalent to 1 half, you just flip it.*0374

*That's not the same here; we use the same symbolism, but it is only symbolic, it doesn't mean take 1 and divide by a matrix.*0381

*Division by a matrix is not defined, it's not even something that we can deal with, but so bear that in mind....*0386

*... Excuse me, now let's just take a couple of properties of non-singular matrices, and again non-singular means invertible, ones that actually have an inverse...*0398

*... And recall, again, we are talking about square matrices, N by N, 2 by 2, 3 by 3, 4 by 4 and so on; we don't speak of inverses of other matrices.*0416

*Okay, property A, if I have the inverse and if I take the inverse of the inverse, I recover A, which makes sense, you take the inverse, you take the inverse again, you are back where you started, which is actually the definition of inverse.*0426

*It works in a circle, if you remember dealing with inverse functions, it works the same way.*0440

*B, if I take two matrices A and B and multiply them and then take the inverse, I can actually get the same thing if I take the inverse of B first, multiplied by the inverse of A, and notice the order here, this is very important.*0448

*Just like with the transpose, when we did A times B transpose, that's equal to B transpose times A transpose, the same thing here.*0469

*Those of you that are actually working from a book are interested in actually seeing the proof of this, I would encourage you to take a look at it, again the proof is not complicated, it's just a little tedious in the sense that you are dealing with every individual, little detail.*0479

*It's easy to follow, it's just arithmetic, but it's sort of interesting to see how something which is not very intuitive, would actually end up looking like this, so make sure that the order is correct.*0493

*We also have a B prime, which is just the same thing for multiple entries; so if I have, for example, A times B times C times D, and so on.*0505

*Inverse, well I just reverse, I'll just do it backwards, that's equal to D inverse times C inverse, B inverse, A inverse, just work your way backwards, just like the transpose.*0518

*And C, our final property: if we take a matrix and take the transpose of it, and then take the inverse of it, well, what we can do is just take the inverse first and then take the transpose.*0532

*In other words, the transpose and the inverse are switchable, okay.*0545
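All three properties are easy to spot-check numerically; here is a quick NumPy sketch on two invertible matrices whose entries are just made up for illustration.

```python
import numpy as np

A = np.array([[2.0, 3.0], [2.0, 2.0]])
B = np.array([[1.0, 2.0], [3.0, 5.0]])

# property A: (A^{-1})^{-1} = A
assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)

# property B: (AB)^{-1} = B^{-1} A^{-1} -- note the reversed order
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# property C: (A^T)^{-1} = (A^{-1})^T -- transpose and inverse are switchable
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
```

This is not a proof, of course, just a numerical sanity check, but it makes the reversed order in property B very concrete: swapping the two inverses on the right-hand side would make the assertion fail.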

*Let's see what we can do, we want to find a practical procedure for finding the inverse of any given matrix.*0553

*And here it is, it's actually very simple, it's something that we have already done, we are going to be using Gauss Jordan elimination again, we are going to be doing reduced row echelon form, except now, we are going to put two matrices next to each other.*0561

*We are going to be doing it simultaneously, and then the one on the right that we end up getting will actually be our inverse, it's really quite beautiful.*0572

*Step 1, form and don't worry if this procedure as I write it out doesn't really make much sense, when you see the example, it will be perfectly clear.*0583

*Form the N by 2N matrix...*0595

*... A augmented by the identity matrix..*0603

*... Step 2...*0614

*... Transform...*0620

*... The augmented matrix...*0625

*... To reduced row echelon form, this entire matrix here; transform the entire thing to reduced row echelon, again using mathematical software.*0631

*I can't tell you how wonderful mathematical software is; it has made life so wonderful, it's amazing...*0641

*... Finally, step 3: now you have a couple of possibilities; suppose after converting it to reduced row echelon form, you have produced...*0650

*... The following matrix, C, D, so basically after conversion, A has been converted to C, this identity matrix has been converted to D, here are the possibilities.*0664

*If C turns out to be the identity matrix itself, then D is your inverse.*0679

*Really all we have done is we have taken the original matrix, put the identity matrix next to it, and then we have done a reduced row echelon form.*0691

*Well, if the inverse actually exists, this thing becomes the identity, and the identity matrix becomes the inverse.*0699

*And B...*0711

*... If C doesn't equal the identity matrix...*0714

*... Then C has a row of 0's...*0721

*... And this implies that A inverse does not exist...*0731

*... Okay so we have formed the N by 2N matrix, we take the matrix, put the identity matrix next to it, we transform it to reduced row echelon form.*0744

*If this happens to be the identity matrix, then our matrix D is our inverse, we are done; if it's not the identity matrix, one of the rows will actually be all 0's, and that means the inverse doesn't exist; let's do some examples...*0752

*... Okay, we want to find the inverse of A, so let's do A = (1, 1, 1, 0, 2, 3, 5, 5, 1); okay, so step 1, we want to go ahead and form the augmented matrix.*0771

*We just do the whole thing here, so we go (1, 1, 1, 0, 2, 3, 5, 5, 1), and we are going to augment it with the 3 by 3 identity matrix...*0789

*... (0, 0, 1), which is just 1's along the main diagonal; let me go ahead and put brackets around this, and then we convert to reduced row echelon form; let me go down here.*0807

*We run our math software, and what we end up with is, and again reduced row echelon is unique, you get (1, 0, 0, 0, 1, 0, 0, 0, 1), and over here you will get some fractions.*0821

*13 eighths, -1 half, -1 eighth, -15 eighths, 1 half, 3 eighths, 5 fourths, you get a 0 and you get -1 fourth.*0837

*Sure enough, now we ask ourselves: is this the identity matrix? It is, 1's along the main diagonal, everything else is 0, it is 3 by 3, so the inverse exists.*0855

*Not only does the inverse exist, there is your inverse, so we have done the existence and the process itself gives us our inverse, so A inverse = well I am not going to write it out, but.*0869

*Again, that's your matrix: 13 eighths, -1 half, -1 eighth, -15 eighths, 1 half, 3 eighths, 5 fourths, 0 and -1 fourth.*0881

*That means that A, the original matrix, times this gives me the identity matrix, and this times that gives me the identity matrix; these are inverses of each other, okay.*0890
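The whole procedure, augment with the identity, row reduce, read off the right block, can be sketched as follows. This is my own implementation sketch, not the instructor's software; it uses exact fractions so the result can be compared directly with the eighths and fourths above.

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].
    Returns None if A is singular (the left block cannot become I)."""
    n = len(A)
    # step 1: form the n x 2n augmented matrix [A | I]
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    # step 2: transform to reduced row echelon form
    for col in range(n):
        pr = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pr is None:
            return None            # a pivot is missing: A is singular
        m[col], m[pr] = m[pr], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    # step 3: the right block is now A^{-1}
    return [row[n:] for row in m]

A = [[1, 1, 1], [0, 2, 3], [5, 5, 1]]
assert inverse(A) == [[Fraction(13, 8), Fraction(-1, 2), Fraction(-1, 8)],
                      [Fraction(-15, 8), Fraction(1, 2), Fraction(3, 8)],
                      [Fraction(5, 4), 0, Fraction(-1, 4)]]

# the singular example from the next exercise: no inverse exists
assert inverse([[1, 2, -3], [1, -2, 1], [5, -2, -3]]) is None
```

The assertions reproduce the lecture's two outcomes: the first matrix yields exactly the fractional inverse written out above, and the second one fails to reduce to the identity on the left, so the routine reports that no inverse exists.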

*Let's do another example...*0902

*... This time we will take A = (1, 2, -3), (1, -2, 1), (5, -2, -3); okay, I am just going to go ahead and augment it already to the right, again this is 3 by 3, so we have (1, 0, 0), (0, 1, 0), (0, 0, 1).*0906

*We will subject it to reduced row echelon; when we do that, the left block of the matrix we get is the following: (1, 0, -1, 0, 1, -1, 0, 0, 0); this...*0930

*... Is not I3, that is not the identity matrix, therefore A inverse does not exist...*0961

*... This is actually kind of amazing to think that you can just sort of pick a collection of numbers and arrange them in a square, sometimes an inverse exist for it and sometimes it doesn't, by virtue of the actual identity of the numbers.*0976

*Just as an aside, there is some really strange and beautiful mathematics going on here; so every once in a while it's nice to sort of pull back away from the computation, away from the practicality of what you are doing, and think about some of this.*0992

*This is implicitly leading to very deep fundamental truths about nature and how nature operates, and about the things which exist and the things which don't; so that's what ultimately makes mathematics beautiful, in addition to, of course, its practical value, okay.*1003

*Let's talk about linear systems and inverses, so we dealt with matrices, inverses, now let's associate it with linear systems, because again ultimately we are going to deal with linear systems.*1022

*Okay, so let's write a few things down here...*1032

*... If A is N by N, then Ax = B...*1038

*.... Is a system of...*1055

*... N equations in N unknowns...*1062

*Let's take just an example of N = 3, as opposed to doing it in its most general case; so you remember, this is the matrix and vector representation of a linear system.*1069

*We can take a matrix, multiply by the vector X, the vector of variables, which in this particular case is 3 by 1, and it's equal to a 3 by 1 vector; and a vector is just an N by 1 matrix, in other words.*1086

*We have something that actually looks like this: A_{11}, A_{12}, A_{13}, A_{21}, A_{22}, A_{23}, A_{31}, A_{32}, A_{33}, and of course these are just the entries; the first number is the row, the second number is the column.*1104

*Just as a quick review, times X_{1}, X_{2}, X_{3}, our variable vector, equals B_{1}, B_{2}, B_{3}.*1123

*This is a symbolic representation, a short-hand notation of the entire system; that's what it actually looks like when you spread it out, this is our quick way of talking about it.*1135

*Now let's do something with this to see what happens, let's rewrite it again, we have, let's write it over here, AX = B, okay.*1144

*Now we just talked about inverses; so presuming that A actually has an inverse, well, the inverse is just another N by N matrix, so we can multiply by it; so let's go ahead and multiply by A inverse on the left hand side.*1156

*And of course in an equality, anything I do to the left hand side, I have to do to the right hand side to retain the equality; so let's multiply both sides by the inverse, and I end up with something like this.*1172

*A inverse times AX = well, A inverse times B; well, properties of matrices: this times this times that is associative, so why don't I just associate these two.*1185

*I can write this as A inverse times A, put those together times X = A inverse times B, well A inverse times A, it's just the identity matrix, so it's identity matrix = A inverse times B.*1199

*And the identity matrix times something, it just is the identity matrix, it gives you that thing back, so identity matrix times X, just gives you X = A inverse times B.*1221

*Stop and take a look at that one for a second, okay, so if A is non-singular, we have discovered a way of actually finding the unique solution for this, for the variables.*1238

*It's equal to, well, if I take this and I just multiply on the left by the inverse of the original matrix, the coefficient matrix, I actually find the solution, X = A inverse times B.*1253

*So just by using the inverse and standard mathematical manipulation that we are all familiar with, we have actually come up with a way of finding a unique solution for this.*1262
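The manipulation above translates directly into code; a small NumPy sketch, with a made-up non-singular coefficient matrix and right-hand side just for illustration. (In numerical practice `np.linalg.solve` is preferred over forming the inverse explicitly, but writing the inverse out makes the algebra visible.)

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 3.0],
              [5.0, 5.0, 1.0]])   # a non-singular coefficient matrix
b = np.array([6.0, 8.0, 11.0])   # an arbitrary right-hand side

x = np.linalg.inv(A) @ b          # x = A^{-1} b

assert np.allclose(A @ x, b)      # it really solves Ax = b
assert np.allclose(x, np.linalg.solve(A, b))  # same answer, better numerics
```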

*Let's actually list this as a theorem, linear systems and inverses, so...*1276

*... If A is N by N; actually, this theorem that I am going to list is for homogeneous systems, and in a little bit we will actually talk about non-homogeneous systems, where the right hand side actually does have a vector B, not just all 0's.*1288

*Then the homogeneous system AX = 0, and remember, this is the 0 vector, I mean that all the entries are 0's, has a...*1313

*... Non- trivial solution...*1335

*... If...*1341

*... And only if A is singular, and we remember singular means non-invertible...*1348

*... For a homogeneous system; notice what we did before for AX = B: we noticed that if we have a system AX = B, if A is non-singular, meaning if it is invertible, if the inverse exists, we can use the inverse to actually find the solution X by just multiplying B on the left by that inverse matrix.*1363

*For the homogeneous system, it's actually different; for the homogeneous system, where the right hand side is all 0's, it has a non-trivial solution if and only if A is singular.*1384

*In other words, for the homogeneous system, if the inverse doesn't exist, then I can conclude that the system has a non-trivial solution; this "if and only if", we will actually see it a lot in mathematics, and as a little aside, you will see it symbolized like this.*1397

*It just means it's equivalent to; what this actually says is that the system having a non-trivial solution means that A is singular, and if A is singular, then it has a non-trivial solution; so in other words, it goes both ways.*1416

*Let's do an example; let me draw a little line here, okay...*1439

*... Let's consider the linear system, do it in matrix form: (1, 2, -3, 0), (1, -2, 1, 0), and we will do the augment here, to show that we are talking specifically about a homogeneous system, (5, -2, -3, 0), okay.*1448

*This says X + 2Y - 3Z = 0, X - 2Y + Z = 0, 5X - 2Y - 3Z = 0; okay, let's see if we can find the inverse.*1469

*When we form the augmented matrix, we have, let's do (1, 2, -3, 1, 0, 0), (1, -2, 1, 0, 1, 0), (5, -2, -3, 0, 0, 1), and then we subject it to...*1484

*... Reduced row echelon form, let me move it over here, we end up with the following (1, 0, -1, 0, 1, -1, 0, 0, 0), we don't even have to worry about the other entries, it actually is not relevant.*1509

*Simply because we notice here we don't have the identity matrix; that means that A inverse does not exist...*1528

*...Which means that it is singular, and according to our theorem, if it's singular, that means that this system does have a non-trivial solution, so again...*1541

*... Singular means non-invertible; it's singular, it's non-invertible, and that means that this has a non-trivial solution, okay, different than the other way; so this is very unusual: when everything on the right hand side is equal to 0, the matrix has to be non-invertible for there to be a non-trivial solution.*1557

*Whereas if there are a series of numbers here on this side, we need it to be non-singular; we need to be able to find an inverse in order to be able to find a solution for it, okay...*1576

*Okay, so...*1593

*Let...*1599

*Okay, so now let's talk about some theorems and something called a list of non-singular equivalences; this list is going to be very important for us, and throughout linear algebra we are going to be adding to the list, so it's actually going to get quite long.*1604

*And again this list of non-singular equivalences allows us to move back and forth between things that are equivalent, when I know something about a system, I look through this list, I can tell you something else about that system.*1621

*Let's start with a theorem first though, so we have, if A is an N by N matrix, then A is non-singular or invertible if and only if the linear system AX = B has a unique solution.*1634

*That was the thing that we did when we multiplied on the left by the inverse, so this if and only if, again, it just means that it goes in both directions.*1647

*If A is non-singular, that means that the system AX = B has a unique solution, the other way it means if AX = B has a unique solution, that means that A is non-singular.*1658

*Now you might think that it's overkill to actually state it as an equivalence, to actually explicitly say that it has to work forward as well as backward.*1671

*As it turns out, if you remember from your geometry course where you studied logic, where you did "if P, then Q", it doesn't always work the other way around.*1680

*For example, if I said "if it's raining today, then it's cloudy", that's true; but if I reversed it and said "if it's cloudy, then it's raining today"...*1689

*That isn't necessarily true; we can have a cloudy day without it raining, so it's very important, especially in mathematics, to make sure that things go both ways, or if not both ways...*1697

*We have to specify that it's only one way; that's why we list things the way that we do, that's why we write things the way we do, math is very precise.*1708

*Okay, so the list of non-singular equivalences; these are equivalences, which means that 1 is the same as 2 is the same as 3 is the same as 4.*1717

*What that means is any one of these can replace any other one of these; this doesn't mean that if this is so, then this is so, they are all equivalent, they are just different ways of representing the same thing.*1730

*If I know that A is non-singular, I also know that AX = 0, the homogeneous system, has only the trivial solution, because our theorem said that if it's singular, meaning non-invertible, then it has a non-trivial solution.*1740

*But here we are saying that A is non-singular, it's invertible; that means the homogeneous system only has the trivial solution, meaning all of the X's are equal to 0; and that A is row equivalent to the identity matrix, which is what we did before when we set up the augmented matrix.*1759

*We converted one to the other: the original matrix became the identity, and the identity became the inverse; and the system AX = B has a unique solution; so these are the first four equivalences, and in each section that we actually move forward to...*1777

*We are going to add to these equivalences, by the end of the course, you will have a whole series of equivalences for non-singularity or invertibility, so obviously invertibility, non-singularity is a profoundly important concept in linear algebra, absolutely central, okay.*1793

*Let's see what we can do here, let's do an example.*1811

*Okay, we want to know, let's go back to black...*1817

*... Does the following system...*1829

*Have a non-trivial solution? Our system is 2X - Y + 5Z = 0.*1835

*3X + 2Y - 3Z = 0, X - Y + 4Z = 0, okay so...*1854

*... Non-trivial solution means...*1873

*... Singular matrix, in other words the matrix formed by (2, -1, 5, 3, 2, -3, 1, -1, 4)...*1882

*... Should be singular, well let's check.*1897

*We go ahead and we form the augmented matrix, so we will take (2, -1, 5, 1, 0, 0), (3, 2, -3, 0, 1, 0) and (1, -1, 4, 0, 0, 1).*1900

*We will subject this to our mathematical software, and we end up with the following on the left: (1, 0, 1, 0, 1, -3, 0, 0, 0); okay, it doesn't matter what the other entries are, they don't matter, because we notice this row of 0's.*1921

*We are definitely talking about something which doesn't have an inverse, which means that it is singular, which implies that yes, there exists a non-trivial solution...*1941

*... Those of you who go on to working in science, particularly in engineering: oftentimes it's true that you are going to be interested in finding the solution to the particular equations that you are dealing with, but as it turns out, a lot of times you are going to be looking for the quality of the solution.*1962

*Sometimes the quality of the solution, not necessarily the solution itself, but what you can say about it without actually finding it...*1981

*... Will, believe it or not, give you more information than the actual solution itself; sometimes it's possible to find a solution to a problem, sometimes it isn't, but you can infer different properties of the solution without finding the solution itself.*1988

*And again, oftentimes the qualitative value is going to be more important than the solution itself, and of course a lot of this will make sense as you go on in your engineering studies.*2005

*But just to let you know sometimes it's nice to know whether something exists or not before we actually decide to find it, and in fact the history of mathematics is replete with hundreds of years going by with people looking for a solution to a particular problem, only to discover several hundred years later that the solution actually doesn't exist.*2015

*They were looking for something that didn't exist, very curious...*2033

*Okay, so now let's go ahead and solve this, so let me see what we have here...*2040

*... Let's go ahead, when we decide to actually find the solution itself, and do row reduction on the augmented matrix; so we do (2, -1, 5), and we just take the matrix of the actual linear system and subject that to reduced row echelon.*2054

*(3, 2, -3, 1, -1, 4); okay, in reduced row echelon form we get the following: we get (1, 0, 1, 0), (0, 1, -3, 0), and we get (0, 0, 0, 0).*2076

*Again when we are reducing just the system itself to reduced row echelon, this row of 0's is not a problem, here we have a leading entry, here we have a leading entry, here we have a parameter.*2101

*We can just read this off, if we say that this is X, this is Y, this is Z, what we end up with is X + Z = 0.*2115

*And we get Y - 3Z = 0; so Z is our free parameter; this is the implicit solution, we can leave it like that, or we can say Z = S, and then putting Z back in here, we can say Y = 3S, and we can put X = -S.*2127

*This is the explicit solution; S, you just put in any value you want and solve for X, Y and Z; so in this case, not only a non-trivial solution, but an infinite number of non-trivial solutions.*2154
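It's easy to confirm that every choice of the parameter S really does solve the original system; a small sketch that plugs the explicit solution back into the three equations.

```python
def residuals(x, y, z):
    """Plug (x, y, z) into the three equations; all should come out 0."""
    return (2*x - y + 5*z,
            3*x + 2*y - 3*z,
            x - y + 4*z)

# the explicit solution: x = -s, y = 3s, z = s, for any value of s
for s in (0, 1, -2, 7):
    assert residuals(-s, 3*s, s) == (0, 0, 0)
```

Each value of S gives a different non-trivial solution, which is exactly the infinite family the free parameter describes.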

*Okay...*2169

*Let's do a slightly more complex example: find all values of A...*2177

*... Such that... *2189

*... The inverse of...*2193

*... A, which we will take as (1, 1, 0, 1, 0, 0, 1, 2, A)...*2200

*... Exists; so find all values of A, that's A right here, such that the inverse of this thing actually exists, meaning it is non-singular, it is invertible.*2213

*We will say...*2226

*Inverse exists, that implies non-singular; and going back to our list of non-singular equivalences, that means that it's row equivalent to, in this case this is a 3 by 3, row equivalent to I3.*2230

*In other words, I should be able, by a series of those operations that we do to get to reduced row echelon form, to convert this to the identity matrix, the 3 by 3 identity matrix with 1's all along the main diagonal.*2256

*Let's go ahead and actually do that and see what happens, you know, using A; and here is where mathematical software absolutely comes in, it will do this entirely symbolically for you, and here is what we get.*2273

*We get (1, 1, 0, 1, 0, 0, 1, 2, a) (1, 0, 0, 0, 1, 0, 0, 0, 1) that's our augmented matrix.*2286

*When we do reduced row echelon form, it ends up being the following: (1, 0, 0, 0, 1, 0, 0, 0, 1), and on the right you end up with (0, 1, 0) here, (1, -1, 0), and you get -2 over A...*2306

*1 over A and 1 over A; this process gives us the inverse, this is A inverse right here; we have done it, converted it, it's row equivalent to this.*2330

*It's row equivalent to I3; in the process of doing that we have actually created the inverse right here, and the only thing we have to look at now is what A has to be to make this defined.*2345

*Well as it turns out, A can be absolutely anything, but A cannot be 0, so A not equal to 0...*2360

*It’s kind of curious to think you have this system, any number here will work, but the minute you put a 0 here, you have all of a sudden created a matrix where the inverse doesn't exist.*2377

*We have used the same process up here; we have run our math software, reduced row echelon form, we have come down, we have created this.*2388

*We take a look here to see what is it that makes this defined, well as long as A is not 0, these numbers are perfectly well defined.*2396
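We can sanity-check the symbolic inverse by plugging in a sample nonzero value; here A = 3 is just an arbitrary nonzero choice, and exact fractions confirm that the matrix read off the row reduction really is the inverse.

```python
from fractions import Fraction

def matmul(M, N):
    """Multiply two matrices given as lists of rows."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

a = Fraction(3)                       # any nonzero value of A works
A = [[1, 1, 0], [1, 0, 0], [1, 2, a]]
A_inv = [[0, 1, 0],
         [1, -1, 0],
         [-2 / a, 1 / a, 1 / a]]      # the inverse read off the row reduction

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(A, A_inv) == I3
assert matmul(A_inv, A) == I3
```

Setting `a = Fraction(0)` instead would raise a `ZeroDivisionError` in the `-2 / a` entry, which is exactly the lecture's point: the formula for the inverse is undefined when A = 0.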

*Okay, so that was dealing with inverses, thank you for joining us here at educator.com, let’s see you next time for linear algebra.*2405

*Welcome back to educator.com, this is linear algebra, and today we are going to be talking about determinants.*0000

*Determinants are very curious things in mathematics; obviously they play a very big role in linear algebra, but they also play a big role in other areas of mathematics.*0007

*Now, we are not necessarily going to be dealing with too many of the theoretical aspects; we are going to be more concerned with computation, using them to actually solve problems.*0020

*But it is good to know that determinants are a very deep part of mathematical research; let's go ahead and get started.*0031

*Okay let us...*0039

*Take a matrix A, and we will use our notation A_{11}, A_{12}, A_{21}, A_{22}.*0044

*Okay, 2 by 2 matrix, we define the determinant of A...*0057

*... Another symbol for the determinant is straight lines up and down; we will be using that symbol, we will be using both interchangeably: A_{11}, A_{12}, A_{21}, A_{22}.*0065

*So it depends on what it is that you are talking about; you use the lines when you want to specify the entries.*0075

*We use this more functional notation, determinant of this matrix A, when we want to simply speak about it in the abstract; so we define it as A_{11} times A_{22} - A_{12} times A_{21}.*0083

*The pattern is this times that - that times this...*0100

*... Again, A_{11} along the main diagonal: this times that - that times this, including the signs.*0109
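The 2 by 2 rule is one line of code; a sketch, with a matrix whose entries are just picked for illustration.

```python
def det2(m):
    """Determinant of a 2x2 matrix: main diagonal product minus
    anti-diagonal product, a11*a22 - a12*a21."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

assert det2([[4, -7], [2, -3]]) == 2   # -12 - (-14) = 2
assert det2([[1, 0], [0, 1]]) == 1     # the identity has determinant 1
```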

*Let's do the same for a 3 by 3 and then we will do some examples, so let's say that B is our 3 by 3.*0119

*We have B_{11}, B_{12}, B_{13}, B_{21}, B_{22}, B_{23}, B_{31}, B_{32}, B_{33}.*0130

*Linear algebra is notationally intensive; okay, now, the determinant of B is equal to, okay, I am going to write it out, and then we will talk about an actual pattern that we can use.*0149

*B_{11} times B_{22} times B_{33} + B_{12} times B_{23} times B_{31} + B_{13} times B_{21} times B_{32}...*0173

*... - B_{11} times B_{23} times B_{32} - B_{12} times B_{21} times B_{33}...*0200

*... - B_{13} times B_{22} times B_{31}.*0222

*Here is the pattern; there are different ways to think about this, and I know that many of you have of course seen determinants before, back in high school and perhaps in other areas of mathematics, perhaps in some college courses, in calculus or something like that.*0232

*Here is the general pattern: notice we have some that are +, +, + and some that are -, -, -...*0247

*... First one is B_{11}, B_{22}, B_{33}, going from top left to bottom right; multiply everything down this way.*0261

*The second entry, just move over and go down again to the right, B_{12} times B_{23}, but since you have nothing over here, just go down to this one, because you need three entries; notice each one of these has 3, 3, 3, 3, 3, 3.*0271

*You need three factors in the multiplication, so it's this times this times this.*0289

*Now you go to the next one, B_{13}; there's nothing here, but you need three of them, so you go here and here, so it's B_{13}, B_{21} times B_{32}; that takes care of the plus part.*0295

*Now let's deal with the minus part; go back to the B_{11}; well, B_{11}, now try going down to the left; well, there's nothing here at the left, but you need three terms.*0308

*It's B_{11}, B_{23}, B_{32}; go to the next one over, B_{12}, B_{21}, there's nothing here, but there is one here, B_{33}.*0317

*B_{13}, B_{22}, B_{31}; there are different kinds of patterns that you can come up with, this is simply the best pattern that I personally have come up with to work with a 3 by 3.*0332

*Again, you have probably seen determinants before, so whatever pattern you come up with is fine, I think this works out best, simply because you are going to the right, positive, you are going down the left, negative, if that makes sense.*0347

*Those have positive signs, when you are moving in this direction, you have negative signs, and of course in this, you have a 3 by 3, each term has to have three things multiplied by each other, okay.*0361

*Lets do some examples...*0372

*Let's go back to, actually you know what? I think I am going to try blue ink; let's define A as (1, 2, 3), (2, 1, 3), (3, 1, 2), okay, so let's do our pattern.*0378

*Let's see, let's go ahead and put something like that, and we will say the determinant of A, okay, 1 times 1 times 2 is 2, okay + 2 times 3 times 3.*0397

*2 times 3 is 6, 6 times 3 is 18 + 3 times 2 times 1, 6 okay.*0412

*Now, - 1 times 3 times 1, that is -3; - 2 times 2 times 2, 2 times 2 is 4, times 2 is 8, so -8; and - 3 times 1 times 3, -9.*0423

*When we add them all up, hopefully my arithmetic is correct, please check me you should get 6, so again positive this way, negative that way.*0445
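To make the diagonal pattern concrete, here is a small Python sketch of the 3 by 3 rule; this code is not from the lecture, and the helper name `det3` is my own:

```python
def det3(b):
    """3x3 determinant by the diagonal pattern: three plus-products
    going down to the right, three minus-products going down to the left."""
    return (b[0][0]*b[1][1]*b[2][2]    # + b11*b22*b33
          + b[0][1]*b[1][2]*b[2][0]    # + b12*b23*b31
          + b[0][2]*b[1][0]*b[2][1]    # + b13*b21*b32
          - b[0][0]*b[1][2]*b[2][1]    # - b11*b23*b32
          - b[0][1]*b[1][0]*b[2][2]    # - b12*b21*b33
          - b[0][2]*b[1][1]*b[2][0])   # - b13*b22*b31

A = [[1, 2, 3],
     [2, 1, 3],
     [3, 1, 2]]
print(det3(A))  # 2 + 18 + 6 - 3 - 8 - 9 = 6
```

The six comments line up one-for-one with the six terms written out above, so you can check the lecture's arithmetic term by term.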

*Let's do a 2 by 2, let's say B is equal to (4, -7, 2, -3) so now we have some negative entries, okay, the determinant of B is equal to this times this.*0460

*4 times -3 is -12, - this times this; this times this is -14, the - sign has to stay, so it's -12 - (-14), which is -12 + 14, it is equal to 2.*0479

*Okay, so these signs here, this + here, + here, + here, -, -, -, they always stay there, it doesn't matter what these numbers are,*0500

*If this is negative, then a negative times a negative is positive, but you have to have three terms with positive signs in a 3 by 3, and you have to have three terms with negative signs in a 3 by 3.*0512

*What each term works out to depends on what these numbers are, but those negative signs and these positive signs have to be there.*0522

*They are not part of the arithmetic; they are part of the definition of the determinant, okay.*0529

*Let's see here...*0538

*let's go over some properties of the determinants, just like we did properties of matrices, we will talk about some properties of determinants, so let A be an N by N matrix okay, then the determinant of the A transpose is the determinant of A.*0543

*In other words if I take A, take the transpose of it, and then I take the determinant, it's the same as the determinant of A, no change; if you have a matrix and you interchange two rows or two columns, this way or this way, of the matrix, the determinant changes sign, so it goes from positive to negative, negative to positive.*0559

*Positive to negative, negative to positive; if two rows or columns of a matrix are equal, then the determinant equals 0, it's that simple.*0579

*If a row or a column of A is entirely 0's then the determinant again is equal to 0, if a single row or a column is multiplied by a non-zero constant R, non-zero, then the determinant is multiplied by R.*0593

*The whole determinant is multiplied by R, if one row or column is just multiplied by r, let's do an example...*0609
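Each of these properties is easy to spot-check numerically. Here is a quick sketch, assuming NumPy is available; the test matrices are my own examples, not the lecture's:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 1., 3.],
              [3., 1., 2.]])

# det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# Interchanging two rows flips the sign of the determinant.
B = A[[1, 0, 2], :]                      # swap the first two rows
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))

# Two equal rows give determinant 0.
C = np.array([[1., 2., 3.], [1., 2., 3.], [3., 1., 2.]])
assert np.isclose(np.linalg.det(C), 0.0)

# A row of zeros gives determinant 0.
D = np.array([[1., 2., 3.], [0., 0., 0.], [3., 1., 2.]])
assert np.isclose(np.linalg.det(D), 0.0)

# Multiplying a single row by r multiplies the determinant by r.
E = A.copy()
E[0] *= 5
assert np.isclose(np.linalg.det(E), 5 * np.linalg.det(A))

print("all five properties check out")
```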

*... We will let A = (1, 2, 3), (1, 5, 3) and (2, 8, 6), okay, we want to find the determinant of A; in this case I am actually going to write it with this symbol, and you will see why in a minute.*0621

*I am going to rewrite it (1, 2, 3, 1, 5, 3, 2, 8, 6); in this case, using some of the properties, particularly the one where we say if we multiply by a particular constant, I am going to use something akin to the factoring out that you are used to from algebra, that's why I used this.*0641

*So I want you to see all of the entries, that's equal to... So notice this is (2, 8, 6).*0663

*You can divide this by 2, which means I can actually factor out a 2 from here, so I am going to put a 2, and I leave the other rows the same, (1, 2, 3) and (1, 5, 3), and this becomes (1, 4, 3).*0672

*I can factor out a 3 from the third column too, so I have 2, times 3, and the rows become (1, 2, 1), (1, 5, 1), (1, 4, 1), okay.*0686

*Now notice, I have a column of 1's and another column of 1's, two columns that are the same, so now the determinant is equal to 2 times 3 times the determinant of this matrix.*0705

*And when I have something where two columns are the same, the determinant is 0, so I saved myself a lot of problems, I didn't have to go through that whole strange, this diagonal, that diagonal, this entry, that entry, positive negative.*0716

*I used the properties of determinants, to actually make my life a lot easier, and I was able to find the determinants of the 3 by 3 pretty quickly, just by some standard algebraic manipulation, okay...*0731
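The factoring argument above can be replayed numerically. A hedged sketch, assuming NumPy; none of this code is from the lecture:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [1., 5., 3.],
              [2., 8., 6.]])

# Factor a 2 out of the third row...
A2 = A.copy()
A2[2] /= 2                 # rows are now (1,2,3), (1,5,3), (1,4,3)

# ...then factor a 3 out of the third column.
A3 = A2.copy()
A3[:, 2] /= 3              # rows are now (1,2,1), (1,5,1), (1,4,1)

# Columns 1 and 3 are now identical, so det(A3) = 0,
# and det(A) = 2 * 3 * det(A3) = 0, exactly as argued in the lecture.
assert np.allclose(A3[:, 0], A3[:, 2])
assert np.isclose(2 * 3 * np.linalg.det(A3), np.linalg.det(A))
assert np.isclose(np.linalg.det(A), 0.0)
```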

*... Property number 6, if a multiple of one row or column of A, is added to another row or column, then the determinant is unchanged, remember that process of elimination that we did, we are doing Gauss Jordan reduction, Gauss Jordan elimination, where we multiplied some multiple of one row and added it to another.*0750

*When you do that, you are creating an equivalent system, you remember, so the determinant doesn't change because the system is equivalent; now the seventh property, very important, if A is upper triangular, that means all entries below the main diagonal are 0, then the determinant of A is the product of the entries on the main diagonal.*0771

*An upper triangular matrix looks like, let's just say, (1, 2, 3), let's say (3, 4, 6)...*0792

*... everything, so this is the main diagonal, okay, everything below the main diagonal is 0, so notice (0, 0, 0) but there are entries on the main diagonal, some of them can be 0's but they are not all 0's, okay.*0807

*Upper triangular means the upper part, the upper right hand part is the shape of a triangle, okay; when that's the case, then the determinant is just that.*0820

*It's kind of nice, so you can actually turn it into one by a bunch of manipulations that you do to matrices, our elimination, interchanging rows, multiplying, you know, one row by a number, adding it to another row.*0831

*If you can actually change it to an upper triangular matrix, then you just multiply those entries and you have your determinant; let's do an example, okay...*0843
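The procedure just described, reduce to upper triangular while tracking sign flips, then multiply the diagonal, can be sketched in Python. This is my own helper (not from the lecture), and it uses plain division for the eliminations rather than the lecture's hand factoring:

```python
def det_by_triangularization(M):
    """Determinant via reduction to upper triangular form:
    row swaps flip the sign, adding a multiple of one row to
    another changes nothing, then multiply the diagonal."""
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    sign = 1.0
    for col in range(n):
        # Find a nonzero pivot in this column, swapping rows if needed.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return 0.0                 # a zero column means determinant 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign               # each swap flips the sign
        # Clear the entries below the pivot (determinant unchanged).
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    prod = sign
    for i in range(n):
        prod *= A[i][i]
    return prod

# The lecture's next example, (4, 3, 2), (3, -2, 5), (2, 4, 6):
print(round(det_by_triangularization([[4, 3, 2], [3, -2, 5], [2, 4, 6]]), 6))
```

Rounding the result guards against tiny floating-point residue; it should come out to -120.0, matching the hand computation that follows.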

*I have, I will go ahead and put my determinant symbol right there already, (4, 3, 2), (3, -2, 5) and (2, 4, 6); okay, I am going to go ahead and factor out a 2 from the third row right here.*0858

*That's equal to 2 times (4, 3, 2) (3, -2, 5), (1, 2, 3), okay.*0879

*I am actually going to switch this row, the third row and the first row, I am just going to switch them, and when I do that, I change the sign of the determinant, so I just put a -2 in front of that, and then I go (1, 2, 3), I will leave the second row the same, (3, -2, 5), and then (4, 3, 2); these are pretty simple things that we are doing here.*0892

*Now, I have a 1 here and I have a 3 and then the 4; I am going to multiply this first row by -3 and add it to the second row, okay...*0916

*We end up with -2 times (1, 2, 3), (0, -8, -4) (4, 3, 2), now I am going to do the same thing with this row right here.*0928

*I am going to multiply this first row by -4, added to the third row, so I end up with -2 times (1, 2, 3), (0, -8, -4), (0, -5, -10), okay.*0943

*I am going to factor out; notice here, there is an 8 and there is a 4, so I am going to go ahead and factor out a 4 from the second row, and I am going to also take out a 5 from the third row, okay.*0963

*5, that turns it into (1, 2, 3)...*0979

*... (0, -2, -1) and (0, -1, -2); oops, that was a -10, and if I divide by 5, that should give me a -2.*0986

*Okay, so all I have done is factor out...*1001

*... I am actually going to do one more thing here, I am going to switch these rows simply because I want the 1 on top of the 2, personal choice, I don't necessarily need to do it, so again when I switch a row, I change the sign, so I get rid of that negative sign 2 times 4 times 5.*1007

*(1, 2, 3), (0, -1, -2), (0, -2, -1); okay, now I am going to multiply this second row by -2 and add it to the third one, to get rid of this -2.*1026

*Okay, and when I do that, I get 2 times 4 times 5 (1, 2, 3), (0, -1, -2), (0, 0, 3), now have a...*1045

*... Upper triangular matrix, 0 entries below the main diagonal; now my determinant is equal to, 2...*1063

*... I am going to skip the parenthesis, times 4, times 5, times 1, times -1, times 3.*1074

*Just this times this times that, and now the determinant of this is just the entries along the main diagonal, and I have just a straight multiplication problem, and I should end up with -120; so properties, I have taken a matrix...*1086

*... Subjected it to a bunch of, you know properties, simplified a little bit in order to find the determinant, okay...*1102

*... A few more properties to go; number 8 is if I take two matrices and I multiply them and then take the determinant, well it's the same as just taking the determinant of the first one times the determinant of the second one, so the determinant of A times B is equal to the determinant of A times the determinant of B, reasonably straightforward there.*1116

*Now, if A is nonsingular, if it's invertible, if I have the inverse and if I take the determinant of it, it's the same as taking 1 over the determinant of the original matrix, notice this is not...*1135

*... Again we are not talking about 1 over A, 1 over A, A is a matrix, division by a matrix is not defined, but the determinant is a number, so division by a number is defined.*1150

*Let's go ahead and do an example, we will let A = (1, 2, 3, 4); we know that the determinant of A = 1 times 4, just 4, minus 2 times 3, which is 6, so 4 - 6 = -2, okay.*1162

*Now, when we use our math software to calculate the inverse of this matrix, we get the following, (-2, 1, 3 halves, -1 half); when we take the determinant of the inverse, we get -2 times -1 half.*1186

*-2, I will write this one out, times -1 half, -...*1210

*... - 1 times 3 halves; -2 times -1 half is a 1, and 1 - 3 halves = -1 half.*1219

*Well, we said that the determinant of the inverse, of course, we said that it's equal to, and we want to confirm that, 1 divided by the determinant of the original.*1233

*Well the determinant of the inverse is -1 half, is it equal to 1 over -2, yes...*1248

*... If you want to know the determinant of the inverse, instead of finding the inverse and getting the determinant, you can just find the determinant of A and take the reciprocal of it.*1260

*Again, this is not 1 over the matrix A, this 1 over the determinant of A, the determinant is a number, the matrix itself is not a number.*1267
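Both properties, det(AB) = det(A) det(B) and det(A⁻¹) = 1/det(A), can be confirmed with the lecture's own 2 by 2 example; a sketch assuming NumPy, not part of the lecture:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[4., -7.],
              [2., -3.]])

# det(AB) = det(A) * det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# det(A) = 1*4 - 2*3 = -2, so det(A^-1) should be 1/(-2) = -1/2.
A_inv = np.linalg.inv(A)
assert np.isclose(np.linalg.det(A), -2.0)
assert np.isclose(np.linalg.det(A_inv), 1 / np.linalg.det(A))
assert np.isclose(np.linalg.det(A_inv), -0.5)
```

Note that the reciprocal is taken of the determinant, a number, never of the matrix itself, exactly as the lecture stresses.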

*And that covers determinants, thank you for joining us at educator.com, linear algebra; we will see you next time, bye, bye.*1280

*Welcome back to educator.com, thank you for joining us, this is linear algebra, and today we are going to continue by discussing co-factor expansions and using the co-factor expansion of a matrix to compute determinants, and to also compute the inverse of a matrix.*0000

*Now we did talk about the inverse of a matrix last time, and this particular procedure that we are going to show you today, with co-factor expansions and with something that we define in a minute called the adjoint...*0018

*... is theoretically useful; computationally it's not something that you want to use for more than a 3 by 3 or a 4 by 4, but again for theoretical reasons, it's always nice to be introduced to it, and to see how it functions.*0026

*Let's just dive in right now...*0038

*Okay, let's start with a definition...*0043

*... Okay, as always, we will let A ...*0051

*... Be an N by N matrix...*0059

*... And we will let M, capital M*_{ij} be the N - 1 times N - 1.0063

*Sub matrix, and again these definitions are more for formal purposes once we actually do examples, anything that seems a little strange and unusual here will make a lot more sense...*0082

*.. Of A obtained...*0093

*... By deleting, the ith row, and jth column...*0102

*Now let's see, so let A be an N by N matrix, let M*_{ij} be the N - 1 by N - 1 sub matrix of A obtained by deleting the ith row and the jth column.0121

*Now the determinant of this M*_{ij}...0133

*... Is called...*0143

*... The minor...*0147

*... Of the entry...*0151

*...A*_{ij}, so for example if we had A_{32}, that would be the third row, second column; we would knock out that row and that column.0158

*We would take the determinant of what was left over, and then that's called the minor of that particular entry, and again we will do an example and it will make more sense.*0169

*One more definition, the co-factor...*0178

*... Of A*_{ij}, which we denote...0188

*... Capital A*_{ij}, so co-factor, we use the capital, minor we use the M...0201

*... Is the following: A*_{ij} equals -1, raised to the power of i + j, times the determinant of M_{ij}.0211

*Okay, don't let all these subscripts and i's and j's and -1's scare you, let's do an example and it will make a lot of sense, so let's define our matrix A over here.*0228

*We will have (3, -1 and 2), we will have (4, 5, 6), and the third row will be oops, excuse me...*0241

*... We will do (7, 1, 2), now M*_{12}, so M_{12}, this is the sub matrix we get from crossing out the first row, second column.0252

*If I knock out first row second column, I am left with (4, 6, 7 and 2), so we have (4, 6, 7 and 2).*0268

*And that's exactly what it is, first row second column; go back to the original matrix, cross out, and the numbers that you have left over, those form the sub-matrix.*0282

*In this case 2 by 2, because the original was 3 by 3, so we cut it down; remember, by the definition it is N - 1 by N - 1, a 3 by 3 becomes a 2 by 2.*0292

*Now if we take the determinant...*0302

*... Of this M*_{12}, and again the determinant of a 2 by 2 matrix is just going to be, put some parenthesis, it's just that times that - that times that, so 4 times 2 is 8, minus 6 times 7, which is 42.0307

*What we end up with is -34, so let's put that aside for a second, so we have the minor, we have the determinant of the minor, and then we have that other thing that we defined which is the co-factor.*0322

*Well the co-factor of (1, 2) is -1, and the power that you raise it to is the sum of this (1, 2) right here, so it's 1 + 2 times the determinant that we got.*0337

*M*_{12}, now -1 to the third power is -1, so it's -1 times the determinant we found already, which is -34, so our co-factor is 34.0354

*Lets go through this again, we have a matrix, we have a minor, we have the determinant of that minor, and we have something called the co-factor.*0371

*Our matrix, this is our original matrix right here, let me actually use, so this is our original matrix, we decided to take the minor, the M*_{12}, which means crossed out the first row, second column, so we crossed out the first row , the second column.0380

*What we are left with was A 2 by 2, that's our minor, it is a matrix.*0395

*When we take the determinant of that minor, we actually end up getting this number right here, so this is a number when you take a determinant , remember a determinant actually gives you back a number.*0400

*And then what we do is we find the co-factor; the co-factor (1, 2) is the determinant that we got, multiplied by -1 raised to the power of the sum of the indices, 1 + 2.*0410

*Let's do another example, go back to my, actually let me go back to blue ink here, so let's see this time let's do, let's calculate the M*_{23} minor.0428

*When we go back to our original matrix and we knock out the second row, and third column, what we end up with is another, the 2 by 2, which is (3, -1, 7 and 1).*0444

*When we take the determinant of M*_{23}, it is going to be 3 times 1 - (-1) times 7, it's going to equal 3 - (-7), which is 3 + 7, we get 10.0461

*That's our determinant, and now our co-factor A*_{23}, that's equal to -1, raised to the power of 2 + 3, the row + the column, times our determinant.0478

*I will just go ahead and put that here, okay -1 raised to the fifth power is -1, so you end up with -10.*0494

*Once again, we have our matrix, the original matrix, we knock out the second row and third column, because we are interested in the M*_{23}.0505

*And we have a matrix, we take the determinant of that matrix, and then from that we derive something called the co-factor; so it's the co-factor that's actually going to be the really important thing that we continue to deal with.*0514
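The minor and co-factor computations just walked through can be sketched as short Python helpers; the function names are my own, not the lecture's, and the indices are 1-based to match the lecture's notation:

```python
def minor(A, i, j):
    """Sub-matrix M_ij: delete row i and column j (1-based indices)."""
    return [[A[r][c] for c in range(len(A)) if c != j - 1]
            for r in range(len(A)) if r != i - 1]

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cofactor2(A, i, j):
    """Co-factor A_ij = (-1)^(i+j) * det(M_ij), for a 3x3 matrix A."""
    return (-1) ** (i + j) * det2(minor(A, i, j))

A = [[3, -1, 2],
     [4,  5, 6],
     [7,  1, 2]]

print(minor(A, 1, 2))        # [[4, 6], [7, 2]]
print(det2(minor(A, 1, 2)))  # -34
print(cofactor2(A, 1, 2))    # 34
print(cofactor2(A, 2, 3))    # -10
```

The printed values match the two worked examples above: the minor M_12, its determinant -34, the co-factor 34, and the co-factor A_23 = -10.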

*Okay, so now what we have this thing called a co-factor, as it turns out we can use it to evaluate determinants, so before what we do this, we use the properties of the determinants to manipulate the matrix.*0528

*Find as many 0's as we can, maybe factor out some numbers, simplify things as much as possible, essentially put it into upper triangular form, if you remember from the last lesson.*0541

*And then just multiply everything along the main diagonal, well this is another method of actually doing it, and again computationally it may not be as efficient, it may or it may not, it depends on the situation.*0551

*But theoretically it comes in very handy, and it will make more sense as we proceed with linear algebra, but for right now let's just go ahead and work on actually finding a determinant using this co-factor expansion....*0563

*Okay, we have a theorem, okay, A = [a*_{ij}], and remember this symbol is just a shorthand symbol for the entire matrix, all of the entries, okay.0579

*A is N by N, okay...*0597

*... Then for each i, with i greater than or equal to 1 and less than or equal to N...*0604

*The determinant of the matrix A is actually a*_{i1} times the co-factor A_{i1}...0615

*... + a*_{i2} times the co-factor A_{i2}, + and so on across the row, up to a_{in} times the co-factor A_{in}; note that the entries are small a's and the co-factors are capital A's.0624

*All this...*0645

*... And there is another version of this for the columns, but again we have a lot of indices, we have a lot of A's, you know, lowercase, uppercase.*0649

*Instead of sort of throwing at a bunch of symbolism, let's just go ahead and do an example, and it will make more sense, essentially what this says is that I can pick a particular row or column of my choice, and I can expand that matrix along that row or column.*0663

*We will see what we mean in a minute, that's all this is saying, that you can sort of go along that row; let's say you pick the first row, you can take the first row entry times the co-factor for that entry.*0679

*The second entry in the row times the co-factor for that entry, add them all together, and as it turns out, you end up getting the value of the determinant.*0692

*Let's just jump into the example.*0701

*We will let A = (1, 2, -3 and 4) (-4, 2, 1, 3) (3, 0, 0, -3), (2, 0, -2 and 3).*0704

*This is our matrix, now we take a look at this matrix and we want to make things as easy as possible for us, so we want to pick the row or the column that has the most number of 0's, because that way those terms just drop out of this sum up here.*0728

*They actually don't show up at all, so it makes our life a lot easier; so when I look at this, I see the third row has two 0's.*0744

*I am going to go ahead and expand along this row; now I am going to give you a little bit of a checkerboard pattern.*0753

*In this case we have a 4 by 4 determinant, and as it turns out, remember that -1 raised to the power of the I + J, well as it turns out, I + J is going to iterate, 2, 3, 4, 5, 6, 7, 8.*0764

*And what happens is that -1 raised to those powers becomes +1, -1, +1, -1, so instead of keeping track of all the symbolism, as far as the definitions are concerned, I am going to draw out the +, - pattern for a 4 by 4, and also a 3 by 3 for you, so that you know.*0779

*+, -, +, -, basically it's just alternating +'s and -'s all the way through, -, +, and you can never have a + next to a +, or a - next to a -, vertically or horizontally.*0798

*That's all it is, so you can do this for a 5 by 5, a 6 by 6 if you need to, for yourself; that's +, that's -, that's +, and so on, alternating every time you move over or down.*0809

*And here we have +, -, +, and you will see what this means in just a minute, -, +, -, +, -, +, so these are the things that you want to keep in mind when you do your co-factor expansion, okay.*0820

*Now we have decided to actually expand this along the third row, so here is what the expansion looks like; now, I am doing a 1, 2, 3, 4, so I am going to have four terms.*0834

*Well my first term over here is a 3, and if I go over here and if I take a look, I see that this is a positive, so that means that whatever term is there, in the end I stick a positive sign in front of it.*0849

*In this case it's going to be a positive, and I like to actually put my positives in front of my positive values; it's just a habit that I have, it's up to you; negatives, of course, you need, positives you don't necessarily need, but for me it helps keep things consistent and balanced.*0861

*3, now we said that you are going to expand it along this, and the co-factor is, if you erase that row and erase that column, what you are left with is this number, this number, this number, and so on, a 3 by 3.*0876

*Lets go ahead and...*0894

*... (2, -3, 4), (2, 1, 3), (0, -2 and 3) okay, now we move to the next one, I'll go ahead and write...*0898

*... What I actually know I won't write out, because this is 0; it's just going to be a -...*0913

*... 0, this is 0, so it's going to be a + 0; now our last term, the minus is here, so we put that there.*0921

*This is -3...*0935

*... Now we knock out that row and that column, and we are left with (1, 2, -3)...*0940

*... - (4, 2, 1)...*0953

*... (2, 0, -2), so now what we have to evaluate is this thing right here, okay.*0965

*We have done a co-factor expansion, we chose our third row, because it has a couple of 0's in it, we took this entry times...*0976

*... The co-factor of that, we took, if we were to actually write out this one, we would knock out that entry, and then that entry, and here this one we took this entry times its co-factor.*0987

*Now we have this, so now we have a 3 times the 3 by 3, couple of 0's, and a -(-3) times another 3 by 3, so now let's evaluate this, evaluate this and we will put everything together.*0999

*Go back to my black ink here, I am going to rewrite what it is that we had, so we had 3 times (2, -3, 4), (2, 1, 3), (0, -2 and 3).*1013

*We have -0, we have +0, we have -(-3) times (1, 2, -3).*1030

*Again, with linear algebra, there are a lot of minuses, pluses and numbers floating around; I like to write everything, I don't like to do it in my head, and for example I don't turn this into a +3 as I am doing it.*1041

*I wait until the end, so that I make sure that as I am going down the list, every symbol that I have is completely explicit and clear, not something that I have forgotten; (-4, 2, 1), (2, 0, -2).*1054

*Okay, so let's go ahead and expand this one; now this one, we can probably expand along this column or this row, because it has a 0 in it, it doesn't really matter which one.*1070

*Let's go, you know what I think I am going to go ahead and take this row, red, I think I am going to go ahead and expand along that row, so I have got...*1083

*... The 0 is gone, so I have a -2, now I go back to my pattern, let me rewrite my pattern here, so that I remember it for a 3 by 3, +, -, +, -, +, - and +, -, +.*1100

*Here it's going to end up being...*1115

*... -2 times, and when I knock out that row and that column, I get (2, 4, 2, 3).*1122

*And then I have this entry right here, the 3, well that's a + sign, so i put a + sign, I put the 3.*1136

*And now I eliminate the row and the column that that belongs to, and I am left with (2, -3), (2, 1).*1147

*Okay, that is going to end up equaling...*1156

*... 2 times 3 is 6, 6 - 4 times 2, that's -2, well okay so this determinant is -2, -2 times -2 is 4 times -1 is -4.*1163

*That's why I want to keep track at everything, and I don't want to do it in my head early on, I want to see every negative sign and every positive sign.*1180

*Here we have 2 times 1 is 2, - of -3 times 2, so - of -6, is 8; 3 times 8 is 24, positive; 24 plus the -4, so this equals 20.*1187

*Okay, so this thing right here, this determinant is equal to 20, now we will do this one, okay that will be our, let's call this, so we will call this number 1, that's our first determinant number 1.*1204

*And we will call this number 2, so now we are going to do the second determinant; now we take a look, and same thing, we have a 0 in here, so let's go ahead and expand along this; we look down here, it's positive.*1221

*It's going to be 2 times, well we eliminate the row and the column what that belongs to, (2, -3, 2 1), (2, -3, 2, 1), the 0 doesn't matter, but I'll go ahead and put it in anyway.*1235

*Actually you know what, we don't need it, it's not that important, and the -2 over here, it is positive, so...*1247

*... -2, I would like to write it that way, and when I eliminate that row and that column, I am left with (1, 2, -4, 2), (1, 2, -4...*1257

*... -4 , 2 okay, and when I do this multiplication, I end up with 16 - 20 = -4.*1275

*Now, these are just the two 3 by 3 determinants; notice I haven't taken care of the 3 and the - (-3), so now I am going to do that, so I have my final answer as 3...*1285

*... Times the 20, okay, now it's...*1297

*... - (-3) times -4.*1304

*And when you do all of that you end up with 48...*1313

*... Again, this last part, let's do it again: I got 20 for this determinant, I got -4 for this determinant, and I have the 3 and the - (-3) from the 4 by 4.*1319

*I brought those down, 3 times the 20, - sign is here, -3 is here and the determinant that we solved for this one is -4.*1331

*Make sure you keep track of all these; a single - sign will make you go in an entirely different direction, that's why it's really important to write everything out and do it this way, it is the best way to do it.*1341

*Be as clear and explicit as possible, don't do anything in your head along the way, okay.*1352
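The whole co-factor expansion can be written as a short recursive function. This sketch is my own (not the lecture's bookkeeping): it always expands along the first row for simplicity, whereas the lecture sensibly picks the row with the most zeros; either choice gives the same answer:

```python
def det(A):
    """Determinant by co-factor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse.
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1)**j is the +, -, +, - checkerboard sign for the first row.
        total += (-1) ** j * A[0][j] * det(M)
    return total

A = [[ 1, 2, -3,  4],
     [-4, 2,  1,  3],
     [ 3, 0,  0, -3],
     [ 2, 0, -2,  3]]
print(det(A))  # 48, matching the lecture's hand expansion
```

All arithmetic here is on integers, so the 48 comes out exactly; the recursion costs n! terms, which is why the lecture's earlier triangularization method is preferred for anything large.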

*Let’s see what else we have here...*1361

*... Let's talk about the inverse of a matrix by co-factors; so what we just did was introduce co-factors, we solved determinants by co-factors, and now we are going to see if there is a way to actually find an inverse by co-factors, and there is.*1365

*So far the way that we have been doing inverses is actually a very wonderful computational method; so if I have some matrix A, which is N by N, I actually end up forming the augmented matrix, if you remember.*1380

*Okay, let's just put that there; I ended up forming this matrix, and then I subjected this matrix to Gauss Jordan elimination, in order to end up with the reduced row echelon form, and if it actually has an inverse...*1396

*If it has an inverse, then A turns into the identity matrix, and the identity matrix turns into some matrix B; well as it turns out, that B...*1410

*... Is the inverse, a wonderful procedure, it always works; if it doesn't turn out to be an identity matrix, in other words if you get a row of 0's, an inconsistency.*1423

*Then that just means that the inverse doesn't exist, excuse me; we are throwing around all kinds of terms, identity matrix, inverse, determinants; try to keep them all straight, I will try to keep them all straight, okay.*1433

*Now let's introduce another way using co-factors, start with the definition...*1449

*... Okay, we will let A be the matrix [a*_{ij}], okay, and it is N by N of course.1457

*Well let's just write it out, N by N, then...*1470

*... Then N by N matrix, okay...*1478

*... adj A, called the adjoint...*1486

*... Of A...*1500

*...Is the matrix...*1507

*... whose i, jth entry is the co-factor...*1512

*... A*_{ji}; in symbols, it looks like this, the adjoint of A, and sometimes you will see parenthesis around it, sometimes not.1523

*A*_{11}, A_{21}, ..., A_{n1}; then A_{12}, A_{22}, ..., A_{n2}; all the way down to...1538

*... A*_{1n}, A_{2n}, ..., A_{nn}, okay, very careful here.1559

*Let's go through this very carefully; the first thing that I would like you to notice: if we start with some matrix, and we want to form the adjoint of that matrix, okay, what we do is...*1566

*Let's say for example the...*1569

*... For that entry, for the first row, first column entry of the adjoint, we actually take the co-factor of that entry of the original matrix.*1582

*But what's interesting is, notice that the entry for the first row, second column of the adjoint is the co-factor for the second row, first column.*1596

*Everything is sort of transposed, so what you end up with is that the order is reversed here; so when you are going along a matrix, and you are going along the rows and columns of that matrix.*1609

*When you form the adjoint, you are actually going to reverse the order, so we will give you an actual procedure, the method for doing this properly, so that you will always end up getting it right, okay....*1622

*... Best method...*1639

*... For forming the adjoint of A, very simple; 1...*1647

*... Form the co-factor matrix of A, in other words, the matrix of the co-factors of A, and again it will make sense when we do an example.*1657

*And then just take the transpose, that’s the best way to do it, take the transpose instead of trying to do each entry and try to remember where it goes.*1673

*Just do it for the matrix straight, and then just flip it along the main diagonal, okay let's do an example.*1682

*We have our standard matrix A, it is (3, -2, 1), (5, 6, 2), (1, 0, -3), okay.*1690

*And just to remind ourselves, I will go ahead and put +, -, +, -, +, -, +, -, +...*1706

*... Our pattern, okay, so now we are going to actually find the co-factor for this entry, this entry, this entry, for all of the entries, and then we are going to put them in into a new matrix, those entries, and then we are going to flip it.*1716

*Okay, so A*_{11}...1730

*... = well, remember the co-factor, + then when you knock out this row and that column, you are left with (6, 2, 0, -3)...*1736

*(6, 2, 0, -3); when I take the determinant of that, it's 6 times -3, which is -18, minus 0, so it's -18, that's that entry.*1747

*Now we will go to, where should I do it, I will do it, I will do it over here, okay, I know, it's okay I will do it over here.*1762

*A*_{12}, okay so this is the co-factor for the entry, first row...1778

*... Second column, so it's a - sign, right, so - and then if I knock out this row, this column, first row second column, I get (5, 2, 1, -3) left over.*1786

*(5, 2, 1, -3) left over; 5 times -3 is -15, minus 2 times 1, that's -17; - (-17), I end up with +17.*1802

*Again arithmetic is a real issue here, let me erase some of this here...*1817

*Okay, I want to actually be able to do it right underneath, so A*_{13}; so we are on the first row, third column, that entry is that, okay.1825

*It's positive; when I knock out the row and the column of that entry, I am left with (5, 6, 1, 0).*1837

*Okay, and then, 5 times 0 is 0, minus 6 times 1, so -6...*1849

*Okay, now let's move to the second row, so now we want to find A*_{21}...1858

_{21}, well, negative sign; when we come over here, the entry is a 5, we knock out that row, that column, we are left with (-2, 1, 0, -3).1866

*With the negative sign, that comes out to -6...*1886

*We want to do A*_{22}...1893

*...Positive; we are here, and when we knock out that row, that column, we are left with (3, 1, 1, -3)...*1897

*... (3, 1, 1, -3), this is -9 - that, you get a -10, okay.*1909

*Let's go to A*_{23}...1921

*... A*_{23}, actually you know what; I am going to, so that we actually have something to refer to.1927

*I am going to rewrite the original matrix here, and yeah, that's not a problem, so A, our original matrix was (3, -2, 1), (5, 6, 2), (1, 0, -3).*1937

*We want to be able to refer to it, and we said that the last thing that we are working on was the co-factor for the second row, third column.*1954

*Okay, so we go to second row third column, second row third column here, it's a negative, +, -, +, -, +, -; and then when we knock that out, and that out, we get (3, -2, 1, 0).*1966

*(3, -2, 1, 0); 3 times 0 is 0, - (-2) is + 2, and with the - sign the co-factor is -2; okay, now we will work on the third row, A*_{31}: positive, negative, positive, so it's positive; when I knock out that row, that column, I am left with (-2, 1, 6, 2), and when I solve that I get -10.1980

*I have a (3, 2), so positive, negative, positive, negative... it's a negative; and when I knock out that row, that column, I am left with (3, 1, 5, 2).*2014

*Okay, I end up with -1; and my last one, the co-factor for the third row, third column: +, -, +, so it's +.*2028

*When I knock out that column, that row, I am left with (3, -2, 5, 6), (3, -2, 5, 6), and I should end up with 28 if I had done my arithmetic right, okay so now we can put all of these numbers into a matrix.*2044

*We can form...*2064

*... We start with, let's call it the, well we don't have to call it anything, the co-factor...*2067

*... Matrix, of A, okay that's our first step, we form the co-factor, then we take the transpose, so we put all of the numbers that we got in (-18, 17, -6).*2078

*(-6, -10, -2), (-10, -1) and, this one, the last one we did, 28; and now after we do this, we want to take the transpose of this, okay.*2095

*Let's go ahead and take the transpose, and we end up with the final matrix, which looks like this: it's going to be -18, -6, -10, 17, -10, -1...*2115

*-6, -2 and 28; so I started with a matrix, and I found the co-factors for each entry, which are numbers essentially.*2137

*And I take those numbers and form a new matrix; okay, this is my co-factor matrix, and then I take the transpose of that matrix.*2152

*And I end up with...*2163

*... This is my adjoint...*2168

*It's a bit of a process, but this is how you do it, okay. Now let's use our adjoint to come up with the actual inverse.*2174
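
The cofactor-then-transpose procedure walked through above can be sketched in a few lines of Python. This is an illustration, not part of the lecture; the helper names `minor`, `det2`, `cofactor_matrix`, and `transpose` are my own, and the matrix is the lecture's 3 by 3 example.

```python
# Sketch: cofactor matrix and adjoint of A = (3, -2, 1), (5, 6, 2), (1, 0, -3).

def minor(A, i, j):
    """Delete row i and column j; return the remaining 2x2 matrix."""
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cofactor_matrix(A):
    """Entry (i, j) is (-1)^(i+j) times the determinant of the (i, j) minor."""
    return [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

A = [[3, -2, 1], [5, 6, 2], [1, 0, -3]]
C = cofactor_matrix(A)   # [[-18, 17, -6], [-6, -10, -2], [-10, -1, 28]]
adjA = transpose(C)      # the adjoint, matching the matrix derived above
```

Running this reproduces the cofactor matrix and adjoint computed by hand in the lesson.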

*We see, we will let A be n by n...*2192

*... Then...*2203

*... Yeah, let me write out the theorem and then we will work with it: A times the adjoint of A...*2208

*... Okay, equals the adjoint of A times A, which is equal to the determinant of A times the identity matrix; and the only reason the identity matrix shows up here...*2217

*Is that here we have two matrices multiplied by each other, and that gives you a matrix; remember, the determinant is a number, so in order to turn that number into a matrix, I multiply it by the identity matrix.*2231

*It just means putting that number everywhere on the main diagonal, so it's a way of converting a number to a matrix; that's the only reason it shows up here.*2241

*This theorem says that if I take the original matrix and multiply by its adjoint, or if I do it the other way, take the adjoint and multiply by the original matrix, they are both defined, because it's just n by n either way.*2250

*I end up actually with the determinant times the identity matrix; now again, since this is an equality, anything that I do to one side, I can do to the other and maintain the equality, so let's just fiddle with this a little bit.*2262

*Let's take... let's take this one and let's take that one; I just picked one of them at random, so let's just go ahead and knock this out; the equality is still retained, it doesn't really matter which.*2276

*I am going to multiply both of these by A inverse on the left.*2290

*When I do that, the equality is retained: A inverse times A times the adjoint of A equals A inverse times the determinant of A times the identity matrix, okay.*2298

*Now, A inverse times A, and remember associativity: A inverse times A is just the identity matrix, so I end up with the identity matrix times the adjoint of A is equal to...*2309

*... A inverse times the determinant of A times the identity matrix; and again, the identity matrix, as far as matrices are concerned...*2329

*It is the identity; in other words, it just acts like a 1. When we do 5 times 1, we don't necessarily put that 1 there, we just say it's 5; so for all practical purposes, we can ignore these. And now, since the determinant is a number, I can divide both sides by that number.*2341

*Remember, I can't divide by a matrix, but I can divide by a number, the determinant of A; so we can ignore this and this, they are just there.*2360

*Now, the determinants, those cancel, and I am left with: A inverse is equal to the adjoint of A divided by the determinant of A.*2371

*This is my formula; so now, if I am given a matrix and I want to find the inverse, I can do it one of two ways: I can go ahead and set up that augmented matrix and convert the matrix to the identity, if possible, and the identity matrix will turn into the inverse, if it exists.*2383

*If not, then the inverse doesn't exist. Or I can find the adjoint matrix, which again I can do very simply with mathematical software, it's not a problem.*2400

*And I can divide it, oops, I forgot to put the A, I can divide it by the determinant; well, notice something really interesting here: that determinant is a number.*2409

*Well, if the determinant is 0, we know that division by 0 is not defined; so in this particular case, if you know the determinant of a matrix is 0, that automatically tells you that the inverse doesn't exist. So now we are going to actually write that as a theorem.*2420
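
The theorem A · adj(A) = adj(A) · A = det(A) · I can be checked numerically. The sketch below (my own illustration, using the lecture's earlier 3 by 3 example; helper names are assumptions) verifies both products come out as the determinant times the identity.

```python
# Sketch: verify A * adj(A) = adj(A) * A = det(A) * I for a 3x3 example.

def minor(A, i, j):
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cof(A, i, j):
    return (-1) ** (i + j) * det2(minor(A, i, j))

def det3(A):
    # cofactor expansion along the first row
    return sum(A[0][j] * cof(A, 0, j) for j in range(3))

def adjoint(A):
    # transpose of the cofactor matrix
    return [[cof(A, j, i) for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[3, -2, 1], [5, 6, 2], [1, 0, -3]]
d = det3(A)
detI = [[d, 0, 0], [0, d, 0], [0, 0, d]]  # det(A) times the identity
assert matmul(A, adjoint(A)) == detI
assert matmul(adjoint(A), A) == detI
```

Both assertions pass, which is exactly what the theorem promises.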

*Okay, A is non-singular, and remember, non-singular meant invertible, it has an inverse...*2440

*... Well, if and only if, which means is equivalent to the determinant of A, not being 0, excuse me, so if I am given a matrix, and if I take the determinant of that matrix.*2455

*Again, with math software, however I want to do it; if the determinant ends up not being 0, I know the inverse exists.*2469

*If the determinant is 0, this formula that establishes a relationship between the inverse, the determinant and the adjoint tells me I can't divide by 0; that tells me that the inverse doesn't exist.*2476

*Okay...*2490

*Now, in the beginning of the lesson I mentioned that computationally this is probably not the best way to go about finding an inverse, but it is kind of interesting from a theoretical point of view to see that if you start with some matrix, and you end up sort of fiddling with the numbers, the square array of numbers...*2495

*And you end up with something called the adjoint; and if you divide by its determinant, it ends up being related to its inverse. This is what we look for in mathematics.*2511

*There is no reason in the world to believe that this is true; there is nothing in intuition to lead you to actually investigate that this would be true. But when you start fiddling with things, and when you start sort of following logical conclusions and seeing where particular mathematical derivations lead you...*2519

*You end up with something really extraordinarily beautiful: a matrix, its inverse, its adjoint, and its determinant are actually related.*2536

*That's very strange, and again, there is no reason it should be that way, but there it is; we have elucidated a fundamental fact about nature, a fundamental fact not about mathematics, but about how numbers behave, how collections of numbers behave in this case.*2546

*Okay, let's go ahead and add to our list of non-singular equivalences; you remember last time we had a list where we said, if A is non-singular, meaning invertible, we can draw other conclusions that are equivalent to it.*2562

*Like we said before, we are going to continue to add to that list, and the list is going to get rather long, but it's going to be a very powerful list...*2574

*... Okay, so we have, A is non-singular invertible, okay...*2587

*... That's the same as saying that if I take A and multiply by some variable matrix X, the linear system, the homogeneous system AX = 0, has only the trivial solution.*2597

*Okay, so if it is invertible, the homogeneous system AX = 0 has only the trivial solution; so just by knowing that something is actually invertible, taking a determinant and realizing that the determinant is not 0...*2609

*I can tell you something about the homogeneous solution; I have made a qualitative statement about it. I don't have to worry about trying to find it, because this tells me it's only trivial; it's not worth finding.*2623

*A is row equivalent to the identity matrix, meaning with those manipulations, exchanging rows, multiplying a row by a nonzero constant, adding a multiple of one row to another, I can convert it to the identity matrix.*2633

*AX = B has a unique solution for every B; so if I have a matrix, an n by n matrix A, and I know that the determinant is not 0...*2644

*That means the inverse exists, and I know that the non-homogeneous system actually has a unique solution, not an infinite number of them, and not no solution.*2655

*And of course the last one, which is the one we did today: being non-singular, invertible, is the same as the determinant not being equal to 0.*2667

*Okay, let's do an example, okay we want to compute the inverse of the following matrix if it exists.*2677

*Let's try, let's try blue ink, how's that?, so we have our matrix (4, 2, 2), (0, 1, 2), (1, 0, 3), so we have a 3 by 3 matrix.*2688

*First thing we want to do is find its determinant; okay, so for the determinant of A, I am going to do a co-factor expansion, and I think I am going to expand it along this column.*2705

*I could do this row, because it has a 0 in it, but I am going to go ahead and expand along this column; so again, let me write over here: +, -, +, -, +, -, +, -, +.*2722

*When I go according to this column, the first term is going to be a +, second is going to be a -, which it doesn't matter it's a 0, and it's going to be a + over here.*2733

*The determinant is equal to 4 times the determinant of what's left when I knock out: (1, 2, 0, 3)...*2742

*... - 0 + 1 times (2, 2, 1, 2).*2751

*1 times 3 is 3, minus 0; 4 times 3 is 12...*2761

*... And this one of course doesn't matter; 2 times 2 is 4, minus 2 times 1 is 2, so it is +2.*2768

*Our determinant is 14; it's not equal to 0, so the inverse exists. Now let's actually find it using our formula, okay.*2779
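
The expansion along the first column just performed can be written out directly. This sketch is my own illustration of that step; `det2` is an assumed helper for the 2 by 2 determinants.

```python
# Sketch: cofactor expansion along the first column of
# A = (4, 2, 2), (0, 1, 2), (1, 0, 3); the lecture's answer is 14.

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix (a, b, c, d): ad - bc."""
    return a * d - b * c

A = [[4, 2, 2], [0, 1, 2], [1, 0, 3]]

# Signs down the first column alternate +, -, +.
detA = (+ A[0][0] * det2(A[1][1], A[1][2], A[2][1], A[2][2])   # 4 * (1*3 - 2*0)
        - A[1][0] * det2(A[0][1], A[0][2], A[2][1], A[2][2])   # 0 * (doesn't matter)
        + A[2][0] * det2(A[0][1], A[0][2], A[1][1], A[1][2]))  # 1 * (2*2 - 2*1)

print(detA)  # 14
```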

*Our first step is, we want to form the co-factor matrix of A; we want to take A, which let me rewrite again, just to make it a little more clear what matrix we are dealing with.*2791

*(4, 2, 2), (0, 1, 2), (1, 0, 3), if we want to form the co-factor, then we want to take the transpose of that okay, so let's go ahead and do our co-factors.*2804

*A_{11}: so I go up here, A_{11}, that means I knock out this row, this column, and I am left with (1, 2, 0, 3).*2819

*The determinant of that is 1 times 3 is 3, minus 0 is 3; it's positive, so A_{11} is 3...*2831

*... A_{12}: I go up here, first row second column, I knock that out, I knock that out, and I am left with (0, 2, 1, 3), okay.*2843

*And since this is going to be a negative: 0 times 3 is 0, minus 2 times 1, so -2 is the determinant; but I have to stick a negative sign in front of it, so it ends up being +2.*2855

*A_{13}: I knock out that row, that column, and I am left with (0, 1, 1, 0); so it's 0 - 1 is -1, and it's positive, so that's equal to -1.*2870

*A_{21}: I will do one more, and then I will just write down what the others are; so now we are on A_{21}, which is second row, first column.*2887

*I knock out this column, knock out that row, and I am left with (2, 2, 0, 3): 2 times 3 is 6, minus 2 times 0 is 0, so 6 - 0 is 6.*2898

*However we are expanding, we are using this co-factor, which is a negative, so it's -6.*2911

*And then, when we continue along this fashion, we get that A_{22} will be +10, and A_{23} is equal to +2.*2919

*A_{31} is equal to +2, A_{32} is equal to -8, and A_{33}, I'll actually do this one: for 33, we are down here.*2932

*Knock out this one, knock out that one, I am left with (4, 2, 0, 1), 4 times 1 is 4, -0 is 4.*2947

*Positive, so positive 4; okay, so now I am going to arrange these, because these are the numbers that go into your matrix: it is actually going to be (3, 2, -1), (-6, 10, +2), (2, -8 and 4), the co-factor matrix.*2958

*Let's form that: so our co-factor matrix of A is equal to (3, 2, -1), (-6, 10, 2), and we have (2, -8, and 4).*2978

*Now we want to subject that to transposition, and what we end up with, we just flip the rows and the columns.*2998

*(3, -6, 2, 2, 10, -8, -1, 2 and 4).*3007

*This is our adjoint...*3019

*... Okay and now we said that our determinant of the original matrix is equal to 14, we know that the A inverse oops, well that was interesting...*3023

*... We know that A inverse is equal to the adjoint of A divided by the determinant of A; well, we just take this adjoint, which we just got.*3046

*Divide every entry in there by 14, so we get 3/14, -6/14, 2/14.*3061

*Now I want to talk to you about reduction: yes, 2/14 is 1/7, 6/14 is 3/7; it's up to you. Personally, I like to leave my numbers the way that I found them.*3072

*Reducing is fine if you want to; if you don't want to, it's perfectly acceptable too. I like the degree of consistency; I don't like 7's and 14's floating around together in my numbers.*3086

*It just helps me, so I actually don't reduce my numbers; I know most high school math teachers will probably kill me for that, but there it is.*3093

*Okay, so 2/14, 10/14, -8/14, -1/14, 2/14 and 4/14.*3103

*This final matrix is the inverse that we were looking for; again, it is the adjoint of A over the determinant of A. The determinant is not 0, so we are good.*3119

*If the determinant is 0, we don't have to bother trying to find an inverse; so it's a very nice theorem to be able to use.*3135
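
The whole worked example, cofactors, adjoint, then dividing by the determinant, can be sketched end to end. This is my own illustration (not the lecturer's code); it uses Python's exact `Fraction` type, which does reduce fractions automatically, unlike the lecturer's preference, and then checks that A times the computed inverse really is the identity.

```python
# Sketch: A^{-1} = adj(A) / det(A) for A = (4, 2, 2), (0, 1, 2), (1, 0, 3).
from fractions import Fraction

def minor(A, i, j):
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def cof(A, i, j):
    return (-1) ** (i + j) * det2(minor(A, i, j))

A = [[4, 2, 2], [0, 1, 2], [1, 0, 3]]
detA = sum(A[0][j] * cof(A, 0, j) for j in range(3))        # 14
adjA = [[cof(A, j, i) for j in range(3)] for i in range(3)]  # transpose of cofactors
Ainv = [[Fraction(adjA[i][j], detA) for j in range(3)] for i in range(3)]

# Sanity check: A * Ainv should be the identity matrix.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```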

*Okay, we are going to close off this particular lesson with a really beautiful theorem called Cramer's rule; you probably remember it from your high school course, or perhaps you didn't see it.*3142

*I imagine most of you have probably seen it, though; it offers a way to find a solution to a linear system that has the same number of unknowns as equations.*3154

*Let us write out, let me go back to my black ink actually, let...*3170

*A_{11}X_{1} + A_{12}X_{2} + ... + A_{1n}X_{n}.*3178

*A_{21}X_{1} + A_{22}X_{2} + ... + A_{2n}X_{n}, and then we will work all the way down: A_{n1}X_{1} + ... + A_{nn}X_{n}.*3190

*This is a system of... oh, let me put my solutions here: = B_{1}, = B_{2}, all the way to = B_{n}; so this is a linear system.*3212

*N equations in N unknowns, an n by n system; okay, the theorem says: if the determinant of A, and the determinant of A is just that of the coefficient matrix, in other words I take all of these coefficients...*3225

*... And then I put them in matrix form...*3245

*... If the determinant of A does not equal 0, then...*3249

*... The system has the unique solution...*3259

*... X_{1} is equal to the determinant of A_{1} over the determinant of A, X_{2} = the determinant of A_{2} over the determinant of A, etc.*3273

*... X_{3}, X_{4}; okay, and let's finish this off, we have...*3293

*... Here A_{i} is obtained...*3306

*... From A...*3314

*... By replacing the i'th column...*3319

*... Of A with the vector B, the right-hand side constants; and again, it will make sense when we actually do an example here.*3330

*We have the following system: -2X_{1} + 3X_{2} - X_{3} = 1.*3341

*X_{1} + 2X_{2} - X_{3} = 4.*3356

*And -2X_{1} - X_{2} + X_{3} = -3, okay.*3365

*Let's do our matrix A, it is just the coefficients, let me do it in red here, so we have (-2, 3, -1, 1, 2, -1)....*3375

*... (-2, -1 and +1), good; and let's do our B vector, which is just right here: (1, 4, -3), okay.*3392

*My X_{1} is going to equal... now, when I take the determinant of this, let's actually go ahead and write out what we have.*3407

*I am not going to go through the process of it, I use mathematical software and the determinant is -2, okay, so now when I take A, let me erase this here...*3416

*... A_{1}: I get A_{1} by replacing this first column with that vector, so with (1, 4, -3), this whole vector I mean.*3434

*And then (3, 2, -1), (-1, -1, 1); okay, when I take the determinant of that, I end up with -4.*3447

*Therefore, X_{1} is equal to the determinant of A_{1}, which is -4, over the determinant of A, -2, which is 2.*3461

*This is my final answer for X_{1}. A_{2}: well, it's equal to this with the second column replaced by that vector, so I end up with columns (-2, 1, -2), (1, 4, -3), (-1, -1, 1).*3472

*I subject that to the determinant, mathematical software, and I end up with -6; therefore, X_{2} = -6 over -2, which is equal to 3.*3493

*And my final one: A_{3} is equal to columns (-2, 1, -2), (3, 2, -1), and the third column is replaced with that vector.*3507

*(1, 4, -3)...*3522

*... I subject that to the determinant function and I end up with -8; therefore, X_{3} = -8 over -2, so my final answer is 4: X_{1} is 2, X_{2} is 3, X_{3} is 4.*3528

*So I form my coefficient matrix, and I take the determinants of the A_{1}, A_{2}, A_{3} matrices that I obtain by sticking the solution vector in the particular column for X_{1}, X_{2}, X_{3}, and I solve it that way.*3547
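
Cramer's rule, as applied in this example, fits in a short sketch. This is my own illustration, not the lecturer's software; the helpers `det3` and `replace_col` are assumed names, and exact fractions stand in for the "mathematical software" determinants.

```python
# Sketch: Cramer's rule for the lecture's 3x3 system A x = B.
from fractions import Fraction

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(M, j, v):
    """Return a copy of M with column j replaced by the vector v."""
    return [[v[r] if cc == j else M[r][cc] for cc in range(3)] for r in range(3)]

A = [[-2, 3, -1], [1, 2, -1], [-2, -1, 1]]
B = [1, 4, -3]

detA = det3(A)  # -2; nonzero, so Cramer's rule applies
x = [Fraction(det3(replace_col(A, j, B)), detA) for j in range(3)]

print([int(v) for v in x])  # [2, 3, 4]
```

Each `det3(replace_col(A, j, B))` is the lecture's det A₁, det A₂, det A₃ (-4, -6, -8), and dividing by det A = -2 gives the solution (2, 3, 4).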

*Thank you for joining us at educator.com, we will see you next time.*3568