Lecture Comments (1)

0 answers

Post by Manfred Berger on June 21, 2013

I've been thinking a bit about example 1. If I was to use Gram-Schmidt to expand this into a full basis of R3 and then take the image of v with respect to my new basis, the projection would be the first 2 components, correct?

Orthogonal Complements, Part II

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A 2:16
    • Four Spaces Associated With A (If A is m x n)
    • Theorem
    • Example 1
    • Null Space and Column Space
  • Projections and Applications 16:50
    • Projections and Applications
    • Projection Illustration
    • Example 1
    • Projection Illustration Review

Transcription: Orthogonal Complements, Part II

Welcome back to Educator.com and welcome back to linear algebra.0000

In our last lesson, we introduced the notion of an orthogonal complement, and this time we are going to continue talking about orthogonal complements.0004

We are going to be talking about these 4 fundamental subspaces that are actually associated with any random matrix, and we are going to talk about the relationships that exist between these spaces.0013

Then we are going to talk about something called a projection. The projection is a profoundly, profoundly important concept.0023

It shows up in almost every area of physics and engineering and mathematics in ways that you would not believe.0033

As it turns out, those of you who are engineers and physicists... one of the tools in your tool box that is going to be almost the primary tool for many years to come is going to be the idea of something called Fourier series.0042

If you have not been introduced to it yet, you will more than likely be introduced to it sometime this year... and Fourier series actually is an application of projection.0052

Essentially what you are doing is taking a function and projecting it -- how shall I say this -- inside an infinite dimensional vector space, onto the individual axes, which are the trigonometric functions.0080

Let us say, for example, I have a function 5x. I can actually project that function onto the cos(x) axis, onto the sin(x) axis, onto the cos(2x) axis, onto the sin(2x) axis, so on and so forth.0080

I can actually represent that function in terms of sine and cosine functions.0096

Now, you will learn it algebraically, but really what you are doing is you are actually doing a projection.0100

You are projecting a function onto other functions, and it is really quite extraordinary.0105

When you see it that way, the entire theory of Fourier series becomes open to you, and more than that, the entire theory of orthogonal polynomials becomes open to you. 0111
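To make this idea concrete, here is a minimal numerical sketch (assuming Python with NumPy; it is an illustration, not part of the lecture): the function f(x) = 5x mentioned above is projected onto a few of the sin(nx) "axes" over [-pi, pi], where the inner product of two functions is an integral.

```python
import numpy as np

# Project f(x) = 5x onto the sin(nx) "axes" over [-pi, pi].
# The inner product of two functions is an integral, approximated here by a Riemann sum.
x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]
f = 5 * x

def inner(g, h):
    # <g, h> = integral of g(x) * h(x) over [-pi, pi], computed numerically
    return np.sum(g * h) * dx

for n in range(1, 5):
    axis = np.sin(n * x)
    coeff = inner(f, axis) / inner(axis, axis)  # component of f along this "axis"
    print(f"component of 5x along sin({n}x): {coeff:.3f}")
```

Each printed coefficient is exactly the "dot product with an axis, divided by the squared length of the axis" recipe, applied to functions instead of column vectors.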

That is going to connect to our topic that we discuss in the next lesson, which is Eigenvectors and Eigenvalues.0120

So, linear algebra really brings together all areas of mathematics. Very, very central. Okay. Let us get started.0126

Let us see. So, let us go ahead and talk about our four fundamental vector spaces associated with a matrix a.0135

So, if a is an m by n matrix, then there are 4 spaces associated with a.0143

You actually know of all of these spaces. We have talked about them individually, now we are going to bring them together.0173

One is the null space of a, which, if you remember, is the solution space for the equation ax = 0.0178

Let me put that in parentheses here. It is the solution space, the set of all vectors x, such that the matrix a × x = 0. Just a homogeneous system.0189

Two, we have something called the row space of a. Well, if you remember, if I take the rows of a, just some random m by n matrix, they actually form a series of vectors... m vectors in RN.0200

The space that is spanned by those vectors, that is the row space.0221

Then we have the null space of a transpose. So, if I take a and just flip it along its main diagonal and then I solve this equation for this set of vectors, x such that a transpose × x = 0, I get its null space.0229

It is also a subspace, and the row space is a subspace. All of these are subspaces.0246

Oops -- it would be nice if I could actually count properly... 1, 2, 3.0251

Now, our fourth space is going to be, well, you can imagine... it is going to be the column space. 0257

Again, the column space: if I take the individual columns of the matrix, they are vectors in RM, and they form a space... the span of those vectors forms a space.0264

Now, they do not all have to be linearly independent. I can take those vectors, remember from an old discussion and I can find a basis... so the basis might be fewer vectors but they still span the same space.0278
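If you want to see all four spaces at once in software, here is a small sketch (using SymPy, with a tiny made-up matrix purely for illustration; none of this is from the lecture itself):

```python
from sympy import Matrix

# A small made-up 2x3 matrix, just to show where the four spaces live.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

null_A  = A.nullspace()      # vectors x in R^3 with A*x = 0
row_A   = A.rowspace()       # spanned by the rows of A (vectors in R^3)
col_A   = A.columnspace()    # spanned by the columns of A (vectors in R^2)
null_AT = A.T.nullspace()    # vectors y in R^2 with (A transpose)*y = 0

print(null_A, row_A, col_A, null_AT, sep="\n")
```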

Okay. So, let us start with a theorem here, which is an incredibly beautiful theorem.0290

As you can figure it out, linear algebra is full of unbelievably beautiful theorems... beautiful and very, very practical.0299

If a is m by n, then, this is kind of extraordinary, the null space of a is the orthogonal complement of the row space of a.0306

That is kind of amazing. Think about what that means for a second.0337

If I just have this rectangular array of numbers, 5 by 6 and I just throw some numbers in there... when I solve the equation ax = 0, the homogeneous system associated with that matrix, I am going to get a subspace, the null space.0344

As it turns out, if I take that matrix and turn it into reduced row echelon form, the non-zero rows form a basis for the row space. That is how we found the basis for the row space.0359

Those two subspaces, they are orthogonal to each other. That is extraordinary. There is no reason to believe why that should be the case, and yet there it is.0370

B, the counterpart of that: the null space of a transpose is the orthogonal complement of the column space.0380

It is the orthogonal complement of the column space of a.0399

So, that is the relationship. For a given matrix a, its null space and its row space are orthogonal complements.0407

If I take the transpose of a, the null space of the transpose and the column space of the original a, which ends up being the row space of a transpose because I have transposed it... those two are orthogonal complements.0416

Let us do an example and see if this... if we can make sense of some of it just by seeing some numbers here.0433

So, let us go... let us let a equal, it is going to be a big matrix here, and again we do not worry about big matrices because we have our math software.0441

1, -2, 1, 0, 2... now I am not going to go through all of the steps.0450

You know, this reduced row echelon, solving homogeneous systems, all of this stuff, I am going to give you the final results.0457

At this point, I would like to think that you are reasonably comfortable either with the computational procedure manually, or you are using mathematical software yourself. I just do this in math software, myself.0463

The remaining rows are 1, -1, 4, 1, 3... then -1, 3, 2, 1, -1... then 2, -3, 5, 1, 5... Okay.0473

So, this is our matrix a. Our task is to find the four fundamental spaces associated with this matrix and confirm the theorem.0488

So, from this random rectangular array of numbers, something really, really amazing emerges. There are spaces that are deeply, deeply interconnected.0516

So, let us see what happens here. Okay. When we take a, so the first thing we want to do is we want to find the row space of a.0527

Let us go ahead and do that. So, row space of a.0536

When I take a, and I reduce it to reduced row echelon form, I get the following matrix: 1, 0, 7, 2, 4... then 0, 1, 3, 1, 1... and I get 0's in the other 2 rows.0540

Well, what that means, basically what that tells me is that my row space, if I take these -- let me go red -- if I take that vector and that vector, they form a basis for the row space.0560

So, my row space... basis for the row space and I am going to write these as column vectors, is 1, 0, 7, 2, 4... and 0, 1, 3, 1, 1...0577

So, the dimension is 2. There you go. My row space is 2-dimensional. Okay.0600

Now, I need to find the null space. Well, I can almost guess this. The theorem says that the row space and the null space are going to be orthogonal complements.0610

Well, I know that the direct sum of some subspace and its orthogonal complement gives me the actual space itself.0620

In this case, I am talking about 1, 2, 3, 4, 5... I am talking about R5.0630

Well, if I already have a dimension 2, I know that the dimension of my orthogonal complement is going to be 3 and so I am hoping that when I actually do the calculation I end up with 3 vectors.0637

Let us see what happens. The null space, well the null space is the set of all vectors x such that a(x) = 0.0648

I solve a homogeneous system and I get my basis... I am not going to actually show this one.0662

So, my basis for null of a... I tend to symbolize it like that... is equal... set notation... I have the vector (-7, -3, 1, 0, 0).0669

I end up with (-2, -1, 0, 1, 0) and the presumption here is that you are comfortable doing this, the reduced row echelon, solving the homogeneous system, putting it into a form that you can actually read off your vectors.0686

(-4, -1, 0, 0, 1). Well, there you go. We have a dimension equals 3.0703

So, the dimension 2 + the dimension 3 = 5. The row space was a series of vectors in R5, so our dimensions match.0711
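If you would like the software to do that step for you, here is a sketch with SymPy, using the matrix written out above; the non-zero rows of the reduced form span the row space, and the null space comes out as the same three vectors listed above:

```python
from sympy import Matrix

# The 4x5 matrix a from this example.
a = Matrix([[ 1, -2, 1, 0,  2],
            [ 1, -1, 4, 1,  3],
            [-1,  3, 2, 1, -1],
            [ 2, -3, 5, 1,  5]])

print(a.rref()[0])     # reduced row echelon form of a
print(a.nullspace())   # basis vectors for the null space of a
```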

Now the question is, I need to basically check that this is the case... I need to check that each of these vectors is orthogonal to the 2 vectors that I found.0721

As it turns out, they are orthogonal. When you actually take the dot product of each of these with each of the other ones, you are going to end up with 0.0731

So, this confirms our theorem. The first part of the theorem. The row space of a and the null space of a.0739
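If you want to confirm those dot products with software, a quick check might look like this (a sketch assuming NumPy, using the basis vectors we just read off):

```python
import numpy as np

# Basis vectors found above for the row space and the null space of a.
row_basis  = np.array([[ 1,  0, 7, 2, 4],
                       [ 0,  1, 3, 1, 1]])
null_basis = np.array([[-7, -3, 1, 0, 0],
                       [-2, -1, 0, 1, 0],
                       [-4, -1, 0, 0, 1]])

# Every row-space basis vector dotted with every null-space basis vector should be 0.
print(row_basis @ null_basis.T)   # expected: a 2x3 matrix of zeros
```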

Okay. So, now let us take our column space. So, we are going to take a transpose.0748

Now, let me actually write out a transpose; I would like you to see it. The first row is (1, 1, -1, 2)... this is going to be R4, okay?0760

Now, for the column space of a, what I have done here is I have actually transposed a. I have turned the rows into columns and the columns into rows.0776

So, now the columns of a are written as rows. That is why I am doing it this way. Okay?0783

The remaining rows are -2, -1, 3, -3... then 1, 4, 2, 5... then 0, 1, 1, 1... and I have 2, 3, -1, and 5. Okay.0790

So, I have 5 vectors in R4. So, here we are talking about R4.0814

Alright. Now, when I subject this to reduced row echelon form, I am going to end up with some non-zero rows.0823

That is going to be a basis for my column space.0833

I get 1, 0, -2, 1... then 0, 1, 1, 1... and 0's everywhere else, 0, 0, 0, 0... 0, 0, 0, 0.0838

These first 2 actually form a basis for my column space.0850

So, let me write that down... the basis for my column space equals the set of vectors (1, 0, -2, 1)... I think it is always best to write them in vertical form... and (0, 1, 1, 1).0856

Not good -- we would like them to be clear... 1, 1, 1... there you go. That forms a basis for our column space.0869

Well, the dimension is 2. You know the whole space here is R4, whose dimension is 4... 4 - 2 is 2.0890

We are going to expect that our homogeneous system, our null space of a transpose, is going to be 2-dimensional. We should have 2 vectors.0897

Well, let us confirm. As it turns out, when we solve a transpose × x = the 0 vector... basis for null -- love this stuff, it is great -- a transpose equals... again, I am just going to give the final answer... 2, -1, 1, 0.0905

That is one vector, and the second vector is (-1, -1, 0, 1)... these are basis vectors for our subspace.0938

Sure enough, we end up with a dimension of 2. So, our dimensions match. Now we just need to check that any vector in here and any vector in what we just got, the column space, are orthogonal.0945

It turns out that they are. If you do the dot product of those, you are going to end up with 0.0956

So, sure enough, once again, row space a, okay? is going to be the orthogonal complement of the null space of a.0961

The column space of a is the orthogonal complement of the null space of a transpose.0982

Simply by virtue of a rectangular array of numbers, you have this relationship where these spaces are deeply interconnected.0995

It is really rather extraordinary... and their dimensions add up to the dimension of the whole space. Okay.1001
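The same kind of quick software check works for the second half of the example (again a sketch assuming NumPy, using the basis vectors found above):

```python
import numpy as np

# Basis vectors found above for the column space of a and the null space of a transpose.
col_basis     = np.array([[ 1,  0, -2, 1],
                          [ 0,  1,  1, 1]])
null_AT_basis = np.array([[ 2, -1,  1, 0],
                          [-1, -1,  0, 1]])

# Every column-space basis vector dotted with every null(a transpose) basis vector should be 0.
print(col_basis @ null_AT_basis.T)   # expected: a 2x2 matrix of zeros
```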

So, let us talk about projections and some applications. So, projections are very, very, very important.1011

Recall, if you will, that if w is a subspace of RN (I am just going to write ss for subspace), then w direct sum w perp is equal to RN.1019

We sort of have been hammering that point: take some subspace and its orthogonal complement, and the dimensions add up to n.1042

When you add them you actually get this space, RN. Okay.1048

That was one of the proofs that we discussed... one of the theorems that we had in the last lesson.1053

Now, we did not go through a proof of that, and I certainly urge you, with each of these theorems, to at least look at the proofs, because a lot of the proofs are constructive in nature and they will give you a clue as to why things are the way that they are.1059

So, in the proof of the theorem, it is shown that if w, if that subspace, has an orthonormal basis... remember, in an orthonormal basis all of the vectors are of length 1, and they are all mutually orthogonal.1070

We had that Gram-Schmidt orthonormalization process, where we take a basis and we can actually turn it into an orthonormal basis by first making it orthogonal and then dividing by the norms of each of those vectors.1104
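As a reminder of what that process does, here is a minimal sketch of the standard Gram-Schmidt recipe (assuming NumPy; the vectors at the end are made up purely for illustration):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list."""
    ortho = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in ortho:
            w = w - np.dot(w, u) * u         # remove the component along each earlier vector
        ortho.append(w / np.linalg.norm(w))  # then divide by the norm to get length 1
    return ortho

# Example: an ordinary basis of a 2-dimensional subspace of R^3, made orthonormal.
print(gram_schmidt([[1, 1, 0], [1, 0, 1]]))
```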

So, if it has an orthonormal basis, let us say w1, w2, all the way to wk, we do not know how many dimensions it is.1116

If v is any vector in RN, then there exist unique vectors w from the subspace w and u from the subspace w perp, such that the vector v can be written as w + u.1132

Well, we know this already. Essentially what we are saying is that if we take any vector in RN, I can represent it uniquely as some vector from the subspace w + some vector in its orthogonal complement.1171

Okay. Here is the interesting part. Also... we will write it as an also... this particular w, let me actually circle it in blue, there is a way to find it.1185

Here is how we find it... w = (v · w1)w1 + (v · w2)w2 + ... + (v · wk)wk, with as many terms as there are vectors in the basis.1196

This is called the projection. This is called the projection of v onto the vector space w.1230

It is symbolized as proj... as a subscript we write the w... that is the subspace w... and this is the vector v. Okay.1250
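Written as a small helper function, the formula looks like this (a sketch assuming NumPy; the name proj is just illustrative):

```python
import numpy as np

def proj(v, ortho_basis):
    """proj_W(v) = (v . w1) w1 + (v . w2) w2 + ... + (v . wk) wk,
    where w1, ..., wk is an orthonormal basis of the subspace W."""
    v = np.array(v, dtype=float)
    result = np.zeros_like(v)
    for w in ortho_basis:
        w = np.array(w, dtype=float)
        result += np.dot(v, w) * w   # component of v along this basis vector
    return result
```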

We definitely need to investigate what it is that this looks like. When we do a projection -- let me draw this out so that you see what this looks like.1262

So, we are going to be working in R3. So, let me draw a plane here.1270

Let me draw a vector... this is going to be our w vector, then this is going to be our u vector.1287

Let me make v... let me make it blue. So, v, once again, let us remind ourselves... v is any vector; in this particular case, it will just be a vector in RN.1300

You know what, since we are dealing in 3, let me be specific. R3.1315

w is our vector in the subspace, which is a 2-dimensional subspace, so this plane here... that is the subspace w, and u is in the subspace w perp.1323

So, here is what we are doing. Well, we said that this particular... so v can be written as something from w + something from w perp, because we know that RN is equal to the direct sum of w and w perp.1347

So some vector from w, some vector from here, so that is a vector in w, that is a vector in w perp.1368

Well, when we add them together, we get v. This is just standard vector addition.1374

Here is what is really interesting. If we have a basis for this subspace, if we actually project v, project means shine a light on v so that you have a shadow, a shadow of v on this subspace... that is what the projection means.1380

That is where you get that v · w1 × w1 + v · w2... when you do that, what we just wrote down for the projection, you actually end up finding w.1397

Okay. Now, we had also written since v is equal to w + u... as it turns out if I wanted to find u, well, just move that over.1412

u equals v - w. That is it. This is really, really great. So, let us do a problem and I think all of this will start to make sense.1425

So, let us go... example here... we will let w be a subspace of R3 with an orthonormal basis... I often just write ortho for orthonormal basis.1437

Again, orthonormal bases, they tend to have fractions in them because they are of length 1.1459

We do not always want to use (0,0,1), we want something to be reasonably exciting.1465

Let us go... oops -- these lines are making me crazy -- 2/sqrt(14), 3/sqrt(14), 1/sqrt(14)... one vector.1473

The other vector is 1/sqrt(2),0,-1/sqrt(2)... so these are orthonormal.1493

It is an orthonormal basis for a subspace of R3; there are 2 vectors in it, so our subspace w has dimension 2, which means that our orthogonal complement w perp has dimension 1. 2 + 1 has to equal 3.1501

Okay. We will also let v, a vector in R3, equal some random (4, 2, 7)... and here is what we want to find.1516

We want to find the projection of v onto w, and the vector... and we want to find -- this has to stop, why does this keep happening?1529

So, we want to find the projection of v into this subspace, and we want to find the vector u that is orthogonal to every vector in w.1549

In other words, we want to find the piece of v that lives in w perp. Okay. So, how can we do that? Switch the page here...1572

Well, we know from our formula, from our theorem, that our w is equal to the projection of v onto w, which is what we wanted.1584

That is going to equal v · w1, one of the vectors in the basis × one of the vectors in the basis... plus v · w2, the second vector in the basis × that vector in the basis.1597

Again, I am going to let you work these out. So take v · w1, you are going to get a scalar, multiply it by w1, it is going to give you a vector.1617

You are going to add to that v · w2, which is a scalar × w2, which is a vector. When you add two vectors together you are going to get a vector.1626

So, you will end up with something like this: 21/sqrt(14) × (2/sqrt(14), 3/sqrt(14), 1/sqrt(14)) + (-3/sqrt(2)) × (1/sqrt(2), 0, -1/sqrt(2)). That is what this is.1634

Then when you put those together, you are going to end up with 21/14, 63/14, 42/14, if I have done my arithmetic correctly.1666

So, the projection of v onto the subspace w is this vector right here.1684

That is what that means. If I take v and if I take the shadow of v on that subspace, I am going to get a vector.1692

It is nice. Okay. Now, well we know that v is equal to w, this is w by the way. That is the projection.1701

plus u, well we have v, we just found w, so now we want to find u.1714

Well, u is just equal to v - w, when I take v - w, that is equal to, well, (4,2,7) - what I just found... 21/14, 63/14, 42/14.1720

I am going to get (35/14, -35/14, 56/14). So, my vector v in R3 is a sum of that vector + that vector.1746
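Those numbers are easy to reproduce in software; here is a small check (a sketch assuming NumPy, just redoing the arithmetic above):

```python
import numpy as np

w1 = np.array([2, 3, 1]) / np.sqrt(14)   # first basis vector of W
w2 = np.array([1, 0, -1]) / np.sqrt(2)   # second basis vector of W
v  = np.array([4, 2, 7], dtype=float)

w = np.dot(v, w1) * w1 + np.dot(v, w2) * w2   # the projection of v onto W
u = v - w                                     # u = v - w, as in the lecture

print(w)   # approximately [1.5  4.5  3. ], i.e. (21/14, 63/14, 42/14)
print(u)   # approximately [2.5 -2.5  4. ], i.e. (35/14, -35/14, 56/14)
```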

It is pretty extraordinary, yeah? Again, this idea of a projection. All you are doing is taking a random vector and, onto another space, you are just shining the light... you are just taking the shadow.1772

The shadow means you are taking the perpendicular... you are dropping a perpendicular from the end of that vector onto there, and this vector that you get, whatever it is, that is the projection.1787

That is all the projection means. Perpendicular.1800

As we know, the perpendicular from a point down to something else is the shortest distance from that object to that something else.1805

So, let us draw the picture one more time, so that we are clear about what it is we are doing.1815

We had w, we had u, I will put v here, that is v, this is -- oops, we wanted this in red.1825

This is u, this is w, vector v is equal to w + u, u is equal to the vector v - the vector w. That is what this picture says.1841

So, the distance from v, from the vector v to the subspace w... this is a capital W... is, well, the distance from v to the subspace w, the perpendicular distance, well it is equal to the norm of u.1863

Well, the norm of u equals the norm of the projection of v onto the subspace w, which is equal to the norm of v - ... no, I am sorry, that is not correct, getting a little ahead of myself here.1898

Vector u, the norm of u is the norm of this thing, which is v - the projection of v onto the subspace w.1923

In 3-space, it makes sense because you are used to seeing 3-space and distance.1938

Well, the distance from this point to this point is just the distance of the vector u, which you can calculate here. That is just the norm, and you know you found w by that formula that we just used which is the projection of v onto the subspace w.1944
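For the example we just did, that distance comes out numerically like this (a tiny check, again assuming NumPy):

```python
import numpy as np

# Distance from v = (4, 2, 7) to the subspace W: the norm of u = v - proj_W(v).
u = np.array([35, -35, 56]) / 14
print(np.linalg.norm(u))   # approximately 5.34
```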

This is the subspace w, so we project it on here, we get w. Here is what may not make sense: what if you are dealing with a 14-dimensional space?1960

Let us say that your vector is in R14, and you project it onto a subspace which is, say, 3-dimensional.1970

How do you talk about a distance in that case? Well, again, distance is just an algebraic property.1977

So, in some sense, you have this distance of a vector in a 14-dimensional space to its 3-dimensional subspace.1982

There is a distance "defined," and that distance is precisely the projection of that 14-dimensional vector, of that vector in 14-dimensional space onto the 3-dimensional subspace.1994

Again, this is the power of mathematics. We are not limited by reality. We are not limited by our senses. We are, in fact, not limited at all as long as the math supports it. As long as the algebra is correct. We are not limited by time or space.2007

Okay. Thank you very much for joining us here at Educator.com to finish up our discussion of orthogonal complements. We will see you next time.2023