Lecture Comments (10)

7 answers

Last reply by: Professor Hovasapian
Wed May 1, 2013 4:41 AM

Post by Matt C on April 28, 2013

Professor Hovasapian
Sorry to spring all these questions on you in one week, but Thursday is my final. My professor gave me a matrix and said that it is diagonalizable. I went through all the steps and I cannot get it to diagonalize. I have the matrix A = [[2,2,-2], [-1,1,2], [0,1,1]]; I write all matrices in column form. He claims that it is diagonalizable, but I have spent a long time trying to figure this out. Lambda = R.

det(A-RI) = -(R-2)(-1+R)^2. R=2, R=1.

(A-1*I)x=0 and I get [[1,2,-2], [-1,0,2], [0,1,0]], which I then subject to rref: [[1,0,0], [0,1,0], [.5,.5,0]]. I only have 1 free variable, which means the basis has only 1 vector, which is less than k. Is there a way you can quickly check whether this matrix is diagonalizable? Like I said, I have spent hours on this and I am getting nowhere.
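A quick way to machine-check diagonalizability is to compare each eigenvalue's algebraic and geometric multiplicity; here is a minimal sketch, assuming SymPy (the library choice is mine, not from the thread):

```python
# Sketch: a quick diagonalizability check with SymPy.
from sympy import Matrix

# The post writes matrices in column form, so entered row by row the entries are transposed
# (this does not affect eigenvalues or diagonalizability).
A = Matrix([[ 2, -1, 0],
            [ 2,  1, 1],
            [-2,  2, 1]])

print(A.charpoly().as_expr())   # characteristic polynomial
print(A.eigenvects())           # (eigenvalue, algebraic multiplicity, eigenspace basis)
print(A.is_diagonalizable())    # True only if every geometric multiplicity matches the algebraic one
```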

1 answer

Last reply by: Professor Hovasapian
Mon Feb 25, 2013 3:03 AM

Post by Tach M on February 24, 2013

If the characteristic polynomial of a matrix M has real and distinct roots, does that mean M is similar to a diagonal matrix?

Diagonalization of Symmetric Matrices

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Diagonalization of Symmetric Matrices 1:15
    • Diagonalization of Symmetric Matrices
    • Theorem 1
    • Theorem 2
    • Example 1
    • Definition 1
    • Example 2
    • Theorem 3
    • Theorem 4
    • Example 3

Transcription: Diagonalization of Symmetric Matrices

Welcome back to Educator.com, welcome back to linear algebra.0000

In our previous lesson, we discussed Eigenvalues and Eigenvectors, and we talked about the diagonalization process: once we find the specific Eigenvalues from the characteristic polynomial -- the determinant set equal to 0 -- we put those Eigenvalues back into the arrangement of the matrix.0004

Then we solve that matrix in order to find the particular Eigenvectors for that Eigenvalue, and the space that is spanned by the Eigenvectors is called an Eigenspace.0026

In the previous lessons, we dealt with some random matrices... they were not particularly special in any sense.0042

Today, we are going to tighten up just a little bit, we are going to continue to talk about Eigenvalues and Eigenvectors, but we are going to talk about the diagonalization of symmetric matrices.0048

As it turns out, symmetric matrices turn up all over the place in science and mathematics, so, let us jump in.0057

We will start with a - you know - recollection of what it is that symmetric matrices are. Then we will start with our definitions and theorems and continue on like we always do.0065

Let us see here. Okay. Let us try a blue ink today. So, recall that a matrix is symmetric if a = a transpose.0074

So, a symmetric matrix... is when a is equal to a transpose, or when the a transpose is equal to a.0089

So, it essentially means that everything that is on the off diagonals is reflected along the main diagonal as if that is a mirror.0108

Just a quick little example: say matrix a is the 2 by 2 with rows (1,2) and (2,3). If I were to transpose it, which means flip it along its main diagonal, I get (1,2), (2,3) back again... a transpose is the same thing as a, so this is a symmetric matrix.0118

Okay. Now, we will start off with a very, very interesting theorem. Recall that we can take a matrix, set up the Eigenvalue equation with its Λs, form the characteristic polynomial, and solve that polynomial for its roots.0145

The real roots of that equation are going to be the Eigenvalues of this particular matrix. Well, as it turns out, all the roots of what we call f(Λ), the characteristic polynomial of a symmetric matrix, are real numbers.0164

So, as it turns out, if our matrix happens to be symmetric, we know automatically from this theorem that all of the roots are going to be real.0195

So, there is always going to be a real Eigenvalue. Now, we will throw out another theorem, which will help us. 0205

If a is a symmetric matrix, then Eigenvectors belonging to distinct Eigenvalues -- because, you know, sometimes Eigenvalues can repeat -- are orthogonal.0222

That is interesting... and orthogonal, as you remember, means the dot product is equal to 0, or perpendicular.0250

Okay. Once again, if a is a symmetric matrix, then the Eigenvectors belonging to distinct Eigenvalues are orthogonal.0263

Let us say we have a particular matrix, a 2 by 2 and let us say the Eigenvalues that I get are 3 and -4. Well, when I calculate the Eigenvectors for 3 and -4, as it turns out, those vectors that I get will be orthogonal. Their dot product will always equal 0.0270

So, let us do a quick example of this. We will let a equal (1,0,0), (0,1,1), (0,1,1), and if you take a quick look at it, you will realize that this is a symmetric matrix. Look along the main diagonal.0288

If I flip it along the main diagonal, as if that is a mirror, (0,0), (0,0), (1,1).0310

When I subject this to mathematical software... again, when you are first dealing with Eigenvectors and Eigenvalues, I imagine your professor or teacher is going to have you work by hand, simply to get you used to working with the equation.0319

Just to give you an idea of what it is that you are working with -- some mathematical object. But once you are reasonably familiar, you are going to be using mathematical software to extract these Eigenvalues and Eigenvectors, because sometimes the process just takes too long otherwise.0330
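As a concrete illustration of that software step, here is a minimal sketch assuming NumPy (any comparable tool would do):

```python
# Sketch: extracting the Eigenvalues and Eigenvectors of the symmetric example with NumPy.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# eigh is intended for symmetric matrices: it returns real eigenvalues in ascending order
# and orthonormal eigenvectors as the columns of V.
eigenvalues, V = np.linalg.eigh(A)
print(eigenvalues)              # approximately [0., 1., 2.]
print(np.round(V.T @ V, 10))    # the identity matrix: the eigenvectors are mutually orthogonal
```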

So, what we get is... well, Λ1, -- let me start over here -- the first Eigenvalue is equal to 1, and that yields the Eigenvector (1,0,0).0344

Λ2, the second Eigenvalue, is 0 -- 0 is a real value -- and it yields the Eigenvector... Eigenvalue, Eigenvector, Eigenspace, yeah... I know. Okay.0357

That gives me the vector (0,-1,1)... Λ3, the third Eigenvalue is 2 for this matrix, and it yields the Eigenvector (0,1,1).0372

If you were to check the dot product of this and this, this and this, this and this, the mutual dot products, they all equal 0. So, as it turns out, this theorem is confirmed.0385

The Eigenvectors corresponding to distinct Eigenvalues are mutually orthogonal. Okay.0396

Now, let us move on to another definition. Okay. A non-singular matrix a -- remember, non-singular means invertible, so it has an inverse -- is called orthogonal...0405

We use the word orthogonal in 2 different ways. We apply it to two vectors when their dot product is 0, but in this case we call a matrix orthogonal if the inverse of the matrix happens to equal the transpose of the matrix.0435

An equivalent statement to that... I will put equivalent... is that a transpose × a is equal to the identity matrix.0452

Well, just look at what happens here. This says a inverse is equal to a transpose. If I multiply both sides by the matrix a on the right, I get a transpose a -- that is this one -- and a inverse a, which is just the identity matrix, so these are two equivalent statements.0465

I personally prefer this definition right here. So, a non-singular matrix is called orthogonal, so it is an orthogonal matrix if the inverse and the transpose happen to be the same thing. That is a very, very special kind of matrix.0481

So, let us do a quick example of this. If I take the Eigenvectors that I got from the example that I just did, so the Eigenvectors that I just got were (1,0,0), (0,-1,1), and (0,1,1), okay? These are for the respective Eigenvalues 1, 0, 2.0496

First thing I am going to do, I am actually going to normalize these. So normalization, it just means taking them and dividing by the length of the vector. So, this vector actually... let me use red here for normalization.0524

This one stays (1,0,0), so let me put... normalized... for this one, the length is the square root of (-1)^2 + 1^2, which is sqrt(2), so it becomes (0, -1/sqrt(2), 1/sqrt(2)).0536

This one is the same thing. We have (0, 1/sqrt(2), 1/sqrt(2)). Now, if I take these vectors and set them up as columns in a matrix -- this is just something random that I did; I happened to have these available -- let us call this p.0561

p is equal to the matrix (1,0,0), (0,-1/sqrt(2),1/sqrt(2)), (0,1/sqrt(2),1/sqrt(2)).0578

This matrix p, if I were to calculate its inverse, and if I were to calculate its transpose, they are the same.0593

p inverse equals p transpose. This is an orthogonal matrix.0602

So again, we are using orthogonal in two different ways; they are related, but not the same. We call two vectors orthogonal when their dot product is 0, and we call a matrix orthogonal if its inverse and its transpose are the same thing.0610
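A minimal numerical check of that claim, assuming NumPy, with the columns of p being the normalized eigenvectors from above:

```python
# Sketch: checking that p inverse equals p transpose.
import numpy as np

s = 1.0 / np.sqrt(2.0)
P = np.array([[1.0, 0.0, 0.0],
              [0.0, -s,   s],
              [0.0,  s,   s]])

print(np.allclose(np.linalg.inv(P), P.T))   # True: P^{-1} equals P^T
print(np.allclose(P.T @ P, np.eye(3)))      # True: the equivalent statement P^T P = I
```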

Now, let us go back to blue ink here, and state another theorem.0629

An n by n matrix is orthogonal if, and only if, the columns (or rows -- so I will put rows in parentheses) form an orthonormal set of vectors in Rn.0640

Okay. An n by n matrix is orthogonal if and only if the columns form an orthonormal set of vectors in RN.0679

So, if I have a matrix, let us just take the columns... column 1 is a vector, column 2 is a vector, column 3 is a vector. If the length of those three vectors is 1 -- that is the normal part -- and if they are mutually orthogonal, then the columns form an orthonormal set. Well, that is exactly what we did right here with these columns when we normalized them.0687

So, by normalizing, we made the lengths 1, and these columns are mutually orthogonal, so this is an orthogonal matrix -- even if we did not know it already from finding the inverse and the transpose.0712

If I just happen to look at this and realize that, whoa, these are all normalized and they are mutually orthogonal, then I can automatically say that this is an orthogonal matrix, and I would not have to calculate anything. That is what this theorem is used for.0723

Okay, so now let us talk about a very, very, very important theorem. Certainly one of the top 5 in this entire course.0737

It is quite an extraordinary theorem when you see the statement of it and when we talk about it a little bit. Let me do it in red here.0745

So -- excuse me -- if a is a symmetric n by n matrix, then there exists an orthogonal matrix p such that p inverse × a × p is equal to a diagonal matrix d, with the Eigenvalues of a along the main diagonal.0751

Okay, so not only is a symmetric matrix always diagonalizable, but I can actually diagonalize it with a matrix that is orthogonal -- where the columns (and the rows) have length 1 and are mutually orthogonal, their dot products equal to 0.0834

That is really, really extraordinary, so let us state this again. If a is a symmetric n by n matrix, then there exists an orthogonal matrix p such that p inverse × a × p gives me some diagonal matrix.0851

The entries along the main diagonal are precisely the Eigenvalues of a. That is what this equation tells me, that there is this relationship.0866

If I have a matrix a, I can take the Eigenvalues of a and line them up along the main diagonal, and I can find a matrix p such that when I sandwich a between p inverse and p, the multiplication of those three matrices actually produces that diagonal matrix. That is extraordinary, absolutely extraordinary.0875
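To see the theorem in action beyond the lecture's examples, here is a minimal sketch, assuming NumPy and a randomly generated symmetric matrix:

```python
# Sketch: orthogonally diagonalizing a random symmetric matrix.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetrize: A is now a symmetric 4 by 4 matrix

eigenvalues, P = np.linalg.eigh(A)     # columns of P are orthonormal eigenvectors
print(np.allclose(np.linalg.inv(P), P.T))                            # True: P is orthogonal
print(np.allclose(np.linalg.inv(P) @ A @ P, np.diag(eigenvalues)))   # True: P^{-1} A P is diagonal
```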

So, let us see what happens when we are faced with an Eigenvalue which is repeated.0900

Remember, sometimes your characteristic polynomial can have repeated roots... say you have a 3 by 3 and you get Eigenvalues (1,1,2); well, the 1 has a multiplicity of 2, because it shows up twice.0905

Okay, let us see how we deal with that. Let us go back to a blue ink here... oops.0920

If we are faced with an Eigenvalue of multiplicity k, then we find a basis for the null space associated with this Eigenvalue -- in other words, a basis for the Eigenspace, the Eigenvectors; that is all this means, because that is what you are doing... you put the Eigenvalue back into that equation, you solve the homogeneous system, and you get a basis for the null space, which consists of the Eigenvectors associated with this Eigenvalue.0936

We use the Gram Schmidt ortho-normalization process to create an orthonormal basis for that Eigenspace.1012

So if I have an Eigenvalue which repeats itself, and once I find a basis for that Eigenspace, for that particular Eigenvalue, I can ortho-normalize and actually create vectors that are, well, orthonormal, and that will be my one set. Then I move on to my next Eigenvalue.1049

If my matrix is symmetric, I am guaranteed that the distinct Eigenvalues will give me Eigenvectors that are mutually orthogonal.1068

Let us do a problem, and I think everything will fall into place very, very nicely.1080

So, example... we will let a = (0,2,2), (2,0,2), and (2,2,0)... 2... 2... 0...1087

Let us confirm that this is symmetric. Yes. 2, 2, 2, 2, 2, 2, absolutely. The main diagonal is the mirror; if you flip it, you end up with the same thing.1103

Okay. Let us do the characteristic polynomial -- let us actually do this one in a little bit of detail. It equals the determinant of (ΛI - a): Λ - 0, -2, -2 across the first row; -2, Λ - 0, -2 across the second; -2, -2, Λ - 0 across the third... Λ's along the diagonal and negatives everywhere else.1114

We want the determinant of this. When we take the determinant of this, we end up with the following, in factored form... (Λ + 2)^2 × (Λ - 4).1139
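A quick cross-check of that factorization, as a minimal sketch assuming SymPy:

```python
# Sketch: the characteristic polynomial of the worked example.
from sympy import Matrix, eye, symbols, factor

lam = symbols('lambda')
A = Matrix([[0, 2, 2],
            [2, 0, 2],
            [2, 2, 0]])

print(factor((lam * eye(3) - A).det()))   # (lambda - 4)*(lambda + 2)**2
print(A.eigenvects())                     # eigenvalues with multiplicities and eigenspace bases
```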

So, I have solved for this polynomial and I have turned it into something factored. So, I get -- let me put it over here -- Λ1 = 2... -2, I am sorry.1149

Λ2 is also equal to -2; that is what the exponent 2 here means. Okay. That means this Eigenvalue Λ = -2 has a multiplicity of 2 -- it shows up twice.1160

Of course, our third Λ, the third Eigenvalue, is going to equal 4. So, now let us go ahead and solve this homogeneous system.1170

Well, I take -2, I stick it into here, and I solve the homogeneous system. So, I end up with the following.1182

I end up -- let me actually write... let me do this... no, it is okay -- so for Λ = -2, we get the following system... we get -2, -2, -2, 0.1193

It is this thing, and then the 0's over here, -2, -2, -2, 0. -2, -2, -2, 0.1213

Well, when we subject that to reduced row echelon form, we end up with 1, 1, 1, 0 in the first row, and 0's everywhere else.1225

So, this column, this column... so, we get -- let me do it this way -- x3, let us set it equal to s; this column does not have a leading entry, so it is a free parameter.1239

x2 also does not have a leading entry. Remember this does not have to be in diagonal form, so this is the only one that has to be a leading entry.1252

So, set that equal to r, and x1 is equal to, well, -r - s.1260

This is equivalent to the following... r × (-1, 1, 0) + s × (-1, 0, 1).1271

Okay. So, these 2 vectors right here form a basis for our Eigenspace. They are our Eigenvectors for this Eigenvalue.1291
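The same null-space computation, as a minimal sketch assuming SymPy:

```python
# Sketch: solving the homogeneous system for Lambda = -2.
from sympy import Matrix, eye

A = Matrix([[0, 2, 2],
            [2, 0, 2],
            [2, 2, 0]])

M = -2 * eye(3) - A      # the matrix (Lambda*I - a) evaluated at Lambda = -2
print(M.rref())          # rref: first row (1, 1, 1), zeros elsewhere
print(M.nullspace())     # two basis vectors for the Eigenspace: (-1, 1, 0) and (-1, 0, 1)
```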

Well, what is the next step? We found the basis, so now we want to go ahead and we want to ortho-normalize them.1305

We want to make them orthogonal, and then we want to normalize them so they are orthonormal. So, we go through the Gram Schmidt process.1316

So, let me rewrite the vectors. I have (-1, 1, 0) -- so that we have them in front of us -- and (-1, 0, 1)... this is a basis for the Eigenspace associated with Λ = -2. Okay.1323

So, we know that our first v1, this is going to be the first vector... we can actually take this one. So, I am going to let v1 = -1, 1, 0.1355

That is going to be our standard. We are going to orthogonalize everything with respect to that one.1368

Well, v2 is equal to... this is u1, this is u2... v2 = u2 - [(u2 · v1)/(v1 · v1)] × v1.1373

This is the orthogonalization step of the Gram Schmidt process: you take the second vector and subtract off its projection onto the first... and you keep working forward like that.1399

I will not recall the entire formula here, but you can go back and take a look at it where we did a couple of examples of that orthogonalization.1409

When you put all of these in -- u2 is this one, v1 is this one -- and you do the multiplication, you end up with the following... (-1/2, -1/2, 1).1416

Okay. Now, you remember I do not need the fractions here because a vector in this direction is... well, it is in the same direction, so the length of these individual values does not really matter.1433

So, I am just going to take (-1, -1, 2). Okay. So, now I have (-1, 1, 0)... and (-1, -1, 2) -- I am not taking fractions here; what I am doing is actually multiplying everything by 2.1447

I can multiply a vector by anything because all it does is extend the vector or shorten the vector, it is still in the same direction, and it is the direction that I am interested in.1473

So, when I multiply (-1/2, -1/2, 1) by 2, the last entry -- this is not 1 -- ends up being 2. Okay.1484

Now, this is orthogonal. I want to normalize them.1493

When I normalize them, I get the following -- nope, we are not going to have these random lines everywhere... (-1/sqrt(2), 1/sqrt(2), 0)... and, for the other one, 2 squared is 4, plus 1, plus 1, gives 6, so the length is sqrt(6), and it becomes (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6)).1504

This is orthonormal. So, with respect to that Eigenvalue -2, we have created an orthonormal basis for its Eigenspace. So this is going to be one column, this is going to be a second column, now let us go ahead and do the next Eigenvalue -- where are we... here we are.1537
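Here is what that Gram-Schmidt step looks like as code -- a minimal sketch assuming NumPy, reproducing the two orthonormal vectors above:

```python
# Sketch: Gram-Schmidt on the eigenspace basis for Lambda = -2.
import numpy as np

u1 = np.array([-1.0, 1.0, 0.0])
u2 = np.array([-1.0, 0.0, 1.0])

v1 = u1
# v2 = u2 - [(u2 . v1) / (v1 . v1)] v1, which gives (-1/2, -1/2, 1)
v2 = u2 - (u2 @ v1) / (v1 @ v1) * v1

q1 = v1 / np.linalg.norm(v1)   # (-1/sqrt(2), 1/sqrt(2), 0)
q2 = v2 / np.linalg.norm(v2)   # (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6))

print(q1, q2)
print(q1 @ q2)                 # 0: the two vectors are orthogonal
```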

Our other Eigenvalue was Λ = 4, so for Λ = 4, we put it back into that (ΛI - a) arrangement -- remember, the determinant equation -- and we end up with the following. We get 4, -2, -2, 0... -2, 4, -2, 0... -2, -2, 4, 0.1563

When we subject this to reduced row echelon we get 1, 0, -1, 0. We get 0, 1, -1, 0, 0 here... and 0's everywhere else.1587

Okay. That is a leading entry. That is a leading entry. Therefore, that is not a leading entry, so we can let that one be x3 = r. Any parameter.1603

Well, that means x2 - r = 0, so x2 = r, as well... and here it is x1 - r = 0, so x1 also equals r.1616

Therefore, this is equivalent to r × 1, 1, 1. Okay.1630

So, this right here is an Eigenvector for Λ = 4. It is one vector, it is a one dimensional Eigenspace. It spans the Eigenspace.1639

Now, we want to normalize this. When we normalize it, the length is sqrt(3)... I will put normalize -- let me make some more room here... let me go this way -- normalize.1653

We end up with 1/sqrt(3), 1/sqrt(3), and 1/sqrt(3). So, now, we are almost there. Our matrix p that we were looking for is going to be precisely the vectors that we found: this one, and the other two normalized vectors which we just created.1680

So, we get p = (-1/sqrt(2), 1/sqrt(2), 0), (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6)), (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)) -- column by column.1708

This matrix with these three columns is... if I did my calculations right... if I take the inverse of this matrix, multiply by my original matrix a, and then multiply by this matrix p, I end up with the diagonal matrix d, which has -2, -2, 4 along it.1734

The Eigenvalues are along the main diagonal, 0's everywhere else, and if you actually check this out, it will confirm that this is the case.1758
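That check is a one-liner numerically; a minimal sketch assuming NumPy:

```python
# Sketch: confirming p^{-1} a p is the diagonal matrix with Eigenvalues -2, -2, 4.
import numpy as np

A = np.array([[0.0, 2.0, 2.0],
              [2.0, 0.0, 2.0],
              [2.0, 2.0, 0.0]])

s2, s6, s3 = np.sqrt(2.0), np.sqrt(6.0), np.sqrt(3.0)
P = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [ 0.0,   2/s6, 1/s3]])   # columns: the three orthonormal eigenvectors

D = np.linalg.inv(P) @ A @ P           # since P is orthogonal, inv(P) equals P.T
print(np.round(D, 10))                 # diag(-2, -2, 4)
```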

When I have a symmetric n by n matrix, I run through the process of diagonalization, but not only do I diagonalize it, I can orthogonally diagonalize it by using this orthogonal matrix -- whose columns are orthonormal: length 1 and mutually orthogonal to each other, their dot products equal to 0.1770

I multiply p inverse × a × p, and I get my diagonal matrix with the Eigenvalues along the main diagonal. Notice the repeats... -2, -2, 4, so I have one Eigenspace of dimension 2 and one Eigenspace of dimension 1, which matches perfectly because my original matrix was 3 by 3 -- we are working in R3.1790

Thank you for joining us for the diagonalization of symmetric matrices, we will see you next time. Bye-bye.1810