Raffi Hovasapian

Diagonalization of Symmetric Matrices

Table of Contents

Section 1: Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
Matrix Addition
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
Properties of Addition
1:11
Properties of Addition: A
1:12
Properties of Addition: B
2:30
Properties of Addition: C
2:57
Properties of Addition: D
4:20
Properties of Addition
5:22
Properties of Addition
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
Properties of Matrix Addition
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be a Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part 1

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
Section 2: Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
Section 3: Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
Vector Addition and Scalar Multiplication
19:33
Vector Addition
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
Vector Addition
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformation

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Planes
0:36
Example 1
0:37
Example 2
7:07
Lines in R3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in R3
25:19
Example 5
31:12
Example 6
34:18
Section 4: Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
Section 5: Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
Section 6: Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31

Lecture Comments (10)

7 answers

Last reply by: Professor Hovasapian
Wed May 1, 2013 4:41 AM

Post by Matt C on April 28, 2013

Professor Hovasapian
Sorry to spring all these questions on you in one week, but Thursday is my final. My professor gave me a matrix and he said that it was diagonalizable. I went through all the steps and I cannot get it to diagonalize. I have the matrix A = [[2,2,-2], [-1,1,2], [0,1,1]]; I write all matrices in column form. He claims that it is diagonalizable, but I have spent a long time trying to figure this out. Lambda = R.

det(A-RI) = -(R-2)(-1+R)^2. R=2, R=1.

(A-1*I)x=0 and I get [[1,2,-2], [-1,0,2], [0,1,0]], and I then subject that to rref: [[1,0,0], [0,1,0], [.5, .5, 0]]. I only have 1 free variable, which means the basis has only 1 vector, which is less than k. Is there a way to quickly check if this matrix is diagonalizable? Like I said, I have spent hours on this and I am getting nowhere.

1 answer

Last reply by: Professor Hovasapian
Mon Feb 25, 2013 3:03 AM

Post by Tach M on February 24, 2013

If the characteristic polynomial of a matrix M has real and distinct roots, does it mean that M is similar to a diagonal matrix?

Diagonalization of Symmetric Matrices

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Diagonalization of Symmetric Matrices 1:15
    • Diagonalization of Symmetric Matrices 1:16
    • Theorem 1 2:24
    • Theorem 2 3:27
    • Example 1 4:47
    • Definition 1 6:44
    • Example 2 8:15
    • Theorem 3 10:28
    • Theorem 4 12:31
    • Example 3 18:00

Transcription: Diagonalization of Symmetric Matrices

Welcome back to Educator.com, and welcome back to linear algebra.

In our previous lesson, we discussed Eigenvalues and Eigenvectors, and we talked about that diagonalization process: once we find the specific Eigenvalues from the characteristic polynomial, the one we get by setting the determinant equal to 0, we put those Eigenvalues back into the arrangement of the matrix.

Then we solve that system in order to find the particular Eigenvectors for that Eigenvalue, and the space that is spanned by the Eigenvectors happens to be called an Eigenspace.

In the previous lessons, we dealt with some random matrices... they were not particularly special in any sense.

Today, we are going to tighten up just a little bit. We are going to continue to talk about Eigenvalues and Eigenvectors, but we are going to talk about the diagonalization of symmetric matrices.

As it turns out, symmetric matrices turn up all over the place in science and mathematics, so let us jump in.

We will start with a, you know, recollection of what it is that symmetric matrices are. Then we will start with our definitions and theorems and continue on like we always do.

Let us see here. Okay. Let us try a blue ink today. So, recall that a matrix is symmetric if A = A transpose.

So, a symmetric matrix... is when A is equal to A transpose, or when A transpose is equal to A.

So, it essentially means that everything that is on the off diagonals is reflected along the main diagonal, as if that is a mirror.

Just a quick little example: take the 2 by 2 matrix with rows (1, 2) and (2, 3)... so let us say this is matrix A. If I were to transpose it, which means flip it along its main diagonal, I get the same rows back, (1, 2) and (2, 3)... this is equal to A transpose... it is the same thing. This is a symmetric matrix.
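
If you ever want to check symmetry numerically rather than by eye, a one-line test does it. Here is a minimal sketch, assuming NumPy is available, using the small 2 by 2 example above:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 3]])

# A matrix is symmetric exactly when it equals its own transpose.
print(np.array_equal(A, A.T))   # True
```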

Okay. Now, we will start off with a very, very interesting theorem. So, you recall, you can take this matrix, set up that Eigenvalue equation with the λ's, get the characteristic polynomial, and solve the polynomial for its roots.

The real roots of that equation are going to be the Eigenvalues of this particular matrix. Well, as it turns out, all the roots of what we call f(λ), the characteristic polynomial of a symmetric matrix A, are real numbers.

So, as it turns out, if our matrix happens to be symmetric, we know automatically from this theorem that all of the roots are going to be real.

So, there is always going to be a real Eigenvalue. Now, we will throw out another theorem, which will help us.

If A is a symmetric matrix, then the Eigenvectors belonging to distinct Eigenvalues... because, you know, sometimes Eigenvalues can repeat...

...those Eigenvectors are orthogonal. That is interesting... and orthogonal, as you remember, means the dot product is equal to 0, or perpendicular.

Okay. Once again, if A is a symmetric matrix, then the Eigenvectors belonging to distinct Eigenvalues are orthogonal.

Let us say we have a particular matrix, a 2 by 2, and let us say the Eigenvalues that I get are 3 and -4. Well, when I calculate the Eigenvectors for 3 and -4, as it turns out, those vectors that I get will be orthogonal. Their dot product will always equal 0.

So, let us do a quick example of this. We will let A equal the matrix with rows (1,0,0), (0,1,1), (0,1,1), and if you take a quick look at it, you will realize that this is a symmetric matrix. Look along the main diagonal.

If I flip it along the main diagonal, as if that is a mirror, the off-diagonal pairs are (0,0), (0,0), (1,1).

When I subject this to mathematical software... again, when you are first dealing with Eigenvalues and Eigenvectors, I imagine your professor or teacher is going to have you work by hand, simply to get you used to working with the equation.

Just to give you an idea of what it is that you are working with, some mathematical object. But once you are reasonably familiar, you are going to be using mathematical software to extract these Eigenvalues and Eigenvectors, because sometimes the process just takes too long otherwise.

So, what we get is... well, λ1 -- let me start over here -- the first Eigenvalue is equal to 1, and that yields the Eigenvector (1,0,0).

λ2, the second Eigenvalue, is 0, 0 is a perfectly good real value, and it yields the Eigenvector -- Eigenvalue, Eigenvector, Eigenspace, yeah... I know -- Okay.

That gives me the vector (0,-1,1)... λ3, the third Eigenvalue, is 2 for this matrix, and it yields the Eigenvector (0,1,1).

If you were to check the dot product of this and this, this and this, this and this, the mutual dot products, they all equal 0. So, as it turns out, this theorem is confirmed.

The Eigenvectors corresponding to distinct Eigenvalues are mutually orthogonal. Okay.
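
You can confirm this example numerically as well. Here is a minimal sketch, assuming NumPy is available; numpy.linalg.eigh is NumPy's eigen-solver for symmetric matrices:

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 1., 1.],
              [0., 1., 1.]])

w, V = np.linalg.eigh(A)      # eigenvalues in ascending order; eigenvectors in the columns of V
print(w)                      # approximately [0. 1. 2.]
print(np.round(V.T @ V, 10))  # pairwise dot products of the eigenvectors: the identity matrix,
                              # so the eigenvectors are mutually orthogonal (and of length 1)
```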

Now, let us move on to another definition. Okay. A non-singular -- excuse me -- matrix A (remember, non-singular means invertible, so it has an inverse)... is called orthogonal.

We use the word orthogonal in 2 different ways. We use it to apply to two vectors when their dot product is 0, but in this case we call the matrix orthogonal if the inverse of the matrix happens to equal the transpose of the matrix.

An equivalent statement to that... I will put equivalent... is that A transpose × A is equal to the identity matrix.

Well, just look at what happens here. If I take A... this says A inverse is equal to A transpose. Well, if I multiply both sides by the matrix A on the right, A transpose × A, that is this one, and A inverse × A is just the identity matrix, so these are two equivalent statements.

I personally prefer this definition right here. So, a non-singular matrix is called orthogonal, so it is an orthogonal matrix, if the inverse and the transpose happen to be the same thing. That is a very, very special kind of matrix.

So, let us do a quick example of this. If I take the Eigenvectors that I got from the example that I just did... so, the Eigenvectors that I just got were (1,0,0), (0,-1,1), and (0,1,1), okay? These are for the respective Eigenvalues 1, 0, 2.

First thing I am going to do, I am actually going to normalize these. Normalization just means taking them and dividing by the length of the vector. So, this vector actually... let me use red here for normalization.

This one stays (1,0,0), so let me put... normalized... for this one, the length is sqrt((-1)^2 + 1^2) = sqrt(2), so it becomes (0, -1/sqrt(2), 1/sqrt(2)).

This one is the same thing. We have (0, 1/sqrt(2), 1/sqrt(2)). Now, if I take these vectors and set them up as columns in a matrix... and this is just something random that I did; I happened to have these available... so let us call this P.

P is equal to the matrix with columns (1,0,0), (0, -1/sqrt(2), 1/sqrt(2)), (0, 1/sqrt(2), 1/sqrt(2)).

This matrix P, if I were to calculate its inverse, and if I were to calculate its transpose, they are the same.

P inverse equals P transpose. This is an orthogonal matrix.

So again, we are using orthogonal in two different ways. They are related, but not really. We call vectors mutually orthogonal, and we call a matrix orthogonal if its inverse and its transpose are the same thing.
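
Here is a quick numerical check of that claim for this particular P, a minimal sketch assuming NumPy is available:

```python
import numpy as np

s2 = np.sqrt(2)
# Columns: the three normalized eigenvectors from above.
P = np.array([[1.,  0.,    0.  ],
              [0., -1/s2,  1/s2],
              [0.,  1/s2,  1/s2]])

print(np.allclose(P.T @ P, np.eye(3)))     # True: P transpose times P is the identity
print(np.allclose(np.linalg.inv(P), P.T))  # True: P inverse equals P transpose
```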

Now, let us go back to blue ink here, and state another theorem.

An n by n matrix is orthogonal if, and only if, the columns (or rows... so I will put rows in parentheses) form an orthonormal set of vectors in Rn.

Okay. An n by n matrix is orthogonal if and only if the columns form an orthonormal set of vectors in Rn.

So, if I have a matrix, and let us just take the columns... if the columns form an orthonormal set, meaning that column 1 is a vector, column 2 is a vector, column 3 is a vector... if the length of those three is 1, that is the normal part, and if they are mutually orthogonal... well, this thing that we did right here... these columns, we normalized them.

So, by normalizing them, we made the lengths 1, and they are mutually orthogonal, so this is an orthogonal matrix, even if we did not know it already from finding the inverse and the transpose.

If I just happen to look at this and realize that, whoa, these are all normalized and they are mutually orthogonal, then I can automatically say that this is an orthogonal matrix, and I would not have to calculate anything. That is what this theorem is used for.

Okay, so now let us talk about a very, very, very important theorem. Certainly one of the top 5 in this entire course.

It is quite an extraordinary theorem when you see the statement of it and when we talk about it a little bit. Let me do it in red here.

So -- excuse me -- if A is a symmetric n by n matrix, then there exists an orthogonal matrix P such that P inverse × A × P is equal to some diagonal matrix D, with the Eigenvalues of A along the main diagonal.

Okay, so not only is a symmetric matrix always diagonalizable, but I can actually diagonalize it with a matrix that is orthogonal, where the columns and the rows are of length 1 and they are mutually orthogonal. Their dot product equals 0.

That is really, really extraordinary, so let us state this again. If A is a symmetric n by n matrix, then there exists an orthogonal matrix P such that P inverse × A × P gives me some diagonal matrix.

The entries along the main diagonal are precisely the Eigenvalues of A. That is what this equation tells me, that there is this relationship.

If I have a matrix A, I can actually take the Eigenvalues of A and line them up along the main diagonal, and I can find a matrix P such that, when I sandwich A between P inverse and P, I actually produce that diagonal matrix by composing the multiplication of this matrix and this matrix and this matrix. That is extraordinary, absolutely extraordinary.
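
If you want to see the theorem in action on an arbitrary symmetric matrix, here is a small sketch assuming NumPy is available; numpy.linalg.eigh hands back an orthogonal matrix of eigenvectors, which plays the role of the P in the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = S + S.T                                  # any symmetric matrix

w, P = np.linalg.eigh(A)                     # eigenvalues w, orthonormal eigenvectors in the columns of P
print(np.allclose(P.T @ P, np.eye(4)))       # True: P is orthogonal, so P inverse = P transpose
print(np.allclose(P.T @ A @ P, np.diag(w)))  # True: P^T A P is the diagonal matrix of eigenvalues
```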

So, let us see what happens when we are faced with an Eigenvalue which is repeated.

Remember, sometimes your characteristic polynomial can have repeated roots... so, let us say you have a 3 by 3, and you have Eigenvalues 1, 1, 2; well, the 1 has a multiplicity of 2, because it shows up twice.

Okay, let us see how we deal with that. Let us go back to a blue ink here... oops.

If we are faced with an Eigenvalue of multiplicity k, then when we find a basis for the null space associated with this Eigenvalue... in other words, finding a basis for the Eigenspace, finding the Eigenvectors, that is all this means, because that is what you are doing... you put the Eigenvalue back into that equation, you solve the homogeneous system, and you get a basis for the null space, which consists of the Eigenvectors associated with this Eigenvalue.

We use the Gram-Schmidt orthonormalization process to create an orthonormal basis for that Eigenspace.

So, if I have an Eigenvalue which repeats itself, once I find a basis for that Eigenspace, for that particular Eigenvalue, I can orthonormalize it and actually create vectors that are, well, orthonormal, and that will be my one set. Then I move on to my next Eigenvalue.

If my matrix is symmetric, I am guaranteed that the distinct Eigenvalues will give me Eigenvectors that are mutually orthogonal, so once everything is normalized, the whole collection is orthonormal.
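
For reference, here is what that Gram-Schmidt step looks like as code. This is only a sketch, assuming NumPy is available; the helper name gram_schmidt is mine, and the two vectors fed into it are the λ = -2 eigenspace basis that will come up in the example below:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal list (classical Gram-Schmidt)."""
    basis = []
    for u in vectors:
        v = u.astype(float)
        for b in basis:
            v = v - (u @ b) * b          # subtract the projection of u onto each earlier basis vector
        basis.append(v / np.linalg.norm(v))
    return basis

u1 = np.array([-1., 1., 0.])
u2 = np.array([-1., 0., 1.])
for v in gram_schmidt([u1, u2]):
    print(np.round(v, 4))   # (-1/sqrt(2), 1/sqrt(2), 0) and (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6))
```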

Let us do a problem, and I think everything will fall into place very, very nicely.

So, example... we will let A = the matrix with rows (0,2,2), (2,0,2), and (2,2,0).

Let us confirm that this is symmetric. Yes: 2, 2, 2, 2, 2, 2, absolutely. The main diagonal is the mirror. If you flip it, you end up with the same thing.

Okay. Let us do the characteristic polynomial. Let us actually do this one a little bit in detail. It equals the determinant of λI - A, which has λ - 0, -2, -2 in the first row... λ's along the diagonal and negatives everywhere else... -2, λ - 0, -2... -2, -2, λ - 0.

We want the determinant of this. When we take the determinant of this, we actually end up with the following, in factored form... (λ + 2)^2 × (λ - 4).

So, I have solved for this polynomial and I have turned it into something factored. So, I get -- let me put it over here -- λ1 = 2... -2, I am sorry.

λ2 is also equal to -2; that is what the exponent 2 here means. Okay. That means this Eigenvalue λ = -2 has a multiplicity of 2; it shows up twice.
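
If you would like to double-check that factorization, here is a small sketch assuming SymPy is available:

```python
from sympy import Matrix, symbols, factor

lam = symbols('lambda')
A = Matrix([[0, 2, 2],
            [2, 0, 2],
            [2, 2, 0]])

p = A.charpoly(lam).as_expr()   # the characteristic polynomial det(lambda*I - A)
print(factor(p))                # (lambda - 4)*(lambda + 2)**2
```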

Of course, our third λ, the third Eigenvalue, is going to equal 4. So, now let us go ahead and solve this homogeneous system.

Well, I take -2, I stick it into here, and I solve the homogeneous system. So, I end up with the following.

I end up -- let me actually write... let me do this... no, it is okay -- so, for λ = -2, we get the following system... we get -2, -2, -2, 0.

It is this thing, and then the 0's over here: -2, -2, -2, 0... -2, -2, -2, 0.

Well, when we subject that to reduced row echelon form, we end up with 1, 1, 1, 0 in the first row, and 0's everywhere else.

So, this column, this column... so, we get -- let me do it this way -- x3, let us set it equal to s; this does not have a leading entry, so it is a free parameter.

x2 also does not have a leading entry. Remember, this does not have to be in diagonal form; the first column is the only one that has a leading entry.

So, set that equal to r, and x1 is equal to, well, -r - s.

This is equivalent to the following... r × (-1, 1, 0) + s × (-1, 0, 1).

Okay. So, these 2 vectors right here form a basis for our Eigenspace. They are our Eigenvectors for this repeated Eigenvalue.

Well, what is the next step? We found the basis, so now we want to go ahead and orthonormalize them.

We want to make them orthogonal, and then we want to normalize them so they are orthonormal. So, we go through the Gram-Schmidt process.

So, let me rewrite the vectors. I have (-1, 1, 0) -- so that we have them in front of us -- and (-1, 0, 1)... that is a basis for the Eigenspace associated with λ = -2. Okay.

So, we know that our first v1, this is going to be the first vector... we can actually take this one. So, I am going to let v1 = (-1, 1, 0).

That is going to be our standard. We are going to orthogonalize everything with respect to that one.

Well, v2 is equal to... this is u1, this is u2... v2 is equal to u2 - [(u2 · v1)/(v1 · v1)] × v1.

This is the orthogonalization step of the Gram-Schmidt process. You take the second vector, and you subtract... you work forward.

I will not recall the entire formula here, but you can go back and take a look at it where we did a couple of examples of that orthogonalization.

When you put all of these in, u2 is this one, v1 is this one, and you do the multiplication, you end up with the following... (-1, 0, 1) - 1/2 × (-1, 1, 0), which is (-1/2, -1/2, 1).

Okay. Now, you remember, I do not need the fractions here, because a vector in this direction... well, it is in the same direction, so the size of these individual values does not really matter.

So, I am just going to clear the fractions. Okay. So, now, (-1, 1, 0)... and for this second one -- I am not taking fractions here -- what I am doing is actually multiplying everything by 2.

I can multiply a vector by anything, because all it does is extend the vector or shorten the vector; it is still in the same direction, and it is the direction that I am interested in.

So, when I multiply by 2, I get (-1, -1, 2)... this last entry ends up being 2 here. Okay.

Now, these are orthogonal. I want to normalize them.

When I normalize them, I get the following -- nope, we are not going to have these random lines everywhere -- ... (-1/sqrt(2), 1/sqrt(2), 0)... and for the other one, 2 squared is 4, plus 1, plus 1, is 6, so the length is sqrt(6), and this is going to be... (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6)).

This is orthonormal. So, with respect to that Eigenvalue -2, we have created an orthonormal basis for its Eigenspace. So, this is going to be one column, this is going to be a second column... now let us go ahead and do the next Eigenvalue -- where are we... here we are.

Our other Eigenvalue was λ = 4, so for λ = 4, we put it back into that, remember, λI - A determinant equation... and we end up with the following. We get 4, -2, -2, 0... -2, 4, -2, 0... -2, -2, 4, 0.

When we subject this to reduced row echelon form, we get 1, 0, -1, 0... we get 0, 1, -1, 0 here... and 0's everywhere else.

Okay. That is a leading entry. That is a leading entry. Therefore, that is not a leading entry, so we can let that one be x3 = r. Any parameter.

Well, that means x2 - r = 0, so x2 = r as well... and here it is x1 - r = 0, so x1 also equals r.

Therefore, this is equivalent to r × (1, 1, 1). Okay.

So, this right here is an Eigenvector for λ = 4. It is one vector; it is a one-dimensional Eigenspace. It spans the Eigenspace.

Now, we want to normalize this. So, when we normalize this, the length is sqrt(3)... I will put normalize -- let me make some more room here; I am going to use up a lot of room for not a lot of... let me go this way -- normalize.

We end up with (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)). So, now, we are almost there. Our matrix P that we were looking for... its columns are going to be precisely the vectors that we found: this one, and the other two normalized vectors which we just created.

So, we get P = the matrix with columns (-1/sqrt(2), 1/sqrt(2), 0), (-1/sqrt(6), -1/sqrt(6), 2/sqrt(6)), (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)).

This matrix with these three columns... if I did my calculations... if I took the inverse of this matrix, multiplied it by my original matrix, and then multiplied by this matrix, I end up with this D, which has -2, -2, 4 along the main diagonal.

The Eigenvalues along the main diagonal, 0's everywhere else; and if you actually check this out, it will confirm that this is the case.
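
And you can check it numerically. A minimal sketch, assuming NumPy is available, with A and P exactly as above:

```python
import numpy as np

A = np.array([[0., 2., 2.],
              [2., 0., 2.],
              [2., 2., 0.]])

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
# Columns: the two orthonormalized eigenvectors for lambda = -2, then the one for lambda = 4.
P = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [ 0.,    2/s6, 1/s3]])

print(np.allclose(P.T @ P, np.eye(3)))   # True: P is orthogonal, so P inverse = P transpose
print(np.round(P.T @ A @ P, 10))         # the diagonal matrix D = diag(-2, -2, 4)
```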

When I have a symmetric n by n matrix, I run through the process of diagonalization; but not only do I diagonalize it, I can orthogonally diagonalize it, by using this orthogonal matrix... which means the columns are all of length 1 and they are mutually orthogonal to each other. Their dot product = 0.

I multiply P inverse × A × P, and I get my diagonal matrix, which has the Eigenvalues along the main diagonal. Notice the repeats... -2, -2, 4, so I have an Eigenspace of 2 dimensions and an Eigenspace of 1 dimension, which matches perfectly, because my original matrix acts on R3, which is 3-dimensional.

Thank you for joining us for the diagonalization of symmetric matrices. We will see you next time. Bye-bye.
