Raffi Hovasapian

Orthonormal Bases in n-Space

Table of Contents

Section 1: Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
Matrix Addition
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
Properties of Addition
1:11
Properties of Addition: A
1:12
Properties of Addition: B
2:30
Properties of Addition: C
2:57
Properties of Addition: D
4:20
Properties of Addition
5:22
Properties of Addition
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
Properties of Matrix Addition
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be a Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part I

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
Section 2: Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
Section 3: Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
Vector Addition and Scalar Multiplication
19:33
Vector Addition
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
Vector Addition
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformation

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Plane
0:36
Example 1
0:37
Example 2
7:07
Lines in R3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in R3
25:19
Example 5
31:12
Example 6
34:18
Section 4: Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
Section 5: Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
Section 6: Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31
Lecture Comments (7)

2 answers

Last reply by: Elias Assaf
Thu Nov 14, 2013 3:00 AM

Post by Elias Assaf on November 13, 2013

Hi, your videos are absolutely top, top, top class.

Just a little thing though: in your first example, if you take the dot product of U1 and U2 you get
(-2,0,2), which is not orthogonal. Or have I misunderstood something?

Thanks.
Elias

2 answers

Last reply by: Josh Winfield
Thu Apr 3, 2014 11:15 PM

Post by Christian Fischer on November 5, 2013

Hi Raffi, Great lecture - it helped me a lot!
I might be mistaken, but at 27.15 (minutes.seconds), when you write v2=(-1,2,1), did you really mean ((-1/3),(2/3),(-1/3))?

P.S. I got all your replies on multivariable calculus. Thank you so much! I'll get back to you later with respect to that :)

Have a great day
Christian    

0 answers

Post by Manfred Berger on June 20, 2013

I have actually never seen the techniques you used in Example 2 before. I'm quite sure this could be modified to solve arbitrary homogeneous systems.

Orthonormal Bases in n-Space

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Orthonormal Bases in n-Space 1:02
    • Orthonormal Bases in n-Space: Definition
    • Example 1
    • Theorem 1
    • Theorem 2
    • Theorem 3
    • Example 2
    • Theorem 2
    • Procedure for Constructing an O/N Basis
    • Example 3

Transcription: Orthonormal Bases in n-Space

Welcome back to Educator.com, and welcome back to linear algebra.

In the previous lesson, we talked about transition matrices for a particular space, where we have 2 or more bases for that space.

There we said that one basis is as good as another. As it turns out, that is true.

One basis is not necessarily better than another intrinsically; however, as it turns out, there is one particular basis that is better computationally.

It just makes our life a lot easier. That is called an orthonormal basis.

So, that is what we are going to talk about today. We are going to introduce the concept, and then we will talk about how to take a given basis and turn it into an orthonormal basis by something called the Gram-Schmidt orthonormalization process.

It can be a little bit computationally intensive, and notationally intensive, but there is nothing particularly strange about it.

It is all things you have seen. Essentially it is all just arithmetic, and a little bit of the dot product.

So, let us just jump in and see what we can do.

Okay. Let us talk about the standard basis in R2 and R3, the basis that we are accustomed to.

If I take R2, I can have a basis (1,0), that is one vector, and the other vector is (0,1)... okay?

Also known as i and j, also known as e1 and e2; again, just different symbols for the same thing.

R3 is the same thing. In R3 we have (1,0,0), (0,1,0), and (0,0,1) as a basis.

Three vectors, three-dimensional vector space; you also know them as i, j, k, and we have also referred to them as e1, e2, e3.

Now, what is interesting about these particular bases? Notice, let us just deal with R3... vector 1 and vector 2 are orthogonal, meaning that their dot product is 0; they are perpendicular.

Vectors 1 and 3 are orthogonal, and 2 and 3 are orthogonal. Not only that, they are not just mutually orthogonal; each of these also has a length of 1.

So, this is what we call orthonormal: the vectors are mutually orthogonal, and they each have a length of 1.

As it turns out, this so-called natural basis works out really, really well computationally.

We want to find a procedure... given some random basis, can we turn that basis into something that is orthonormal, where all of the vectors have a length of 1 and all of the vectors are mutually orthogonal?

As it turns out, there is a way: the Gram-Schmidt orthonormalization process. A beautiful process, and we will go through it in just a minute.

Let us just start off with some formal definitions first.

So, a set s of vectors u1, u2... all the way to un is orthogonal if any two distinct vectors in s are orthogonal.

What that means mathematically is that ui · uj = 0 whenever i ≠ j; so, for example, u1 · u3 = 0.

The dot product of any two of those vectors is equal to 0. That is the definition of orthogonality; perpendicularity, if you will.

Now, the set is orthonormal if each vector also has a length (norm) of 1.

Mathematically, that means ui dotted with itself gives me 1.

Let us just do an example. Let us take u1 = the vector (1,0,2), u2 = the vector (-2,0,1), and u3 = the vector (0,1,0).

Okay. So, as it turns out, for the set u1, u2, u3, if I compute the dot products u1 · u2, u1 · u3, and u2 · u3, I get 0 each time.

So, this set is orthogonal.

Now, let us calculate some norms. Well, the norm of u1 is the square root of the sum of the squares of its entries, which is sqrt(5).

The norm of u2 (the double bars are our symbol for norm) is also sqrt(5).

And the norm of u3 is 1. So, this one is already a unit vector; these two are not.

So, since I have the norm, how do I create a vector of length 1? I take the vector and I multiply it by the reciprocal of its norm; or I divide it by its norm, essentially.

So, we get the following. The set u1/sqrt(5), u2/sqrt(5), u3 is orthonormal.

Again, a radical sign in the denominator is not a problem; it is perfectly good math. In fact, I think it is better math than the usual simplification.

What they call rationalizing the denominator, or something like that... I like to see where my numbers are.

If there is a radical 5 in the denominator, I want it to be there. I do not want it to be u2 sqrt(5)/5; that makes no sense to me, personally, but it is up to you.

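As a quick check of this example, here is a minimal sketch (assuming numpy is available; the variable names are mine, not from the lecture) that verifies the orthogonality of u1, u2, u3 and then normalizes each vector:

    import numpy as np

    u1 = np.array([1.0, 0.0, 2.0])
    u2 = np.array([-2.0, 0.0, 1.0])
    u3 = np.array([0.0, 1.0, 0.0])

    # Pairwise dot products should all be 0 (orthogonality).
    for a, b in [(u1, u2), (u1, u3), (u2, u3)]:
        assert np.isclose(np.dot(a, b), 0.0)

    # Divide each vector by its norm to get an orthonormal set.
    for u in (u1, u2, u3):
        unit = u / np.linalg.norm(u)
        print(unit, np.linalg.norm(unit))  # each norm prints as 1.0
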
Okay. Quick little theorem here.

If s is an orthogonal set of non-zero vectors (or an orthonormal set), then s is linearly independent. (Briefly: dot both sides of c1u1 + ... + cnun = 0 with any ui; every term vanishes except ci(ui · ui), so each ci must be 0.)

So, if I have a set that I do not know is a basis, this theorem tells me the vectors are linearly independent.

So, for the particular space that they span, the set is a basis.

Okay. Let us go ahead and do this. Like we said before, using an orthonormal basis can actually improve the computation.

It just makes the computational effort a little bit less painful.

So, now suppose we have s = u1, u2, all the way to un...

If this set is a basis for some vector space v, and u is some random vector in v, then we know that we can express u as a linear combination: c1u1 + c2u2 + ... + cnun.

u's, v's, w's; all kinds of symbols going on. Okay, so what we did before was we just solved this linear system.

We found c1, c2, all the way to cn... you know, Gauss-Jordan elimination, reduced row echelon form.

Well, here is what is interesting. If s is orthonormal, it is of course still a basis, so you still get this property.

u can still be expressed this way, but now there is a really, really simple way to find the ci without solving a linear system.

What you end up with is the following. Each ci is equal to the vector u dotted with ui.

For example, if I wanted to find the second coefficient, I would just take the vector u and dot it with the second vector in the basis.

That is fantastic. It is just a simple dot product. For vectors in, you know, 2-space, 3-space, 4-space, maybe even 5-space, there is no system to solve; you do not have to worry about it.

You can just do the multiplication and the addition in your head. The dot product is really, really easy to find.

So, let us do an example of this. We will let s equal... okay, it is going to look a little strange because we are going to have some fractions here...

(2/3, -2/3, 1/3), (2/3, 1/3, -2/3), (1/3, 2/3, 2/3). So, this is our set s.

Okay, well, we want to be able to write some random vector v, which is equal to, let us say, (3,4,5), as a linear combo of the vectors in s.

So, we want c1v1 + c2v2 + c3v3; the vectors in s, let us just call them v1, v2, v3.

So, we want to find c1, c2, c3. Well, as it turns out, that basis, even though it looks a little strange, actually ends up being orthonormal.

The length of each of those vectors is 1, and the mutual dot product of each pair of them is 0.

So, it is an orthonormal set. Therefore, c1 = v · v1.

Well, v is just (3,4,5), okay? I am going to dot it with v1, which was (2/3, -2/3, 1/3), and I get 1.

When I do c2, well, that is just equal to v · v2. When I do that, I get 0.

If I do c3, that is just v, again (3,4,5), dotted with v3, which was the third vector up there in s, and this ends up being 7.

Therefore, v is equal to, well, 1 × v1 + 0 × v2 + 7 × v3.

There we go. I did not have to solve a linear system. I just did simple dot products; simple multiplication and addition. Very, very nice.

One of the benefits of having an orthonormal basis. There are many benefits, trust me on this one.

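Here is a minimal sketch of this coordinate computation (again assuming numpy; the names are mine, not from the lecture):

    import numpy as np

    # The orthonormal basis from the example above.
    v1 = np.array([2/3, -2/3, 1/3])
    v2 = np.array([2/3, 1/3, -2/3])
    v3 = np.array([1/3, 2/3, 2/3])

    v = np.array([3.0, 4.0, 5.0])

    # For an orthonormal basis, each coordinate is just a dot product.
    c = [np.dot(v, b) for b in (v1, v2, v3)]
    print(c)  # approximately [1.0, 0.0, 7.0]

    # Sanity check: rebuild v from its coordinates.
    assert np.allclose(c[0]*v1 + c[1]*v2 + c[2]*v3, v)
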
Okay. So, now let us come down to the procedure for actually constructing our orthonormal basis.

So, we are going to go through this very, very carefully. There is going to be a lot of notation, but again, the notation is not strange. You just have to make sure to know what is where.

The indices are going to be very, very important... so, take a look at it here, and take some time to actually stare at the Gram-Schmidt orthogonalization process in your textbook.

That is really the best way to sort of wrap your mind around it. Of course, doing examples helps, which we will do when you do problems, but just staring at something is really a fantastic way to understand what is going on.

In mathematics, it is the details that matter: the indices, the order of things. Okay.

Let us see. So, let us write it out as a theorem first.

You know, let me... let me go to a black ink... theorem...

Let w be a non-zero subspace. So again, we can speak of a basis for a subspace or for the entire space; it does not really matter.

In this case we are just going to state this theorem for a subspace.

Again, the whole space itself is a subspace of itself, so this theorem is perfectly valid in that case.

... be a subspace of Rn with basis s = u1, u2, ..., un. n vectors, n-space.

Then, there exists an orthonormal basis t, which is equal to... let us call it w1, w2, all the way to wn, for w.

So, this theorem says that if I am given a basis for this subspace, or for the space itself, there exists an orthonormal basis.

Well, not only does one exist; as it turns out, this particular procedure constructs it for us. So, the proof of this theorem is the construction of that orthonormal basis itself.

That is the Gram-Schmidt orthonormalization process. They call it the orthogonalization process, which is really what we are doing.

We are finding orthogonal vectors, but we know how to change orthogonal vectors into orthonormal ones.

We just divide by their norms... nice and simple. Okay.

So, let us list the procedure so that we have some protocol to follow.

Procedure... okay... procedure for constructing an orthonormal basis t, which we said is going to be w1, w2... all the way to wn.

Constructing an orthonormal basis t... from basis s = u1, u2, all the way to un.

I am going to change something here. I am going to not use w; I think I am going to use v instead.

I used u for s, so let me call the new vectors v1 through vn, so that we stay reasonably consistent.

Okay. So, we are given a basis s, and we want to construct an orthonormal basis t from s. Here is how we do it.

First things first. We let v1, the first vector in our orthonormal basis t, equal u1.

We just take it straight from there. The first vector is the first vector.

Okay. Two. This is where the notation gets kind of intensive.

vi = ui - (ui · v1)/(v1 · v1) × the vector v1 - (ui · v2)/(v2 · v2) × the vector v2... and so on,

until we get to (ui · vi-1)/(vi-1 · vi-1) × the vector vi-1. Okay.

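Written compactly in standard notation (a restatement of step two above, not anything extra from the lecture):

    v_i = u_i - \sum_{j=1}^{i-1} \frac{u_i \cdot v_j}{v_j \cdot v_j} \, v_j

Each subtracted term is the projection of u_i onto one of the previously constructed v_j, so whatever remains is orthogonal to all of them.
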
Do not worry about this; when we do the example, this will make sense. Again, this is just mathematical formalism to make sure that everything is complete.

When we do the example, you will see what all of these i's and i-1's and v's mean.

Three. The set t*, the v1, v2, and so on that we get from the first two steps, is orthogonal.

We have created an orthogonal set.

Now, we want to take every vi and divide it by its norm.

So, for each of these vi in this orthogonal set, we are going to divide the vector by the norm of that vector.

Then, of course, what you get is the final set t, which is v1/norm(v1), v2/norm(v2), and so on and so forth, all the way to vn/norm(vn).

This set is orthonormal. Let us just do an example and it will all make sense.

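First, though, here is a minimal sketch of the whole procedure in code (assuming numpy is available; this illustration is mine, not part of the lecture):

    import numpy as np

    def gram_schmidt(basis):
        """Turn a list of linearly independent vectors into an orthonormal basis."""
        vs = []
        for u in basis:
            v = np.array(u, dtype=float)
            # Step two: subtract the projection of u onto each previous v_j.
            for vj in vs:
                v -= (np.dot(u, vj) / np.dot(vj, vj)) * vj
            vs.append(v)
        # Step three: divide each orthogonal vector by its norm.
        return [v / np.linalg.norm(v) for v in vs]

    # The basis from the example worked below.
    for v in gram_schmidt([[1, 1, 1], [-1, 0, -1], [-1, 2, 3]]):
        print(v)
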
So, let us start here. Let us do our example in blue.

We will let s = u1, u2, u3 = (1,1,1), (-1,0,-1), (-1,2,3).

This is our set. We want to transform s into an orthonormal basis for R3. This is a basis for R3.

These vectors are linearly independent. They span R3. Three vectors. We want to change this into an orthonormal basis.

We want the vectors in our basis to be mutually orthogonal, and we want them to have a length of 1. So, we will run through our procedure.

Okay. First thing we do: let v1 = u1. So the first thing I am going to do is choose my vector (1,1,1).

That is my first vector in my orthogonal set. Nice, we got that out of the way. Okay.

Two. Go back to the previous slide and check that number-two step with all of the indices going on. Here is what it looks like with this number of vectors.

v2 = u2 - (u2 · v1)/(v1 · v1) × v1.

That is equal to... well, u2 is (-1, 0, -1).

Now, when I take u2, which is (-1, 0, -1), and I dot it with v1, which is (1, 1, 1), I get -2.

When I take v1 dotted with v1, I get 3. So the coefficient is -2/3, and we subtract -2/3 × v1, which is -2/3 × (1,1,1).

When I do that, I get (-1/3, 2/3, -1/3)... okay.

The next one is v3. I have v1, and I have v2, which is here. I need v3, right? Because I need 3 vectors.

So, v3 = ... well, go back to that number two in our procedure... it is equal to u3 - (u3 · v1)/(v1 · v1) × v1 - (u3 · v2)/(v2 · v2) × v2.

Well, if you remember, that last entry in number two said vi-1.

Well, since we are calculating v3, and 3 - 1 is 2, that is it. We stop here. We do not have to go any further.

That is what that symbolism meant; it tells us how many of these terms we get.

If we were calculating v4, then i is 4, and 4 - 1 is 3; that means we go all the way up to v3, so we would have three of these terms.

That is all this means. That is all that symbolism meant. Just follow the symbolism, and everything will be just fine.

Okay. This actually ends up equaling... well, u3 is (-1,2,3).

When I do (u3 · v1)/(v1 · v1), which is (-1,2,3) · (1,1,1) over (1,1,1) · (1,1,1), I get 4/3, so I subtract 4/3 × (1,1,1). Then I do the same thing with (u3 · v2)/(v2 · v2).

u3 · v2 is 2/3, and v2 · v2 is also 2/3, so the coefficient is 1, and I subtract 1 × v2, which is (-1/3, 2/3, -1/3)... okay?

I am going to end up with (-2,0,2). So, let me go to red.

So let us write out what we have got.

Our t*, our orthogonal set, is (1,1,1), (-1/3, 2/3, -1/3), and (-2, 0, 2).

This set is orthogonal. Now, let us take a look at v2 real quickly here.

v2 = (-1/3, 2/3, -1/3); well, let me pull out the 1/3... that is equal to 1/3 × (-1, 2, -1).

If I just take the vector (-1, 2, -1), and if I take the vector 1/3 × that, which is this vector, they are vectors going in the same direction.

They are just different lengths of each other. So, because they are vectors going in the same direction, I do not need the fractional version of it. I can just drop the denominator, because again, we are going to be normalizing this.

We are going to be reducing it to a length of 1, so it does not matter whether I take this vector or that vector.

They are just different vectors in the same direction. Does that make sense?

So, I can rewrite my t*, my orthogonal set, as (1,1,1), (-1,2,-1), and (-2,0,2).

They are still orthogonal. The dot product of this one and this one is going to be 0; the dot product of this one and this one is going to be 0. It does not matter; they are in the same direction.

They are just different lengths, and we are going to be normalizing anyway. So, we want to make life easier for ourselves; that is our orthogonal set.

Now, what we want to do is calculate the norms.

So, the norm of v1 = sqrt(3), the norm of v2 = sqrt(6), and the norm of v3, right here, is equal to sqrt(8).

Therefore, our final orthonormal basis: we have (1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), (-1/sqrt(6), 2/sqrt(6), -1/sqrt(6)), and (-2/sqrt(8), 0, 2/sqrt(8)).

Again, I may have missed some minus signs or plus signs here and there... it is the procedure that is important.

This set is orthonormal. In this case, the length matters. Previously, the set was merely orthogonal, and so we dropped the denominator from the vector that we found, because again, they are vectors in the same direction.

In the same direction, they are still going to be orthogonal. So, I can make it easy for myself by not having to deal with fractions. But in this case, we are normalizing: we are taking that orthogonal set and dividing each of those vectors by its own norm to create unit vectors.

Each of these has a length of 1. Any two of these are mutually orthogonal; their dot product equals 0. This is an orthonormal basis for R3.

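As a final check, here is a minimal numpy sketch (my illustration, not the lecture's) verifying that the basis we just constructed really is orthonormal:

    import numpy as np

    t = np.array([
        [ 1/np.sqrt(3), 1/np.sqrt(3),  1/np.sqrt(3)],
        [-1/np.sqrt(6), 2/np.sqrt(6), -1/np.sqrt(6)],
        [-2/np.sqrt(8), 0.0,           2/np.sqrt(8)],
    ])

    # For an orthonormal set, the matrix of all pairwise dot products
    # (the Gram matrix) is the identity.
    assert np.allclose(t @ t.T, np.eye(3))
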
It is just as good a basis as our standard basis, the i, j, k. Computations are easy, and who knows, there might be some problem where this basis is actually the best basis to work with.

Again, it just depends on frames of reference.

Okay. Thank you for joining us at Educator.com.
