Raffi Hovasapian

Orthogonal Complements, Part II

Table of Contents

Section 1: Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
Matrix Addition
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
Properties of Addition
1:11
Properties of Addition: A
1:12
Properties of Addition: B
2:30
Properties of Addition: C
2:57
Properties of Addition: D
4:20
Properties of Addition
5:22
Properties of Addition
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
Properties of Matrix Addition
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part 1

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
Section 2: Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
Section 3: Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
Vector Addition and Scalar Multiplication
19:33
Vector Addition
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
Vector Addition
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformation

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Planes
0:36
Example 1
0:37
Example 2
7:07
Lines in IR3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in IR3
25:19
Example 5
31:12
Example 6
34:18
Section 4: Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
Section 5: Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
Section 6: Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31
Lecture Comments (4)

2 answers

Last reply by: Professor Hovasapian
Fri Dec 13, 2019 9:18 AM

Post by Sungmin Lee on December 9, 2019

Hi, Professor Raffi,
Your lectures are always helpful to me. I'm taking this course to prepare for college
(please excuse my English; I'm working on it). I have some ideas I would like checked.
First, regarding an m by n matrix A, it was obvious to me that (rank of A) + (nullity of A) = n, because the rank of A equals the number of leading entries of A after it has been transformed to RRE form (let's call that matrix B), and the nullity is the same as the number of arbitrary constants.
Second, I think the transition matrix P(S<-T) = S^(-1)•T. The reason: first turn a coordinate vector with respect to T into one with respect to In, and then turn that coordinate vector into one with respect to S. (Here S and T are the matrices whose columns are the basis vectors of S and T, respectively.)
Lastly, the row space and the null space must be orthogonal naturally, since by definition the null space is the set of vectors satisfying Ax = 0, and we can regard a matrix product AB as a collection of dot products of the row vectors of A with the columns of B, which makes x orthogonal to the rows of A since the result is 0.
Thank you for reading this long text. And again, thank you for bearing with my English.

0 answers

Post by Manfred Berger on June 21, 2013

I've been thinking a bit about example 1. If I were to use Gram-Schmidt to expand this into a full basis of R3 and then take the image of v with respect to my new basis, the projection would be the first 2 components, correct?

Orthogonal Complements, Part II

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A 2:16
    • Four Spaces Associated With A (If A is m x n)
    • Theorem
    • Example 1
    • Null Space and Column Space
  • Projections and Applications 16:50
    • Projections and Applications
    • Projection Illustration
    • Example 1
    • Projection Illustration Review

Transcription: Orthogonal Complements, Part II

Welcome back to Educator.com and welcome back to linear algebra.0000

In our last lesson, we introduced the notion of an orthogonal complement, and this time we are going to continue talking about orthogonal complements.0004

We are going to be talking about these 4 fundamental subspaces that are actually associated with any random matrix, and we are going to talk about the relationships that exist between these spaces.0013

Then we are going to talk about something called a projection. The projection is a profoundly, profoundly important concept.0023

It shows up in almost every area of physics and engineering and mathematics in ways that you would not believe.0033

As it turns out, those of you who are engineers and physicists... one of the tools in your tool box that is going to be almost the primary tool for many years to come is going to be the idea of something called Fourier series.0042

If you have not been introduced to it yet, you will more than likely be introduced to it sometime this year... and Fourier series actually is an application of projection.0052

Essentially what you are doing is you are taking a function and you are projecting it -- how shall I say this -- within an infinite-dimensional vector space, you are projecting it onto the individual axes, which are the trigonometric functions.0061

Let us say, for example, I have a function 5x. I can actually project that function onto the cos(x) axis, onto the sin(x) axis, onto the cos(2x) axis, onto the sin(2x) axis, so on and so forth.0080

I can actually represent that function in terms of sine and cosine functions.0096

Now, you will learn it algebraically, but really what you are doing is you are actually doing a projection.0100

You are projecting a function onto other functions, and it is really quite extraordinary.0105
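
Just to make that idea concrete with numbers, here is a small Python sketch (my own illustration, not something worked in the lecture): the component of f(x) = 5x along the sin(kx) axis is an integral inner product divided by the squared length of that axis, the function-space analogue of the dot products used later in this lesson.

```python
import numpy as np

# Hypothetical illustration: project f(x) = 5x on [-pi, pi] onto the "axis"
# sin(kx). The coefficient along each axis is an inner product of functions
# (an integral) divided by the squared length of the axis.
x = np.linspace(-np.pi, np.pi, 200001)
f = 5 * x

for k in range(1, 4):
    axis = np.sin(k * x)
    coeff = np.trapz(f * axis, x) / np.trapz(axis * axis, x)
    print(f"component of 5x along sin({k}x) is about {coeff:.4f}")  # 10, -5, 3.33
```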

When you see it that way, the entire theory of Fourier series becomes open to you, and more than that, the entire theory of orthogonal polynomials becomes open to you.0111

That is going to connect to our topic that we discuss in the next lesson, which is Eigenvectors and Eigenvalues.0120

So, linear algebra really brings together all areas of mathematics. Very, very central. Okay. Let us get started.0126

Let us see. So, let us go ahead and talk about our four fundamental vector spaces associated with a matrix A.0135

So, if A is an m by n matrix, then there are 4 spaces associated with A.0143

You actually know of all of these spaces. We have talked about them individually, now we are going to bring them together.0173

One is the null space of A, which, if you remember, is the solution space for the equation Ax = 0.0178

Let me put that in parentheses here. It is the solution space, the set of all vectors x, such that the matrix A × x = 0. Just a homogeneous system.0189

Two, we have something called the row space of A. Well, if you remember, for the row space of A, if I take the rows of A, just some random m by n matrix, they actually form a series of vectors... m vectors in RN.0200

The space that is spanned by those vectors, that is the row space.0221

Then we have the null space of A transpose. So, if I take A and just flip it along its main diagonal, and then I solve this equation for the set of vectors x such that A transpose × x = 0, I get its null space.0229

It is also a subspace, and the row space is a subspace. All of these are subspaces.0246

Oops -- it would be nice if I could actually count properly... 1, 2, 3.0251

Now, our fourth space is going to be, well, you can imagine... it is going to be the column space.0257

Again, for the column space, if I take the individual columns of the matrix, they are vectors in RM, and they form a space... the span of those vectors forms a space.0264

Now, they do not all have to be linearly independent. I can take those vectors, remember from an old discussion and I can find a basis... so the basis might be fewer vectors but they still span the same space.0278

Okay. So, let us start with a theorem here, which is an incredibly beautiful theorem.0290

As you can figure it out, linear algebra is full of unbelievably beautiful theorems... beautiful and very, very practical.0299

If A is m by n, then, this is kind of extraordinary, the null space of A is the orthogonal complement of the row space of A.0306

That is kind of amazing. Think about what that means for a second.0337

If I just have this rectangular array of numbers, 5 by 6, and I just throw some numbers in there... when I solve the equation Ax = 0, the homogeneous system associated with that matrix, I am going to get a subspace, the null space.0344

As it turns out, if I take that matrix and I turn it into reduced row echelon form, the non-zero rows form a basis for the row space. That is how we found the basis for the row space.0359

Those two subspaces, they are orthogonal to each other. That is extraordinary. There is no reason to believe why that should be the case, and yet there it is.0370

Part B, the complement of that: the null space of A transpose is the orthogonal complement of the column space.0380

It is the orthogonal complement of the column space of A.0399

So, that is the relationship. For a given matrix A, its null space and its row space are orthogonal complements.0407

If I take the transpose of A, the null space of the transpose and the column space of the original A, which ends up being the row space of A transpose because I have transposed it... those two are orthogonal complements.0416
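
As a rough check of that theorem in software (the lecture relies on "math software" throughout; SymPy is my assumption here), pick any small matrix and compute the dot products:

```python
from sympy import Matrix

# A small matrix chosen only for illustration; the tool (SymPy) is my choice.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

# Part (a): every null space vector is orthogonal to every row space vector.
for n in A.nullspace():
    print([r.dot(n) for r in A.rowspace()])       # [0] for each n

# Part (b): every null space vector of A^T is orthogonal to every column of A.
for n in A.T.nullspace():
    print([c.dot(n) for c in A.columnspace()])    # [0] for each n
```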

Let us do an example and see if this... if we can make sense of some of it just by seeing some numbers here.0433

So, let us go... let us let A equal, it is going to be a big matrix here, and again we do not worry about big matrices because we have our math software.0441

1, -2, 1, 0, 2... now I am not going to go through all of the steps.0450

You know, this reduced row echelon, solving homogeneous systems, all of this stuff, I am going to give you the final results.0457

At this point, I would like to think that you are reasonably comfortable either with the computational procedure manually, or you are using mathematical software yourself. I just do this in math software, myself.0463

1, -1, 4, 1, 3, -1, 3, 2, 1, -1, 2, -3, 5, 1, 5... Okay.0473

So, this is our matrix a. Our task is to find the four fundamental spaces associated with this matrix and confirm the theorem.0488

So, this random rectangular array of numbers, something really, really amazing emerges from this. There are spaces that are deeply, deeply interconnected.0516

So, let us see what happens here. Okay. When we take A, the first thing we want to do is find the row space of A.0527

Let us go ahead and do that. So, row space of A.0536

When I take A, and I reduce it to reduced row echelon form, I get the following matrix, 1, 0, 7, 2, 4, 0, 1, 3, 1, 1... and I get 0's in the other 2 rows.0540

Well, what that means, basically what that tells me is that my row space, if I take these -- let me go red -- if I take that vector and that vector, they form a basis for the row space.0560

So, my row space... basis for the row space and I am going to write these as column vectors, is 1, 0, 7, 2, 4... and 0, 1, 3, 1, 1...0577

So, the dimension is 2. There you go. Also, I have my row space is 2-dimensional. Okay.0600
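
For anyone following along in software, a minimal SymPy version of that reduction might look like this (the particular tool is my assumption; the lecture only says math software):

```python
from sympy import Matrix

# The lecture's matrix, reduced to reduced row echelon form.
A = Matrix([[ 1, -2, 1, 0,  2],
            [ 1, -1, 4, 1,  3],
            [-1,  3, 2, 1, -1],
            [ 2, -3, 5, 1,  5]])

R, pivots = A.rref()
print(R)        # non-zero rows (1, 0, 7, 2, 4) and (0, 1, 3, 1, 1)
print(pivots)   # (0, 1): two pivots, so the row space has dimension 2
```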

Now, I need to find the null space. Well, I can almost guess this. The theorem says that the row space and the null space are going to be orthogonal complements.0610

Well, I know that the direct sum of some subspace plus its orthogonal complement gives me the actual space itself.0620

In this case, I am talking about 1, 2, 3, 4, 5... I am talking about R5.0630

Well, if I already have a dimension 2, I know that the dimension of my orthogonal complement is going to be 3 and so I am hoping that when I actually do the calculation I end up with 3 vectors.0637

Let us see what happens. The null space, well, the null space is the set of all vectors x such that Ax = 0.0648

I solve a homogeneous system and I get my basis... I am not going to actually show this one.0662

So, my basis for null of a... I tend to symbolize it like that... is equal... set notation... I have the vector (-7, -3, 1, 0, 0).0669

I end up with (-2, -1, 0, 1, 0) and the presumption here is that you are comfortable doing this, the reduced row echelon, solving the homogeneous system, putting it into a form that you can actually read off your vectors.0686

(-4, -1, 0, 0, 1). Well, there you go. We have a dimension equals 3.0703

So, the dimension 2 + the dimension 3 = 5. The row space was a series of vectors in R5, so our dimensions match.0711

Now the question is, I need to basically check that this is the case... I need to check that each of these vectors is orthogonal to the 2 vectors that I found.0721

As it turns out, they are orthogonal. When you actually take the dot product of each of these with each of the other ones, you are going to end up with 0.0731

So, this confirms our theorem. The first part of the theorem. The row space of a and the null space of a.0739
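
The null space computation and the dot-product check just described, again sketched in SymPy:

```python
from sympy import Matrix

# Same matrix; null space basis and the orthogonality check against the
# two row-space basis vectors found above.
A = Matrix([[ 1, -2, 1, 0,  2],
            [ 1, -1, 4, 1,  3],
            [-1,  3, 2, 1, -1],
            [ 2, -3, 5, 1,  5]])

row_basis = [Matrix([1, 0, 7, 2, 4]), Matrix([0, 1, 3, 1, 1])]
for n in A.nullspace():        # (-7,-3,1,0,0), (-2,-1,0,1,0), (-4,-1,0,0,1)
    print(n.T, [r.dot(n) for r in row_basis])   # dot products are all 0
```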

Okay. So, now let us take our column space. So, we are going to take a transpose.0748

Now, let me actually write out a transpose, I would like you to see it. It is (1,1,-1,2)... this is going to be R4, okay? -2.0760

Now, for the column space of A, what I have done here is I have actually transposed A. I have turned the rows into columns and the columns into rows.0776

So, now the columns of A are written as rows. That is why I am doing it this way. Okay?0783

-2,-1,3,-3,1,4,2,5,0,1,1,1... and I have 2,3,-1, and 5. Okay.0790

So, I have 5 vectors in R4. So, here we are talking about R4.0814

Alright. Now, when I subject this to reduced row echelon form, I am going to end up with some non-zero rows.0823

That is going to be a basis for my column space.0833

I get 1, 0, -2, 1, 0, 1, 1, 1, and 0's everywhere else, 0, 0, 0, 0... 0, 0, 0, 0.0838

These first 2 actually form a basis for my column space.0850

So, let me write that down... the basis for my column space equals the set of vectors (1, 0, -2, 1)... I think it is always best to write them in vertical form... and (0, 1, 1, 1).0856

Not good -- we would like them to be clear... 1, 1, 1... there you go. That forms a basis for our column space.0869

Well, the dimension is 2. You know your space is R4... 4 - 2 is 2.0890

We are going to expect that our homogeneous system, our null space of A transpose, is going to be 2-dimensional. We should have 2 vectors.0897

Well, let us confirm. As it turns out, when we solve A transpose × x = the 0 vector... the basis for null -- love this stuff, it is great -- of A transpose equals... again, I am just going to give the final answer... 2, -1, 1, 0.0905

That is one vector, and the second vector is (-1, -1, 0, 1)... these are basis vectors for our subspace.0938

Sure enough, we end up with a dimension 2. So, our dimensions match. Now we just need to check that any vector in here and any vector in what we just got, the column space, are orthogonal.0945

It turns out that they are. If you do the dot product of those, you are going to end up with 0.0956

So, sure enough, once again, the row space of A, okay, is going to be the orthogonal complement of the null space of A.0961

The column space of A is the orthogonal complement of the null space of A transpose.0982

Simply by virtue of a rectangular array of numbers, you have this relationship where these spaces are deeply interconnected.0995

It is really rather extraordinary... and they add up to the actual dimension of the space. Okay.1001
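
And the second half of the example, sketched the same way: reduce A transpose for a basis of the column space of A, solve the homogeneous system for A transpose, and check the dot products:

```python
from sympy import Matrix

# A transpose, its reduced row echelon form, and the null space of A transpose
# (tool choice again mine; the lecture only says math software).
A = Matrix([[ 1, -2, 1, 0,  2],
            [ 1, -1, 4, 1,  3],
            [-1,  3, 2, 1, -1],
            [ 2, -3, 5, 1,  5]])

R, _ = A.T.rref()
print(R)                       # non-zero rows (1, 0, -2, 1) and (0, 1, 1, 1)

col_basis = [Matrix([1, 0, -2, 1]), Matrix([0, 1, 1, 1])]
for n in A.T.nullspace():      # (2, -1, 1, 0) and (-1, -1, 0, 1)
    print([c.dot(n) for c in col_basis])   # [0, 0]: orthogonal complements

# dim(row) + dim(null) = 2 + 3 = 5 = n, and dim(col) + dim(null of A^T) = 2 + 2 = 4 = m
```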

So, let us talk about projections and some applications. So, projections are very, very, very important.1011

Recall, if you will, that if W is a subspace -- I am just going to write ss for subspace -- of RN, then W direct sum W perp is equal to RN.1019

We sort of have been hammering that point. That is some subspace and some orthogonal complement, the dimensions add up to n.1042

When you add them you actually get this space, RN. Okay.1048

That was one of the proofs that we discussed... one of the theorems that we had in the last lesson.1053

Now, we did not go through a proof of that, and I certainly urge you, with each of these theorems, to at least look at the proofs, because a lot of the proofs are constructive in nature and they will give you a clue as to why things are the way that they are.1059

So, in the proof of the theorem, it is shown that if W, if that subspace, has an orthonormal basis -- remember, an orthonormal basis is one where all of the vectors are of length 1, and they are all mutually orthogonal.1070

We had that Gram-Schmidt orthonormalization process, where we take a basis and we can actually turn it into an orthonormal basis by first making it orthogonal and then dividing by the norms of each of those vectors.1104
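
As a quick reminder of what that process does, here is a minimal Gram-Schmidt sketch in Python (my own illustration, not part of this lecture):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthonormal basis."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w -= (w @ u) * u              # strip off the component along u
        basis.append(w / np.linalg.norm(w))
    return basis

# e.g. an ordinary (non-orthogonal) basis of a plane in R3, made orthonormal
for b in gram_schmidt([[1, 1, 0], [1, 0, 1]]):
    print(b)
```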

So, if it has an orthonormal basis, let us say w1, w2, all the way to wk, we do not know how many dimensions it is.1116

If v is any vector in RN, then there exist unique vectors w from the subspace W and u from the subspace W perp, such that the vector v can be written as w + u.1132

Well, we know this already. Essentially what we are saying is that if we take any vector in RN, I can represent it uniquely as some vector from the subspace w + some vector in its orthogonal complement.1171

Okay. Here is the interesting part. Also... we will write it as an also... this particular w, let me actually circle it in blue, there is a way to find it.1185

Here is how we find it... w = (the vector v · w1) × w1 + (the vector v · w2) × w2 + ... + (the vector v · wk) × wk, with as many terms as there are vectors in the basis.1196

This is called the projection. This is called the projection of v onto the vector space w.1230

It is symbolized as proj... as a subscript we write the w... that is the subspace w... and this is the vector v. Okay.1250
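
In code, that formula is nothing more than a sum of dot products times basis vectors; here is a minimal sketch, assuming the basis handed in really is orthonormal:

```python
import numpy as np

def proj(v, ortho_basis):
    """Projection of v onto the span of an ORTHONORMAL basis (a sketch)."""
    v = np.asarray(v, dtype=float)
    total = np.zeros_like(v)
    for w in ortho_basis:
        w = np.asarray(w, dtype=float)
        total += (v @ w) * w              # (v . w_i) w_i, summed over the basis
    return total

# e.g. project (1, 2, 3) onto the xy-plane, spanned by the orthonormal e1, e2
print(proj([1, 2, 3], [[1, 0, 0], [0, 1, 0]]))   # [1. 2. 0.]
```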

We definitely need to investigate what it is that this looks like. When we do a projection -- let me draw this out so that you see what this looks like.1262

So, we are going to be working in R3. So, let me draw a plane here.1270

Let me draw a vector... this is going to be our w vector, then this is going to be our u vector.1287

Let me make v... let me make it blue. So, v, once again, let us remind ourselves... v is any vector in this particular case, it will just be vector RN.1300

You know what, since we are dealing in 3, let me be specific. R3.1315

w is our vector in the subspace, which is a 2-dimensional subspace in this case, so this plane here... that is the subspace W, and u is in the subspace W perp.1323

So, here is what we are doing. Well, we said that v can be written as something from W plus something from W perp, because we know that RN is equal to the direct sum of W and W perp.1347

So some vector from w, some vector from here, so that is a vector in w, that is a vector in w perp.1368

Well, when we add them together, we get v. This is just standard vector addition.1374

Here is what is really interesting. If we have a basis for this subspace, if we actually project v, project means shine a light on v so that you have a shadow, a shadow of v on this subspace... that is what the projection means.1380

That is where you get that v · w1 × w1 + v · w2... when you do that, what we just wrote down for the projection, you actually end up finding w.1397

Okay. Now, we had also written since v is equal to w + u... as it turns out if I wanted to find u, well, just move that over.1412

Equals v - w. That is it. This is really, really great. So, let us do a problem and I think all of this will start to make sense.1425

So, let us go... example here... we will let w be a subspace of R3 with an orthonormal basis... I often just write ortho for orthonormal basis.1437

Again, orthonormal bases, they tend to have fractions in them because they are of length 1.1459

We do not always want to use (0,0,1), we want something to be reasonably exciting.1465

Let us go... oops -- these lines are making me crazy -- 2/sqrt(14), 3/sqrt(14), 1/sqrt(14)... one vector.1473

The other vector is 1/sqrt(2),0,-1/sqrt(2)... so these are orthonormal.1493

It is an orthonormal basis for a subspace of R3; there are 2 vectors in it, so our subspace W has dimension 2, which means that our orthogonal complement W perp has dimension 1. 2 + 1 has to equal 3.1501

Okay. We will also let v, the vector in R3 equal to some random 4, 2, 7... this is what we want to find.1516

We want to find the projection of v onto w, and the vector... and we want to find -- this has to stop, why does this keep happening?1529

So, we want to find the projection of v into this subspace, and we want to find the vector u that is orthogonal to every vector in w.1549

In other words, we want to find w perp. Okay. So, how can we do that. Switch the page here...1572

Well we know that from our formula, from our theorem, that our w is equal to the projection onto w of v, which is what we wanted.1584

That is going to equal v · w1, one of the vectors in the basis × one of the vectors in the basis... plus v · w2, the second vector in the basis × that vector in the basis.1597

Again, I am going to let you work these out. So take v · w1, you are going to get a scalar, multiply it by w1, it is going to give you a vector.1617

You are going to add to that v · w2, which is a scalar × w2, which is a vector. When you add two vectors together you are going to get a vector.1626

So, you will end up with something like this. 21/sqrt(14) × 2/sqrt(14), 3/sqrt(14), 1/sqrt(14), + -3/sqrt(2) ×, well 1/sqrt(2), 0, -1/sqrt(2). That is what this is.1634

Then when you put those together, you are going to end up with 21/14, 63/14, 42/14, if I have done my arithmetic correctly.1666

So, the projection of v onto the subspace w is this vector right here.1684

That is what that means. If I take v and if I take the shadow of v on that subspace, I am going to get a vector.1692

It is nice. Okay. Now, well we know that v is equal to w, this is w by the way. That is the projection.1701

plus u, well we have v, we just found w, so now we want to find u.1714

Well, u is just equal to v - w, when I take v - w, that is equal to, well, (4,2,7) - what I just found... 21/14, 63/14, 42/14.1720

I am going to get 35/14, - 35/14, and 56/14. So, my vector v in R3 is a sum of that vector + that vector.1746
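
Here is a quick NumPy check of that arithmetic, plugging in v and the two basis vectors exactly as given above:

```python
import numpy as np

# Plugging the lecture's numbers into the projection formula, exactly as given.
w1 = np.array([2.0, 3.0, 1.0]) / np.sqrt(14)
w2 = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)
v  = np.array([4.0, 2.0, 7.0])

w = (v @ w1) * w1 + (v @ w2) * w2     # the projection formula
u = v - w
print(w * 14)   # approximately [21. 63. 42.]  ->  w = (21/14, 63/14, 42/14)
print(u * 14)   # approximately [35. -35. 56.] ->  u = (35/14, -35/14, 56/14)
```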

It is pretty extraordinary, yeah? Again, this idea of a projection. All you are doing is taking a random vector and, onto another space, you are just shining the light; you are just taking the shadow.1772

The shadow means you are taking the perpendicular... you are dropping a perpendicular from the end of that vector onto there, and this vector that you get, whatever it is, that is the projection.1787

That is all the projection means. Perpendicular.1800

As we know, the perpendicular from a point down to something else is the shortest distance from that object to that something else.1805

So, let us draw the picture one more time, so that we are clear about what it is we are doing.1815

We had w, we had u, I will put v here, that is v, this is -- oops, we wanted this in red.1825

This is u, this is w, vector v is equal to w + u, u is equal to the vector v - the vector w. That is what this picture says.1841

So, the distance from v, from the vector v to the subspace w... this is a capital W... is, well, the distance from v to the subspace w, the perpendicular distance, well it is equal to the norm of u.1863

Well, the norm of u equals the norm of the projection of v onto the subspace w, which is equal to the norm of v - ... no, I am sorry, that is not correct, getting a little ahead of myself here.1898

Vector u, the norm of u is the norm of this thing, which is v - the projection of v onto the subspace w.1923

In 3-space, it makes sense because you are used to seeing 3-space and distance.1938

Well, the distance from this point to this point is just the distance of the vector u, which you can calculate here. That is just the norm, and you know you found w by that formula that we just used which is the projection of v onto the subspace w.1944

This is the subspace w, so we project it on here, we get w, here is what may not make sense. What if you are dealing with a 14 dimensional space?1960

Let us say that your vector is in R14, and you project it onto a subspace which is, say, 3-dimensional.1970

How do you talk about a distance in that case? Well, again, distance is just an algebraic property.1977

So, in some sense, you have this distance of a vector in a 14-dimensional space to its 3-dimensional subspace.1982

There is a distance "defined," and that distance is precisely the norm of the difference between that vector in 14-dimensional space and its projection onto the 3-dimensional subspace.1994
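
To make that remark concrete, here is a small numerical sketch (my own choice of subspace, just to keep the answer easy to read):

```python
import numpy as np

# W is taken, for simplicity, to be the span of the first three standard basis
# vectors of R^14, and v is a random vector in R^14.
rng = np.random.default_rng(0)
v = rng.normal(size=14)

basis = np.eye(14)[:3]                        # an orthonormal basis of W
w = sum((v @ b) * b for b in basis)           # projection of v onto W
u = v - w

print(np.linalg.norm(u))                      # the "distance" from v to W
print(all(abs(u @ b) < 1e-12 for b in basis)) # u is orthogonal to W: True
```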

Again, this is the power of mathematics. We are not limited by reality. We are not limited by our senses. We are, in fact, not limited at all as long as the math supports it. As long as the algebra is correct. We are not limited by time or space.2007

Okay. Thank you very much for joining us here at Educator.com to finish up our discussion of orthogonal complements. We will see you next time.2023
