Raffi Hovasapian

Change of Basis & Transition Matrices

Table of Contents

Section 1: Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
Matrix Addition
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
Properties of Addition
1:11
Properties of Addition: A
1:12
Properties of Addition: B
2:30
Properties of Addition: C
2:57
Properties of Addition: D
4:20
Properties of Addition
5:22
Properties of Addition
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
Properties of Matrix Addition
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part 1

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
Section 2: Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
Section 3: Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
Vector Addition and Scalar Multiplication
19:33
Vector Addition
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
Vector Addition
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformation

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Plane
0:36
Example 1
0:37
Example 2
7:07
Lines in R3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in R3
25:19
Example 5
31:12
Example 6
34:18
Section 4: Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
Section 5: Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
Section 6: Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31

Lecture Comments (6)

1 answer

Last reply by: Professor Hovasapian
Wed Jan 27, 2016 4:15 PM

Post by Hen McGibbons on January 16, 2016

Where did you learn how to teach? I am a new tutor and I am trying to help my students as much as possible. Do you follow the principles of any books or mentors? Or did you develop your own teaching style?

1 answer

Last reply by: Professor Hovasapian
Sun Dec 7, 2014 6:45 PM

Post by Nkosi Melville on December 4, 2014

for example if i am trying to find a transition matrix corresponding to a change of basis from the standard
basis {e1, e2} to the ordered basis {u1, u2}

Would the vectors u1 and u2 be the matrix i turn into reduced row echelon form?

1 answer

Last reply by: Professor Hovasapian
Mon Apr 15, 2013 11:36 PM

Post by Matt C on April 14, 2013

At the end, when you confirm the problem, the identity matrix is correct, but I think you wrote the identity matrix wrong at the 31:50 mark.

At the 31:50 mark, doesn't the right side have to be equal to the identity matrix ([1,0,0], [0,1,0], [0,0,1]) you have ([1,0,0], [0,1,0], [1, 0,0]). I wrote them as columns.

Change of Basis & Transition Matrices

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • Change of Basis & Transition Matrices 0:56
    • Change of Basis & Transition Matrices 0:57
    • Example 1 10:44
    • Example 2 20:44
    • Theorem 23:37
    • Example 3: Part A 26:21
    • Example 3: Part B 32:05

Transcription: Change of Basis & Transition Matrices

Welcome back to Educator.com and welcome back to linear algebra.0000

In the previous lesson, we talked about the coordinates of a particular vector and we realized that if we had two different bases that the coordinate vector with respect to each of those bases is going to be different.0004

So, as it turns out, it is not altogether necessary that it has to be this basis or that one.0018

One basis is as good as another. We are going to continue that discussion today, deal with coordinates some more and we are going to talk about something called a transition matrix.0022

Where, if we are given the coordinates... if we are given both bases and if we are given the coordinates with respect to one basis, can we actually transform that and is there a matrix that actually does that.0032

The answer is yes, there is a matrix. It is called the transition matrix from one basis to another, and it ends up being a profoundly important matrix.0045

So, let us just dive right in.0053

The first thing I want to talk about it just two brief properties of the coordinates that we mentioned.0058

Their properties are exactly the same as that of vectors, so, it is going to be nothing new.0066

It is just the notation is, of course, slightly different because we have that little subscript s and t underneath the coordinate vector.0070

So, let us just write it out and start with that.0076

So, we have v + w, so if I add two vectors and take the coordinate with respect to a certain basis, well, I can treat that, I can just sort of separate them.0080

That is just going to be the coordinate vector with respect to s for v, + the coordinate vector with respect to s for w.0098

Again, it is something that you already know. The sum of two vectors is nothing new here.0106

If I have a vector v and I multiply by a constant, and if I have the coordinate vector with respect to a certain basis s, well, I can just go ahead and pull that constant out: I find the coordinate vector for that vector first, and then multiply by the constant.0112

So, just to have these properties to describe what it is that we are going to do in a minute.0130
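
These two linearity properties can be checked with a quick computation. Below is a minimal Python sketch (not part of the lecture), using a made-up basis S = {(2,0), (1,1)} of R² for which the coordinates have a simple closed form:

```python
# Coordinates with respect to the hypothetical basis S = {(2,0), (1,1)} of R^2.
# Solving a*(2,0) + b*(1,1) = (x,y) gives b = y and a = (x - y)/2.
def coords(x, y):
    return ((x - y) / 2, y)

v, w = (4, 2), (3, 1)

# Additivity: [v + w]_S == [v]_S + [w]_S
lhs = coords(v[0] + w[0], v[1] + w[1])
rhs = tuple(a + b for a, b in zip(coords(*v), coords(*w)))
assert lhs == rhs

# Homogeneity: [3v]_S == 3 * [v]_S
assert coords(3 * v[0], 3 * v[1]) == tuple(3 * c for c in coords(*v))
```

The same check works for any basis; only the closed form inside `coords` would change.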

Okay. I am actually going to go through something that I would not normally go through.0137

It is the derivation of where this thing called a transition matrix comes from, simply because I want you to see it.0141

It is not going to be particularly notationally intensive, but there are going to be indices, you know, there are some numbers and letters, things floating around.0148

So, it is really, really important to pay attention to where things are and what each number is doing.0155

Okay. Let us say we have two bases for a vector space that we call v, of course.0161

The first basis is going to be s, it is going to consist of vectors v1, v2, and so on all the way to vN.0180

And... we have t, which is another basis.0195

We will call these w... w1, w2, all the way to wN, and again they are bases for the same vector space, so they have the same number of vectors in them. That is the dimension of the vector space.0200

Now, choose some v in v, some random in the vector space.0215

Well, we can write this particular v with respect to this basis, let us choose this basis, t.0228

So, we can write... we can say that v = c1 × w1, c2 × w2, we have done this a thousand times... + cN × wN.0235

Okay, just a linear combination of the vectors in this basis t. Well, once I actually solve for these constants, c1, c2 through cN... what I end up with is the coordinate vector v with respect to the basis t.0252

That is what this t down here means, which is c1, c2, all the way to cN. Okay.0268

Now. Here is where it gets kind of interesting. So, let us watch very, very carefully. Let me put a little arrow right here to show what we are doing.0279

If I take v, and if I want to find the coordinate vector with respect to the basis s, I am just going to take this thing that I wrote, which is the ... so the left side, I put it in this notation... sub s.0290

Well, the left side is equal to the right side. I just happen to have written the right side with respect to this basis.0310

I am just going to write... I am basically just copying it.0317

c1w1 + c2w2 + ... + cNwN, with respect to s. All I did was take this thing and subject it to this notation. Everything should be okay.0324

Well, now I am going to use these properties. So, that is equal to c1 × w1, with respect to s + c2 × w2, with respect to s + so on + cN × wN, with respect to s.0340

Okay. Take a look at what we have done. A random vector v with respect to a basis t, and then, we want to find the coordinate vectors for... with respect to the basis s.0368

So, I have just taken this definition, and subjected it to the notation for the coordinate vector for s.0384

Then I use these properties up here, which you might call the linearity properties of these coordinate vectors, and just rewrite it.0390

Well, let us just see what this actually says... c1 × w1 with respect to s, w2 with respect to s... wN with respect to s.0400

Let us move forward, that is just this.0409

It says that the coordinate vector of v with respect to s is equal to the matrix whose columns are w1 with respect to s, w2 with respect to s, and so on and so forth.0414

I just set up, well let me actually finish up writing it and then I will tell you what it is we are doing here.0434

wN with respect to s, × c1, c2, all the way to cN.0445

So, the equation that I wrote on the previous slide is just this equation in matrix form.0457

What I am doing is I am taking the columns, I am taking each w1 in the basis t, I am expressing that vector with respect to the basis s, and whatever I get I am putting in as columns of my matrix.0461

Well, what this ends up being... this matrix that I get by doing that is precisely... well let me rewrite it.0483

So, we have the equation in front of us... p, from t to s... notice the arrow is going from right to left, not as usual from left to right... × this thing, which is just the coordinates of v with respect to t.0496

Okay, so what we have done, this is our ultimate goal. It is okay if you do not completely understand what it is that we did.0520

We will go through the procedure for how to find this matrix. This matrix right here, which is called the transition matrix from t to s.0525

There is a reason why I wrote it this way with the arrow going backwards from right to left. I will tell you what it is in a second.0537

It says that if I have a vector, and if I can find the coordinate vector with respect to a basis t, but I want to convert that to a coordinate vector with respect to the other basis that I have, s, I can multiply the coordinate vector with the one basis on the left by some matrix.0543

The transition matrix that takes it from t to s. This is why it is written this way. Notice this vector space -- I am sorry -- this coordinate vector with respect to s is on the left.0566

Here, the notation for the transition matrix has the s on the left, has the t on the right, because you are multiplying it by the coordinate vector for the basis t on the right.0576

It is just a way of... again, it is a notational device to remind us that we are going from the t basis to the s basis.0588

It is written this way simply because of how we wrote the equation. We wrote the coordinate vector with respect to s on the left side of the equality sign. That is why it is written this way.0597

Now, here is how we did it. We have the basis t which consists of vector w1, w2, and w3 and so on.0609

We take each of those vectors, we express them as coordinate vectors with respect to s, and we do that by solving this system, just like we did for the previous lesson.0618

Then, the coordinate vectors that we get, we just put them in columns and the final matrix that we get when we just put in all of the columns, that is our transition matrix.0630
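
The procedure just described — solve for the s-coordinates of each basis vector in t, then record the solutions as columns — can be sketched in Python. This is a sketch rather than the lecture's notation: Gaussian elimination is written out by hand so the snippet is self-contained, and exact fractions keep the arithmetic clean.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A assumed invertible)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # find a nonzero pivot
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:  # clear the column above and below
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def transition_matrix(S, T):
    """Return the matrix P with [v]_S = P [v]_T.
    Its columns are the T-basis vectors written in S-coordinates.
    S and T are lists of basis vectors of the same length."""
    n = len(S)
    S_cols = [[S[j][i] for j in range(n)] for i in range(n)]  # vectors as columns
    cols = [solve(S_cols, w) for w in T]                      # one solve per T vector
    return [[cols[j][i] for j in range(n)] for i in range(n)] # columns -> matrix
```

Feeding in the two bases from the example below reproduces the transition matrix computed in the lecture.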

Okay. Let us just do an example and I think it will all make sense.0645

Let us move forward here. This is going to be a bit of a long example notationally, but it should be reasonably straight forward.0650

Okay, now, let s = the set (2,0,1), (1,2,0), (1,1,1).0659

Okay. That is one basis for R3. Three vectors, three entries, it is R3.0680

T, let it be another set, let it be (6,3,3), (4,-1,3), (5,5,2).0695

So, we have two different bases. Okay, what we want to do is, (a), compute the transition matrix.0714

What matrix will allow us to convert from t basis to s basis? The transition matrix from t to s, that is the first thing we want to do.0725

The second thing we want to do is we want to verify the equation that we just wrote.0735

That the coordinate with respect to basis s is equal to this transition matrix, multiplied by the coordinate for v with respect to t.0740

Okay. So, let us see what we have got here. Alright, so let us do the first thing first.0755

Let us go ahead and compute this transition matrix. So, we said that in order to compute the transition matrix, we have to take... so we are going from t to s.0768

That means we take the vectors in the basis t, (6,3,3) (4,-1,3), (5,5,2), and we express each of these vectors with respect to the basis s.0779

Again, these are just vectors in R3. They are random vectors, but they do form a basis and that forms a basis.0791

So, we want to change, we want to be able to write these vectors as a linear combination, each of these as a linear combination of these 3. That is what we are doing.0796

So, let us write that down. So, we have... we will take this one first, right?0808

Let us actually label these. No, that is okay, we do not need to label them.0815

So, we want (6,3,3) to equal some constant a1 × (2,0,1) + a2 × (1,2,0).0824

Actually, let us not, let us choose a different letter here. Let us choose b1, and we will make this c1 × (1,1,1).0845

So, this is one of the things that we want. We can solve this system. We can just solve this column, this column, this... let me write it on the other side.0857

We are accustomed to seeing it on the right, let us go ahead and be consistent... (6,3,3). That is one thing.0865

Okay. The other thing we want to do, is we want to express this one. The second vector in the basis t.0876

Again, t to s, so we want to take the vectors in t, the second vector expressed as a linear combination of these two.0882

So, this time we have a different set of constants, we will call them a2 × (2,0,1) + b2 × (1,2,0) + c2 × (1,1,1) = (4,-1,3).0890

Now we want to express the third vector in t as a linear combination of these.0908

So, we will take a3 ×, well, (2,0,1) + b3 × (1,2,0) + c3 × (1,1,1).0914

That is going to equal (5,5,2). So, we solve this system, we solve this system, we solve this system.0932

Well, this system, for each of these, the left hand side, these columns are the same, (2,0,1), (1,2,0), (1,1,1).0941

So, we can take all three of these and do them simultaneously. Here is what it looks like.0948

All we are doing is taking (2,0,1), (1,2,0), and (1,1,1) and then we are augmenting the (6,3,3).0959

(2,0,1), (1,2,0), (1,1,1), augmenting it with (4,-1,3).0965

And this one... (2,0,1), (1,2,0), (1,1,1)... augmenting it with (5,5,2).0970

Well, we can do all of the augmentations simultaneously. We can just add three columns and then do our matrix in reduced row echelon form. Here is what it looks like.0974

So, we get (2,0,1), (1,2,0), (1,1,1), and then we have our augmented... we have (6,3,3), (4,-1,3), and we have (5,5,2). Okay.0985

When we subject this to reduced row echelon form, let me go horizontally actually, we end up with the following.1007

We end up with (1,0,0), (0,1,0), (0,0,1), and we end up with (2,1,1), (2,-1,1), (1,2,1).1016

So, this, right here, in the red -- oh, I did not get red, oops -- right here, that is our transition matrix.1035

It is the columns of the vectors in t, expressed as... these are the coordinates of those vectors with respect to the s basis.1046

That is what we did, just like the previous lesson, so our transition matrix from t to s = (2,1,1), (2,-1,1), (1,2,1).1058

There you go. That is the first part. Okay.1074
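
A quick sanity check on this result, as a Python sketch: multiplying the matrix whose columns are the s-basis vectors by each column of the transition matrix should rebuild the corresponding vector of t.

```python
S = [[2, 1, 1],           # columns are the s-basis vectors (2,0,1), (1,2,0), (1,1,1)
     [0, 2, 1],
     [1, 0, 1]]
P = [[2, 2, 1],           # transition matrix from t to s, columns (2,1,1), (2,-1,1), (1,2,1)
     [1, -1, 2],
     [1, 1, 1]]
T = [[6, 4, 5],           # columns are the t-basis vectors (6,3,3), (4,-1,3), (5,5,2)
     [3, -1, 5],
     [3, 3, 2]]

# S @ P should equal T: each column of P holds the s-coordinates of a t vector.
SP = [[sum(S[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert SP == T
```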

Now, we want to confirm that that equation is actually true. In other words, we want to confirm this equation.1077

That the coordinate vector of some random vector v with respect to s is equal to this transition matrix that we just found × the coordinate vector with respect to the basis t.1095

Okay. Well, let us let v... well, let us choose a random vector. We will let v = (4,-9,5).1105

Okay, so now the first thing that we want to do is... again we are verifying so we are doing a left hand side, we are going to do a right-hand side.1120

We are verifying this. We need to check to see if this is actually equal. So, we need to do this side, and we need to do this side.1128

Okay. First of all, let us find the coordinate of this vector with respect to t, that is this right here. Okay.1136

Well, we need to set up the following. c1w1 + c2w2 + c3w3 = our (4,-9,5).1148

Well, let us take our columns, which are our basis t, so we get the following. We get (6,3,3), (4,-1,3), (5,5,2).1171

It is going to be (4,-9,5).1187

Convert to reduced row echelon form, we get (1,0,0), (0,1,0), (0,0,1), and we get (1,2,-2). Okay.1194

So, the coordinates of v with respect to the basis t is equal to (1,2,-2). That is part of the right hand side.1209

Well, we have the transition matrix, that is this, so let us circle what we have. We have that; that is our coordinate vector with respect to t.1223

We have our transition matrix, so we have the right-hand side. Now, we need to find the left hand side, do a multiplication, and see if they are actually equal to confirm that equation.1232

Okay. Now, let us move to the next page, so we want to find with respect to s. Well, with respect to s we set up the columns from the vectors in the basis s.1241

So, we get (2,0,1), (1,2,0), (1,1,1), and we are solving for (4,-9,5).1261

Reduced row echelon, when you do that, you end up with... I will actually write that out... and I put (4,-5,1)... so now we have the left hand side.1276

Now, what we want to do is we want to check, is (4,-5,1)... does it equal that transition matrix × the coordinate vector with respect to... yes, as it turns out, when I do the multiplication on the right hand side, I end up with (4,-5,1).1293

So, yes. It is verified.1329
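
The same verification as a short Python sketch, using the numbers from this example: multiply the transition matrix by [v]_t, compare with [v]_s, and check that both coordinate vectors rebuild v = (4, -9, 5).

```python
P = [[2, 2, 1],           # transition matrix from t to s
     [1, -1, 2],
     [1, 1, 1]]
v_t = [1, 2, -2]          # [v]_t, coordinates of v in the basis t
v_s = [4, -5, 1]          # [v]_s, coordinates of v in the basis s

# The equation being verified: [v]_s = P [v]_t
assert [sum(P[i][k] * v_t[k] for k in range(3)) for i in range(3)] == v_s

def rebuild(basis, coords):
    """Form the linear combination sum_i coords[i] * basis[i]."""
    return tuple(sum(c * vec[i] for c, vec in zip(coords, basis))
                 for i in range(3))

s_basis = [(2, 0, 1), (1, 2, 0), (1, 1, 1)]
t_basis = [(6, 3, 3), (4, -1, 3), (5, 5, 2)]

# Both coordinate vectors describe the same underlying vector v.
assert rebuild(s_basis, v_s) == rebuild(t_basis, v_t) == (4, -9, 5)
```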

So again, our equation is this. If I have some coordinate vector with respect to a basis t, and I want to find the coordinates with respect to another basis s, I multiply on the left with something called the transition matrix.1333

That will give me the coordinates with respect to s, and the columns of that transition matrix are the individual basis vectors for the basis t expressed as coordinate vectors with respect to the basis, s.1353

That is what this notation tells me. Okay.1372

Now, that is exactly what we did. We were given two bases, t and s, we took the basis, the vectors in the basis from t, we expressed them as coordinate vectors with respect to the basis s, and that which we got, we set up as columns in a matrix.1379

That matrix that we get is the transition matrix. That allows us to go from one basis to another, converting the coordinates with respect to one basis into the coordinates with respect to the other.1405

Okay. Let us see. Let us continue with a theorem here.1417

s = v1, v2... vN. And, t = w1, w2... wN. Okay.1431

Let s and t be two bases for an n-dimensional vector space. Okay.1456

If p, from t to s, is the transition matrix... transition matrix from t to s... then, the inverse of that transition matrix from t to s is the transition matrix from s to t.1471

So, if I have 2 bases, and if I calculate a transition matrix from t to s, I can take that matrix, I can take the inverse of that matrix, and that is going to be the transition matrix from s to t.1526

So, I do not have to calculate it separately. I can if I want to, but really all I have to do is take the inverse of the matrix that I found.1539

That is the relationship between these two. Okay.1546

Also, the transition matrix which we found is non-singular.1552

Of course, invertible. Non-singular means invertible. Okay.1566

Okay. Let us see what we have got here.1576

So, let us continue with our previous example. Let us recall the conditions... we said that s is equal to (2,0,1), (1,2,0), (1,1,1).1580

And... t is equal to (6,3,3), (4,-1,3), and (5,5,2).1605

Okay. What we want to do is we want to compute the transition matrix from s to t, directly.1618

So, we can do it directly, and the other way we can do it is to take the inverse of the transition matrix from t to s that we already found. That is going to be the second part of this.1632

So, the first part a will be computed directly, and the second part, we want to show that this thing from s to t is actually equal to the inverse of the matrix from t to s.1647

Make sure you look at these very, very carefully to make sure you actually know which direction we are going in.1661

Well, in order to calculate it directly, we take the... so this, we are going from s to t, alright?1666

So, let us go with vectors in s... so we are going to write (6,3,3), so in other words, we are going to express... so this is from s to t.1680

So, we want to take the vectors in s and express them as a linear combination of these vectors.1700

These vectors are the ones that actually form the matrix over here... (6,3,3), (4,-1,3), (5,5,2), and we augment with the (2,0,1), (1,2,0), (1,1,1).1706

Again, we are going from s to t. We want to express the vectors in s as linear combinations of these. That is why s is on the augmented side, and t is over on this side. Okay?1725

These are three linear equations. This, this augment, that, that augment, this, this augment. Okay?1738

When we subject it to reduced row echelon form, we end up with (1,0,0), (0,1,0), (0,0,1).1749

We end up with -- nope, we do not end up with stray lines -- we have (3/2,-1/2,-1), (1/2,-1/2,0), (-5/2,3/2,2)... let me make these as clear as possible.1760

Therefore, q, the transition matrix from s to t, is equal to (3/2,-1/2,-1), (1/2,-1/2,0), (-5/2,3/2,2).1793

Okay. So, that is the direct computation of q from s to t.1819

Now, let us go back to blue. Now, let us calculate the inverse to show that that equals the inverse of that. Yes.1827

Okay. So, now, let us see. We want to take, in order to find... so we have... let us recall what the transition matrix from t to s was.1844

We had (2,1,1), (2,-1,1), (1,2,1), okay? That was our transition matrix.1859

Now, you recall, in order to find the inverse of a matrix, you set up this system... (2,1,1), (2,-1,1), (1,2,1).1881

Then, you put the identity matrix here, (1,0,0), (0,1,0), (0,0,1)... yes.1898

Then, of course, if you do reduced row echelon form, this right side ends up being the inverse of that.1910

In this particular case, we do not need to do that. If we say that one thing is the inverse of another, all that I have to really do is multiply them and see if I end up with the identity matrix.1915

So, part b, in order to confirm that that is the case, all I have to do is take the transition matrix from s to t, and multiply it by the matrix from t to s, to see if I get the identity matrix.1926

When I do that, I have of course my (3/2,-1/2,-1), (1/2,-1/2,0), (-5/2,3/2,2).1949

Multiply that by our transition matrix (2,1,1), (2,-1,1), (1,2,1), and I can only hope I have not messed up my minus signs or anything like that.1970

As it turns out, when I do this, I get (1,0,0), (0,1,0), (0,0,1), which is the identity matrix - n-dimensional, the 3 by 3 identity matrix, so this is not n, this is 3.1982

That confirms that q, from s to t, = the inverse of p, from t to s.2000

When I find a transition matrix from t to s, if I want the transition matrix from s to t, all I do is take the inverse. That is what we have done here.2014
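
This inverse relationship can be confirmed numerically as well. A small Python sketch with exact fractions, multiplying the two transition matrices from the example in both orders:

```python
from fractions import Fraction as F

P = [[2, 2, 1],                        # transition matrix from t to s
     [1, -1, 2],
     [1, 1, 1]]
Q = [[F(3, 2), F(1, 2), F(-5, 2)],     # transition matrix from s to t
     [F(-1, 2), F(-1, 2), F(3, 2)],
     [-1, 0, 2]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(Q, P) == I3   # q composed with p is the identity
assert matmul(P, Q) == I3   # and likewise in the other order
```

Because `Fraction` arithmetic is exact, the comparison against the integer identity matrix is a true equality test, with no floating-point tolerance needed.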

Thank you for joining us Educator.com to discuss transition matrices, we will see you next time.2023
