Raffi Hovasapian

Coordinates of a Vector

Slide Duration:

Section 1: Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
1:11
1:12
2:30
2:57
4:20
5:22
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be a Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part 1

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
Section 2: Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
Section 3: Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
19:33
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformations

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Planes
0:36
Example 1
0:37
Example 2
7:07
Lines in R3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in R3
25:19
Example 5
31:12
Example 6
34:18
Section 4: Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
Section 5: Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
Section 6: Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31



### Transcription: Coordinates of a Vector

Welcome back to Educator.com and welcome back to linear algebra.0000

Today we are going to be talking about something... continue our discussion, of course, about the structure of a vector space.0005

We have been talking about bases, linear independence, span, things like that.0011

Today we are going to be talking about something called coordinates and a change of basis.0016

So, up to now, we have been talking about a random basis, a set of vectors that actually spans a given space.0021

Either the entire space, or a subspace of that space.0031

Well, we have not really cared about the order of those vectors -- you know -- we just say v1, v2, v3, the basis will do.0036

In this particular lesson, we are going to start talking about the particular order of a basis.0042

So, if I put vector 1 in front of vector 2, and if I switch the order, it actually changes the basis and it is going to change something called the coordinates of the particular vector that the basis is actually representing.0048

So, let us start with a couple of definitions, and we will jump right on in.0061

Now, let us say we have v1... let us make these concepts a little bit more clear so we can see it notationally.0069

So, let us say v1 is a set of vectors, v1, v2, and v3.0080

So, we have a basis which has 3 vectors, so we are talking about a 3-dimensional vector space.0088

If we also have, let us say, a separate basis which is almost the same, it is the same vectors, except now, I am going to put v2 first, and then v3, and then v1.0096

Even though these two bases consist of the same vectors, they are not in the same order.0108

Turns out, they are not really the same, and you will see why in a minute when we do what we do.0114

So, not the same. Now, let us let v... v1, v2, all the way to vN... since there are n vectors, we are talking about an n-dimensional vector space, because again, that is what a basis is.0120

The number of vectors in the basis gives you the dimension of that particular space.0139

So, let this basis be an ordered basis for an n-dimensional vector space, v.0145

Okay. Then of course, as we know, because it is a basis, every vector in v, every v in v, symbol like that, can be written -- excuse me -- as... so v is equal to some constant... c1 × v1 + c2 × v2 + all the way + cN × vN.0171

Now, some of these constants might be 0; in fact, if v happens to be the zero vector, they all are.0205

So, again, this is just the definition of a basis. It is a linear combination. A basis allows you to actually write any vector in a vector space as a linear combination of those basis vectors.0208

Nothing particularly new here. Now, let us take a look at c1, c2, c3, c4, all the way to cN.0220

If we take just the constants and write them as a vector we get this thing.0230

I will do the right side of the equation first, and then I will put the symbol on the left hand side.0260

So if I take just the constants, so if I have a vector v, and I can express it as some linear combination of the vectors in our basis, and if I just pull these constants out and I write them as a vector.0265

So, c1 -- oops, let me do this in red, actually -- c1, c2, all the way to cN.0287

So, I am writing it as a -- oops, we do not want these stray lines here.0287

I will write it as a column vector. Basically, if I take these, if I have some vector in n-space... well, this vector of the constants that make up the linear combination representing v... it is symbolized as that way.0298

We have the vector symbol, and we have a bracket around it, and we put a little b... the b represents the basis, okay?0320

We call this... called... this is called the coordinate vector.0327

This is called the coordinate vector of v, with respect to basis b.0342

This is unique. So, b... so the coordinate vector of any vector with respect to a given basis is unique.0357
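To make the definition concrete, here is a small NumPy sketch (my own example, not from the lecture): put the ordered basis vectors in as the columns of a matrix B, and solving B c = v gives the coordinate vector of v with respect to B.

```python
import numpy as np

# Ordered basis vectors go in as the COLUMNS of B.
# Solving B c = v yields the coordinate vector [v]_B.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])      # basis of R2: (1,1) and (1,-1)
v = np.array([3.0, 1.0])

coords = np.linalg.solve(B, v)   # 2*(1,1) + 1*(1,-1) = (3,1), so coords = [2, 1]
```

Note that swapping the two columns (that is, reordering the basis) swaps the two coordinates, which is exactly why the order of the basis matters.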

Let us stop and think about what this means. We know that if we have a given particular vector space, let us say R3, we know that a basis for R3 has to have 3 vectors in it, because that is the definition of dimension.0374

The number of vectors in a basis for that space.0386

We also know that there is an infinite number of bases, it does not have to be 1 or the other.0390

As it turns out, any vector in a vector space is going to be written as a linear combination of the vectors in that basis.0394

Well, the constants that make up that linear combination, I can arrange them as a vector, and I call those the coordinates with respect to that basis of that particular vector that I am dealing with.0402

So, needless to say, if I choose one basis, the coordinates are going to be one thing. If I choose another thing, the coordinates are going to be entirely different.0414

It is kind of interesting when you think about this. If I pick some random point in a vector space, as it turns out, its identity, its intrinsic identity is actually... It has nothing to do with its coordinates.0423

The coordinates are something that we attach to it so that we can actually deal with it. It all depends on the basis that we choose.0438

That is kind of extraordinary. You know, we are still thinking of a point in space like (5,6,7) as if it is specifically (5,6,7).0445

In a minute you will see that those numbers 5, 6, 7 are just coordinates with respect to a given basis.0454

In R3, it is the i,j,k, vectors. It is a very convenient basis because they happen to be unit length, each of the vectors, they happen to be mutually orthogonal, which we will talk about more in a minute.0462

But any basis will do, actually. As it turns out, that number (5,6,7), it is specific only to the natural basis.0473

It does not really tell me something about the point itself. That is actually kind of interesting to think that it is only so that we can handle that point mathematically as an object, that we have to assign some sort of a value to it.0482

We assign a value with respect to an arbitrary choice of basis. Arbitrary in a sense that no one basis is better than another.0494

You will have bases that are more convenient than others, but the problem might call for a basis that is completely different from the one you are used to. Again, that is kind of extraordinary.0504

Let us do an example.0514

Okay. Let us see -- let us go back to blue.0520

We will let s equal the set (1,1,0,0), that is the first vector in the set... then we have (2,0,1,0), that is the second vector in the set, (0,1,2,-1), third vector, and we have (0,1,-1,0). Okay.0528

Let s be a basis, an ordered basis for R4.0555

Again, we have 4 numbers, we have 4 vectors, so it is R4. Now, let us choose a random vector v, let v in R4 be the vector (-1,2,-6,5).0564

Okay. We want to find the coordinate vector of this vector with respect to this basis.0589

So, let us stop and think about this for a second. I have some vector, you know, that I just represented as (-1,2,-6,5).0605

But, I have a different basis than I am normally accustomed to. So, I want to find the coordinates of this vector with respect to this basis. Okay.0615

Okay. Let us see what we are going to do. Well, here is what we want.0625

We want constants, c1, c2, c3, c4, such that c1 × the first vector (1,1,0,0) + c2 × the second vector (2,0,1,0), + c3 × (0,1,2,-1), + c4 × (0,1,-1,0) is equal to our vector v, which is (-1,2,-6,5).0630

This is what we want. The idea is we take these basis vectors, we write the vector that we are looking for, it is a linear combination of these things.0667

Now, we have to solve this. Well, this is just a linear system, so we set it up as a linear system, as a 4 by 5 augmented matrix.0676

It is going to be... (1,1,0,0), we just take these as columns, (2,0,1,0), (0,1,2,-1), then we take (0,1,-1,0).0686

And... we augment this with (-1,2,-6,5). We are just solving A × x = b.0703

In this particular case, x are the constants. That is what we are looking for. We subject this to reduced row echelon... well, subject it to Gauss Jordan elimination to get the reduced row echelon form.0712

We end up with the following. (1,0,0,0), (0,1,0,0), (0,0,1,0) -- that is not a 6, that is a 0 -- and (0,0,0,1), and we end up with... (23,-12,-5,-16).0724

Therefore, our coordinate vector for v with respect to the basis that we were given is equal to -- nope, cannot have that -- let us make sure these are clear.0752

We have (23,-12,-5,-16). That is our answer: with respect to this basis, the vector is (23,-12,-5,-16).0774
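As a sanity check of the arithmetic (a NumPy sketch, not part of the lecture), the same Gauss Jordan computation amounts to solving a 4 x 4 linear system whose columns are the basis vectors of s:

```python
import numpy as np

# The four basis vectors of s, written as columns.
S = np.array([[1.0, 2.0,  0.0,  0.0],
              [1.0, 0.0,  1.0,  1.0],
              [0.0, 1.0,  2.0, -1.0],
              [0.0, 0.0, -1.0,  0.0]])
v = np.array([-1.0, 2.0, -6.0, 5.0])

coords = np.linalg.solve(S, v)   # the coordinate vector of v with respect to s
```

This reproduces the reduced row echelon result (23, -12, -5, -16).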

These numbers up here, (-1,2,-6,5), this vector was given to us because that is the standard basis.0797

In R4, it is the... imagine i,j,k, with one extra vector... basically it is something in the x direction, something in the y direction, something in the z direction, and something in a fourth direction.0805

Again, we are talking about a 4-dimensional space. We cannot see it, but we can still treat it mathematically.0817

Mutually orthogonal vectors. That is why this and this are different. We are talking about the same point, but, in order for us to identify that point, to give it a label, to give it a name, we need to choose a basis.0823

We need to choose a point of reference, a frame of reference. That is what all of modern science is based on. All of measurement is based on.0843

We need something from which to measure something. Our frame of reference, well here it is the standard basis. The basis of mutually orthogonal unit vectors.0851

Here, it is a completely different basis. Well, this one basis is not necessarily better than this one.0864

We are just accustomed to this one. We think that that is the one, that this vector is actually (-1,2,-6, 5). It is not.0869

This (-1,2,-6,5), actually has nothing to do intrinsically with that point. It has to do with our imposing a label on that point so that we can deal with it mathematically.0877

This set of coordinates is just as good as this set of coordinates. This basis is just as good as the natural basis.0892

That is what you have to... so now, we are getting into the idea of linear algebra we want to sort of disabuse ourselves of the things that we have become accustomed to.0899

That, just because we have become accustomed to them, it does not mean that they are necessary, or necessarily better than anything that we might develop for these mathematical objects.0909

Okay. Let us actually demonstrate this mathematically. This whole idea of the standard basis. Okay.0918

Let s, this time we will let s equal the standard basis, remember? e1, e2, e3, and e4.0929

Which is equal to... (1,0,0,0), that is e1. (0,1,0,0), that is e2.0945

Again, for ei, the subscript i (1, 2, 3, or 4) means all of the entries for that vector are 0, except the i-th entry, which is 1.0956

So, for example, e3, all of the entries are going to be 0 except the third entry which is going to be 1... (0,0,1,0).0965

You notice all of these vectors have a length of 1, and if you actually took the dot product of this with this, you would get 0.0974

So, they are length 1, which is very convenient, and they are also mutually orthogonal, perpendicular... and (0,0,0,1).0980

So, this is our set. Now we are using this basis. Well, we are going to let v equal the same thing... (-1,2,-6,5).0994

Okay. So, we set up the same system. We want constants c1e1 + c2e2 + c3e3 + c4e4... and these are vectors, I should actually notate them as such -- excuse me.1007

Such that they equal v. Well, again, this is just a system. Well, we take these vectors in the basis, set them up as a matrix, augment them with v, and we solve it.1029

So, we have (1,0,0,0), (0,1,0,0), e3 is (0,0,1,0), and (0,0,0,1).1041

We augment it with (-1,2,-6,5).1050

Now, we take a look at this; we want to subject it to Gauss Jordan elimination to take it to reduced row echelon form.1055

Well, it is already in reduced row echelon form. So, as it turns out, with respect to this basis, s, which is the natural basis... it is the vector itself (-1,2,-6,5), which is what we said from before.1060
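In code the same point is immediate (again a sketch, not from the lecture): with the standard basis, the coefficient matrix is the identity, so the coordinate vector is just the vector itself.

```python
import numpy as np

E = np.eye(4)                     # e1, e2, e3, e4 as columns: the standard basis
v = np.array([-1.0, 2.0, -6.0, 5.0])

coords = np.linalg.solve(E, v)    # solving I c = v simply returns v
```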

The natural basis is the basis that we use all the time to represent a point. That is why... so... a particular vector does not own this set of numbers.1080

So this point that is represented by (-1,2,-6,5). It is only a representation of that point. It is not as if this (-1,2,-6,5) actually belongs to that point. It is not an intrinsic property, in other words.1094

It is simply based on the basis that we chose for our frame of reference.1109

Those of you in engineering and physics, you are going to be changing frames of reference all the time, and you are always going to be choosing a different basis.1114

So, your coordinates are going to change. The relationship of the points themselves that you deal with, the vectors that you deal with do not change, but the coordinates are simply representations of those points - they are not intrinsic properties of those points.1123

Very curious, isn't it? Okay.1138

In other words, any basis will do. Any basis that is convenient.1143

Alright. Let us take one more look here. Let us do one more example, this time with the space of polynomials.1148

Okay. This time we will let our vector space v equal p1. It is the space of polynomials of degrees < or = 1.1164

So, for example, t + 6, 5t - 7, things like that, a degree less than or equal to 1... 8... that works because it is less than one.1187

We will let s be one of our bases and consist of t and 1, and we will let t be another basis, and it will be t + 1, and t - 1.1198

We will let our random vector v be 5t - 2, so the first thing we want to do is we want to find... or the first issue is: find the coordinate vector of v with respect to the basis s.1213

Okay -- let us go to blue here -- well, we want to solve the following.1236

We want to go c1 × t + c2 × 1 = 5t - 2.1240

That is what we are doing, it is a linear combination. Constants × the individual members of the basis, and we set it equal to the vector.1253

Well, when we see c1t + c2 × 1 = 5t - 2, this is just c1t + c2 = 5t - 2.1260

Well, c1t, this is an equality, so what is on the left has to equal what is on the right.1269

So, c1t is equal to 5t, that means c1 = 5, and c2 = -2. Well, there you go.1276

With respect to this basis, the coordinate vector is (5, -2). Well, 5 and -2, that is exactly what these numbers are here... 5 and -2.1289

So, you see that this basis... t and 1... this is the natural basis for the space of polynomials of degree < or = 1.1300

If you were talking about the space of polynomials of degree < or = 2, your natural basis would be t2, t and 1.1310

If you were talking about degree < or = 3, you would have t3, t2, t, and 1. This is the natural basis, the basis that we have become accustomed to talking about.1320

However, we have a different basis that can still represent -- you know -- this particular polynomial. This particular point in the space of polynomials... 5t - 2.1333

Let us calculate... now we want to find the coordinates of v with respect to the basis t.1347

So, we end up doing the same thing. We are going to go c1 × t + 1 + c2 × t - 1, and it is equal to 5t - 2.1358

Linear combination, set it equal to the vector. We go ahead and we solve for this. We get c1t + c1 + c2t - c2 = 5t - 2.1375

I collect terms... t × (c1 + c2) + c1 - c2 = 5t - 2.1387

Okay. Let me rewrite that on this page. t × (c1 + c2) + c1 - c2 = 5t - 2.1402

Well, t, t, c1 + c2, so I get c1 + c2 = 5, and I get c1 - c2 = -2, right?1418

I can just go ahead and add this directly. So, I end up with 2c1 = 3, c1 = 3/2, and when I put that into any one of the other equations, I end up with c2 = 7/2.1434

Therefore, with respect to that vector... 5t - 2, let me write it up here again... this was our original random vector... 5t - 2.1452

The coordinates of that vector with respect to the other basis that we chose, is equal to 3/2 and 7/2.1465
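The polynomial computation can be checked the same way (a NumPy sketch of my own): represent a polynomial a*t + b by its coefficient pair (a, b) relative to the natural basis {t, 1}; the basis vectors t + 1 and t - 1 then become the columns (1, 1) and (1, -1).

```python
import numpy as np

# Columns are the coefficient vectors of t+1 and t-1
# relative to the natural basis {t, 1}.
T = np.array([[1.0,  1.0],
              [1.0, -1.0]])
p = np.array([5.0, -2.0])          # the polynomial 5t - 2

coords = np.linalg.solve(T, p)     # [3/2, 7/2], matching the hand computation
```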

This is extraordinary. All of our lives from early pre-algebra into algebra, algebra 1, algebra 2, a little bit of geometry, trigonometry, calculus... we think that a polynomial like 5t - 2 actually is 5t - 2.1479

Well, 5t - 2 is our way of dealing with that particular polynomial with respect to the standard basis.1499

As it turns out, this 5t - 2 can be written in a completely different way with respect to another basis.1507

I can write it as 3/2 × (t + 1) + 7/2 × (t - 1), with respect to the other basis that I gave for that space of polynomials.1515

No one basis is better than another. This 5t - 2 is not... the polynomial itself is something that exists in a space... but in order for us to deal with something that exists, we have to put a label on it.1523

We need a frame of reference for it. That is what is going on with linear algebra. This is the difference between mathematics and science.1540

Science actually labels things and deals with them in a given frame of reference, but what mathematics tries to do is... these things exists but we need to understand that the labels that we give them are not intrinsic to those objects.1548

They are simply our way of dealing with them because at some point we have to deal with them in a certain way, and we deal with things from a frame of reference.1562

From a point of reference. For example, measurement does not mean anything.1570

If I said something is 5 feet long, well, it is based on a certain standard. It is based on a point of reference, and a certain definition of a distance.1574

As it turns out, those things are actually arbitrary. It has nothing to do with the relationship between the two points that I am measuring the distance between. Those are deeper mathematical properties.1583

Linear algebra is sort of an introduction to that kind of thinking.1594

So, there we go. Today we dealt with coordinates and ordered bases, and notice that we can actually deal with 2 different bases to talk about the same mathematical object.1599

Next lesson, we will actually start talking about change of basis and how we go from one to the other.1613

Thank you for joining us at Educator.com today, we will see you next time for some more linear algebra.1620
