Raffi Hovasapian

Spanning Set for a Vector Space

Table of Contents

I. Linear Equations and Matrices
Linear Systems

39m 3s

Intro
0:00
Linear Systems
1:20
Introduction to Linear Systems
1:21
Examples
10:35
Example 1
10:36
Example 2
13:44
Example 3
16:12
Example 4
23:48
Example 5
28:23
Example 6
32:32
Number of Solutions
35:08
One Solution, No Solution, Infinitely Many Solutions
35:09
Method of Elimination
36:57
Method of Elimination
36:58
Matrices

30m 34s

Intro
0:00
Matrices
0:47
Definition and Example of Matrices
0:48
Square Matrix
7:55
Diagonal Matrix
9:31
Operations with Matrices
10:35
Matrix Addition
10:36
Scalar Multiplication
15:01
Transpose of a Matrix
17:51
Matrix Types
23:17
Regular: m x n Matrix of m Rows and n Columns
23:18
Square: n x n Matrix With an Equal Number of Rows and Columns
23:44
Diagonal: A Square Matrix Where All Entries OFF the Main Diagonal are '0'
24:07
Matrix Operations
24:37
Matrix Operations
24:38
Example
25:55
Example
25:56
Dot Product & Matrix Multiplication

41m 42s

Intro
0:00
Dot Product
1:04
Example of Dot Product
1:05
Matrix Multiplication
7:05
Definition
7:06
Example 1
12:26
Example 2
17:38
Matrices and Linear Systems
21:24
Matrices and Linear Systems
21:25
Example 1
29:56
Example 2
32:30
Summary
33:56
Dot Product of Two Vectors and Matrix Multiplication
33:57
Summary, cont.
35:06
Matrix Representations of Linear Systems
35:07
Examples
35:34
Examples
35:35
Properties of Matrix Operations

43m 17s

Intro
0:00
Properties of Addition
1:11
Properties of Addition: A
1:12
Properties of Addition: B
2:30
Properties of Addition: C
2:57
Properties of Addition: D
4:20
Properties of Addition
5:22
Properties of Addition
5:23
Properties of Multiplication
6:47
Properties of Multiplication: A
7:46
Properties of Multiplication: B
8:13
Properties of Multiplication: C
9:18
Example: Properties of Multiplication
9:35
Definitions and Properties (Multiplication)
14:02
Identity Matrix: n x n matrix
14:03
Let A Be a Matrix of m x n
15:23
Definitions and Properties (Multiplication)
18:36
Definitions and Properties (Multiplication)
18:37
Properties of Scalar Multiplication
22:54
Properties of Scalar Multiplication: A
23:39
Properties of Scalar Multiplication: B
24:04
Properties of Scalar Multiplication: C
24:29
Properties of Scalar Multiplication: D
24:48
Properties of the Transpose
25:30
Properties of the Transpose
25:31
Properties of the Transpose
30:28
Example
30:29
Properties of Matrix Addition
33:25
Let A, B, C, and D Be m x n Matrices
33:26
There is a Unique m x n Matrix, 0, Such That…
33:48
Unique Matrix D
34:17
Properties of Matrix Multiplication
34:58
Let A, B, and C Be Matrices of the Appropriate Size
34:59
Let A Be Square Matrix (n x n)
35:44
Properties of Scalar Multiplication
36:35
Let r and s Be Real Numbers, and A and B Matrices
36:36
Properties of the Transpose
37:10
Let r Be a Scalar, and A and B Matrices
37:12
Example
37:58
Example
37:59
Solutions of Linear Systems, Part I

38m 14s

Intro
0:00
Reduced Row Echelon Form
0:29
An m x n Matrix is in Reduced Row Echelon Form If:
0:30
Reduced Row Echelon Form
2:58
Example: Reduced Row Echelon Form
2:59
Theorem
8:30
Every m x n Matrix is Row-Equivalent to a UNIQUE Matrix in Reduced Row Echelon Form
8:31
Systematic and Careful Example
10:02
Step 1
10:54
Step 2
11:33
Step 3
12:50
Step 4
14:02
Step 5
15:31
Step 6
17:28
Example
30:39
Find the Reduced Row Echelon Form of a Given m x n Matrix
30:40
Solutions of Linear Systems, Part II

28m 54s

Intro
0:00
Solutions of Linear Systems
0:11
Solutions of Linear Systems
0:13
Example I
3:25
Solve the Linear System 1
3:26
Solve the Linear System 2
14:31
Example II
17:41
Solve the Linear System 3
17:42
Solve the Linear System 4
20:17
Homogeneous Systems
21:54
Homogeneous Systems Overview
21:55
Theorem and Example
24:01
Inverse of a Matrix

40m 10s

Intro
0:00
Finding the Inverse of a Matrix
0:41
Finding the Inverse of a Matrix
0:42
Properties of Non-Singular Matrices
6:38
Practical Procedure
9:15
Step 1
9:16
Step 2
10:10
Step 3
10:46
Example: Finding Inverse
12:50
Linear Systems and Inverses
17:01
Linear Systems and Inverses
17:02
Theorem and Example
21:15
Theorem
26:32
Theorem
26:33
List of Non-Singular Equivalences
28:37
Example: Does the Following System Have a Non-trivial Solution?
30:13
Example: Inverse of a Matrix
36:16
II. Determinants
Determinants

21m 25s

Intro
0:00
Determinants
0:37
Introduction to Determinants
0:38
Example
6:12
Properties
9:00
Properties 1-5
9:01
Example
10:14
Properties, cont.
12:28
Properties 6 & 7
12:29
Example
14:14
Properties, cont.
18:34
Properties 8 & 9
18:35
Example
19:21
Cofactor Expansions

59m 31s

Intro
0:00
Cofactor Expansions and Their Application
0:42
Cofactor Expansions and Their Application
0:43
Example 1
3:52
Example 2
7:08
Evaluation of Determinants by Cofactor
9:38
Theorem
9:40
Example 1
11:41
Inverse of a Matrix by Cofactor
22:42
Inverse of a Matrix by Cofactor and Example
22:43
More Examples
36:22
List of Non-Singular Equivalences
43:07
List of Non-Singular Equivalences
43:08
Example
44:38
Cramer's Rule
52:22
Introduction to Cramer's Rule and Example
52:23
III. Vectors in Rn
Vectors in the Plane

46m 54s

Intro
0:00
Vectors in the Plane
0:38
Vectors in the Plane
0:39
Example 1
8:25
Example 2
15:23
Vector Addition and Scalar Multiplication
19:33
Vector Addition
19:34
Scalar Multiplication
24:08
Example
26:25
The Angle Between Two Vectors
29:33
The Angle Between Two Vectors
29:34
Example
33:54
Properties of the Dot Product and Unit Vectors
38:17
Properties of the Dot Product and Unit Vectors
38:18
Defining Unit Vectors
40:01
2 Very Important Unit Vectors
41:56
n-Vector

52m 44s

Intro
0:00
n-Vectors
0:58
4-Vector
0:59
7-Vector
1:50
Vector Addition
2:43
Scalar Multiplication
3:37
Theorem: Part 1
4:24
Theorem: Part 2
11:38
Right and Left Handed Coordinate System
14:19
Projection of a Point Onto a Coordinate Line/Plane
17:20
Example
21:27
Cauchy-Schwarz Inequality
24:56
Triangle Inequality
36:29
Unit Vector
40:34
Vectors and Dot Products
44:23
Orthogonal Vectors
44:24
Cauchy-Schwarz Inequality
45:04
Triangle Inequality
45:21
Example 1
45:40
Example 2
48:16
Linear Transformation

48m 53s

Intro
0:00
Introduction to Linear Transformations
0:44
Introduction to Linear Transformations
0:45
Example 1
9:01
Example 2
11:33
Definition of Linear Mapping
14:13
Example 3
22:31
Example 4
26:07
Example 5
30:36
Examples
36:12
Projection Mapping
36:13
Images, Range, and Linear Mapping
38:33
Example of Linear Transformation
42:02
Linear Transformations, Part II

34m 8s

Intro
0:00
Linear Transformations
1:29
Linear Transformations
1:30
Theorem 1
7:15
Theorem 2
9:20
Example 1: Find L (-3, 4, 2)
11:17
Example 2: Is It Linear?
17:11
Theorem 3
25:57
Example 3: Finding the Standard Matrix
29:09
Lines and Planes

37m 54s

Intro
0:00
Lines and Planes
0:36
Example 1
0:37
Example 2
7:07
Lines in IR3
9:53
Parametric Equations
14:58
Example 3
17:26
Example 4
20:11
Planes in IR3
25:19
Example 5
31:12
Example 6
34:18
IV. Real Vector Spaces
Vector Spaces

42m 19s

Intro
0:00
Vector Spaces
3:43
Definition of Vector Spaces
3:44
Vector Spaces 1
5:19
Vector Spaces 2
9:34
Real Vector Space and Complex Vector Space
14:01
Example 1
15:59
Example 2
18:42
Examples
26:22
More Examples
26:23
Properties of Vector Spaces
32:53
Properties of Vector Spaces Overview
32:54
Property A
34:31
Property B
36:09
Property C
36:38
Property D
37:54
Property F
39:00
Subspaces

43m 37s

Intro
0:00
Subspaces
0:47
Defining Subspaces
0:48
Example 1
3:08
Example 2
3:49
Theorem
7:26
Example 3
9:11
Example 4
12:30
Example 5
16:05
Linear Combinations
23:27
Definition 1
23:28
Example 1
25:24
Definition 2
29:49
Example 2
31:34
Theorem
32:42
Example 3
34:00
Spanning Set for a Vector Space

33m 15s

Intro
0:00
A Spanning Set for a Vector Space
1:10
A Spanning Set for a Vector Space
1:11
Procedure to Check if a Set of Vectors Spans a Vector Space
3:38
Example 1
6:50
Example 2
14:28
Example 3
21:06
Example 4
22:15
Linear Independence

17m 20s

Intro
0:00
Linear Independence
0:32
Definition
0:39
Meaning
3:00
Procedure for Determining if a Given List of Vectors is Linearly Independent or Linearly Dependent
5:00
Example 1
7:21
Example 2
10:20
Basis & Dimension

31m 20s

Intro
0:00
Basis and Dimension
0:23
Definition
0:24
Example 1
3:30
Example 2: Part A
4:00
Example 2: Part B
6:53
Theorem 1
9:40
Theorem 2
11:32
Procedure for Finding a Subset of S that is a Basis for Span S
14:20
Example 3
16:38
Theorem 3
21:08
Example 4
25:27
Homogeneous Systems

24m 45s

Intro
0:00
Homogeneous Systems
0:51
Homogeneous Systems
0:52
Procedure for Finding a Basis for the Null Space of Ax = 0
2:56
Example 1
7:39
Example 2
18:03
Relationship Between Homogeneous and Non-Homogeneous Systems
19:47
Rank of a Matrix, Part I

35m 3s

Intro
0:00
Rank of a Matrix
1:47
Definition
1:48
Theorem 1
8:14
Example 1
9:38
Defining Row and Column Rank
16:53
If We Want a Basis for Span S Consisting of Vectors From S
22:00
If We Want a Basis for Span S Consisting of Vectors Not Necessarily in S
24:07
Example 2: Part A
26:44
Example 2: Part B
32:10
Rank of a Matrix, Part II

29m 26s

Intro
0:00
Rank of a Matrix
0:17
Example 1: Part A
0:18
Example 1: Part B
5:58
Rank of a Matrix Review: Rows, Columns, and Row Rank
8:22
Procedure for Computing the Rank of a Matrix
14:36
Theorem 1: Rank + Nullity = n
16:19
Example 2
17:48
Rank & Singularity
20:09
Example 3
21:08
Theorem 2
23:25
List of Non-Singular Equivalences
24:24
List of Non-Singular Equivalences
24:25
Coordinates of a Vector

27m 3s

Intro
0:00
Coordinates of a Vector
1:07
Coordinates of a Vector
1:08
Example 1
8:35
Example 2
15:28
Example 3: Part A
19:15
Example 3: Part B
22:26
Change of Basis & Transition Matrices

33m 47s

Intro
0:00
Change of Basis & Transition Matrices
0:56
Change of Basis & Transition Matrices
0:57
Example 1
10:44
Example 2
20:44
Theorem
23:37
Example 3: Part A
26:21
Example 3: Part B
32:05
Orthonormal Bases in n-Space

32m 53s

Intro
0:00
Orthonormal Bases in n-Space
1:02
Orthonormal Bases in n-Space: Definition
1:03
Example 1
4:31
Theorem 1
6:55
Theorem 2
8:00
Theorem 3
9:04
Example 2
10:07
Theorem 2
13:54
Procedure for Constructing an O/N Basis
16:11
Example 3
21:42
Orthogonal Complements, Part I

21m 27s

Intro
0:00
Orthogonal Complements
0:19
Definition
0:20
Theorem 1
5:36
Example 1
6:58
Theorem 2
13:26
Theorem 3
15:06
Example 2
18:20
Orthogonal Complements, Part II

33m 49s

Intro
0:00
Relations Among the Four Fundamental Vector Spaces Associated with a Matrix A
2:16
Four Spaces Associated With A (If A is m x n)
2:17
Theorem
4:49
Example 1
7:17
Null Space and Column Space
10:48
Projections and Applications
16:50
Projections and Applications
16:51
Projection Illustration
21:00
Example 1
23:51
Projection Illustration Review
30:15
V. Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors

38m 11s

Intro
0:00
Eigenvalues and Eigenvectors
0:38
Eigenvalues and Eigenvectors
0:39
Definition 1
3:30
Example 1
7:20
Example 2
10:19
Definition 2
21:15
Example 3
23:41
Theorem 1
26:32
Theorem 2
27:56
Example 4
29:14
Review
34:32
Similar Matrices & Diagonalization

29m 55s

Intro
0:00
Similar Matrices and Diagonalization
0:25
Definition 1
0:26
Example 1
2:00
Properties
3:38
Definition 2
4:57
Theorem 1
6:12
Example 3
9:37
Theorem 2
12:40
Example 4
19:12
Example 5
20:55
Procedure for Diagonalizing Matrix A: Step 1
24:21
Procedure for Diagonalizing Matrix A: Step 2
25:04
Procedure for Diagonalizing Matrix A: Step 3
25:38
Procedure for Diagonalizing Matrix A: Step 4
27:02
Diagonalization of Symmetric Matrices

30m 14s

Intro
0:00
Diagonalization of Symmetric Matrices
1:15
Diagonalization of Symmetric Matrices
1:16
Theorem 1
2:24
Theorem 2
3:27
Example 1
4:47
Definition 1
6:44
Example 2
8:15
Theorem 3
10:28
Theorem 4
12:31
Example 3
18:00
VI. Linear Transformations
Linear Mappings Revisited

24m 5s

Intro
0:00
Linear Mappings
2:08
Definition
2:09
Linear Operator
7:36
Projection
8:48
Dilation
9:40
Contraction
10:07
Reflection
10:26
Rotation
11:06
Example 1
13:00
Theorem 1
18:16
Theorem 2
19:20
Kernel and Range of a Linear Map, Part I

26m 38s

Intro
0:00
Kernel and Range of a Linear Map
0:28
Definition 1
0:29
Example 1
4:36
Example 2
8:12
Definition 2
10:34
Example 3
13:34
Theorem 1
16:01
Theorem 2
18:26
Definition 3
21:11
Theorem 3
24:28
Kernel and Range of a Linear Map, Part II

25m 54s

Intro
0:00
Kernel and Range of a Linear Map
1:39
Theorem 1
1:40
Example 1: Part A
2:32
Example 1: Part B
8:12
Example 1: Part C
13:11
Example 1: Part D
14:55
Theorem 2
16:50
Theorem 3
23:00
Matrix of a Linear Map

33m 21s

Intro
0:00
Matrix of a Linear Map
0:11
Theorem 1
1:24
Procedure for Computing the Matrix: Step 1
7:10
Procedure for Computing the Matrix: Step 2
8:58
Procedure for Computing the Matrix: Step 3
9:50
9:50
Matrix of a Linear Map: Property
10:41
Example 1
14:07
Example 2
18:12
Example 3
24:31

Lecture Comments (18)

1 answer

Last reply by: Professor Hovasapian
Wed Oct 26, 2016 7:58 PM

Post by Kaye Lim on September 23, 2016

For example 1, If I choose a specific number for the random vector (a,b,c), then I will get a specific number value for (c1,c2,c3). Because the solution of (c1,c2,c3) exists, we conclude that the given 3 vectors v1,v2 and v3 span the vector space R^3.

However, for example 4, we got an infinite number of solutions for (x1,x2,x3,x4). I thought we would conclude that the set of 4 given vectors (1,-2,1,4),(1,-2,1,4),(0,1,-1,-1) and (2,5,3,9) would span the Null space as in example 1. Why in example 4 do the solution vectors span the Null space instead of the 4 given vectors?

1 answer

Last reply by: Professor Hovasapian
Wed Sep 2, 2015 11:44 PM

Post by Alexander Tetreault on August 31, 2015

Hi Raffi,
I am somewhat confused by the definition of 'span', it seems as though it has two meanings. In the subspaces video you defined it as the
"set of all linear combinations of the elements in S" while in this video it was defined as being the set of vectors by which all other vectors are a linear combination of. Basically, by my understanding, one means the set of elements created while the other means the set of vectors that do the creating. Could you please clarify?

2 answers

Last reply by: Growth Mindset Believer
Sun Apr 3, 2016 11:44 PM

Post by Growth Mindset Believer on May 19, 2015

This is not a question.  I just wanted to say thank you for your lectures.  I've gained so much from watching them.  Often in mathematics I can understand how to do something computationally without understanding the underlying meaning of what I'm doing.  However, after watching your videos multiple times I've gained a much deeper understanding of linear algebra, which is helping me a great deal with the linear algebra course that I'm currently enrolled in.  

What I usually do is watch your lecture before reading the section of the book that I'm working on.  Then, I try a few examples and go back and watch your lecture again to gain a deeper understanding of what I've done.  Maybe some students only need to hear something once in order to fully understand it, but I've found that repetition of concepts and regular practice tend to be the only ways in which I can get an A in a course, especially as I've gone higher in mathematics and the material has gotten more complicated.  

This is why online lectures such as yours are so great, since I can watch them as many times as I want, whereas an instructor will grow tired of explaining something multiple times in person, and I tend to worry that I'm holding back the rest of the class from learning if I ask too many questions in a real life lecture hall since time is limited and the professor has to get through his or her lesson plan.  These issues do not exist with online lectures; if I don't understand a concept, I just watch it again until I get it.

This is also part of the beauty of mathematics, since going over past material can shed light on new concepts that I didn't understand the first time around, so it's like a well that I can continually draw water from.  I also prefer your lectures over the ones I find on youtube since you are a trained mathematician who has taught higher education courses so your teaching style is refined and your lectures are well structured, as opposed to a lot of the videos I'll find on youtube where the person doesn't really have a good grasp of the material and just skips to computation without explaining what they are doing.  

Lastly, I appreciate your personality.  You come across as a kind uncle type; I can't detect a hint of arrogance in your personality as opposed to many of my real life professors who act like the students are beneath them.  Thank you again for making your videos and sorry for writing such a long post, I just wanted to thank you for helping me with my linear algebra course this semester.

2 answers

Last reply by: Christian Fischer
Tue Oct 1, 2013 2:24 AM

Post by Christian Fischer on September 25, 2013

Hi Raffi: Just a question for example 4. Is it correctly understood that since we have 2 free parameters "s" and "t", this does NOT mean we have an infinite number of vectors in our null space, because t*(-1,1,0,0) (our solution vector) is the same vector no matter what value of t we use (it has the same direction and can just be scaled up and down)?

So I mean, (-1,1,0,0) is the same vector as (-5,5,0,0)?

4 answers

Last reply by: Professor Hovasapian
Wed Sep 25, 2013 4:57 PM

Post by Manfred Berger on June 15, 2013

I have a question regarding example 2: Since P_1(t) and P_2(t) are both second degree polynomials, there are no vectors present in this set to span any vector below degree 2. Isn't it therefore obvious that this can't be a spanning set for the entire space even before checking it formally?

2 answers

Last reply by: Professor Hovasapian
Tue Feb 19, 2013 12:43 AM

Post by Megan Kell on February 17, 2013

at 21:00 you say that the only way this solution could be consistent is if b-4a+2c = 0, and the only way that this is possible is if b, a, and c are all zero, and since that would be a trivial solution, this system is inconsistent and therefore has no solution. Why can't b=0, a=1 and c=2? This would also cause b-4a+2c = 0-4(1)+2(2)=0 and it would not be a trivial solution, thus allowing the system to be consistent. Why is this not possible?

Spanning Set for a Vector Space

Lecture Slides are screen-captured images of important points in the lecture. Students can download and print out these lecture slide images to do practice problems as well as take notes while watching the lecture.

  • Intro 0:00
  • A Spanning Set for a Vector Space 1:10
    • A Spanning Set for a Vector Space 1:11
    • Procedure to Check if a Set of Vectors Spans a Vector Space 3:38
    • Example 1 6:50
    • Example 2 14:28
    • Example 3 21:06
    • Example 4 22:15

Transcription: Spanning Set for a Vector Space

Welcome back to Educator.com and welcome back to linear algebra.

Today we are going to talk about something called the span of a set of vectors.

It means exactly what you think that it means. If I have a collection of vectors, 2, 5, 10, the number does not actually matter that much.

We want to talk about all of the possible linear combinations of those vectors... all of the vectors that can be built from that particular set.

So, for example, if I take R2... the normal plane, I know that I have the vector in the x direction, I know that I have the vector in the y direction, and if I take any collection of those, multiplied by constants, let us say 5i + 6j, I can represent every single vector in R2.

So, those two vectors, we say, actually span R2.

So, that is just sort of the general description of what a span is.

Fortunately, in this case, the name actually gives you an idea of what it is that you are talking about, so it is not strange. So, let us start with a couple of definitions and let us see what we can do.

Okay. Now, in a vector space, there is an infinite number of elements, and the reason for that is if I have at least one element in that space, and I know there is at least one, I know that I can multiply that element by any number that I want. Any constant.

Therefore, since that constant is just a real number, there are an infinite number of elements in that vector space.

However, what we want to do is we want to see if we can find a finite number of elements in that vector space such that, when I take certain combinations of them, linear combinations of them, we can describe the entire space.

That means all of the infinitely many vectors, based on just that finite set of vectors.

So, let us actually write our definitions down.

Okay... vectors v1, v2, and so forth on to vk are said to span v, which is our vector space,

if every vector in v can be written as a linear combination of v1, v2, ..., vk.

So, now we have actually written it down. If I have vectors v1 through vk, let us say 6 of them.

And... if some combination of those vectors (they do not have to all be included, you know, some of the constants can be 0) can represent every single vector in that vector space, then we say that that set of vectors actually spans the vector space.

Okay. Now, let us list the procedure to check to see if a set of vectors actually spans a vector space.

Procedure to check if a set of vectors spans a vector space, so v.s., for vector space.

So, let us see... you know what... let us leave it as blue for right now.

Choose an arbitrary vector v in the vector space, so when you are given a vector space, just choose some arbitrary vector.

So, if you are given 4-space, R4, then just choose the random vector (a,b,c,d), or you can call it (x,y,z,t), just some random vector, and label it... then determine if v is a linear combination of the given vectors.

So, this is basically just an application of the definition, which is what definitions are all about.

Let me just take a quick second to talk about definitions.

Often in mathematics, we begin with definitions. They are a basic element.

We use those definitions to start to create theorems, and we sort of build from there, build our way up from the bottom if you will.

If you find that you have lost your way in mathematics, more often than not, you want to go back to your definitions, and 90% of the time, the problem is something is either missing from a definition, or there is a definition that the student has not quite wrapped his mind fully around.

Again, mathematics is very, very precise. It says exactly what it wants to say, no more and no less.

Okay. Determine if v is a linear combination of the given vectors... definition.

If so, if it is a linear combination, then yes, the vectors actually span the vector space.

If not, if there is no way to form a linear combination, then no. Okay.

So, again, when you are forming a linear combination, you are taking constants, multiplying them by a bunch of vectors, and setting the result equal to, in this case, an arbitrary vector.

So, once again, we are going to investigate a linear system; linear systems are ubiquitous in linear algebra. Okay.
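The procedure above can be sketched in code (my own illustration, not from the lecture): a set of k vectors spans R^n exactly when the n x k matrix whose columns are those vectors has rank n, since that is precisely when the system Ac = v is consistent for every v in R^n.

```python
import numpy as np

# Hypothetical helper (not from the lecture): k vectors span R^n exactly
# when the n x k matrix with those vectors as columns has rank n.
def spans_rn(vectors):
    A = np.column_stack(vectors)       # the vectors become the columns
    n = A.shape[0]                     # dimension of the ambient space R^n
    return int(np.linalg.matrix_rank(A)) == n

# The three vectors from Example 1 below:
print(spans_rn([np.array([1, 2, 1]),
                np.array([1, 0, 2]),
                np.array([1, 1, 0])]))   # True

# Two vectors can never span R^3:
print(spans_rn([np.array([1, 0, 0]),
                np.array([0, 1, 0])]))   # False
```

The rank test is just a compact way of asking the lecture's question: can every arbitrary (a, b, c, ...) be reached by some choice of constants?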

So, let us start with an example here.

Let us go to the next page. So, first example. Let us consider R3.

So, regular 3-space... (x,y,z), the space we live in.

We are going to let v1 = (1,2,1), we will let v2 = (1,0,2), and we will let our third vector be (1,1,0).

I wrote these out in the form of a list, in terms of their coordinates... I could write them vertically, I could write them horizontally without spaces, however you want.

Now, the question is, these three vectors that I have chosen randomly... do v1, v2, v3 span R3?

Are these vectors enough to represent all of R3?

Can I take any vector in R3 and represent it as a linear combination of these three?

Well, let us see. Okay.

Well, the first thing we do from our procedure: let us choose an arbitrary vector in R3... an arbitrary v in R3.

So, let us say that v is just equal to (a,b,c), and again, the variables do not matter. It is just some random vector.

Okay. Now, the second thing we are going to check is: are there constants c1, c2, c3 such that c1v1 + c2v2 + c3v3, a linear combination, equals v?

That is what we are doing. That is all we are doing. We are taking an arbitrary vector, we are setting it equal to constants times the vectors that we have in our set, and we are going to solve this linear system.

So, this is the vector representation, the simplest representation... now we are going to break it down a bit.

This is, of course, equal to c1 × the first vector, which is (1,2,1)... I am going to go ahead and write them vertically.

That is just a personal choice of mine. You are welcome to write them any way you please. I like to do them vertically because it keeps the coefficients of the eventual matrix systematic.

These become the columns of the particular matrix.

Plus c2 × (1,0,2)... I hope that 2 is clear... plus c3 × the third vector, which is (1,1,0), and we are setting it equal to (a,b,c).

We want to find out if this linear system actually has a solution... so, let us write this system... when we actually multiply these c's out, we get the following.

We get c1 + c2 + c3 = a.

We get 2c1 + c3 = b; the c2 coefficient is 0, so I just leave that spot empty. And I get c1 + 2c2 = c, with nothing in the c3 spot.

This is, of course, equivalent to the following augmented matrix... the first row is (1,1,1,a).

Again, I am taking the coefficients of c1, c2, c3... c1, c2, c3 is what I am looking for.

Can I find a solution to this? If I can, then yes... (2,0,1,b), (1,2,0,c)... so this is the system that we are going to solve.

We are going to subject it to reduced row echelon form. I will not go ahead and show you the row reduction itself... I of course did this with my computer, with my Maple software.

Fast and beautiful. As it turns out, this does have a solution. In other words, it has a non-trivial solution, and here is what it looks like.

You end up with c3 = (4a - b - 2c)/3.

Let us see... c2 = (a - b + c)/3... let me go ahead and put parentheses around the numerator so that you know what is being divided, because I am writing my fractions not two-dimensionally, but in a line.

c1 = (-2a + 2b + c)... and again, all divided by 3.

So, for any choice of a, b, or c, I can just put them in here, and the constants that I get end up being a solution to this system.

So, because there is a solution to this system, that means there is a solution to the original equation, because these are all equivalent.

That means that for any vector I choose, and again, I just chose an arbitrary (a,b,c)... I can find constants for it, and these are the explicit values of those constants no matter what vector I choose.

So, yes... let us try this again... so here the answer is yes... v1, v2, v3 do span R3.

Let me make this red... that vector, that vector, and that vector are a perfectly good spanning set for R3.
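As a quick sanity check (my addition, not part of the lecture), we can plug the three formulas from the reduced row echelon form into the linear combination for a sample (a, b, c) and confirm we get the vector back:

```python
import numpy as np

# Verify the formulas c1 = (-2a+2b+c)/3, c2 = (a-b+c)/3, c3 = (4a-b-2c)/3
# by reconstructing a sample vector (a, b, c) as c1*v1 + c2*v2 + c3*v3.
v1 = np.array([1, 2, 1])
v2 = np.array([1, 0, 2])
v3 = np.array([1, 1, 0])

a, b, c = 5.0, -3.0, 2.0              # any sample vector works
c1 = (-2*a + 2*b + c) / 3
c2 = (a - b + c) / 3
c3 = (4*a - b - 2*c) / 3

print(c1*v1 + c2*v2 + c3*v3)          # → [ 5. -3.  2.]
```

Because the reconstruction works for every (a, b, c), not just this sample, the three vectors span R3.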

Now, you know, of course, or if you do not, I am telling you right now... that for R3, the standard 3 vectors that we use as the set which spans the space are of course the i vector in the x direction, the j vector in the y direction, and the k vector in the z direction.

Those are mutually perpendicular. Well, as it turns out, many different collections of vectors may span a vector space.

There is no particular reason for choosing one over the other, so there are an infinite number of them, but in certain circumstances it makes sense to choose one over the other.

In the case of i, j, k, we choose them because they have the property that they are mutually orthogonal.

Which actually makes it easier to deal with certain things.

Okay. Let us consider another example. Let us consider... we will do this example in red... P2.

If you recall, P2 is the vector space of all polynomials of degree 2 or less.

So, we will let our set S, this time we will actually do it in set notation... {p1(t), p2(t)}... oh, this p2 right here has nothing to do with that P2.

That is the general symbol for P2, the space of polynomials of degree 2. This one just happens to be number 2 in the list.

p1... let us define that one as t^2 + 2t + 1, and we will say that p2(t) is t^2 + 2.

Okay. We want to know, does S... do these 2 polynomials... are they enough to span all of P2?

In other words, can I take two constants c1 and c2, and multiply them by these 2... c1 × this one, c2 × this one?

Can I always find constants such that every single polynomial, every second-degree polynomial, or first-degree polynomial, remember it is degree 2 or less... can be represented by just these 2 vectors? Well, let us find out.

So, again, the first thing that we do is we choose an arbitrary vector in P2... and an arbitrary vector in this space of polynomials looks as follows: at^2 + bt + c... because it is degree 2 or less.

And now... we want to show the following... we want c1 × p1(t) + c2 × p2(t) = our arbitrary vector that we picked.

This one up here equals at^2 + bt + c, so again, we are just setting up a basic equation... a linear combination of the vectors that we are given... does it equal our arbitrary vector?

Well, let us go ahead and expand this out based on what they are... so, we have c1 × p1(t), which is t^2 + 2t + 1, plus c2 × (t^2 + 2)... okay.

We want that to equal at^2 + bt + c... let us multiply this out.

Now, when you multiply this out, this is going to be c1t^2 + 2c1t + c1, and then c2t^2 + 2c2.

I am going to skip that step... and just... imagine that I just multiplied it, basic distribution, you know, something from algebra 1.

Then I am going to combine terms in t^2, in t, and in t to the 0 power.

So, it is going to look like this when I actually expand it. It is just one line that I am skipping.

It is going to be (c1 + c2)t^2 + 2c1t + (c1 + 2c2) = at^2 + bt + c.

Well, we have an equality sign here. Let me change this to blue... this is the coefficient of t^2 here.

This is the coefficient of t^2 on the other side of the equality, so that equals that.

That is our first equation... c1 + c2 = a.

Well, 2c1 is the coefficient of t here, and b is the coefficient of t there. They are equal on both sides, so this one becomes 2c1 = b.

Then, we do this one over here. We have c1 + 2c2 = c.

This is going to be equivalent to the following augmented matrix... (1,1,a)... I am just taking coefficients... (2,0,b), and (1,2,c).

Now, when I subject that to reduced row echelon form, this one I do want you to see...1164

So, we subject that to Gauss Jordan elimination for reduced row echelon... we get (1,0,2a-c).1173

(0,1,c-a), and we get (0,0,b-4a+2c). We get this as the reduced row echelon of the system that we just took care of.1185

Now, this is possible if and only if... so you see this 0 right here... that means this thing right here, the b - 4a + 2c, has to equal 0 for this system to be consistent.1201

Well, b - 4a + 2c = 0 does not hold for an arbitrary choice of a, b, and c... pick, say, a = 0, c = 0, b = 1, and it fails. It only holds for the special polynomials that happen to satisfy that relation.1223

So, for a general polynomial in p2 there is no solution to this system. So, the answer to this one is no.1234

p1(t) and p2(t), those two polynomials, do not span p2, which is the vector space of polynomials of degree 2 or less.1239
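This conclusion is easy to check numerically. Here is a minimal sketch (my addition, not part of the lecture, and assuming numpy is available) that tries to write the polynomial t as a combination of p1 and p2 by least squares; the nonzero residual confirms that no exact solution exists:

```python
import numpy as np

# Columns hold the coefficients (t^2, t, constant) of p1(t) = t^2 + 2t + 1
# and p2(t) = t^2 + 2.
A = np.array([[1.0, 1.0],
              [2.0, 0.0],
              [1.0, 2.0]])

# Target polynomial t, i.e. a = 0, b = 1, c = 0.
b = np.array([0.0, 1.0, 0.0])

coeffs, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(residual)  # nonzero: t is NOT a linear combination of p1 and p2
```

The same check with any polynomial violating b - 4a + 2c = 0 gives a nonzero residual as well.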

So, it is not enough. I need some other vector, I do not know, maybe 2 or 3 of them.1254

Let us try... let us go back to red here for example number 3.1268

These will just list, you know these already... i and j span R2.1278

We know that i,j, and k, the three unit vectors in the (x,y,z) direction span R3, and so on, onto R4... R5.1287

So, after the first few we do not give the vectors individual letters like i, j, k anymore... we just call them e.1298

e1, e2, and so on all the way to eN... They span n-space, which is RN.1305

Now, you have probably already noticed this, but notice, I have 2 vectors that span R2, three vectors that span R3, N vectors needed to span RN, that is a general truth.1318
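The claim about the standard basis is easy to see concretely: for any vector in Rn, its own components are exactly the coefficients needed in the combination of e1 through eN. A small illustration in R3 (my addition, assuming numpy is available):

```python
import numpy as np

e1, e2, e3 = np.eye(3)            # the standard basis vectors of R^3
v = np.array([5.0, -2.0, 7.0])    # an arbitrary vector

# The components of v are exactly the coefficients needed.
combo = 5.0 * e1 + (-2.0) * e2 + 7.0 * e3
print(np.array_equal(combo, v))   # True
```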

We will talk more about that in a minute... well, actually the next lesson.1330

Okay. Now we are going to get to a profoundly, profoundly important example.1336

This one, we want to do very, very carefully... therefore, so, let us consider the following homogeneous equation... ax = 0, such that a, and we are actually going to explicitly list this vector... this matrix, excuse me, (1,1,0,2), (-2,-2,1,-5), (1,1,-1,3), (4,4,-1,9). Okay.1343

So, we have this homogeneous system, matrix × some vector x is equal to the 0 vector. Here is our matrix a.1386

In other words, what are my x values that will actually make this true such that when I multiply this by some vector x, which I will put in blue...1397

Let us just say this is going to be x1, x2, x3, x4... what are the values of x that will actually make it true so that when I multiply these, I end up with 0.1407

It might be one vector, it might be an infinite number of vectors, it might be no vectors.1417

So, if you remember what we called the set of vectors that actually make this true, we called it the null space.1423

The solution space of the homogeneous system... in other words, the vectors that make this equation true. We called it the null space.1430

So, let me just write that down again... the null space is a very, very important space. Okay.1437

Now, one thing that we also know about the null space is the null space is a subspace... of... in this case, R4, remember?1445

So, if we happen to be dealing with a null space in R4, you have these 1, 2, 3, 4... 4 by 4, we are looking for a vector which has 4 entries in it, so it is from 4-space.1465

And... remember that we proved that this null space is actually a subspace of R4.1476

Not just a subset... it is a very special kind of subset... it is an actual subspace.1480

In other words, it is a vector space in its own right. It is as if I can ignore the rest of R4, just look at that, and I can treat it the same way I would the rest of that vector space, it has special properties... all of the properties of a vector space.1486

Okay. Now, can we find a set of vectors... here is our problem here... can we find a set of vectors that spans the null space?1499

Let me write that down... can we find a set of vectors... set which spans... singular because it is set... this null space.1511

So, again, we are not looking to span the entire R4. We are just looking for something that will span this particular null space. The null space based on this particular matrix.1535

Well, let us solve this system... Let us see what x's we can come up with.1546

So, when we solve the system, we create the augmented matrix. When I take this and I put 0's over on the final column in matrix a, and I end up subjecting that to reduced row echelon form, which I will not do... well, actually you know what, let me go ahead and write it all out. It is not a problem.1555

So, I have 1...1... yes, (1,1,0,2,0), (-2,-2,1,-5,0), (1,1,-1,3,0)... and (4,4,-1,9,0).1581

So, this is our augmented matrix. This is just this thing in matrix form, and then I am going to subject this to Gauss Jordan elimination to convert it into reduced row echelon, so let us write out the reduced row echelon form of this.1613

It is going to be (1,1,0,2,0), (0,0,1,-1,0), (0,0,0,0,0)... 0's everywhere.1626

Okay. So, this is our reduced row echelon, and remember... this represents x1, x2, x3, x4. This is the -- we are looking for a vector... a 4 vector.1646
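That reduced row echelon form can be double-checked with a computer algebra system. A quick sketch (my addition, assuming sympy is available) applying Gauss-Jordan to the coefficient matrix a:

```python
from sympy import Matrix

A = Matrix([[ 1,  1,  0,  2],
            [-2, -2,  1, -5],
            [ 1,  1, -1,  3],
            [ 4,  4, -1,  9]])

rref, pivots = A.rref()
print(rref)    # rows (1,1,0,2), (0,0,1,-1), then all zeros
print(pivots)  # (0, 2): leading entries sit in the x1 and x3 columns
```

The pivot columns are x1 and x3, which is why x2 and x4 end up as the free parameters below.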

So, let us take a look here. This one is fine. This one has a leading entry -- let me go to red -- so, this one is good, and this one has a leading entry.1661

This one does not have a leading entry... that is the x2 column. The x3 column does, but the x4 column does not.1672

So, that second does not have a leading entry and the fourth does not have a leading entry.1681

Remember, when we do something, when we convert it to reduced row echelon, the columns that do not have a leading entry actually are free parameters... I can call them s, t, x, y, they can be any number I want.1686

Then I solve for the other two. So, let us go ahead and set x2 = r, and we will set x4 = s, they can be any numbers, they are free parameters.1697

Then, what I get is x1 + x2 + 2x4, that is what this line here says... x1 + x2 +2x4 = 0.1716

Therefore, since x2 is r and x4 is s, x1 = -r - 2s.1734

So, I have x2, I have x4, I have x1 that I just calculated, and now I will do x3... x3 here... x3... this right here...x3 - x4 = 0.1745

So, x3 = x4, which is equal to s.1768

So, we have x2 is r, x4 is s, x3 is s, excuse me, and x1 is -r - 2s. Let me rewrite that a little bit differently.1774

I am going to write that as the following -- let me go back to blue here.1785

x1 = -r - 2s, x2 = r, and there is a reason why I am writing it this way, you will see in a minute... x3 = s -- I will put the s over there in that column.1791

x4 = s. Well, take a look at this... down the x1, x2, x3, x4 rows, the r coefficients are -1, 1, 0, 0, and the s coefficients are -2, 0, 1, 1.1811

I can rewrite this as the following in vector form. This is equivalent to (x1, x2, x3, x4) equal to, let me pull out the r from here, and I can just take these coefficients (-1,1,0,0).1823

And... plus, now I can pull out an s from here, and I can take these coefficients, (-2,0,1,1), if that is not clear, just stop and take a look at what it is that I have done.1850

I will just treat this as a column... treat this as a column... Okay.1864

So, notice what I have done. I have taken the solution space, which is x1, x2, x3, x4... this is because I have solved the system... these are all the possible solutions.1870

There is an infinite number of them because r and s can be anything. I have written it in vector form this way, as a linear combination of this vector and that vector.1884

These are just arbitrary numbers, right? The r and the s are just arbitrary numbers, therefore I have expressed the solution set, which is this thing of the homogeneous system based on that matrix.1893

I have expressed it as the linear combination of this vector and this vector.1906

Therefore, those 2 vectors (-1,1,0,0) and (-2,0,1,1), they actually span the null space.1912

The null space has an infinite number of solutions, that is what our system tells us here.1930

Well, I know that I can describe all of those solutions by reducing it to two vectors, any linear combination of which will keep me in that null space.1937

It will give me all of the vectors, all of the vectors are represented by a linear combination of this vector and that vector.1947

They span the null space of the homogeneous system.1953
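It is worth confirming that those two vectors really do lie in the null space: multiplying the matrix a by each must give the zero vector. A minimal check (my addition, assuming numpy is available):

```python
import numpy as np

A = np.array([[ 1,  1,  0,  2],
              [-2, -2,  1, -5],
              [ 1,  1, -1,  3],
              [ 4,  4, -1,  9]])

v1 = np.array([-1, 1, 0, 0])   # the r-direction
v2 = np.array([-2, 0, 1, 1])   # the s-direction

print(A @ v1)  # [0 0 0 0]
print(A @ v2)  # [0 0 0 0]
# By linearity, any combination r*v1 + s*v2 is also in the null space.
```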

Again, profoundly important example. Go through this example again carefully to understand what it is that I did.1957

I had a homogeneous system, I solved that homogeneous system for the solution space. I represented that solution space... once I have that... I represent it as vectors, and these vectors that I was able to get -- because these are arbitrary constants -- well, that is the whole idea behind a span.1965

I was able to represent the entire solution space with only 2 vectors. That is extraordinary.1983

Okay. Thank you for joining us here at Educator.com for the discussion of span, we will see you next time.1990