WEBVTT mathematics/multivariable-calculus/hovasapian
00:00:00.000 --> 00:00:04.000
Hello and welcome back to educator.com and multivariable calculus.
00:00:04.000 --> 00:00:09.000
Today we are going to start our discussion of potential functions, so let us just jump right on in.
00:00:09.000 --> 00:00:20.000
We are going to start with a definition of course, of what a potential function is. This is a very, very, very important discussion, especially for those of you in physics.
00:00:20.000 --> 00:00:31.000
Well, not necessarily just for physicists but in particular in physics because most of classic physics is based on the notion of a potential function, a conservative potential function.
00:00:31.000 --> 00:00:37.000
In any case, let us just jump in and see what we can do mathematically. Okay.
00:00:37.000 --> 00:01:57.000
So, definition. Let F be a vector field on an open set s -- okay -- if small f is a differentiable function on s such that the gradient of small f happens to equal capital F, the vector field, then we say f is a potential function for F. Small f is a potential function for capital F.
00:01:57.000 --> 00:02:14.000
So, what we are saying is if we are given a vector... so basically what we have been doing up to this point... at least earlier before we started the line integral discussion, we have been given functions and then we have been taking the gradient of that particular function.
00:02:14.000 --> 00:02:24.000
Now what we are saying is -- and the gradient is a vector field -- what we are saying is that we are not going to start with a function, we are actually going to give you some vector field.
00:02:24.000 --> 00:02:30.000
We want to know if it happens to be the gradient of some function. That is all we are saying.
00:02:30.000 --> 00:02:37.000
You might think to yourself, of course if you have a vector field it comes from somewhere -- as it turns out, that is not the case.
00:02:37.000 --> 00:02:48.000
This is analogous to single variable calculus, when you are given a particular function and asked: can you find the function that this is the derivative of? That is what we are asking.
00:02:48.000 --> 00:03:01.000
So, if we are given a vector field first, does there exist a function such that this is the gradient of... that is what a potential function is, it is that particular function.
00:03:01.000 --> 00:03:33.000
So, let us just sort of write all of this out. So, up to now, we have started with f and taken grad f to get a vector field. The vector field is the gradient.
00:03:33.000 --> 00:03:56.000
If given a vector field first, can we recover a function f? That is the question.
00:03:56.000 --> 00:04:29.000
If we can, then f is a potential function. f is a potential function for the given vector field.
00:04:29.000 --> 00:04:40.000
The idea -- well, this one I do not have to write out -- so again, the idea of finding a potential function for a vector field is analogous to finding the integral of a function over an interval.
00:04:40.000 --> 00:04:53.000
Does one exist? Precisely because the idea of taking the gradient of a particular function which we have been doing is analogous to taking the derivative. That is what we are doing when we are taking the gradient. We are taking partial derivatives.
00:04:53.000 --> 00:05:01.000
But essentially a gradient, taken as a whole, is the derivative of a function of several variables.
00:05:01.000 --> 00:05:10.000
Now we are just working backwards. We are essentially trying to integrate a gradient -- that is what we are doing. Can we integrate it?
00:05:10.000 --> 00:05:25.000
If we do, does a function exist? Well, in all questions like this, that is essentially the central question. Does the potential function exist, and if it does, well, is it unique, and how can we find it?
00:05:25.000 --> 00:05:39.000
Those are sort of the 3 questions that occupy pretty much the majority of science, of mathematics. Does something exist, is it unique, and can we find it? Is there a way that we can find it, or can we just say that it exists?
00:05:39.000 --> 00:05:47.000
It is nice to be able to say that it exists, or it does not exist, but from a practical standpoint, we want to be able to find it. Okay.
00:05:47.000 --> 00:06:01.000
Let us see what we have, so, let us talk about uniqueness first. That is actually the easy part, so, uniqueness.
00:06:01.000 --> 00:06:12.000
So, let us start out with a definition here, so we are going to define something called a connected set. Excuse me.
00:06:12.000 --> 00:07:12.000
An open set s is called connected if, given 2 points p1 and p2 in s, there exists a differentiable curve contained in s which joins p1 and p2.
00:07:12.000 --> 00:07:18.000
So, this is just a fancy definition for something that is intuitively clear, so let us go ahead and draw a picture.
00:07:18.000 --> 00:07:32.000
We might have some set s like that, this might be p1, this might be p2, so this is p1, this is p2 -- is there some differentiable curve that actually connects them?
00:07:32.000 --> 00:07:45.000
It could be a straight line, it could be some series of straight lines, a piecewise path -- is there some path that actually connects them, some differentiable path that connects them? So that is a connected set.
00:07:45.000 --> 00:08:03.000
That is it. Essentially it is just a set that is one piece. If I had something like this, with this p1 and this p2, well then in this particular case there is no differentiable curve that actually connects them. This is still an open set, and it is still a viable open set, it is just not connected.
00:08:03.000 --> 00:08:15.000
There is no path that connects them that stays in the open set. I am going to cross the boundary and I am going to be out in this no man's land. So, this is not connected.
00:08:15.000 --> 00:08:29.000
So, we need this notion of connectedness here. This one is not connected, and this is a connected set. Now we will go ahead and state our theorem.
00:08:29.000 --> 00:10:02.000
Let s be a connected set -- uhh, let us say a little more than just connected -- a connected open set... okay... if f and g are two differentiable functions on s, and if the gradient of f happens to equal the gradient of g at every point of s, then there exists a constant -- oh, by the way, this reverse E is a symbol for "there exists" -- a constant c such that f(x) = g(x) + c for all points x in s.
00:10:02.000 --> 00:10:19.000
So, basically what this means is that open connected set, if you have two functions, f and g, where if you take the gradient of f and it equals the gradient of g at every point, then essentially what we are saying is that those two functions are exactly the same up to a constant.
00:10:19.000 --> 00:10:29.000
Again, this constant is not much of an issue. They are the same, they only differ by a constant.
00:10:29.000 --> 00:10:50.000
This is sort of analogous to what we did in single variable calculus. On a given interval, say we have 2 functions f and g. If the derivative of f equals the derivative of g everywhere along that interval, well then the 2 functions are essentially the same; the only difference between them is this particular constant, which does not really matter because again, when you differentiate a constant it goes to 0.
00:10:50.000 --> 00:11:02.000
For all practical purposes, they are the same. So, on an open connected set, what this theorem is ultimately saying is that a potential function is unique. Okay.
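The uniqueness statement above can be sanity-checked symbolically. This is a supplement, not part of the lecture; it assumes the sympy library is available, and the particular functions chosen are arbitrary illustrations:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two functions that differ only by a constant
f = x**2 * y + sp.sin(x)
g = f + 7

# Their gradients (lists of partial derivatives) are identical,
# because the constant vanishes under differentiation
grad_f = [sp.diff(f, v) for v in (x, y)]
grad_g = [sp.diff(g, v) for v in (x, y)]

print(grad_f == grad_g)  # True
```

Equal gradients at every point, functions differing only by a constant: exactly the content of the theorem, run in reverse.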
00:11:02.000 --> 00:11:24.000
Now, we will go ahead and deal with existence. This is going to be the important one. So, now existence of a potential function and hopefully eventually we will talk about how to actually construct this potential function. Now, existence.
00:11:24.000 --> 00:11:40.000
Let F(x,y) = (f(x,y), g(x,y)), so these are my coordinate functions for this vector field.
00:11:40.000 --> 00:12:10.000
We want to know if and when a potential function exists for f. We want conditions that will tell us when something exists, when this potential function exists, for the vector field.
00:12:10.000 --> 00:12:34.000
So, let us go ahead and give this a symbol that we are going to use over and over again. Let us call this capital P, subscript F, of (x,y): P_F(x,y). Potential function for the vector field F, that is what this symbol means.
00:12:34.000 --> 00:13:23.000
That is, we want to know if -- again, I am going to use this reverse E -- if there exists a P_F(x,y) such that d(P_F)/dx = f, and d(P_F)/dy = g. That is what we are doing.
00:13:23.000 --> 00:13:32.000
We want to know if there is a function such that when I take the gradient of that function, this P_F, I end up with the vector field. That is the idea of a potential function.
00:13:32.000 --> 00:13:39.000
Given a vector field, is this the gradient of some function?
00:13:39.000 --> 00:13:47.000
The idea is actually really, really simple in and of itself. What is going to be difficult from a practical standpoint is keeping things straight.
00:13:47.000 --> 00:14:14.000
Up to now we have been going from functions to gradients. Now we are just going to drop a vector field on you and we are going to be trying to go backwards, so take a couple of extra seconds to make sure that you have your functions separated, that you know whether you are dealing with a function or a vector field F and its coordinate functions, and that these f and g might be the partial derivatives of the original function we are trying to recover.
00:14:14.000 --> 00:14:31.000
There are going to be lots of f's and g's and f1's and f2's floating around. Make sure you know which direction we are moving in. That is really going to be the only difficult part, the logistics of this, in and of itself, it is not... the concept is not difficult. Just be careful in the execution.
00:14:31.000 --> 00:14:45.000
So, again, so we want to know if there exists some potential function such that the partial with respect to x is equal to this, and the partial with respect to y is equal to this. okay.
00:14:45.000 --> 00:14:52.000
Let us see if we can come up with a test here. Let us suppose there is.
00:14:52.000 --> 00:15:30.000
Suppose yes. A P_F exists for a given vector field. Then -- let me write these out again, up here, because I want them to be on the same page -- so F is equal to (f, g).
00:15:30.000 --> 00:15:53.000
Now, suppose yes, that some potential function exists. Then, if I take the derivative of f with respect to y, that is going to equal the partial with respect to y of well, f happens to be the partial derivative with respect to x of the potential function.
00:15:53.000 --> 00:16:05.000
This is just d(P_F)/dx, right? Because we are supposing that the potential function actually exists. The potential function is this.
00:16:05.000 --> 00:16:27.000
The d(P_F)/dx is f; now if I take df/dy, it is going to be this thing. Well, that equals d²(P_F)/dydx, right? Okay. So we have this thing.
00:16:27.000 --> 00:16:57.000
Now, let us go ahead and do the other. Let us go ahead and differentiate g with respect to x. That is going to equal d/dx of... and we said that g was equal to the partial of the potential function with respect to y, so that equals d²(P_F)/dxdy.
00:16:57.000 --> 00:17:19.000
d² of this, dydx; d² of this, dxdy -- what do we know about mixed partials? That it does not matter what order you take them in. They are actually equal. These 2 are equal because, remember, way back in previous lessons: mixed partial derivatives are equal.
00:17:19.000 --> 00:17:29.000
Because these are equal, what we get is this and this are equal.
00:17:29.000 --> 00:17:46.000
So, df/dy = dg/dx. By supposing that a potential function actually exists, we actually derived a relation between the partial derivatives of the component functions of the vector field.
00:17:46.000 --> 00:18:05.000
If I have a vector field, and if df/dy = dg/dx, that is a consequence of the fact that a potential function exists. This actually gives me a nice way of testing whether a potential function exists, given a vector field. Okay.
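The derivation above -- a potential forces df/dy = dg/dx through the equality of mixed partials -- can be verified symbolically for any sample potential. A sketch assuming sympy is available; the particular potential chosen here is arbitrary:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Suppose a potential P_F exists; pick any smooth candidate
p = sp.exp(x*y) + x**3 * sp.cos(y)

# Its gradient supplies the component functions f and g of the vector field
f = sp.diff(p, x)
g = sp.diff(p, y)

# Equality of mixed partials of p forces df/dy == dg/dx
print(sp.simplify(sp.diff(f, y) - sp.diff(g, x)) == 0)  # True
```

Swapping in any other differentiable expression for `p` gives the same result, which is the point: the relation holds whenever a potential exists.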
00:18:05.000 --> 00:18:23.000
So, now we have a test, but we have to be careful in how we implement this test, and now we are going to get into a little bit of mathematical logic. If this, then this, we are going to have to see what implies what, we are going to have to be a little extra careful.
00:18:23.000 --> 00:18:56.000
Now we have a test. Okay. So we are going to express this test as a theorem. Now, let f and g be differentiable with continuous partial derivatives.
00:18:56.000 --> 00:19:10.000
This just means that they are well behaved, and again, the functions that we are going to be dealing with are well behaved, so for all practical purposes it is not strictly necessary to write these hypotheses out. But I think it is good to get used to seeing how theorems are written.
00:19:10.000 --> 00:19:15.000
You are going to see all of these things and it is important that all of these hypotheses be in place formally.
00:19:15.000 --> 00:19:33.000
Partial derivatives on an open set, s in 2-space, so we are going to deal with 2-space first and we will deal with 3-space in just a minute... 2-space.
00:19:33.000 --> 00:20:00.000
Okay. Here is the if-then part. If df/dy does not equal dg/dx, or, using capital D notation, if D₂(f) does not equal D₁(g) -- remember this capital D notation? It says take the derivative of f with respect to the second variable.
00:20:00.000 --> 00:20:09.000
Take the derivative with respect to the first variable of g. In this case we have 2 variables, x and y, so the second variable is y, the first variable is x.
00:20:09.000 --> 00:20:48.000
This and this are just two different notations for the same thing. Then, there is no potential function for the vector field -- let me write this a little bit better -- for the vector field F, which is comprised of the functions f and g.
00:20:48.000 --> 00:21:00.000
Okay. So, if I am given a vector field (f, g), all I need to do is take the derivative of f with respect to y and the derivative of g with respect to x, and see if they are equal.
00:21:00.000 --> 00:21:10.000
If they are not equal, I can automatically conclude that there is no potential function, and I can stop right there. That is what this test is telling me.
00:21:10.000 --> 00:21:18.000
We came up with that by supposing that a potential function exists. If a potential function exists, then df/dy has to equal dg/dx.
00:21:18.000 --> 00:21:35.000
Our test is written in the contrapositive form. We work in reverse. If the potential function exists, then this has to be true. That is the same as if this is not true, then a potential function does not exist. I am going to be talking about that in just a minute.
00:21:35.000 --> 00:21:51.000
But, again, from a practical standpoint, our test is this: if I am given a vector field with one coordinate function and a second coordinate function, I take df/dy, I take dg/dx, and I see if they are equal.
00:21:51.000 --> 00:22:07.000
If they are not equal, then I can conclude that a potential function does not exist. However, if they are equal, that does not mean I can conclude that a potential function does exist. The direction of implication is very important.
00:22:07.000 --> 00:22:19.000
This is profoundly important. Be very, very careful in how you actually use these theorems. Use it as written. If df/dy does not equal dg/dx, there is no potential function.
00:22:19.000 --> 00:22:32.000
So, let us just do an example real quickly. Then we will continue on with a discussion of the subtleties of the mathematical logic involved here. So, example 1.
00:22:32.000 --> 00:22:54.000
Let F(x,y) = (sin(xy), cos(xy)) -- I should write this a little bit better here -- sin(xy) and cos(xy), so that is our vector field.
00:22:54.000 --> 00:23:02.000
This is our f and this is our g. Well, let us go ahead and find df/dy and dg/dx.
00:23:02.000 --> 00:23:33.000
df/dy = x·cos(xy), and dg/dx = -y·sin(xy). Well, this and this are not equal. Therefore, there does not exist a potential function.
00:23:33.000 --> 00:23:57.000
In other words, this vector field does not have a potential function. So, df/dy not being equal to dg/dx implies that there does not exist a potential function for this vector field F.
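The computation in Example 1 can be checked symbolically. A supplement, not part of the lecture, assuming the sympy library is available:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Vector field from Example 1: F(x,y) = (sin(xy), cos(xy))
f = sp.sin(x*y)
g = sp.cos(x*y)

df_dy = sp.diff(f, y)   # x*cos(x*y)
dg_dx = sp.diff(g, x)   # -y*sin(x*y)

# The partials disagree, so no potential function exists
print(sp.simplify(df_dy - dg_dx) == 0)  # False
```

The two partials differ, so the contrapositive test rules out a potential function, matching the conclusion above.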
00:23:57.000 --> 00:24:12.000
This is how we use our test. Very, very nice. Let me go ahead and give you the analog for this in 3-space.
00:24:12.000 --> 00:24:29.000
The analogous test for 3-space -- now we are going to be talking about 3 variables, we just take them 2 at a time -- is as follows.
00:24:29.000 --> 00:25:05.000
The same hypotheses, of course, plus the partial derivative part of the test is -- I am going to write this in terms of capital D notation -- D1(f2) does not equal D2(f1), or D1(f3) does not equal D3(f1),
00:25:05.000 --> 00:25:07.000
Again this is where you have to be careful. Just take a couple of extra seconds to do this nice and slow.
00:25:07.000 --> 00:25:46.000
or D2(f3) does not equal D3(f2). Okay. Then F(x,y,z), which is made up of the coordinate functions f1, f2, f3 -- so here, instead of using f, g, whatever, I am just using first function, second function, third function -- does not have a potential function.
00:25:46.000 --> 00:25:52.000
So, let us go ahead and talk about the notation a little bit more. I am going to be using this one, but I am also going to be using the other one.
00:25:52.000 --> 00:26:05.000
Okay. This says -- what I have to do is I have to check the derivative of the second coordinate function with respect to the first variable, and then I have to compare that with the derivative of the first function with respect to the second variable.
00:26:05.000 --> 00:26:15.000
The derivative of the third coordinate function with respect to the first variable, I have to check to see if it is equal to the derivative of the first function with respect to the third variable.
00:26:15.000 --> 00:26:24.000
I have to check the derivative of the third function with respect to the second variable, and see if it is equal to the derivative of the second function with respect to the third variable.
00:26:24.000 --> 00:26:35.000
The three variables -- the first, second, and third, respectively -- are x, y, and z. The three functions are f1, f2, f3.
00:26:35.000 --> 00:26:45.000
So, this looks like, out here I will do this in blue, let me see if I can get this right.
00:26:45.000 --> 00:27:03.000
This is going to be d -- that is plenty of room I do not need to squeeze it in here -- this is d(f2)/dx does not equal d(f1)/dy.
00:27:03.000 --> 00:27:15.000
This one is d(f3)/dx does not equal d(f1)/dz.
00:27:15.000 --> 00:27:28.000
This one is d(f3)/dy does not equal -- excuse me, d(f2)/dz.
00:27:28.000 --> 00:27:38.000
Take some time to look at this carefully. First, second, third variable, x, y, z. First, second, third coordinate function of the vector field.
00:27:38.000 --> 00:27:53.000
I am mixing notations -- that is okay. Some people like this notation, some people like that notation; just make sure that you pair up properly. This notation is nice because you have 1, 2, 2, 1... 1, 3, 3, 1... 2, 3, 3, 2.
00:27:53.000 --> 00:28:03.000
All of the indices are taken care of so you can see that. It is just a question of getting used to this, you are probably not used to this too much, depending on what notation your teacher uses.
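The three pairwise checks can be bundled into one routine. This is a sketch, not part of the lecture: the helper name `has_no_potential` and the sample fields are illustrative, and the sympy library is assumed:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def has_no_potential(F, variables):
    """Return True if some pair of cross-partials disagrees,
    which rules out a potential function (the contrapositive test)."""
    n = len(F)
    for i in range(n):
        for j in range(i + 1, n):
            lhs = sp.diff(F[j], variables[i])  # D_i(f_j)
            rhs = sp.diff(F[i], variables[j])  # D_j(f_i)
            if sp.simplify(lhs - rhs) != 0:
                return True
    return False

# A gradient field (the gradient of p = x*y*z) passes the test...
print(has_no_potential([y*z, x*z, x*y], (x, y, z)))  # False
# ...while this field fails it: D1(f2) = -1 but D2(f1) = 1
print(has_no_potential([y, -x, sp.Integer(0)], (x, y, z)))  # True
```

Remember the direction of the logic: a `True` result excludes a potential function, but a `False` result by itself does not guarantee one exists.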
00:28:03.000 --> 00:28:11.000
So, now let us go ahead and discuss a little bit more about how this theorem is stated, and talk about some mathematical logic.
00:28:11.000 --> 00:28:38.000
So, pay close attention to how the theorem is stated. How the theorem is stated.
00:28:38.000 --> 00:28:54.000
It is stated in something called contra-positive form.
00:28:54.000 --> 00:29:23.000
Contra-positive form is if not a, then not b. Okay? This is equivalent to its positive form.
00:29:23.000 --> 00:30:00.000
That is the standard form: if b, then a. So, if b, then a -- that is how we actually did it. That is how we derived it: we presumed that if F has a potential function, then df/dy = dg/dx.
00:30:00.000 --> 00:30:06.000
But we use it in contra-positive form because we want to have a test. If not a, then not b.
00:30:06.000 --> 00:30:19.000
In other words, if df/dy does not equal dg/dx, then we know that a potential function does not exist. These 2 are equivalent. We will talk a little bit more about this in a minute.
00:30:19.000 --> 00:30:39.000
So let me write that down, so if there exists a potential function, then df/dy = dg/dx.
00:30:39.000 --> 00:30:48.000
The contrapositive form is the way we used it as a test. If this does not exist, then, there is no potential function. Okay.
00:30:48.000 --> 00:31:00.000
Now, the converse is not true. Let me circle this. This is the positive form. If there exists a potential function, then df/dy = dg/dx.
00:31:00.000 --> 00:31:22.000
Okay. The converse is not generally true. The converse would be if df/dy = dg/dx, then a potential function exists.
00:31:22.000 --> 00:31:33.000
All I do is switch the two parts of the if/then: I take this over here, and this over there. If I just switch the places, that is the converse, and that is not generally true.
00:31:33.000 --> 00:31:41.000
However if I switch places and I negate both, that is the contrapositive, that is true. It is equivalent to the positive.
00:31:41.000 --> 00:31:57.000
Now let us go ahead and write this out, so, positive, we have if b, then a. That is the positive, that is the standard form.
00:31:57.000 --> 00:32:14.000
The contrapositive is if not a, then not b. This is equivalent.
00:32:14.000 --> 00:32:20.000
This is equivalent to the positive. That we can always use.
00:32:20.000 --> 00:32:43.000
The converse is, notice, if b, then a... if a, then b... in other words, can I conclude that it works in the other direction? No. This is a completely different statement in fact -- I will not write different statement, let me write not equivalent to the positive.
00:32:43.000 --> 00:32:58.000
This is not equivalent to the positive, so, now in terms of our theorem.
00:32:58.000 --> 00:33:26.000
So, our theorem, the positive of our theorem says if a potential function for a vector field exists, then, d2f1 = d1f2, and so on for all of the other partials depending on how many we are talking about.
00:33:26.000 --> 00:33:34.000
In order to be useful in a test, we take the contra-positive form.
00:33:34.000 --> 00:34:05.000
So, the contra-positive, if d2f1 does not equal d1f2, then the potential function for f... well, let me go ahead and use some symbols... then there does not exist a potential function for f.
00:34:05.000 --> 00:34:34.000
The converse, which is if d2f1 = d1f2, if I just switch this and switch this, then there exists a potential function -- I will do this in red -- not true.
00:34:34.000 --> 00:34:47.000
This is true. The contra-positive is true; it is how we use it as a test. The converse is not generally true. In general, you would have to actually prove the converse separately if it were true; in this case, it is not.
00:34:47.000 --> 00:34:58.000
You are going to find vector fields where d2f1 = d1f2, but a potential function does not exist, so I cannot necessarily conclude that one does.
00:34:58.000 --> 00:35:08.000
So, think about it this way. I can conclude that if it is raining outside, then it is cloudy. That is an automatic implication. If it is raining, then it is cloudy.
00:35:08.000 --> 00:35:23.000
Well, I can also conclude the contra-positive. If it is not cloudy, then I know that it is not raining. That is the contra-positive. Now let us go back to the positive, if it is raining, then it is cloudy, but I cannot necessarily conclude that if it is cloudy, it is raining.
00:35:23.000 --> 00:35:31.000
It can be cloudy but not raining -- that is possible. The converse and the contra-positive are two entirely different things.
00:35:31.000 --> 00:35:37.000
So, in this particular case, we have the positive: if a potential function exists, then these partials are equal.
00:35:37.000 --> 00:35:48.000
We use it as a test, we use the contra-positive version to exclude the possibility. If the partials are not equal, then the potential function does not exist.
00:35:48.000 --> 00:36:02.000
Be very, very careful in how you use these and do not mix the converse with the contra-positive. Just because something implies something else does not mean that it works in reverse. That is not true in general.
00:36:02.000 --> 00:36:14.000
So, I am going to just say one more thing about this. Just to make sure that we are solid. Compare this with the nth term test.
00:36:14.000 --> 00:36:36.000
You actually have seen this type of mathematical logic before back when you were doing infinite series, so you compare the nth term test for divergence of an infinite series... for divergence of an infinite series.
00:36:36.000 --> 00:36:57.000
The positive of that theorem is this: if the series Σa_n converges, then the limit as n approaches infinity of the nth term is equal to 0. That is the positive.
00:36:57.000 --> 00:37:05.000
I know that if I have a series that converges, I know that if I take the limit of the nth term, this thing, I know that it is going to equal 0.
00:37:05.000 --> 00:37:37.000
Now, for testing purposes, we use the contra-positive. We say if the limit as n approaches infinity of the nth term does not equal 0, then the infinite series does not converge.
00:37:37.000 --> 00:37:57.000
Now, the converse would be the following: if the limit of a_n as n goes to infinity equals 0, then the series converges.
00:37:57.000 --> 00:38:10.000
All I have done is switch places with this and this. This is not true. Take the harmonic series, the sum of 1/n: the limit of the nth term goes to 0, but the harmonic series does not converge.
00:38:10.000 --> 00:38:19.000
So, positive was fine, the contra-positive is an equivalent form of that. The converse is not. So, that is it.
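The harmonic series counterexample can be checked symbolically. A supplement assuming sympy is available:

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)

# The nth term of the harmonic series goes to 0...
print(sp.limit(1/n, n, sp.oo))                     # 0
# ...yet the series itself diverges, so the converse
# of the nth term test fails
print(sp.Sum(1/n, (n, 1, sp.oo)).is_convergent())  # False
```

A vanishing nth term is necessary for convergence but not sufficient, exactly parallel to the potential-function test.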
00:38:19.000 --> 00:39:15.000
One final statement, so for our theorem, let me go back to blue here. So, for our theorem, the equality of the partials, d2(f1) = d1(f2), is a necessary condition -- let me actually capitalize this; you are going to see these words used a lot, they are very, very important words in mathematics, and they should be more important in science -- a necessary condition for the existence of a potential function for F.
00:39:15.000 --> 00:39:46.000
But alone, it does not suffice. It is not sufficient. So, just because I have the d2(f1) = d1(f2) that is necessary for a potential function to exist, but it is not enough for the potential function to exist.
00:39:46.000 --> 00:39:55.000
I need other things to be there in order for the potential function to exist. But, if it does exist, then yes, that is certainly going to hold. So it is necessary but it is not sufficient.
00:39:55.000 --> 00:40:03.000
In other words, clouds are necessary for a rainy day, but they are not sufficient for a rainy day. It has to be a particular type of cloud; there have to be other conditions that are met.
00:40:03.000 --> 00:40:13.000
That is what we are saying. So the words necessary and sufficient are very, very important in mathematics. Just because something is necessary does not necessarily mean that it is sufficient.
00:40:13.000 --> 00:40:19.000
I will go ahead and leave it at that. Thank you for joining us here at educator.com, we will see you next time, bye-bye.