Transcript
CBrayMath216-4-1-b.mp4
So another theorem about these sorts of differential equations-- now, you'll notice we have some of the same conditions here that we had when we were talking about the existence and uniqueness theorem. We require that the coefficient functions be continuous. We require that the lead coefficient be nonzero.
And we also require now here a little bit more-- so the right-hand side is 0, so we're looking specifically at a
homogeneous differential equation satisfying the conditions of the existence and uniqueness theorem. And of
course, this does satisfy the continuity requirement, because of course, the zero function is continuous.
So what this theorem says is that if those conditions are satisfied-- in other words, for any homogeneous differential equation satisfying the conditions of the existence and uniqueness theorem-- every solution is a C^n function. Namely, it's n times differentiable, and the nth derivative is continuous.
Furthermore, the collection of solutions forms a vector subspace of C^n. And even more so, it is an n-dimensional vector subspace. And n, specifically, is the order of the differential equation.
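For reference, the setup the theorem refers to can be written out in standard notation. The slide itself isn't reproduced in this transcript, so the following rendering is an assumption based on the surrounding discussion:

```latex
q_n(x)\,y^{(n)} + q_{n-1}(x)\,y^{(n-1)} + \cdots + q_1(x)\,y' + q_0(x)\,y = 0,
\qquad q_0, \ldots, q_n \text{ continuous}, \quad q_n(x) \neq 0,
```

and the theorem asserts that the set of solutions of this equation is an n-dimensional subspace of C^n.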
So this is a neat theorem. We're going to make an enormous amount of use out of this theorem in this chapter.
We're going to start by proving most of this theorem. We won't prove the whole thing right now, but we will
prove most of it.
I'll start with C^n. How do we know that these solutions to this differential equation are C^n? And the good news here-- well, it's just not that hard to solve for the nth derivative of y. You can take all of the rest of the left side of the equation and move it over to the right like so.
And then you can divide by that lead coefficient q_n. By the way, this is why we need q_n to be nonzero. You can't divide by it if it's equal to 0. So that's nice. And lo and behold, we have solved for the nth derivative of y.
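Written out explicitly (again under the assumed standard form above), the rearrangement looks like this, minus sign and all:

```latex
y^{(n)} = -\,\frac{q_{n-1}(x)\,y^{(n-1)} + \cdots + q_1(x)\,y' + q_0(x)\,y}{q_n(x)}.
```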
And notice what that nth derivative of y is equal to. All of these coefficients are known to be continuous, the denominator as well. Furthermore, I know that all of these values and derivatives of y are continuous.
Why do I know that they're continuous? Well, because we have a solution to this differential equation, which means that there is an nth derivative. That means that even the (n-1)st derivative has to be differentiable in order for that nth derivative even to exist. And if it's differentiable, well, then it's certainly continuous.
So what we have, then, is that y^(n) is a combination of known continuous functions. And therefore, it, too, is continuous. And if the nth derivative is continuous, that means, of course, that the function is C^n.
OK. So we have proved that.
By the way, it looks like I forgot a minus sign here. Sorry about that. OK. Next-- how do we know that these
form a vector subspace? How do I know that this complete set of solutions is a subspace? Well, we have a
method for how to decide if something is a subspace, and that is you just check that it's closed under addition
and you check that it's closed under scalar multiplication.
So how do I know-- suppose that y1 and y2 are solutions. How do I know that y1 plus y2 is a solution? Well, I need to compute L of y1 plus y2, and I need to make sure that it's equal to 0. That's what it would mean for y1 plus y2 to be a solution.
OK. So how do I know that that is actually true? Well, by linearity, I know that. And given that y1 and y2 are
solutions, that tells me-- y1 being a solution gives me that and y2 being a solution gives me that. And of
course, 0 plus 0 is 0. And so as required, y1 plus y2 is a solution. So therefore, this set is closed under
addition.
And then very similarly, for being closed under scalar multiplication, we assume that y1 is a solution. And using linearity and that fact, we conclude that any scalar multiple of y1 must be a solution.
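In symbols, with L denoting the linear differential operator on the left-hand side (notation assumed, since the board isn't visible in the transcript), the two closure computations are:

```latex
L(y_1 + y_2) = L(y_1) + L(y_2) = 0 + 0 = 0,
\qquad
L(c\,y_1) = c\,L(y_1) = c \cdot 0 = 0.
```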
Now, there is-- before I go on, yeah, this is a good callback moment. You look at this argument here, the
algebra that we went into to show that this collection of solutions is closed under addition and closed under
scalar multiplication-- I strongly encourage you to do this. Go back and compare what we've just argued to a
previous argument that we made about homogeneous solutions to matrix equations.
We made a very similar argument at that time. And it's worth observing that there is a strong similarity there, because this is one example of many parallels we're going to notice between homogeneous linear differential equations and matrix equations.
Now, there is one last thing to confirm. We've confirmed that these solutions are C^n. We've confirmed that they form a subspace. It remains to consider the question of whether this subspace is n-dimensional. We do technically have tools that would allow us to prove this at this moment, but there are a lot of details.
And very soon in the course-- not far at all in the future-- we will have some more powerful tools that will allow
us to draw this conclusion more easily. So I'm going to delay this proof until we have this more powerful linear
algebra tool called a linear transformation.
All right. So we've established this theorem to the extent we're going to at the moment, and the rest of the proof is coming. And there's a big consequence of this theorem that we're going to take advantage of on the next page. Roughly speaking, what this theorem has done is take the question of solving a differential equation, which appears at a glance to be an analysis problem-- calculus, that kind of thing. You see there are derivatives. You can suspect that there might be some integrals involved somewhere. And it just feels like basically a calculus problem.
But what we realize now, what this theorem tells us is that maybe it is, but it is also very much a linear algebra
problem. This is not just a calculus problem. This is a linear algebra problem. We're trying to understand a
linear algebra object. We're not just trying to find a bunch of solutions that have nothing to do with each other.
We're trying to understand an n-dimensional subspace of a known vector space.
So an important realization-- these kinds of differential equations, they're not analysis problems alone. They
are linear algebra problems. And that's going to be enormously important as we go through the rest of the
differential equations in this course.
A little bit of lingo-- since we are talking about a vector space, we're going to talk about a basis for that vector
space, specifically the vector space of solutions to the homogeneous linear differential equation. And this basis
has a name. It's classically called a fundamental set of solutions.
One could argue that this terminology is not needed. We could just call it a basis for the set of solutions. And
that's fine, but this terminology is very much in the culture, and students should be aware of it. And in keeping
with what everyone else does, we will also be using this term, fundamental set of solutions, throughout this
course.
But importantly, keep in mind what it is. It's just a basis. It's a basis for the vector space that we now realize is
actually the object of our interest. So here's an example of how we can use this. Here's a differential equation.
Easy differential equation to write down.
It's not immediately clear at a glance how you would go about solving it, or what calculus methods would be used to solve this differential equation. It's seemingly a pretty hard problem.
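The equation itself doesn't survive in the transcript. From the discussion that follows-- second order, lead coefficient 1, a 0 on the y prime term, and solutions whose second derivative is the negative of themselves-- it is presumably

```latex
y'' + y = 0.
```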
OK. With that in mind, we do note that it's homogeneous. We see from the form of the left-hand side that it is linear. And you'll notice that it is second order. You'll furthermore notice that the coefficient functions-- the lead coefficient 1, the 0 on y prime, and the 1 on y-- are all continuous.
The lead coefficient function is never 0, because it's always 1. And of course, the right-hand side being the 0
function is also continuous. So all of our conditions are satisfied. And we know from the previous theorem,
then, that, again, we're not just looking for some solutions. We are looking for a vector space, a subspace of
C^2.
So we're not going to look at this question from a calculus point of view. We're going to look at this question
from a linear algebra point of view. So we know that this vector space that we're looking for is two
dimensional.
So if I can find, by hook or by crook, two solutions-- maybe we get lucky. We find two solutions that aren't
multiples of each other. In other words, if I can find an independent pair of solutions, then we're in good shape.
And in fact, that happens here to us. Casual observation-- sine x works.
We all remember how to take derivatives of sine. And everybody knows that when you take the derivative of
sine, you get cosine. Take the derivative of cosine, you get negative sine. And so weirdly, sine x is a function
whose second derivative is the negative of itself. And therefore, it satisfies this differential equation. Fine. And
of course, cosine is the same thing. That also works.
So as required, we have found two solutions. They're not multiples of each other. Therefore, they are an
independent pair of solutions in a two-dimensional vector space. And using our linear algebra theorems, we
know that we have a basis. This pair here is a basis for our solution set.
Being as they are a basis, that means that every solution is of this form. Every solution is a linear combination
of those solutions. And we're done.
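Explicitly, with the fundamental set {sin x, cos x}, every solution has the form

```latex
y = c_1 \sin x + c_2 \cos x, \qquad c_1, c_2 \in \mathbb{R}.
```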
So you can write this in a couple of different ways. You can say that this is the general solution, or you might
just prefer to take a linear algebra point of view and say that this is the fundamental set of solutions. Either
way, it says roughly the same thing.
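As a quick sanity check, a computer algebra system reproduces exactly this picture. Here is a minimal sketch using SymPy-- not part of the lecture, just an illustration of the result:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The lecture's example equation (as inferred above): y'' + y = 0
ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)

# dsolve returns the two-parameter general solution, i.e. the span
# of the fundamental set {sin x, cos x}
print(sp.dsolve(ode, y(x)))  # Eq(y(x), C1*sin(x) + C2*cos(x))

# checkodesol confirms that each basis function actually solves the ODE
print(sp.checkodesol(ode, sp.Eq(y(x), sp.sin(x))))  # (True, 0)
print(sp.checkodesol(ode, sp.Eq(y(x), sp.cos(x))))  # (True, 0)
```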
So it's important to realize that linear algebra is what allowed us to do this. We did make the casual observation about a couple of specific individual functions, but ultimately, the analysis contribution to this solution was to take a couple of derivatives.
With analysis, we found two solutions; the infinitely many remaining ones we found with linear algebra. It was the linearity of this differential equation that allows these linear combinations to work, and then there's the more sophisticated fact that the solutions form a two-dimensional vector space.
So the idea that they form a vector space at all, and then the more sophisticated idea of what the dimension of a vector space is-- these very significantly linear algebra ideas are what allowed us to conclude not only that the linear combinations are also solutions, but that, because of the dimension, those are the only solutions.
So there aren't other solutions out there that we might inadvertently find in the same way that we found sine and cosine. That's it-- just sine and cosine and their linear combinations.