Sometimes you want to model a short impulse:
you give a swing a push, and it starts to swing;
you pluck a guitar string, and it starts to vibrate.
This can be modeled by an inhomogeneous equation of the form
D²u(t) + ω²u(t) = g(t),
where g(t) models the force (which varies over time). Sometimes we don't care about the precise shape of the function g(t), only the 'total effect' of the force; then it is useful to model it with the delta function.
delta function as a limit
Consider the following sequence of functions
g_n(t) = n · 1_{[0, 1/n]}(t),
where 1_{[a,b]}(t) means the value of the function is 1 if t ∈ [a,b] and 0 otherwise. We have
that
∫ g_n(t) dt = 1
for all n, but the graph of the function gets narrower and taller.
We also have the following property: if f(t) is a smooth function, then
lim_{n→∞} ∫ g_n(t) f(t) dt = f(0).
To check this, consider the cases f(t) = 1 and f(t) = t.
We will write
δ(t) = lim_{n→∞} g_n(t),
which is meaningful inside an integral.
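We can test this limit numerically. Here is a quick Python sketch (the function names `g` and `integral` are mine, and the midpoint rule approximates the integral):

```python
import math

def g(n, t):
    # g_n(t) = n * 1_{[0, 1/n]}(t)
    return n if 0.0 <= t <= 1.0 / n else 0.0

def integral(n, f, steps=10000):
    # midpoint rule over [0, 1/n], the support of g_n
    h = (1.0 / n) / steps
    return sum(g(n, (k + 0.5) * h) * f((k + 0.5) * h) * h for k in range(steps))

# as n grows, the value of ∫ g_n(t) f(t) dt approaches f(0)
for n in (1, 10, 1000):
    print(n, integral(n, math.cos))
```

For f(t) = cos(t), the printed values approach cos(0) = 1 as n grows.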
delta function on the RHS of an equation
Ex 1
Consider the equation
y′(t)=δ(t),t≥0
and with initial condition y(t)=0 for t<0.
We can solve it by integration: for b>0, we have
y(b) = y(−ϵ) + ∫_{−ϵ}^{b} y′(t) dt = 0 + 1 = 1,
where ϵ is any positive number.
The graph of y(t) is a 'step function':
y(t) = 0 for t < 0, y(t) = 1 for t > 0;
the value of y(0) is undefined (and doesn't matter).
Ex 2
Consider the equation
(d/dt)² y(t) = δ(t), t ≥ 0
and with initial condition y(t)=0 for t<0.
Let g(t) = y′(t); then g(t) satisfies
(d/dt) g(t) = δ(t),
which is just the case in Ex 1. We see g(t) = 1 for t > 0. Then we get, for t > 0,
y(t) = y(0) + ∫_0^t y′(s) ds = ∫_0^t 1 ds = t.
The Green's function
Let P(D) be a differential operator, where D=d/dt and P(x) is a degree n polynomial. Suppose we are facing an equation of the type
P(D)y(t)=g(t)
with some homogeneous boundary condition (meaning the function y(t) vanishes on the boundary of the domain).
Suppose we know the solution for
P(D)G(t;s)=δ(t−s).
Then, we can solve the original equation by doing an integral
y(t)=∫G(t;s)g(s)ds.
Indeed, we have
P(D)y(t)=∫P(D)G(t;s)g(s)ds=∫δ(t−s)g(s)ds=g(t).
Ex 3
Consider the domain [0, ∞), with
(d/dt) y(t) = g(t), g(t) = 1_{[1,2]}(t), y(0) = 0.
In this case, we can first solve for the Green's function
(d/dt) G(t; s) = δ(t − s), G(0; s) = 0;
we get G(t; s) = 1 for t > s, and 0 otherwise.
Thus we get
y(t) = ∫ g(s) G(t; s) ds = ∫_1^2 1_{t>s} ds.
If t > 2, then y(t) = ∫_1^2 ds = 1; if 1 < t < 2, we get y(t) = ∫_1^t ds = t − 1; if t < 1, we get y(t) = 0.
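As a small numerical sketch of Ex 3 (the names `g`, `G`, `y_green`, `y_exact` are mine), we can compare the Green's-function integral against the piecewise answer:

```python
def g(s):
    # the source term: indicator of [1, 2]
    return 1.0 if 1.0 <= s <= 2.0 else 0.0

def G(t, s):
    # Green's function of d/dt with G(0; s) = 0: equal to 1 for t > s, else 0
    return 1.0 if t > s else 0.0

def y_green(t, steps=20000):
    # y(t) = ∫ G(t; s) g(s) ds, by the midpoint rule on [1, 2]
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        s = 1.0 + (k + 0.5) * h
        total += G(t, s) * g(s) * h
    return total

def y_exact(t):
    # 0 for t < 1, t - 1 for 1 < t < 2, 1 for t > 2
    return min(max(t - 1.0, 0.0), 1.0)

for t in (0.5, 1.5, 3.0):
    print(t, y_green(t), y_exact(t))
```

The two columns agree at every sampled t.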
More examples about delta function
Consider the equation, for x ∈ [0, 2],
(d/dx)² y(x) + y(x) = δ(x − 1)
with boundary condition y(0)=y(2)=0.
If we integrate this equation over the interval (1−ϵ,1+ϵ), across the location of the delta function, we find that
lim_{ϵ→0} [y′(1 + ϵ) − y′(1 − ϵ)] = 1
(the term ∫_{1−ϵ}^{1+ϵ} y(x) dx = O(ϵ) → 0),
hence the slope of y(x) has a jump discontinuity at x = 1.
We may write down the general solution over the interval (0, 1) that vanishes at x = 0,
y−(x) = a sin(x), x ∈ [0, 1],
and the general solution over x ∈ (1, 2) that vanishes at x = 2,
y+(x) = b sin(x − 2), x ∈ [1, 2];
then we can solve the matching conditions
y−(1) = y+(1), y+′(1) − y−′(1) = 1.
This gives
a sin(1) = b sin(−1), b cos(−1) − a cos(1) = 1,
so b = −a and a = −1/(2 cos(1)).
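We can let a computer algebra system solve the matching conditions, using the jump condition y′(1+) − y′(1−) = 1 obtained by integrating across the delta. A sketch, assuming SymPy is available:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
ym = a * sp.sin(x)          # solution on [0, 1], vanishes at x = 0
yp = b * sp.sin(x - 2)      # solution on [1, 2], vanishes at x = 2

# continuity at x = 1, and slope jump y'(1+) - y'(1-) = 1 from the delta
sol = sp.solve([
    sp.Eq(ym.subs(x, 1), yp.subs(x, 1)),
    sp.Eq(sp.diff(yp, x).subs(x, 1) - sp.diff(ym, x).subs(x, 1), 1),
], [a, b])

print(sol)  # a = -1/(2 cos 1), b = 1/(2 cos 1)
```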
We learned the Laplace transformation, which can be used to solve differential equations.
Let f(t) be a function defined for t>0, and we recall the following
F(p) = [LT(f)](p) = ∫_0^∞ f(t) e^{−pt} dt.
and the inverse Laplace transformation is
f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(p) e^{pt} dp.
for c≫1.
For derivatives, we have
LT(Df)=pLT(f)−f(0)=pF(p)−f(0).
and we can repeatedly use it to get
LT(D²f) = p LT(Df) − Df(0) = p(p LT(f) − f(0)) − Df(0) = p²F(p) − p f(0) − f′(0).
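The derivative rule can be checked symbolically. A sketch (assuming SymPy is available), with the test function f(t) = e^{−t}:

```python
import sympy as sp

t, p = sp.symbols('t p', positive=True)
f = sp.exp(-t)

F = sp.laplace_transform(f, t, p, noconds=True)            # F(p) = 1/(p+1)
LT_df = sp.laplace_transform(sp.diff(f, t), t, p, noconds=True)

# the derivative rule: LT(Df) = p F(p) - f(0)
print(sp.simplify(LT_df - (p * F - f.subs(t, 0))))  # 0
```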
Example
solve equation
(D−1)(D−2)f(t)=0
with condition f(0)=0,f′(0)=1.
We may apply Laplace transform to the equation and get
LT[(D² − 3D + 2)f] = 0,
which says
p²F(p) − p f(0) − f′(0) − 3[p F(p) − f(0)] + 2F(p) = 0.
Plugging in the initial conditions f(0) = 0, f′(0) = 1, we get
F(p)[p² − 3p + 2] = 1,
thus
F(p) = 1/(p² − 3p + 2) = 1/[(p − 1)(p − 2)].
Now we can either look up the inverse Laplace transformation table, or do the inverse Laplace transformation integral, to get
f(t) = (1/2πi) ∫_{c−i∞}^{c+i∞} F(p) e^{pt} dp = Σ_p Res_p (F(p) e^{pt}).
We have two poles: one at p = 1, with residue e^t/(1 − 2) = −e^t, and another at p = 2, with residue e^{2t}/(2 − 1) = e^{2t}. So the answer is
f(t) = e^{2t} − e^t.
We may check that it indeed satisfies the initial conditions.
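This check can be done with SymPy (a sketch, assuming SymPy is available): plug the answer into the equation and the initial conditions.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.exp(2 * t) - sp.exp(t)

# (D - 1)(D - 2) f = f'' - 3 f' + 2 f should vanish identically
residual = sp.diff(f, t, 2) - 3 * sp.diff(f, t) + 2 * f
print(sp.simplify(residual))          # 0

# initial conditions f(0) = 0, f'(0) = 1
print(f.subs(t, 0))                   # 0
print(sp.diff(f, t).subs(t, 0))       # 1
```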
Inhomogeneous term
If you have an equation of the form
Df(x) = 1,
then it is not a homogeneous equation: the term on the right-hand side does not contain a factor of f. What does its solution space look like? We know it is of the form
f(x)=c+x.
Note that the solution space is not a vector space anymore. Indeed, if you have f1(x) and f2(x) both satisfying the equation,
Df1(x)=1
Df2(x)=1
then adding the two equations up, we see
D(f1(x) + f2(x)) = 2,
so f1(x) + f2(x) is not a solution (since 2 ≠ 1).
Nonetheless, the solution space is a so-called 'affine space' V, which means there is an associated vector space V′ such that for any two elements v1, v2 ∈ V, their difference v1 − v2 ∈ V′.
In our case, the associated vector space is the solution space of the homogeneous equation
V′={f(x)∣Df=0}
That means, if we pick a 'base point' v0 ∈ V and a basis e1, ⋯, en of V′, then we can express any element v ∈ V as
v = v_0 + (c_1 e_1 + ⋯ + c_n e_n)
for some coefficients ci.
Back to our problem here: any solution to the inhomogeneous equation can be written as a 'particular solution' (playing the role of v0 above) plus a solution to the homogeneous equation (an element of V′).
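The Df = 1 example above can be sketched in SymPy (assuming it is available): the solution set is closed under differences but not under sums.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
f1 = c1 + x   # a solution of Df = 1
f2 = c2 + x   # another solution

print(sp.diff(f1, x))        # 1: f1 solves the equation
print(sp.diff(f1 + f2, x))   # 2: the sum does NOT solve it
print(sp.diff(f1 - f2, x))   # 0: the difference solves the homogeneous equation
```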
Today we considered solving homogenous constant coefficient differential equation.
In general, the equation you meet looks like
(d/dx)^n f(x) + c_{n−1} (d/dx)^{n−1} f(x) + ⋯ + c_1 (d/dx) f(x) + c_0 f(x) = 0.
If we call D=d/dx and factor out f(x) on the right, we can write the above equation as
(D^n + c_{n−1} D^{n−1} + ⋯ + c_0) f(x) = 0.
We sometimes write P(D) = D^n + c_{n−1} D^{n−1} + ⋯ + c_0; P(D) is a degree-n polynomial in D.
the solution space
Let V denote the set of solutions for the equation. Because the equation is homogeneous in f (meaning, each term in the equation has one and only one factor of f), the solution space is a vector space, meaning you can add two solutions together and still get a solution.
If P(D) is a degree-n operator, then the solution space is n-dimensional.
general solutions: P(x) has distinct roots
Here we try to find general solution for the equation P(D)f(x)=0.
As long as you can find n linearly independent solutions, call them f_1(x), ⋯, f_n(x), then you win, since they will form a basis of the solution space.
We factorize P(D) into linear factors
P(D) = (D − λ_1)⋯(D − λ_n);
we can always do this by the 'fundamental theorem of algebra', which says any polynomial admits such a factorization.
Suppose all λ_i are distinct; then the following is a list of n linearly independent solutions:
e^{λ_1 x}, ⋯, e^{λ_n x}.
It is important to note that D does not 'commute' with x:
D(x f(x)) = f(x) + x Df(x) ≠ x Df(x),
but D commutes with itself, so
(D − a)(D − b) f(x) = (D − b)(D − a) f(x).
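Both facts can be verified symbolically. A sketch, assuming SymPy is available (the helper name `Dm` is mine):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = sp.Function('f')(x)

# D does not commute with multiplication by x: D(x f) = f + x Df
lhs = sp.diff(x * f, x)
print(sp.simplify(lhs - (f + x * sp.diff(f, x))))  # 0

def Dm(g, c):
    # apply the factor (D - c) to g
    return sp.diff(g, x) - c * g

# but the constant-coefficient factors (D - a), (D - b) commute
print(sp.simplify(Dm(Dm(f, b), a) - Dm(Dm(f, a), b)))  # 0
```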
case with repeated roots
What if we had repeated roots?
See Monday's example.
If you face an equation like this
D^k f(x) = 0,
then you know you can have a basis of solutions like 1, x, ⋯, x^{k−1}.
What about
(D − λ)^k f(x) = 0?
We introduced a trick
D(e^{ax} f(x)) = a e^{ax} f(x) + e^{ax} Df(x) = e^{ax} (D + a) f(x),
hence
(D − a)(e^{ax} f(x)) = e^{ax} Df(x),
or, in a variant form,
e^{−ax} D(e^{ax} f(x)) = (D + a) f(x).
We can write f(x) = e^{λx} g(x); then we have
(D − λ)^k f(x) = (D − λ)^k [e^{λx} g(x)] = (D − λ)^{k−1} [e^{λx} Dg(x)] = (D − λ)^{k−2} [e^{λx} D²g(x)] = ⋯ = e^{λx} D^k g(x).
Now, we know the equation for g(x) is D^k g(x) = 0, and we know the general solution for g(x) is
g(x) = c_0 + c_1 x + ⋯ + c_{k−1} x^{k−1};
hence the general solution for f(x) is
f(x) = e^{λx} (c_0 + c_1 x + ⋯ + c_{k−1} x^{k−1}).
In general, we can write
P(D) = (D − λ_1)^{m_1} ⋯ (D − λ_r)^{m_r},
where the λ_i are distinct, and the multiplicities m_1, ⋯, m_r add up to n.
Then we have the following general solutions
f(x) = Σ_{i=1}^{r} Σ_{j=0}^{m_i − 1} c_{i,j} x^j e^{λ_i x}.
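As a sanity check of this general solution, here is a SymPy sketch (assuming SymPy is available) for the concrete choice P(D) = (D − 1)²(D + 2), so n = 3 and the claimed basis is e^x, x e^x, e^{−2x}:

```python
import sympy as sp

x = sp.symbols('x')

def P(f):
    # apply P(D) = (D + 2)(D - 1)^2 to f
    g = sp.diff(f, x) - f          # (D - 1) f
    g = sp.diff(g, x) - g          # (D - 1)^2 f
    return sp.diff(g, x) + 2 * g   # (D + 2)(D - 1)^2 f

# each claimed basis element is killed by P(D)
for f in (sp.exp(x), x * sp.exp(x), sp.exp(-2 * x)):
    print(sp.simplify(P(f)))  # 0, 0, 0
```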
warm-up:
how to solve the equation
(z−a)(z−b)=0
Well, we would have two solutions, z = a and z = b (assuming a ≠ b).
1
How to solve the equation (with a ≠ b)
(d/dx−a)(d/dx−b)f(x)=0?
Instead of saying we have two solutions, we say the solution space is a two dimensional vector space.
It is easy to check that e^{ax} and e^{bx} solve the equation, and so do their linear combinations; thus we can write down the general solution as
f(x) = c_1 e^{ax} + c_2 e^{bx}.
Wait, how do you know you have found ALL the solutions? How do you know you didn't miss any? There is a theorem saying that a constant-coefficient ODE of order n has an n-dimensional solution space.
2
How about the case where a=b?
(d/dx−a)(d/dx−a)f(x)=0?
Here we claim that the general solution is
f(x) = (c_0 + c_1 x) e^{ax}.
Proof of the claim: suppose f(x) solves the equation; then we can always write f(x) = g(x) e^{ax} (since e^{ax} is never 0) and figure out the equation that g(x) satisfies. We see
(d/dx − a)[g(x) e^{ax}] = d/dx[g(x) e^{ax}] − a g(x) e^{ax} = g′(x) e^{ax} + a e^{ax} g(x) − a g(x) e^{ax} = e^{ax} g′(x);
thus, we have
e^{ax} (d/dx)² g(x) = 0.
That means g(x) = c_0 + c_1 x. Hence we get the claimed general solution.
3
Initial condition / Boundary condition.