math121a-f23:blog

Oct 30

Sometimes you want to model a short impulse:

  • you give a swing a push, and it starts to swing up
  • you pluck a string on a guitar, and it starts to vibrate

This can be modeled by an inhomogeneous equation of the form $D^2 u(t) + \omega^2 u(t) = g(t)$, where $g(t)$ models the force (which varies over time). Sometimes we don't care about the precise shape of the function $g(t)$, but only about the 'total effect' of the force applied; then it is useful to use the delta function to model it.

delta function as a limit

Consider the following sequence of functions $g_n(t) = n \cdot 1_{[0,1/n]}(t)$, where $1_{[a,b]}(t)$ means the value of the function is $1$ if $t \in [a,b]$ and $0$ otherwise. We have $\int g_n(t)\,dt = 1$ for all $n$, but the graph of the function is getting narrower and taller.

We also have the property that, if $f(t)$ is a smooth function, then $\lim_{n \to \infty} \int g_n(t) f(t)\,dt = f(0)$. (Consider the case $f(t) = 1$ or $f(t) = t$.)

We will write $\delta(t) = \lim_{n \to \infty} g_n(t)$, which is meaningful inside an integral.
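
As an aside not in the original notes, here is a minimal numerical sketch of this limiting property: approximate $\int g_n(t) f(t)\,dt = n \int_0^{1/n} f(t)\,dt$ with scipy's quad and watch it approach $f(0)$ as $n$ grows. The test function $\cos$ and the values of $n$ are arbitrary choices.

<code python>
# A minimal sketch (not from the notes): check numerically that
# \int g_n(t) f(t) dt -> f(0), where g_n(t) = n * 1_{[0,1/n]}(t).
import numpy as np
from scipy.integrate import quad

def smeared_value(f, n):
    """Compute \int g_n(t) f(t) dt = n * \int_0^{1/n} f(t) dt."""
    val, _ = quad(f, 0.0, 1.0 / n)
    return n * val

f = np.cos  # a smooth test function with f(0) = 1
for n in (1, 10, 100, 1000):
    print(n, smeared_value(f, n))  # values approach f(0) = 1 as n grows
</code>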

delta function on the RHS of an equation

Ex 1

Consider the equation $y'(t) = \delta(t)$ for $t \geq 0$, with initial condition $y(t) = 0$ for $t < 0$.

We can solve it by integration: for $b > 0$, we have $y(b) = y(-\epsilon) + \int_{-\epsilon}^b y'(t)\,dt = 0 + 1 = 1$, where $\epsilon$ is any positive number.

The shape of $y(t)$ is a 'step function', $y(t) = \begin{cases} 0 & t < 0 \\ 1 & t > 0 \end{cases}$; the value of $y(0)$ is undefined (and doesn't matter).

Ex 2

Consider the equation $(d/dt)^2 y(t) = \delta(t)$ for $t \geq 0$, with initial condition $y(t) = 0$ for $t < 0$.

Let $g(t) = y'(t)$; then $g(t)$ satisfies $(d/dt) g(t) = \delta(t)$, which is just the case in Ex 1. We see $g(t) = 1$ for $t > 0$. Then we get, for $t > 0$, $y(t) = y(0) + \int_0^t y'(s)\,ds = \int_0^t 1\,ds = t$.
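
As a quick sanity check (my own sketch, not part of the notes), one can replace $\delta(t)$ by the narrow pulse $g_n(t)$ from above and integrate $y''(t) = g_n(t)$ numerically starting from rest; for $t$ well past the pulse, the solution is close to the ramp $y(t) = t$. The pulse width $1/n$ and the solver settings below are arbitrary choices.

<code python>
# A rough numerical sketch (assumption: delta replaced by the box pulse g_n).
from scipy.integrate import solve_ivp

n = 100  # the pulse g_n has width 1/n and unit total impulse

def rhs(t, state):
    y, v = state  # v = y'
    pulse = n if 0.0 <= t <= 1.0 / n else 0.0
    return [v, pulse]

# start from rest: y(0) = 0, y'(0) = 0
sol = solve_ivp(rhs, (0.0, 3.0), [0.0, 0.0], max_step=1e-3)
print(sol.y[0, -1])  # about 2.995, close to the ramp value y(3) = 3
</code>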

The Green's function

Let $P(D)$ be a differential operator, where $D = d/dt$ and $P(x)$ is a degree $n$ polynomial. Suppose we are facing an equation of the type $P(D) y(t) = g(t)$ with some homogeneous boundary condition (meaning the function $y(t)$ vanishes on the boundary of the domain).

Suppose we know the solution of $P(D) G(t;s) = \delta(t-s)$.

Then we can solve the original equation by doing an integral: $y(t) = \int G(t;s) g(s)\,ds$. Indeed, we have $P(D) y(t) = \int P(D) G(t;s)\, g(s)\,ds = \int \delta(t-s) g(s)\,ds = g(t)$.

Ex 3

Consider the domain $[0,\infty)$, and the equation $(d/dt)\, y(t) = g(t)$ with $g(t) = 1_{[1,2]}(t)$ and $y(0) = 0$. In this case, we can first solve for the Green's function: $(d/dt)\, G(t;s) = \delta(t-s)$, $G(0;s) = 0$. We get $G(t;s) = 1$ for $t > s$, and $0$ otherwise.

Thus we get $y(t) = \int g(s) G(t;s)\,ds = \int_1^2 1_{t>s}\,ds$. If $t > 2$, then $y(t) = \int_1^2 ds = 1$; if $1 < t < 2$, we get $\int_1^t ds = t-1$; if $t < 1$, we get $y(t) = 0$.
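
Here is a small sketch (mine, not in the notes) that evaluates the Green's function integral by a Riemann sum and compares it with the piecewise answer above; the grid size is an arbitrary choice.

<code python>
# A small check (not from the notes) of y(t) = \int g(s) G(t;s) ds for Ex 3.
import numpy as np

def y_from_green(t, num_s=2000):
    """Riemann-sum approximation of \int_1^2 G(t;s) ds (g(s) = 1 only on [1,2])."""
    s = np.linspace(1.0, 2.0, num_s, endpoint=False)
    G = (t > s).astype(float)        # G(t;s) = 1 for t > s, else 0
    return G.sum() * (1.0 / num_s)   # ds = (2 - 1) / num_s

def y_exact(t):
    return float(np.clip(t - 1.0, 0.0, 1.0))  # 0 for t<1, t-1 for 1<t<2, 1 for t>2

for t in (0.5, 1.5, 2.5):
    print(t, y_from_green(t), y_exact(t))  # the two values agree
</code>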

More examples about delta function

Consider the equation, for $x \in [0,2]$, $(d/dx)^2 y(x) + y(x) = \delta(x-1)$, with boundary condition $y(0) = y(2) = 0$.

If we integrate this equation over the interval $(1-\epsilon, 1+\epsilon)$, across the location of the delta function, we find that $\lim_{\epsilon \to 0} \big( y'(1+\epsilon) - y'(1-\epsilon) \big) = 1$ (the term $\int_{1-\epsilon}^{1+\epsilon} y(x)\,dx = O(\epsilon) \to 0$), hence the slope of $y(x)$ has a jump discontinuity at $x = 1$. We may write down the general solution over the interval $(0,1)$ that vanishes at $x = 0$, $y_-(x) = a \sin(x)$ for $x \in [0,1]$, and the general solution over $x \in (1,2)$ that vanishes at $x = 2$, $y_+(x) = b \sin(x-2)$ for $x \in [1,2]$. Then we can solve the conditions $y_-(1) = y_+(1)$ and $y_+'(1) - y_-'(1) = 1$. These give $a \sin(1) = b \sin(-1)$ and $b \cos(-1) - a \cos(1) = 1$, so $a = -b$ and $a = -1/(2\cos(1))$, i.e. $b = 1/(2\cos(1))$.
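
To double-check the matching conditions (a verification of my own, not in the notes), one can plug the piecewise formula with $a = -1/(2\cos 1)$, $b = 1/(2\cos 1)$ into sympy and confirm the boundary values, continuity, and unit slope jump at $x = 1$.

<code python>
# A symbolic check (not from the notes) of the piecewise solution.
import sympy as sp

x = sp.symbols('x')
a = -1 / (2 * sp.cos(1))
b = 1 / (2 * sp.cos(1))
y_minus = a * sp.sin(x)      # solution on [0, 1]
y_plus = b * sp.sin(x - 2)   # solution on [1, 2]

print(y_minus.subs(x, 0))                                   # 0: boundary condition at x = 0
print(y_plus.subs(x, 2))                                    # 0: boundary condition at x = 2
print(sp.simplify(y_minus.subs(x, 1) - y_plus.subs(x, 1)))  # 0: continuity at x = 1
jump = sp.diff(y_plus, x).subs(x, 1) - sp.diff(y_minus, x).subs(x, 1)
print(sp.simplify(jump))                                    # 1: slope jump from the delta
</code>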

2023/10/29 12:06 · pzhou

HW 9

Boas

  • section 8.5, #22, #24, #27
  • section 8.8 #8, 9, 10
  • section 8.9 #2, 3
2023/10/28 07:25 · pzhou

Oct 27

We learned the Laplace transform, which can be used to solve differential equations.

Let $f(t)$ be a function defined for $t > 0$, and recall the following: $F(p) = [LT(f)](p) = \int_0^\infty f(t) e^{-pt}\,dt$, and the inverse Laplace transform is $f(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} F(p) e^{pt}\,dp$ for $c \gg 1$.

For derivatives, we have $LT(Df) = p\,LT(f) - f(0) = p F(p) - f(0)$, and we can use this repeatedly to get $LT(D^2 f) = p\,LT(Df) - Df(0) = p\,(p F(p) - f(0)) - f'(0) = p^2 F(p) - p f(0) - f'(0)$.
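
As a sanity check (my own, not in the notes), the derivative rule can be confirmed with sympy for a concrete test function such as $\cos(t)$.

<code python>
# A quick check (not from the notes) that LT(Df) = p F(p) - f(0) for f = cos(t).
import sympy as sp

t, p = sp.symbols('t p', positive=True)
f = sp.cos(t)
F = sp.laplace_transform(f, t, p, noconds=True)                # F(p) = p/(p^2 + 1)
lhs = sp.laplace_transform(sp.diff(f, t), t, p, noconds=True)  # LT(Df)
print(sp.simplify(lhs - (p * F - f.subs(t, 0))))               # 0
</code>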

Example

Solve the equation $(D-1)(D-2) f(t) = 0$ with conditions $f(0) = 0$, $f'(0) = 1$.

We may apply the Laplace transform to the equation and get $LT[(D^2 - 3D + 2) f] = 0$, which says $p^2 F(p) - p f(0) - f'(0) - 3[p F(p) - f(0)] + 2 F(p) = 0$. Plugging in the initial conditions for $f(0)$ and $f'(0)$, we get $F(p)[p^2 - 3p + 2] = 1$, thus $F(p) = 1/(p^2 - 3p + 2) = \frac{1}{(p-1)(p-2)}$.

Now we can either look up an inverse Laplace transform table, or do the inverse Laplace transform integral, to get $f(t) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} F(p) e^{pt}\,dp = \sum_p \mathrm{Res}_p\big(F(p) e^{pt}\big)$. We have two poles:

  • one at $p = 1$, with residue $e^{t}/(1-2) = -e^t$, and
  • another at $p = 2$, with residue $e^{2t}/(2-1) = e^{2t}$,

so the answer is $f(t) = e^{2t} - e^t$. We may check that it indeed satisfies the initial conditions.
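
One way to check this (a sketch of my own, not part of the notes) is to let sympy invert $F(p)$ and verify both the ODE and the initial conditions.

<code python>
# A verification sketch (not from the notes) of f(t) = e^{2t} - e^t.
import sympy as sp

t, p = sp.symbols('t p', positive=True)
F = 1 / ((p - 1) * (p - 2))
f = sp.inverse_laplace_transform(F, p, t).subs(sp.Heaviside(t), 1)  # drop Heaviside for t > 0
print(sp.simplify(f))                                               # e^{2t} - e^t (possibly factored)
print(f.subs(t, 0), sp.diff(f, t).subs(t, 0))                       # 0 and 1: initial conditions
print(sp.simplify(sp.diff(f, t, 2) - 3 * sp.diff(f, t) + 2 * f))    # 0: solves (D-1)(D-2) f = 0
</code>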

Inhomogeneous term

If you have an equation of the form $D f(x) = 1$, then it is not a homogeneous equation: the term on the right-hand side does not contain a factor of $f$. What does its solution space look like? We know it is of the form $f(x) = c + x$.

Note that the solution space is not a vector space anymore. Indeed, if $f_1(x)$ and $f_2(x)$ both satisfy the equation, $D f_1(x) = 1$ and $D f_2(x) = 1$, then adding the two equations up, we see $D(f_1(x) + f_2(x)) = 2$, so $f_1(x) + f_2(x)$ is not a solution ($1 \neq 2$).

Nonetheless, the solution space is a so-called 'affine space' $V$, which means there is an associated vector space $V'$, and for any two elements $v_1, v_2 \in V$, their difference $v_1 - v_2$ lies in $V'$.

In our case, the associated vector space is the solution space of the homogeneous equation, $V' = \{ f(x) \mid Df = 0 \}$.

That means, if we pick a 'base point' $v_0 \in V$ and a basis $e_1, \cdots, e_n$ of $V'$, then we can express any element $v \in V$ as $v = v_0 + (c_1 e_1 + \cdots + c_n e_n)$ for some coefficients $c_i$.

Back to our problem: any solution to the inhomogeneous equation can be written as a 'particular solution' (playing the role of $v_0$ above) plus a solution to the homogeneous equation (an element of $V'$).
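
For instance (my own illustration, not in the notes), sympy's dsolve on $Df = 1$ returns exactly this structure: the particular solution $x$ plus the one-parameter homogeneous part $C_1$.

<code python>
# A tiny illustration (not from the notes) of 'particular + homogeneous'.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
print(sp.dsolve(sp.Eq(f(x).diff(x), 1), f(x)))  # Eq(f(x), C1 + x)
</code>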

2023/10/26 22:17 · pzhou

Oct 25: Wednesday

Today we considered solving homogeneous constant coefficient differential equations.

In general, the equation you meet looks like $(d/dx)^n f(x) + c_{n-1} (d/dx)^{n-1} f(x) + \cdots + c_1 (d/dx) f(x) + c_0 f(x) = 0$. If we write $D = d/dx$ and factor out $f(x)$ on the right, we can write the above equation as $(D^n + c_{n-1} D^{n-1} + \cdots + c_0)\, f(x) = 0$. We sometimes write $P(D) = D^n + c_{n-1} D^{n-1} + \cdots + c_0$; here $P(D)$ is a degree $n$ polynomial in $D$.

the solution space

Let $V$ denote the set of solutions of the equation. Because the equation is homogeneous in $f$ (meaning each term in the equation has one and only one factor of $f$), the solution space is a vector space, meaning you can add two solutions together and still get a solution.

If $P(D)$ is a degree $n$ operator, then the solution space is $n$-dimensional.

general solutions: $P(x)$ has distinct roots

Here we try to find the general solution of the equation $P(D) f(x) = 0$.

As long as you can find $n$ linearly independent solutions, call them $f_1(x), \cdots, f_n(x)$, then you win, since they will form a basis of the solution space.

We factorize $P(D)$ into linear factors, $P(D) = (D - \lambda_1) \cdots (D - \lambda_n)$. We can always do this by the 'fundamental theorem of algebra', which says any polynomial admits such a factorization.

Suppose all the $\lambda_i$ are distinct; then the following is a list of $n$ linearly independent solutions: $e^{\lambda_1 x}, \cdots, e^{\lambda_n x}$.

It is important to note that $D$ does not 'commute' with $x$: $D(x f(x)) \neq x D f(x)$. But $D$ commutes with itself: $(D-a)(D-b) f(x) = (D-b)(D-a) f(x)$.

case with repeated roots

What if we had repeated roots?

See Monday's example.

If you face an equation like $D^k f(x) = 0$, then you know you can have a basis of solutions like $1, x, \cdots, x^{k-1}$.

What about $(D - \lambda)^k f(x) = 0$?

We introduced a trick: $D(e^{ax} f(x)) = a e^{ax} f(x) + e^{ax} D f(x) = e^{ax} (D + a) f(x)$, hence $(D - a)(e^{ax} f(x)) = e^{ax} D f(x)$, or in a variant form, $e^{-ax} D(e^{ax} f(x)) = (D + a) f(x)$.
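
This identity is easy to confirm symbolically (a check of my own, not in the notes), for a generic function $f$:

<code python>
# A symbolic check (not from the notes) of D(e^{ax} f(x)) = e^{ax} (D + a) f(x).
import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')
lhs = sp.diff(sp.exp(a * x) * f(x), x)
rhs = sp.exp(a * x) * (sp.diff(f(x), x) + a * f(x))
print(sp.simplify(lhs - rhs))  # 0
</code>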

We can write $f(x) = e^{\lambda x} g(x)$; then we have $(D-\lambda)^k f(x) = (D-\lambda)^k [e^{\lambda x} g(x)] = (D-\lambda)^{k-1} [e^{\lambda x} D g(x)] = (D-\lambda)^{k-2} [e^{\lambda x} D^2 g(x)] = \cdots = e^{\lambda x} D^k g(x)$. Now we know the equation for $g(x)$ is $D^k g(x) = 0$, whose general solution is $g(x) = c_0 + c_1 x + \cdots + c_{k-1} x^{k-1}$. Hence the general solution for $f(x)$ is $f(x) = e^{\lambda x} (c_0 + c_1 x + \cdots + c_{k-1} x^{k-1})$.

'formula'

In general, we can write $P(D) = (D - \lambda_1)^{m_1} \cdots (D - \lambda_r)^{m_r}$, where the $\lambda_i$ are distinct and the multiplicities $m_1, \cdots, m_r$ add up to $n$. Then we have the following general solution: $f(x) = \sum_{i=1}^r \sum_{j=0}^{m_i - 1} c_{i,j} x^j e^{\lambda_i x}$.
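
As a concrete check of this formula (my own example, not from the notes), take $P(D) = (D-1)^2(D+2)$, so $\lambda_1 = 1$ with $m_1 = 2$ and $\lambda_2 = -2$ with $m_2 = 1$; each of $e^x$, $x e^x$, $e^{-2x}$ should be annihilated by $P(D)$.

<code python>
# A concrete check (not from the notes) for P(D) = (D - 1)^2 (D + 2).
import sympy as sp

x = sp.symbols('x')
D = lambda u: sp.diff(u, x)

def P(expr):
    """Apply (D - 1)^2 (D + 2) to expr."""
    out = D(expr) + 2 * expr   # apply (D + 2)
    out = D(out) - out         # apply (D - 1)
    out = D(out) - out         # apply (D - 1) again
    return sp.simplify(out)

for sol in (sp.exp(x), x * sp.exp(x), sp.exp(-2 * x)):
    print(P(sol))  # 0, 0, 0
</code>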

2023/10/26 21:58 · pzhou

Oct 23: constant coeff differential equations

warm-up:

How do we solve the equation $(z-a)(z-b) = 0$? Well, we have two solutions, $z = a$ and $z = b$ (assuming $a \neq b$).

1

How about the equation (with $a \neq b$) $(d/dx - a)(d/dx - b) f(x) = 0$? Instead of saying we have two solutions, we say the solution space is a two-dimensional vector space. It is easy to check that $e^{ax}$ and $e^{bx}$ solve the equation, and so do their linear combinations; thus we can write down the general solution as $f(x) = c_1 e^{ax} + c_2 e^{bx}$.

Wait, how do you know you have found ALL the solutions? How do you know you didn't miss any? There is a theorem saying that a constant coeff ODE of order $n$ has an $n$-dimensional solution space.

2

How about the case where $a = b$, i.e. $(d/dx - a)(d/dx - a) f(x) = 0$? Here we claim that the general solution is $f(x) = (c_0 + c_1 x) e^{ax}$.

Proof of the claim: suppose $f(x)$ solves the equation. We can always write $f(x) = g(x) e^{ax}$ (since $e^{ax}$ is never $0$) and figure out the equation that $g(x)$ satisfies. We see $(d/dx - a)[g(x) e^{ax}] = d/dx[g(x) e^{ax}] - a g(x) e^{ax} = g'(x) e^{ax} + a e^{ax} g(x) - a g(x) e^{ax} = e^{ax} (d/dx) g(x)$. Applying this twice, we have $e^{ax} (d/dx)^2 g(x) = 0$, which means $g(x) = c_0 + c_1 x$. Hence we get the claimed general solution.
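
One can also ask sympy directly (my own check, not in the notes): solving $(d/dx - a)^2 f = f'' - 2a f' + a^2 f = 0$ should return the claimed two-parameter family.

<code python>
# A quick check (not from the notes) of the repeated-root general solution.
import sympy as sp

x, a = sp.symbols('x a')
f = sp.Function('f')
ode = sp.Eq(f(x).diff(x, 2) - 2 * a * f(x).diff(x) + a**2 * f(x), 0)
print(sp.dsolve(ode, f(x)))  # expected: Eq(f(x), (C1 + C2*x)*exp(a*x))
</code>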

3

Initial condition / Boundary condition.

2023/10/22 22:16 · pzhou

