math121a-f23:blog

Oct 30

Sometimes you want to model a short impulse:

  • you give a swing a push, and it starts to swing up
  • you pluck a string on a guitar, and it starts to vibrate

This can be modeled by an inhomogeneous equation of the form $$ D^2 u(t) + \omega^2 u(t) = g(t), $$ where $g(t)$ models the force (which varies over time). Sometimes we don't care about the precise shape of the function $g(t)$, only the 'total effect' of the force given; then it is useful to use the delta function to model it.

delta function as a limit

Consider the following sequence of functions $$ g_n(t) = n \cdot 1_{[0,1/n]}(t), $$ where $1_{[a,b]}(t)$ is the function whose value is $1$ if $t \in [a,b]$ and $0$ otherwise. We have $$ \int g_n(t) dt = 1 $$ for all $n$, but the graphs get narrower and taller as $n$ grows.

We also have the following property: for any smooth function $f(t)$, $$ \lim_{n \to \infty} \int g_n(t) f(t) dt = f(0). $$ To see why, consider the cases $f(t) = 1$ or $f(t) = t$.

We will write $$ \delta(t) = \lim_{n \to \infty} g_n(t), $$ which is meaningful inside an integral.
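The sifting property above can be checked numerically. A minimal sketch, using $f(t) = \cos t$ as an arbitrary smooth test function (the sample count is an arbitrary choice):

```python
import math

def sifting_integral(f, n, samples=1000):
    """Midpoint-rule approximation of the integral of g_n(t) f(t),
    where g_n = n * 1_{[0,1/n]}, i.e. n times the integral of f over [0, 1/n]."""
    h = (1.0 / n) / samples
    return n * sum(f((k + 0.5) * h) for k in range(samples)) * h

f = math.cos  # an arbitrary smooth test function with f(0) = 1
for n in (1, 10, 100, 1000):
    print(n, sifting_integral(f, n))  # values approach f(0) = 1 as n grows
```

For $n=1$ this is just $\int_0^1 \cos t\, dt = \sin(1) \approx 0.84$; by $n=1000$ the value is within $10^{-3}$ of $f(0)=1$.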

delta function on the RHS of an equation

Ex 1

Consider the equation $$ y'(t) = \delta(t), \quad t \geq 0 $$ with the initial condition $y(t)=0$ for $t<0$.

We can solve it by integration: for $b > 0$, we have $$ y(b) = y(-\epsilon) + \int_{-\epsilon}^b y'(t) dt = 0 + 1 = 1, $$ where $\epsilon$ is any positive number.

The shape of $y(t)$ is a 'step function': $$ y(t) = \begin{cases} 0 & t < 0 \cr 1 & t > 0 \end{cases} $$ The value of $y(0)$ is undefined (and doesn't matter).

Ex 2

Consider the equation $$ (d/dt)^2 y(t) = \delta(t), \quad t \geq 0 $$ and with initial condition $y(t)=0$ for $t<0$.

Let $g(t) = y'(t)$; then $g(t)$ satisfies $$ (d/dt) g(t) = \delta(t), $$ which is just the case in Ex 1. We see $g(t) = 1$ for $t>0$ and $g(t) = 0$ for $t<0$. Then we get, for $t>0$, $$ y(t) = y(0) + \int_0^t y'(s) ds = \int_0^t 1 ds = t. $$

The Green's function

Let $P(D)$ be a differential operator, where $D=d/dt$ and $P(x)$ is a degree $n$ polynomial. Suppose we are facing an equation of the type $$ P(D) y(t) = g(t) $$ with some homogeneous boundary condition (meaning the function $y(t)$ vanishes on the boundary of the domain).

Suppose we know the solution for $$ P(D) G(t; s) = \delta(t-s). $$

Then, we can solve the original equation by doing an integral $$ y(t) = \int G(t; s) g(s) ds. $$ Indeed, we have $$ P(D) y(t) = \int P(D) G(t; s) g(s) ds = \int \delta(t-s)g(s) ds = g(t). $$

Ex 3

Consider the domain $[0,\infty)$, and the equation $$ (d/dt) y(t) = g(t), \quad g(t)=1_{[1,2]}(t), \quad y(0) = 0. $$ In this case, we can first solve for the Green's function $$ (d/dt) G(t; s) = \delta(t-s), \quad G(0;s)=0; $$ we get $G(t;s) = 1$ for $t>s$, and $0$ otherwise.

Thus, we get $$ y(t) = \int g(s) G(t;s) ds = \int_{1}^2 1_{t>s} ds. $$ If $t>2$, then $y(t) = \int_1^2 ds = 1$; if $1<t<2$, we get $y(t) = \int_1^t ds = t-1$; if $t<1$, we get $y(t)=0$.
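The Green's function integral can also be evaluated numerically; a small sketch (the step count and sample points are arbitrary choices):

```python
def G(t, s):
    """Green's function for d/dt with G(0;s) = 0: equal to 1 if t > s, else 0."""
    return 1.0 if t > s else 0.0

def y(t, steps=10000):
    """Midpoint-rule evaluation of the integral of G(t;s) over s in [1,2],
    since g(s) = 1 there and 0 elsewhere."""
    h = 1.0 / steps
    return sum(G(t, 1.0 + (k + 0.5) * h) for k in range(steps)) * h

print(y(0.5), y(1.5), y(3.0))  # ≈ 0, 0.5, 1: matches the piecewise answer 0, t-1, 1
```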

More examples about delta function

Consider the equation that, for $x \in [0,2]$ $$ (d/dx)^2 y(x) + y(x) = \delta (x-1) $$ with boundary condition $y(0) = y(2) = 0. $

If we integrate this equation over the interval $(1-\epsilon, 1+\epsilon)$, across the location of the delta function, we find that $$ \lim_{\epsilon \to 0}\, [y'(1+\epsilon) - y'(1-\epsilon)] = 1 $$ (the term $\int_{(1-\epsilon, 1+\epsilon)} y(x) dx = O(\epsilon) \to 0$), hence the slope of $y(x)$ has a jump discontinuity at $x=1$.

We may write down the general solution over the interval $(0,1)$ that vanishes at $x=0$, $$ y_-(x) = a \sin(x), \quad x \in [0,1], $$ and the general solution over $x \in (1,2)$ that vanishes at $x=2$, $$ y_+(x) = b \sin(x-2), \quad x \in [1,2]. $$

Then we can solve the matching conditions $$ y_-(1) = y_+(1), \quad y_+'(1) - y_-'(1) = 1. $$ This gives $$ a \sin(1) = b \sin(-1), \quad b \cos(-1) - a\cos(1) = 1, $$ so $b = -a$ and $a = -1 / (2 \cos(1))$.
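As a sanity check: with the jump condition $y_+'(1) - y_-'(1) = 1$ (obtained by integrating $y''$ from left to right across $x=1$), the constants $a = -1/(2\cos 1)$, $b = -a$ satisfy both matching conditions and the boundary conditions:

```python
import math

# constants from the matching conditions:
# a sin(1) = b sin(-1)  and  y_+'(1) - y_-'(1) = 1
a = -1.0 / (2.0 * math.cos(1.0))
b = -a

def y_minus(x):  # solution on [0,1], vanishes at x = 0
    return a * math.sin(x)

def y_plus(x):   # solution on [1,2], vanishes at x = 2
    return b * math.sin(x - 2.0)

print(y_minus(1.0) - y_plus(1.0))              # continuity at x = 1: ≈ 0
print(b * math.cos(-1.0) - a * math.cos(1.0))  # slope jump: ≈ 1
print(y_minus(0.0), y_plus(2.0))               # boundary values: 0.0 0.0
```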

2023/10/29 12:06 · pzhou

HW 9

Boas

  • section 8.5, #22, #24, #27
  • section 8.8, #8, #9, #10
  • section 8.9, #2, #3
2023/10/28 07:25 · pzhou

Oct 27

We learned the Laplace transform, which can be used to solve differential equations.

Let $f(t)$ be a function defined for $t>0$, and recall the Laplace transform $$ F(p) = [LT(f)] (p) = \int_0^\infty f(t) e^{-pt} dt, $$ and the inverse Laplace transform $$ f(t) = (1/2\pi i) \int_{c-i \infty}^{c+i \infty} F(p) e^{pt} dp $$ for sufficiently large $c$.

For derivatives, we have $$ LT(Df) = p\, LT(f) - f(0) = p F(p) - f(0), $$ and applying this repeatedly gives $$ LT(D^2 f) = p\, LT(Df) - (Df)(0) = p (p F(p) - f(0)) - f'(0) = p^2 F(p) - p f(0) - f'(0). $$
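The derivative rule $LT(Df) = pF(p) - f(0)$ can be checked numerically. A small sketch with the test function $f(t) = e^{-t}$ (the truncation point $T$ and step count are arbitrary choices):

```python
import math

def LT(f, p, T=40.0, steps=20000):
    """Truncated Laplace transform: integral of f(t) e^{-pt} over [0, T],
    computed with the midpoint rule. The tail beyond T is negligible here."""
    h = T / steps
    return sum(f((k + 0.5) * h) * math.exp(-p * (k + 0.5) * h)
               for k in range(steps)) * h

f  = lambda t: math.exp(-t)   # test function: f(0) = 1, F(p) = 1/(p+1)
df = lambda t: -math.exp(-t)  # its derivative
p = 2.0
print(LT(df, p))              # ≈ -1/3
print(p * LT(f, p) - f(0.0))  # ≈ -1/3, agreeing with LT(Df) = p F(p) - f(0)
```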

Example

Solve the equation $$ (D-1)(D-2) f(t) = 0 $$ with conditions $f(0) = 0$, $f'(0) = 1$.

We may apply the Laplace transform to the equation and get $$ LT [(D^2 - 3D + 2) f] = 0, $$ which says $$ p^2 F(p) - p f(0) - f'(0) - 3 [p F(p) - f(0)] + 2 F(p) = 0. $$ Plugging in the initial conditions for $f(0), f'(0)$, we get $$ F(p) [p^2 - 3p + 2] = 1, $$ thus $$ F(p) = 1 / (p^2 - 3p + 2) = \frac{1}{(p-1)(p-2)}. $$

Now we can either look up an inverse Laplace transform table, or do the inverse Laplace transform integral, to get $$ f(t) = (1/2\pi i) \int_{c-i \infty}^{c+i \infty} F(p) e^{pt} dp = \sum_{p} \mathrm{Res}_p( F(p) e^{pt}). $$ We have two poles:

  • one is at $p=1$, with residue $e^{1 t} /(1-2) = - e^t$ and
  • another at $p=2$ with residue $e^{2 t} /(2-1) = e^{2t}$

so the answer is $$ f(t) = e^{2t} - e^t. $$ We may check that it indeed satisfies the initial conditions.
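We can also check numerically that $f(t) = e^{2t} - e^t$ satisfies $f'' - 3f' + 2f = 0$ together with the initial conditions (a minimal sketch; the evaluation point is arbitrary):

```python
import math

f   = lambda t: math.exp(2 * t) - math.exp(t)
df  = lambda t: 2 * math.exp(2 * t) - math.exp(t)  # f'
d2f = lambda t: 4 * math.exp(2 * t) - math.exp(t)  # f''

print(f(0.0), df(0.0))                      # initial conditions: 0.0 1.0
print(d2f(1.3) - 3 * df(1.3) + 2 * f(1.3))  # ≈ 0 at an arbitrary point
```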

Inhomogeneous term

If you have an equation of the form $$ D f(x) = 1, $$ then it is not a homogeneous equation: the term on the right-hand side does not contain a factor of $f$. What does its solution space look like? We know solutions are of the form $$ f(x) = c + x. $$

Note that the solution space is no longer a vector space. Indeed, if $f_1(x)$ and $f_2(x)$ both satisfy the equation, $$ D f_1(x) = 1, \quad D f_2(x) = 1, $$ then adding the two equations up, we see $$ D (f_1(x) + f_2(x)) = 2, $$ so $f_1(x) + f_2(x)$ is not a solution ($2 \neq 1$).

Nonetheless, the solution space is a so-called 'affine space' $V$, which means there is an associated vector space $V'$, and for any two elements $v_1, v_2 \in V$, their difference satisfies $v_1 - v_2 \in V'$.

In our case, the associated vector space is the solution space of the homogeneous equation $$ V' = \{f(x) \mid Df = 0 \}. $$

That means, if we pick a 'base point' $v_0 \in V$ and a basis $e_1, \cdots, e_n$ of $V'$, then we can express any element $v \in V$ as $$ v = v_0 + (c_1 e_1 + \cdots + c_n e_n) $$ for some coefficients $c_i$.

Back to our problem: any solution of the inhomogeneous equation can be written as a 'particular solution' (playing the role of $v_0$ above) plus a solution of the homogeneous equation (an element of $V'$).
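A small sketch illustrating this for $Df = 1$: two solutions built from the particular solution $x$ plus different homogeneous parts (the constants are arbitrary), checked with a central-difference derivative:

```python
# two solutions of D f = 1: the particular solution x plus different constants
f1 = lambda x: x + 3.0
f2 = lambda x: x - 7.5

h = 1e-6
Df = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)  # central-difference derivative

print(Df(f1, 0.4), Df(f2, 0.4))                 # both ≈ 1: each is a solution
print(Df(lambda x: f1(x) + f2(x), 0.4))         # ≈ 2: the sum is NOT a solution
print(f1(2.0) - f2(2.0), f1(-5.0) - f2(-5.0))   # constant difference: lies in V'
```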

2023/10/26 22:17 · pzhou

Oct 25: Wednesday

Today we considered solving homogeneous constant-coefficient differential equations.

In general, the equation you meet looks like $$ (d/dx)^n f(x) + c_{n-1} (d/dx)^{n-1} f(x) + \cdots + c_1 (d/dx) f(x) + c_0 f(x) = 0. $$ If we call $D = d/dx$ and factor out $f(x)$ on the right, we can write the above equation as $$ (D^n + c_{n-1} D^{n-1} + \cdots + c_0 ) f(x) = 0. $$ We sometimes write $P(D) = D^n + c_{n-1} D^{n-1} + \cdots + c_0$; $P(D)$ is a degree $n$ polynomial in $D$.

the solution space

Let $V$ denote the set of solutions for the equation. Because the equation is homogeneous in $f$ (meaning, each term in the equation has one and only one factor of $f$), the solution space is a vector space, meaning you can add two solutions together and still get a solution.

If $P(D)$ is a degree $n$ operator, then the solution space is $n$ dimensional.

general solutions: $P(x)$ has distinct roots

Here we try to find general solution for the equation $P(D) f(x) = 0$.

As long as you can find $n$ linearly independent solutions, call them $f_1(x), \cdots, f_n(x)$, then you win, since they will form a basis of the solution space.

We factorize $P(D)$ into linear factors $$ P(D) = (D - \lambda_1) \cdots (D - \lambda_n); $$ we can always do this by the 'fundamental theorem of algebra', which says any polynomial admits such a factorization.

Suppose all the $\lambda_i$ are distinct; then the following is a list of $n$ linearly independent solutions: $$ e^{\lambda_1 x}, \cdots, e^{\lambda_n x}. $$

It is important to note that $D$ does not 'commute' with $x$: $$ D (x f(x)) \neq x D f(x), $$ but the factors $(D-a)$ commute with each other: $$ (D-a) (D-b) f(x) = (D-b) (D-a) f(x). $$

case with repeated roots

What if we had repeated roots?

See Monday's example.

If you face an equation like this, $$ D^k f(x) = 0, $$ then you know a basis of solutions: $1, x, \cdots, x^{k-1}$.

What about $$ (D-\lambda)^k f(x) = 0 ?$$

We introduced a trick: $$ D (e^{ax} f(x)) = a e^{ax} f(x) + e^{ax} D f(x) = e^{ax} (D+a) f(x), $$ hence $$ (D-a) (e^{ax} f(x)) = e^{ax} D f(x), $$ or, in a variant form, $$ e^{-ax} D (e^{ax} f(x)) = (D+a) f(x). $$

We can write $f(x) = e^{\lambda x} g(x)$; then we have $$ (D-\lambda)^k f(x) = (D-\lambda)^k [e^{\lambda x} g(x)] = (D-\lambda)^{k-1} [e^{\lambda x} D g(x)] = (D-\lambda)^{k-2} [e^{\lambda x} D^2 g(x)] = e^{\lambda x} D^k g(x). $$ Now the equation for $g(x)$ is $D^k g(x) = 0$, whose general solution is $$ g(x) = c_0 + c_1 x + \cdots + c_{k-1} x^{k-1}; $$ hence the general solution for $f(x)$ is $$ f(x) = e^{\lambda x} (c_0 + c_1 x + \cdots + c_{k-1} x^{k-1}). $$

'formula'

In general, we can write $$ P(D) = (D - \lambda_1)^{m_1} \cdots (D - \lambda_r)^{m_r}, $$ where the $\lambda_i$ are distinct and the multiplicities $m_1, \cdots, m_r$ add up to $n$. Then we have the general solution $$ f(x) = \sum_{i=1}^r \sum_{j=0}^{m_i-1} c_{i,j} x^j e^{\lambda_i x}. $$
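As a sanity check of the formula, take for example $P(D) = (D-1)^2(D+2) = D^3 - 3D + 2$, with roots $1$ (twice) and $-2$; the coefficients below are arbitrary. Since the derivatives can be written in closed form, we can verify $P(D)f = 0$ numerically:

```python
import math

# example: P(D) = (D-1)^2 (D+2) = D^3 - 3D + 2;
# the formula gives f(x) = (c0 + c1 x) e^x + c2 e^{-2x}
c0, c1, c2 = 1.5, -2.0, 0.75  # arbitrary coefficients

def f(x, k=0):
    """k-th derivative of f, using the closed form
    d^k/dx^k [(c0 + c1 x) e^x] = (c0 + k c1 + c1 x) e^x."""
    return (c0 + k * c1 + c1 * x) * math.exp(x) + c2 * (-2.0) ** k * math.exp(-2.0 * x)

x = 0.8
print(f(x, 3) - 3 * f(x, 1) + 2 * f(x, 0))  # ≈ 0: P(D) f = 0
```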

2023/10/26 21:58 · pzhou

Oct 23: constant coefficient differential equations

warm-up:

How do we solve the equation $$ (z-a) (z-b) = 0? $$ Well, we have two solutions, $z=a$ and $z=b$ (assuming $a \neq b$).

1

How about the equation (with $a \neq b$) $$ (d/dx - a) (d/dx - b) f(x) = 0? $$ Instead of saying we have two solutions, we say the solution space is a two-dimensional vector space. It is easy to check that $e^{ax}$ and $e^{bx}$ solve the equation, and so does any linear combination of them; thus we can write down the general solution as $$ f(x) = c_1 e^{ax} + c_2 e^{bx}. $$

Wait, how do you know you have found ALL the solutions? How do you know you didn't miss any? There is a theorem saying that a constant-coefficient ODE of order $n$ has an $n$-dimensional solution space.

2

How about the case where $a=b$? $$ (d/dx - a) (d/dx - a) f(x) = 0? $$ Here we claim that the general solution is $$ f(x) = (c_0 + c_1 x) e^{ax} $$

Proof of the claim: suppose $f(x)$ solves the equation; then we can always write $f(x) = g(x) e^{ax}$ (since $e^{ax}$ is never $0$), and figure out the equation that $g(x)$ satisfies. We see $$ (d/dx - a)[ g(x) e^{ax} ] = d/dx [ g(x) e^{ax} ] - ag(x) e^{ax} = g'(x) e^{ax} + a e^{ax} g(x) - a g(x) e^{ax} = e^{ax} (d/dx) g(x); $$ thus, applying the operator twice, we have $$ e^{ax} (d/dx)^2 g(x) = 0. $$ That means $g(x) = c_0 + c_1 x$. Hence we get the claimed general solution.
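We can double-check the claim numerically: with arbitrary values of $a, c_0, c_1$, the function $f(x) = (c_0 + c_1 x) e^{ax}$ should satisfy $(D-a)^2 f = f'' - 2a f' + a^2 f = 0$. A minimal sketch using exact derivative formulas:

```python
import math

a, c0, c1 = 0.7, 2.0, -1.3  # arbitrary repeated root and coefficients

f   = lambda x: (c0 + c1 * x) * math.exp(a * x)
df  = lambda x: (c1 + a * (c0 + c1 * x)) * math.exp(a * x)          # f'
d2f = lambda x: (2 * a * c1 + a * a * (c0 + c1 * x)) * math.exp(a * x)  # f''

x = 1.1
print(d2f(x) - 2 * a * df(x) + a * a * f(x))  # ≈ 0: (D - a)^2 f = 0
```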

3

Initial condition / Boundary condition.

2023/10/22 22:16 · pzhou


math121a-f23/blog.txt · Last modified: 2023/10/04 09:31 by pzhou