===== Final Solutions =====
$$\gdef\E{\mathbb E}$$
**Due Date**: May 10th (Sunday) 11:59PM. Submit online to Gradescope.
**Policy**: You can use the textbook and your notes. There should be no discussion or collaboration, since this is supposed to be an exam of your own understanding. If you find some question unclear, please let me know via email.
-----
==== 1. Vector spaces and Curvilinear Coordinates (30 pts) ====
All vector spaces are finite dimensional over $\R$.
1. True or False (10 pts)
- (F) Any vector space has a unique basis.
- (F) Any vector space has a unique inner product.
- (T) Given a basis $e_1, \cdots, e_n$ of $V$, there exists a basis $E^1, \cdots, E^n$ on $V^*$, such that $E^i (e_j) = \delta_{ij}$.
- (F) If we change $e_1$ in the basis $e_1, \cdots, e_n$, in the dual basis only $E^1$ will change.
* The correct statement would be: "If we change $e_1$ in the basis $e_1, \cdots, e_n$, then in the dual basis not only $E^1$ but all the other $E^i$ might change as well."
- (F) Given a vector space with inner product, there exists a unique orthogonal basis.
- (T) Let $V$ and $W$ be two vector spaces with inner products and of the same dimension. Then there exists a linear map $f: V \to W$, such that for any $v_1, v_2 \in V$, $$\la v_1, v_2 \ra = \la f(v_1), f(v_2) \ra.$$
- (F) If $V$ and $W$ are vector spaces of dimension $3$ and $5$, then the tensor product $V \otimes W$ has dimension $8$. (should be $3 \times 5 = 15$)
- (T) If $V$ has dimension $5$, then the exterior power $\wedge^3 V$ is a vector space with dimension $10$.
- (T) The solution space of equation $y'(x) + x^2 y(x) = 0$ forms a vector space.
- (F) The solution space of equation $y'(x) + x y^2(x) = 0$ forms a vector space.
2. (10 pt) Let $V_n$ be the vector space of polynomials whose degree is at most $n$. Let $f(x)$ be any smooth function on $[-1,1]$. We fix $f(x)$ once and for all. Show that there is a unique element $f_n \in V_n$ (depending on our choice of $f$), such that for any $g \in V_n$, we have
$$ \int_{-1}^1 f_n(x) g(x) dx = \int_{-1}^1 f(x) g(x) dx. $$
//Hint: //
* (1) Equip $V_n$ with an inner product $\la g_1, g_2 \ra = \int_{-1}^1 g_1(x) g_2(x) dx$.
* (2) Show that $f(x)$ induces an element in $V_n^*$: $g \mapsto \int_{-1}^1 f(x) g(x) dx$.
* (3) use inner product to identify $V_n$ and $V_n^*$.
A remark: if $f(x)$ were a polynomial of degree at most $n$, then you could just take $f_n(x) = f(x)$. But we have limited our choices of $f_n$ to polynomials of degree $\leq n$, so in a sense we are looking for a 'best approximation' of $f(x)$ in $V_n$. Try solving the example case $n=1$, $f(x) = \sin(x)$ if you need some intuition.
//Solution:// First, a remark about what we are trying to prove. Say you have a friend $A$ who wants to challenge you with a game:
* $A$ provides $n$ and $f(x)$ to you,
* then you need to provide $f_n(x)$ to $A$,
* then $A$ examines whether your $f_n$ passes the quality check, i.e., $A$ takes an arbitrary $g(x) \in V_n$ and tests whether $\int_{-1}^1 g(x) f(x) dx = \int_{-1}^1 g(x) f_n(x) dx$.
Note that, when you produce $f_n(x)$, you have no knowledge of what $g(x)$ will be.
Here is a solution of a concrete flavor, not quite following the hint. First, we define an inner product $\la g_1, g_2 \ra = \int_{-1}^1 g_1(x) g_2(x) dx$, whenever the integral makes sense. Then we may take an orthonormal basis $e_0, \cdots, e_n$ of $V_n$ (note $\dim V_n = n+1$), and set
$$ f_n(x) = \sum_{i=0}^n \la f, e_i \ra e_i. $$
Then, we have for any $g \in V_n$,
$$ \la f_n, g \ra = \sum_{i=0}^n \la f, e_i \ra \la e_i, g \ra = \la f, \sum_{i=0}^n \la e_i, g \ra e_i \ra = \la f, g \ra. $$
Conceptually, if $V$ denotes the space of smooth functions on $[-1,1]$ with this inner product, then $V \supset V_n$, and there is an orthogonal projection
$$\Pi_n: V \to V_n. $$
You have seen this orthogonal projection in different guises, for example least squares regression, or the truncation of the Fourier series expansion of a function. Here $f_n = \Pi_n(f)$.
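As a sanity check (our own addition, not part of the required solution), the hinted example $n=1$, $f(x) = \sin(x)$ can be worked out with sympy; all names below are ours:

```python
# The n = 1, f(x) = sin(x) example: project sin(x) onto V_1 = span{1, x}.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

# Orthonormal basis of V_1 w.r.t. <g1, g2> = int_{-1}^1 g1 g2 dx
e0 = 1 / sp.sqrt(2)
e1 = sp.sqrt(sp.Rational(3, 2)) * x

def inner(g1, g2):
    return sp.integrate(g1 * g2, (x, -1, 1))

# Orthogonal projection f_1 = <f, e0> e0 + <f, e1> e1
f1 = sp.simplify(inner(f, e0) * e0 + inner(f, e1) * e1)
print(f1)  # a multiple of x: 3*(sin(1) - cos(1))*x, since sin is odd

# Quality check against the basis g = 1 and g = x of V_1
assert sp.simplify(inner(f1, 1) - inner(f, 1)) == 0
assert sp.simplify(inner(f1, x) - inner(f, x)) == 0
```

So $f_1(x) = 3(\sin 1 - \cos 1)\, x \approx 0.9035\, x$, the best linear approximation of $\sin(x)$ in this inner product.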
3. (10 pt) Let $\R^3$ be equipped with curvilinear coordinate $(u,v,w)$ where
$$ u = x, v = y, w = z - x^2 + y^2. $$
- (3pt) Write the vector fields $\d_u, \d_v, \d_w$ in terms of $\d_x, \d_y, \d_z$.
- (3pt) Write the 1-forms (co-vector fields) $du,dv,dw$ in terms of $dx, dy, dz$.
- (4pt) Write down the standard metric of $\R^3$ in coordinates $(u,v,w)$.
//Solution:// We write $x,y,z$ in terms of $u,v,w$
$$ x = u, y = v, z = w + u^2 - v^2 $$
then
$$\d_u = \d_u(x) \d_x + \d_u(y) \d_y + \d_u(z) \d_z = \d_x + 2u \d_z = \d_x + 2x \d_z$$
The others are similar.
$$dw = dz - 2x dx + 2y dy$$
Then finally
$$ g = (dx)^2 + (dy)^2 + (dz)^2 = (du)^2 + (dv)^2 + (dw + 2u\, du - 2v\, dv)^2$$
open up the parentheses if you wish.
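As a sanity check (our own addition; sympy assumed available), the pullback metric can be computed from the Jacobian of the coordinate change:

```python
# Verify the metric of R^3 in (u, v, w) coordinates via g = J^T J.
import sympy as sp

u, v, w = sp.symbols('u v w')
# inverse of u = x, v = y, w = z - x^2 + y^2
x, y, z = u, v, w + u**2 - v**2

J = sp.Matrix([x, y, z]).jacobian(sp.Matrix([u, v, w]))
g = sp.expand(J.T * J)  # pullback of the Euclidean metric
print(g)
# expect [[1+4u^2, -4uv, 2u], [-4uv, 1+4v^2, -2v], [2u, -2v, 1]],
# i.e. the expansion of du^2 + dv^2 + (dw + 2u du - 2v dv)^2
```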
==== 2. Special Functions and Differential Equations (50 pts) ====
1. (10 pt) Orthogonal polynomials. Let $I = [-1,1]$ be a closed interval and $w(x) = x^2$ a non-negative weight function on $I$. For functions $f,g$ on $I$, we define their inner product as
$$ \la f, g \ra = \int_{-1}^1 f(x) g(x) w(x) dx $$
The normalized orthogonal polynomials $P_0, P_1, \cdots$ are defined by
- $P_n(x)$ is a degree $n$ polynomial.
- $\la P_n, P_n \ra = 1$
- $\la P_i, P_j \ra = 0$ if $i \neq j$.
Find out $P_0, P_1, P_2$.
$$P_0(x) = \pm \sqrt{3/2}, \quad P_1(x) = \pm \sqrt{5/2}\, x, \quad P_2(x) = \pm \frac{\sqrt{14}}{4}(5 x^2 - 3). $$
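These can be checked with sympy (our own verification, not required by the problem):

```python
# Check normalization and orthogonality of P_0, P_1, P_2 for weight w(x) = x^2.
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # <f, g> = int_{-1}^1 f g x^2 dx
    return sp.integrate(f * g * x**2, (x, -1, 1))

P0 = sp.sqrt(sp.Rational(3, 2))
P1 = sp.sqrt(sp.Rational(5, 2)) * x
P2 = sp.sqrt(14) / 4 * (5 * x**2 - 3)

for P in (P0, P1, P2):
    assert inner(P, P) == 1           # normalized
assert inner(P0, P1) == 0             # mutually orthogonal
assert inner(P0, P2) == 0
assert inner(P1, P2) == 0
```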
2. (10 pt) Find eigenvalues and eigenfunctions for the Laplacian on the unit sphere $S^2$, i.e., solve
$$ \Delta F(\theta, \varphi) = \lambda F(\theta, \varphi) $$
for appropriate $\lambda$ and $F$. The Laplacian on a sphere is
$$ \Delta f = \frac{1}{\sin \theta} \d_\theta(\sin \theta \d_\theta(f)) + \frac{1}{\sin^2 \theta} \d_\varphi^2 f. $$
//Solution:// Eigenvalues $\lambda = -l(l+1)$ for $l = 0, 1, 2, \cdots$, with eigenfunctions
$$F(\theta, \varphi) = P_l^m(\cos \theta) \cos (m \varphi), \quad P_l^m(\cos \theta) \sin (m \varphi), \qquad 0 \leq m \leq l. $$
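A numeric spot check (our own addition; sympy's `assoc_legendre` and the sample point are our choices) that such an $F$ is indeed an eigenfunction:

```python
# Check that F = P_l^m(cos θ) cos(mφ) satisfies ΔF = -l(l+1) F for (l, m) = (2, 1).
import sympy as sp

theta, phi = sp.symbols('theta phi')
l, m = 2, 1
F = sp.assoc_legendre(l, m, sp.cos(theta)) * sp.cos(m * phi)

# Laplacian on the unit sphere
lap = (sp.diff(sp.sin(theta) * sp.diff(F, theta), theta) / sp.sin(theta)
       + sp.diff(F, phi, 2) / sp.sin(theta)**2)

residual = lap + l * (l + 1) * F
# evaluate at a generic point; the residual should vanish
assert abs(float(residual.subs({theta: 1.0, phi: 0.7}))) < 1e-9
```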
3. (5 pt) Find eigenvalues and eigenfunctions for the Laplacian on the half unit sphere $S^2$ with Dirichlet boundary condition, i.e., solve
$$ \Delta F(\theta, \varphi) = \lambda F(\theta, \varphi), \quad F(\theta=\pi/2, \varphi)=0. $$
for appropriate $\lambda$ and $F$.
//Solution:// Here the trick is that any eigenfunction on the upper hemisphere, by odd reflection, can be extended to an eigenfunction on the whole sphere:
$$ F(\pi/2 - \theta, \varphi) = - F(\pi/2 + \theta, \varphi) $$
hence we want those eigenfunctions on the whole sphere that satisfy
$$ P_l^m(x) = - P_l^m(-x) $$
and this turns out to be satisfied exactly when $l+m$ is odd.
Note that, even though the boundary condition is rotationally symmetric, it does not mean the solutions are rotationally symmetric (after all, $S^2$ itself is symmetric, but its eigenfunctions can have fluctuations).
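The parity fact being used, $P_l^m(-x) = (-1)^{l+m} P_l^m(x)$, can be checked with sympy (our own verification):

```python
# Check the parity of associated Legendre functions for small l, m.
import sympy as sp

x = sp.symbols('x')
for l in range(5):
    for m in range(l + 1):
        P = sp.assoc_legendre(l, m, x)
        # P(-x) - (-1)^{l+m} P(x) should vanish identically
        assert sp.simplify(P.subs(x, -x) - (-1)**(l + m) * P) == 0
```

So $P_l^m$ is odd precisely when $l+m$ is odd, which is the Dirichlet condition at the equator.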
4. (15 pt) (Heat flow). Consider heat flow on the closed interval $[0,1]$
$$ \d_t u(x,t) = \d_x^2 u(x,t), $$ where $u(x,t)$ denotes the temperature.
Let $u(0, t) = u(1, t) = 0$ for all $t$. Let the initial condition be
$$ u(x, 0) = \begin{cases} 2x & x \in [0, 1/2] \cr
2(1-x) & x \in [1/2, 1] \end{cases} $$
* (12pt) Solve the equation for $t > 0$.
* (3 pt) Does the solution make sense for any negative $t$? Why or why not?
//Solution:// The problem is standard: expanding the initial condition in a sine series, $u(x,0) = \sum_{n \geq 1} c_n \sin(n\pi x)$ with $c_n = \frac{8}{n^2 \pi^2} \sin(n\pi/2)$, the solution for $t > 0$ is
$$ u(x,t) = \sum_{n \geq 1} c_n \sin(n\pi x)\, e^{-n^2 \pi^2 t}. $$
The solution will diverge for any negative $t$, no matter how small $|t|$ is, since for $t < 0$
$$ \sum_n c_n \sin(n\pi x)\, e^{-n^2 \pi^2 t} = \sum_n c_n \sin(n\pi x)\, e^{n^2 \pi^2 |t|} $$
diverges quite fast due to $e^{n^2 \pi^2 |t|}$, while $c_n$ only decays polynomially (here like $1/n^2$).
Note that how far you can go negative in time depends on how smooth the initial condition is. Here the initial condition is already non-smooth, hence you cannot extend the solution from $t \in (0, +\infty)$ to $t \in (-\epsilon, +\infty)$.
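As a sanity check (our own addition; sympy assumed available), the sine coefficients of the triangular initial data can be computed in closed form, confirming the $1/n^2$ decay:

```python
# Sine coefficients c_n = 2 int_0^1 u(x,0) sin(nπx) dx of the triangle wave.
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True, integer=True)

c_n = 2 * (sp.integrate(2 * x * sp.sin(n * sp.pi * x), (x, 0, sp.Rational(1, 2)))
           + sp.integrate(2 * (1 - x) * sp.sin(n * sp.pi * x), (x, sp.Rational(1, 2), 1)))
c_n = sp.simplify(c_n)
print(c_n)  # should be 8*sin(pi*n/2)/(pi**2*n**2), or an equivalent form
```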
5. (10 pt) (Steady Heat equation). Let $D$ be the unit disk. We consider the steady state heat equation on $D$
$$ \Delta u(r, \theta) = 0 $$
* (3 pt) Write down the Laplacian $\Delta$ in polar coordinate
* (5 pt) Show that, if the boundary value is $u(r=1, \theta) = 0$, then $u=0$ on the entire disk.
* (2 pt) Is it possible to have a boundary condition $u(r=1, \theta) = f(\theta)$, such that there are two different solutions $u_1(r,\theta)$ and $u_2(r,\theta)$ to the problem?
//Solution:// In polar coordinates,
$$ \Delta u = \d_r^2 u + \frac{1}{r} \d_r u + \frac{1}{r^2} \d_\theta^2 u. $$
For part (b): a harmonic function attains its maximum and minimum on the boundary (the maximum principle; alternatively, use the mean value property), so the boundary value $u(r=1,\theta) = 0$ forces $u = 0$ on the entire disk.
For part (c): it is impossible to have a boundary condition $u(r=1, \theta) = f(\theta)$ such that there are two different solutions $u_1(r,\theta)$ and $u_2(r,\theta)$ to the problem. Otherwise, let $u_1$ and $u_2$ be the two solutions, and we may take their difference
$$ u = u_1 - u_2$$
then
$$ \Delta u = 0, \quad u|_{\d D} = 0 $$
and by part (b), $u=0$, i.e., $u_1 = u_2$.
==== 3. Probability and Statistics (20 pts) ====
1. (5 pt) Throw a die 100 times. Let $X$ be the random variable that denotes the number of times that $4$ appears. What distribution does $X$ follow? What is its mean and variance?
Binomial distribution, with $n=100, p=1/6$. The mean is $np = 100/6 \approx 16.67$ and the variance is $np(1-p) = 500/36 \approx 13.89$.
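A quick Monte Carlo sanity check (our own illustration; numpy assumed available):

```python
# Simulate 100 die throws many times and compare with np and np(1-p).
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 1 / 6

# count the number of 4s in each batch of 100 throws
X = (rng.integers(1, 7, size=(200_000, n)) == 4).sum(axis=1)

print(X.mean(), n * p)            # both ≈ 16.67
print(X.var(), n * p * (1 - p))   # both ≈ 13.89
```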
2. (5 pt) Let $X \sim N(0,1)$ be a standard normal R.V. Compute its moment generating function
$$ \E(e^{t X}). $$
Use the moment generating function to find out $\E(X^4)$. Let $Y = X^2$. What is the mean and variance of $Y$?
Doing the integral, we get $$\E(e^{tX}) = e^{t^2/2}.$$
To compute $\E(X^4)$, we note that
$$ \E(e^{tX}) = m_0 + m_1 t + \frac{t^2}{2!} m_2 + \frac{t^3}{3!} m_3 + \frac{t^4}{4!} m_4 + \cdots $$
where $\E(X^k) = m_k$. Hence, we may get the coefficients of Taylor expansion
$$ e^{t^2/2} = 1 + (t^2/2) + \frac{(t^2/2)^2}{2!} + \cdots = 1 + t^2/2 + t^4/8 + \cdots $$
comparing the $t^4$ coefficients, we see
$$ m_4/4! = 1/8 \Rightarrow m_4 = 3. $$
For $Y = X^2$: the mean is $\E(Y) = m_2 = 1$ and the variance is $Var(Y) = \E(X^4) - \E(X^2)^2 = 3 - 1 = 2$.
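These computations can be verified with sympy (our own check):

```python
# MGF of a standard normal, and the moments m_2, m_4 from its derivatives.
import sympy as sp

x, t = sp.symbols('x t', real=True)
pdf = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)

M = sp.integrate(sp.exp(t * x) * pdf, (x, -sp.oo, sp.oo))
assert sp.simplify(M - sp.exp(t**2 / 2)) == 0    # M(t) = e^{t^2/2}

m2 = sp.diff(M, t, 2).subs(t, 0)   # E(X^2) = 1
m4 = sp.diff(M, t, 4).subs(t, 0)   # E(X^4) = 3
assert (m2, m4) == (1, 3)
assert m4 - m2**2 == 2             # Var(Y) for Y = X^2
```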
3. (5 pt) There are two bags of balls. Bag A contains 4 black balls and 6 white balls, Bag B contains 10 black balls and 10 white balls. Suppose we randomly pick a bag (with equal probability) and randomly pick a ball. Given that the ball is white, what is the probability that we picked bag A?
Note that
$$P(\z{white ball}) = P(\z{white} | \z{bag A}) P( \z{bag A}) + P(\z{white} | \z{bag B}) P( \z{bag B}) = (6/10)(1/2) + (10/20)(1/2) = 11/20, $$
instead of the total number of white balls divided by the total number of balls. Then, by Bayes' rule,
$$ P(\z{bag A} | \z{white}) = \frac{P(\z{white} | \z{bag A}) P(\z{bag A})}{P(\z{white ball})} = \frac{3/10}{11/20} = \frac{6}{11}. $$
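The same computation with exact fractions (our own check):

```python
# Bayes' rule with exact arithmetic: P(bag A | white).
from fractions import Fraction as F

p_white_given_A, p_white_given_B = F(6, 10), F(10, 20)
p_A = p_B = F(1, 2)

p_white = p_white_given_A * p_A + p_white_given_B * p_B   # 11/20
p_A_given_white = p_white_given_A * p_A / p_white         # 6/11

print(p_white, p_A_given_white)  # 11/20 6/11
```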
4. (5 pt) Consider a random walk on the real line: at $t=0$, one starts at $x=0$. Let $S_n$ denote the position at $t=n$, then $S_n = S_{n-1} + X_n$, where $X_n = \pm 1$ with equal probability.
* (3pt) What is the variance of $S_n$?
* (2pt) Using the Markov inequality, prove that for any $c > 1$, we have
$$ \P(|S_n| > c \sqrt{n}) \leq 1/c^2 $$
Since $S_n = X_1 + \cdots + X_n$ and the $X_i$ are independent, we have
$$Var(S_n) = \sum_{i=1}^n Var(X_i) = n \left( 1^2 \cdot (1/2) + (-1)^2 \cdot (1/2) \right) = n. $$
Then, applying the Markov inequality to the non-negative variable $S_n^2$ (this is exactly Chebyshev's inequality), we get
$$ \P(|S_n| > c \sqrt{n}) = \P(S_n^2 > c^2 n) \leq \frac{\E(S_n^2)}{c^2 n} = \frac{n}{c^2 n} = \frac{1}{c^2}. $$
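A Monte Carlo sanity check (our own illustration; numpy assumed available) of both the variance and the tail bound:

```python
# Simulate many random walks; check Var(S_n) ≈ n and the Chebyshev tail bound.
import numpy as np

rng = np.random.default_rng(1)
n, trials, c = 400, 100_000, 2.0

steps = rng.choice([-1, 1], size=(trials, n))
S_n = steps.sum(axis=1)

print(S_n.var())                              # ≈ n = 400
tail = (np.abs(S_n) > c * np.sqrt(n)).mean()
print(tail)                                   # small, certainly <= 1/c^2 = 0.25
```

In fact the empirical tail is much smaller than $1/c^2$, since by the central limit theorem $S_n/\sqrt{n}$ is close to Gaussian; Chebyshev is a crude but fully general bound.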