Hi! I am Jianzhi from Singapore, currently an undergraduate studying CS and Mathematics at UC Berkeley.
1. Axioms of Measurable Sets
2. Properties of Outer Measure
0. [Open box] An open box in $\mathbb{R}^n$ is any set of the form $B = \prod_{i=1}^n (a_i, b_i)$. Define the volume of the box to be $\mathrm{vol}(B) = \prod_{i=1}^n (b_i - a_i)$.
1. [Outer measure] Let $\Omega \subset \mathbb{R}^n$, then $$ m^*(\Omega) = \inf \{\sum_i |B_i| : \Omega \subset \bigcup_i B_i, B_i \subset \mathbb{R}^n \text{ open boxes} \} $$ The nice thing about outer measure is that it is defined for any set, not just measurable ones.
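As a quick numerical illustration (my own, not from the notes): the infimum over box covers is what makes countable sets into zero sets, since we may cover the $i$-th point by an interval of length $\epsilon/2^i$. A minimal sketch in Python:

```python
def cover_lengths(eps, n_points):
    """Lengths of open intervals covering the first n_points points of a
    countable set: the i-th point gets an interval of length eps / 2**i."""
    return [eps / 2 ** i for i in range(1, n_points + 1)]

# The total length is eps * (1/2 + 1/4 + ...) < eps, no matter how many
# points we cover, so m*(countable set) <= eps for every eps > 0.
total = sum(cover_lengths(1e-3, 50))
```

Since $\epsilon$ is arbitrary, the outer measure of any countable set is $0$.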
2. [$\sigma$-algebra] A $\sigma$-algebra is a collection of sets that includes the empty set, is closed under complement and closed under countable union.
3. [Zero set] Say $Z \subset \mathbb{R}^n$ is a zero set if it has outer measure $0$.
4. [Measurability] Let $E$ be a subset of $\mathbb{R}^n$. Say $E$ is measurable if and only if $\forall A \subset \mathbb{R}^n$, we have the identity $m^*(A) = m^*(A \cap E) + m^*(A \RM E)$. Denote its measure as $mE = m^*E$.
5. [Abstract outer measure] An abstract outer measure on $M$ is a function $\omega : 2^M \rightarrow [0, \infty]$ that satisfies $\omega(\phi) = 0$, $\omega$ is monotone, and $\omega$ is countably subadditive. Say a set $E \subset M$ is measurable wrt $\omega$ if it satisfies (4) but with $M$ instead of $\mathbb{R}^n$.
6. [Measurable function] Let $\Omega \subset \mathbb{R}^n$ be a measurable set. A function $f:\Omega → \mathbb{R}^m$ is measurable if and only if $f^{-1}(V)$ is measurable for every open set $V \subset \mathbb{R}^m$.
7. The collection $\mathcal{M}$ of measurable sets with respect to any outer measure on any set $M$ is a $\sigma$-algebra, and the outer measure restricted to this $\sigma$-algebra is countably additive. All zero sets are measurable and have no effect on measurability. In particular, Lebesgue measure has these properties.
Proof (written in my own words)
TODO
7.5 [Measure space] A measure space is a triple $(M, \mathcal{M}, \mu)$ where $M$ is a set, $\mathcal{M}$ is a $\sigma$-algebra of subsets of $M$ and $\mu$ is a measure on $\mathcal{M}$ (i.e. $\mu : \mathcal{M} → [0, \infty]$).
7.6 [$G_\delta$ set] A $G_\delta$ set is a countable intersection of open sets.
7.7 [$F_\sigma$ set] An $F_\sigma$ set is a countable union of closed sets.
Example: (just so that I can remember) In probability, $M$ is the set of outcomes, $\mathcal{M}$ is the set of events (a $\sigma$-algebra of subsets of outcomes), and $\mathbb{P}$ (the probability measure) is a measure.
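To make the example concrete, here is a toy finite measure space in Python (my own illustration): two fair coin flips, with the $\sigma$-algebra taken to be all subsets and $\mathbb{P}(E) = |E|/4$.

```python
# Toy probability space (M, 2^M, P) for two fair coin flips.
M = {"HH", "HT", "TH", "TT"}  # set of outcomes

def P(E):
    """Probability measure on events E (subsets of M): counting measure / 4."""
    assert E <= M, "events must be subsets of the outcome set"
    return len(E) / len(M)

# Additivity on disjoint events: P(A ∪ B) = P(A) + P(B)
A, B = {"HH"}, {"HT", "TH"}
```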
7.8 [Inner measure] Inner measure can be thought of as a dual to outer measure. $m_*(A) = \text{sup}\{m(F) : F \text{ closed}, F \subset A\}$
7.9 [Hull] Denote the hull of $A$ as $H_A$: a $G_\delta$ set containing $A$ whose measure achieves the infimum $\inf\{m(G) : G \text{ open}, A \subset G\}$. The $G_\delta$ set is unique up to a measure zero set. (note that $H_A$ achieves the infimum)
7.10 [Kernel] Denote the kernel of $A$ as $K_A$: an $F_\sigma$ set contained in $A$ whose measure achieves the supremum $\sup\{m(F) : F \text{ closed}, F \subset A\}$. The $F_\sigma$ set is unique up to a measure zero set. (note that $K_A$ achieves the supremum)
7.11 [Measure Theoretic Boundary] The measure theoretic boundary of $A$ is $\delta_m(A) = H_A \RM K_A$. A bounded subset $A \subset \mathbb{R}^n$ is measurable if and only if $m^*(A) = m_*(A)$ (i.e. $m (\delta_m(A)) = 0$).
7.12 [Slice] Define the slice of $E \subset \mathbb{R}^n \times \mathbb{R}^k$ at $x \in \mathbb{R}^n$ to be $E_x = \{y \in \mathbb{R}^k: (x, y) \in E\}$
7.13 [Undergraph] The undergraph of $f:\mathbb{R} → [0,\infty)$ is defined to be $\mathcal{U}f = \{(x,y) \in \mathbb{R} \times [0, \infty): 0 \leq y < f(x)\}$. Intuitively, the positive region bounded between $f(x)$ and the $x$-axis. Then $f$ is measurable if $\mathcal{U}f$ is measurable w.r.t. the planar measure, and denote the integral to be $\int f = m (\mathcal{U}f)$ (permit $\infty$).
7.135 [Completed Undergraph] The completed undergraph is just the undergraph plus the graph.
7.14 [Integrable] A function $f$ is Lebesgue integrable if its integral is finite. Note by definition $f$ must also be measurable.
7.15 [Lower and Upper Envelope Sequence] Let $f_n:X → [0, \infty)$ be a sequence of functions. Then define the lower and upper envelope sequences to be $\underline{f}_n(x) = \text{inf}\{f_k(x) : k \geq n\}$ and $\bar{f}_n(x) = \text{sup}\{f_k(x) : k \geq n\}$. Permit $\bar{f}_n(x) = \infty$.
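The envelopes can be watched numerically (my own sketch, on a finite prefix of a sequence standing in for the tail): $\underline{f}_n$ takes tail infima and $\bar{f}_n$ tail suprema, so the lower envelopes increase, the upper ones decrease, and together they squeeze a convergent sequence.

```python
def lower_envelope(vals, n):
    """inf of the tail {vals[k] : k >= n} (finite stand-in for underline f_n)."""
    return min(vals[n:])

def upper_envelope(vals, n):
    """sup of the tail {vals[k] : k >= n} (finite stand-in for bar f_n)."""
    return max(vals[n:])

# An oscillating sequence converging to 0: 1, -1/2, 1/3, -1/4, ...
seq = [(-1) ** k / (k + 1) for k in range(12)]
```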
8. [Simple function] Let $\Omega$ be a measurable subset of $\mathbb{R}^n$ and $f:\Omega → \mathbb{R}$. Say $f$ is a simple function if the image $f(\Omega)$ is finite (i.e. $\exists$ a finite set $\{c_1, …, c_N\}$ s.t. $\forall x \in \Omega$, $f(x) = c_i$ for some $1 \leq i \leq N$).
9. [Lebesgue integral (of simple function)] Let $\Omega \subset \mathbb{R}^n$ be a measurable set and $f:\Omega → \mathbb{R}$ be a simple nonnegative function. Define the Lebesgue integral of $f$ to be $\int_{\Omega} f = \Sigma_{\lambda \in f(\Omega);\lambda > 0} \lambda m (\{ x \in \Omega | f(x) = \lambda \})$.
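This definition transcribes directly into code (my own sketch): represent a nonnegative simple function by its values and the measures of the corresponding level sets, and sum value times measure over the strictly positive values.

```python
def simple_integral(pieces):
    """Lebesgue integral of a nonnegative simple function.

    pieces: iterable of (value, measure) pairs, one per level set
    {x in Omega : f(x) = value}; the definition sums only over values > 0."""
    return sum(value * measure for value, measure in pieces if value > 0)

# f = 2 on a set of measure 0.5, f = 5 on a set of measure 0.25, f = 0 elsewhere:
# integral = 2*0.5 + 5*0.25 = 2.25 (the value-0 set contributes nothing)
```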
10. [Majorization] Let $f:\Omega → \mathbb{R}$ and $g:\Omega → \mathbb{R}$. Say $f$ majorizes $g$ if and only if $f(x) \geq g(x) \forall x \in \Omega$.
11. [Lebesgue integral (of nonnegative function)] Let $\Omega \subset \mathbb{R}^n$ be a measurable set and $f:\Omega → [0,\infty]$ be measurable and nonnegative. Define the Lebesgue integral of $f$ to be $\int_{\Omega} f = \text{sup}\{\int_{\Omega} s | s \text{ simple, nonnegative and s.t. } f \text{ majorizes } s\}$.
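For $f(x) = x$ on $[0,1)$ this supremum can be watched converging (my own sketch): the simple function taking value $k/n$ on $[k/n, (k+1)/n)$ is majorized by $f$, and its integral $\sum_k (k/n)(1/n) = (n-1)/(2n)$ increases toward $\int f = 1/2$.

```python
def lower_simple_integral(n):
    """Integral of the simple minorant s_n of f(x) = x on [0, 1):
    s_n = k/n on [k/n, (k+1)/n), a level set of measure 1/n."""
    return sum((k / n) * (1 / n) for k in range(n))

# lower_simple_integral(n) = (n - 1) / (2n), increasing to 1/2 as n -> infinity
```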
Let $T:M → M'$ be a mapping between measure spaces $(M, \mathcal{M}, \mu)$ and $(M', \mathcal{M}', \mu')$.
12. [Mesemorphism] Say $T$ is a mesemorphism if $TE \in \mathcal{M'}$ for every $E \in \mathcal{M}$. Intuitively, $T$ sends measurable subsets of $M$ to measurable subsets of $M'$.
13. [Meseomorphism] Say $T$ is a meseomorphism if $T$ is a bijection and both $T$ and $T^{-1}$ are mesemorphisms. Intuitively, $T$ is an invertible mapping between the measure spaces.
14. [Mesisometry] Say $T$ is a mesisometry if $T$ is a meseomorphism and $\mu'(TE) = \mu(E) \forall E \in \mathcal{M}$. Intuitively, besides preserving measure structures, $T$ also preserves the measure. $T$ can be thought of isomorphisms between measure spaces, similar to orthogonal transformation of vector spaces in Linear Algebra. Note: mesisometry may disrespect topology (Pugh Ex 6.19)
15. [Diffeomorphism] A mapping that preserves smooth structure.
16. [Isomorphism] A mapping that preserves algebraic structure.
17. [Homeomorphism] A mapping that preserves topological structure.
18. [Absolutely Integrable Functions] Let $\Omega$ be a measurable subset of $\mathbb{R}^n$. A measurable function $f:\Omega → \mathbb{R}$ is absolutely integrable if the integral $\int_{\Omega} |f| < \infty$. (i.e. is finite)
19. [Lebesgue integral]
<in progress>
1. [Tao 7.1.1]
2. [Tao 7.4.4] Properties of measurable sets
3. [Pugh 2.5] The collection $\mathcal{M}$ of measurable sets wrt any outer measure on any set $M$ is a $\sigma$-algebra and the outer measure restricted to this $\sigma$-algebra is countably additive.
4. [Tao 7.5.2] Let $\Omega \subset \mathbb{R}^n$ and $f:\Omega → \mathbb{R}^m$ be continuous. Then $f$ is measurable.
5. [Tao 7.5.3] $f$ is measurable if and only if $f^{-1}(B)$ is measurable for every open box $B$.
Proof (I came up with this by myself; please forgive any errors) By Lemma 7.4.10, every open set can be written as a countable or finite union of open boxes.
If $\forall B$ open box, $f^{-1}(B)$ is measurable, then $\forall V \subset \mathbb{R}^m$ open, $V$ can be expressed as the union of countably many open boxes, say $(B_n)_n$. Then, since each $f^{-1}(B_n)$ is measurable by assumption, the union of all of these, which is exactly $f^{-1}(V)$, is measurable.
On the other hand, if $f$ is measurable, then $f^{-1}(V)$ is measurable for every open $V \subset \mathbb{R}^m$. Then simply take $V$ to be any open box. $\blacksquare$
6. [Pugh 6.11] Each measurable set $E$ can be sandwiched between an $F_\sigma$-set (say $F$) and a $G_\delta$-set (say $G$) (i.e. $F \subset E \subset G$), such that $m^*(G \RM F) = 0$ (i.e. $G \RM F$ is a zero set). Conversely, if $\exists G \in G_\delta, F \in F_\sigma$ s.t. $F \subset E \subset G$ and $m(G \RM F) = 0$, then $E$ is measurable.
Corollary: A bounded subset $E \subset \mathbb{R}^n$ is measurable if and only if it has a regularity sandwich $F \subset E \subset G$ s.t. $F \in F_\sigma$, $G \in G_\delta$ and $m(F) = m(G)$.
Corollary: All measurable sets are $F_\sigma$-sets and $G_\delta$ sets modulo measure zero sets.
Proof of Unbounded Case TODO
7. [Pugh 6.15] An affine motion $T:\mathbb{R}^n → \mathbb{R}^n$ is a meseomorphism and multiplies measure by $|det(T)|$.
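The determinant factor can be sanity-checked in the plane (my own sketch, not from Pugh): apply a linear $T$ to the unit square and compute the area of the image parallelogram with the shoelace formula.

```python
def shoelace(pts):
    """Area of a polygon given by its vertices in order (shoelace formula)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def apply_linear(T, p):
    """Apply the 2x2 matrix T (list of rows) to the point p."""
    return (T[0][0] * p[0] + T[0][1] * p[1], T[1][0] * p[0] + T[1][1] * p[1])

T = [[2, 1], [0, 3]]                       # det T = 6
square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, measure 1
image = [apply_linear(T, p) for p in square]
# shoelace(image) == 6 == |det T| * m(square)
```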
8. If $A \subset B$ where $B$ is a box, then $m(B) = m_*(A) + m^*(B \RM A)$.
9. If $A \subset B \subset \mathbb{R}^n$ and $B$ is a box, then $A$ is measurable if and only if it divides $B$ cleanly (i.e. $m(B) = m^*(A) + m^*(B \RM A)$)
10. [Measurable Product Theorem] If $A \subset \mathbb{R}^n$ and $B \subset \mathbb{R}^k$ are measurable, then $A \times B$ is measurable and $m(A \times B) = m(A)\cdot m(B)$. We treat $0\cdot \infty = 0$.
11. [Zero Slice Theorem] If $E \subset \mathbb{R}^n \times \mathbb{R}^k$ is measurable, then $E$ is a zero set if and only if almost every slice of $E$ is a (slice) zero set.
12. [Measure Continuity Theorem] Let $\omega$ be any outer measure. If $\{E_k\}$ and $\{F_k\}$ are sequences of measurable sets, then upward measure continuity yields $E_k \uparrow E \Rightarrow \omega(E_k) \uparrow \omega(E)$; downward measure continuity yields $F_k \downarrow F$ and $\omega(F_1) < \infty$ $\Rightarrow \omega(F_k) \downarrow \omega(F)$. (Notation: $E_k \uparrow E$ means $E_1 \subset E_2 \subset …$ and $E = \bigcup E_k$. $F_k \downarrow F$ means $F_1 \supset F_2 \supset …$ and $F = \bigcap F_k$)
Proof TODO Key Idea Disjointize
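The disjointization idea recurs throughout these proofs and can be sketched on finite sets (my own illustration): replace $E_i$ by $E'_i = E_i \RM (E_1 \cup … \cup E_{i-1})$; the union is unchanged but the pieces are now pairwise disjoint, so additivity applies.

```python
def disjointize(sets):
    """E'_i = E_i minus the union of the earlier E_k: same union, now disjoint."""
    seen, out = set(), []
    for e in sets:
        out.append(set(e) - seen)
        seen |= set(e)
    return out

pieces = disjointize([{1, 2, 3}, {2, 3, 4}, {4, 5}])
# pieces == [{1, 2, 3}, {4}, {5}]; the sizes now add up to the size of the union
```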
Trinity of Theorems (MCT, DCT, FL)
12. [Monotone Convergence Theorem] Assume that $(f_n)_n$ is a sequence of measurable functions with $f_n:\mathbb{R} → [0, \infty)$ and $f_n \uparrow f$ as $n → \infty$, then $\int f_n \uparrow \int f$
PROOF TODO!!
12.1 [Corollary] If $(f_n)_n$ is a sequence of integrable functions that converges monotonically downward to a limit function $f$ almost everywhere, then $\int f_n \downarrow \int f$.
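A numerical sanity check of monotone convergence (my own sketch, not from the course): take $f(x) = x^{-1/2}$ on $(0,1]$ with $\int f = 2$, and let $f_n = \min(f, n)$, which increases pointwise to $f$. The cap contributes $n \cdot \frac{1}{n^2}$ on $(0, 1/n^2]$ and the rest contributes $2 - \frac{2}{n}$, so $\int f_n = 2 - \frac{1}{n} \uparrow 2$; a midpoint Riemann sum confirms this.

```python
def truncated_integral(n, grid=100000):
    """Midpoint-rule approximation of the integral of min(x**-0.5, n) on (0, 1].

    Exact value is 2 - 1/n, which increases to the integral of f, namely 2."""
    h = 1.0 / grid
    return sum(min(((i + 0.5) * h) ** -0.5, n) * h for i in range(grid))
```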
5. [Monotone Convergence Theorem] Let $\Omega \subset \mathbb{R}^n$ be a measurable set and $(f_n)_n$ be a sequence of nonnegative measurable functions $f_n:\Omega → \mathbb{R}$, increasing in the sense that $f_{n+1}$ majorizes $f_n$ for each $n$. Then $\sup_n f_n$ is measurable and $\int_{\Omega} \sup_n f_n = \sup_n \int_{\Omega} f_n$.
13. [Dominated Convergence Theorem] If the $f_n: \mathbb{R} → [0, \infty)$ are measurable functions such that $f_n → f$ almost everywhere, and if $\exists g:\mathbb{R} → [0, \infty)$ whose integral is finite and which is an upper bound for every $f_n$, then $f$ is integrable and $\int f_n → \int f$ as $n → \infty$.
PROOF TODO!!
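A concrete instance (my own sketch): $f_n(x) = x^n$ on $[0,1]$ is dominated by $g \equiv 1$, which has finite integral, and $f_n \to 0$ almost everywhere (everywhere except $x = 1$). DCT then promises $\int f_n \to 0$, and indeed $\int_0^1 x^n\,dx = \frac{1}{n+1}$; a midpoint Riemann sum shows the integrals shrinking.

```python
def riemann(f, grid=20000):
    """Midpoint Riemann sum of f over [0, 1]."""
    h = 1.0 / grid
    return sum(f((i + 0.5) * h) for i in range(grid)) * h

# all integrals stay below the integral of the dominating g = 1, and they
# decrease toward the integral of the a.e. limit, which is 0
integrals = [riemann(lambda x, n=n: x ** n) for n in (1, 5, 50, 500)]
```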
14. [Fatou's Lemma] Let $\Omega$ be a measurable set. If $f_n:\Omega → [0, \infty)$ is a sequence of nonnegative measurable functions, then $\int_\Omega \liminf_n f_n \leq \liminf_n \int_\Omega f_n$.
PROOF TODO!!
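The standard example showing the inequality can be strict is mass escaping upward (my own write-up): $f_n = n \cdot 1_{(0, 1/n)}$. Pointwise $f_n(x) \to 0$ for every $x$, so $\int \liminf f_n = 0$, yet each $\int f_n = n \cdot \frac{1}{n} = 1$.

```python
def f(n, x):
    """f_n = n on (0, 1/n), 0 elsewhere."""
    return n if 0 < x < 1 / n else 0

def integral_f(n):
    """value * measure of the level set: n * (1/n) = 1 for every n."""
    return n * (1 / n)

# for fixed x > 0, f(n, x) = 0 once n > 1/x, so liminf_n f(n, x) = 0,
# giving integral of liminf = 0  <  1 = liminf of the integrals
```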
15. [Borel-Cantelli Lemma] Let $\Omega_1, \Omega_2, …, $ be measurable subsets of $\mathbb{R}^n$ such that $\Sigma_{n=1}^{\infty} m (\Omega_n)$ is finite. Then the set $\{x \in \mathbb{R}^n: x \in \Omega_n \text{ for infinitely many } n\}$ is a set of measure $0$. (i.e. almost every point belongs to only finitely many $\Omega_n$.) (Proof: see below)
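A concrete instance (my own sketch): $\Omega_n = (0, 1/n^2)$ has $\sum_n m(\Omega_n) = \pi^2/6 < \infty$, and indeed every $x > 0$ lies in only finitely many $\Omega_n$, namely those with $n < 1/\sqrt{x}$.

```python
import math

def membership_count(x, N=10000):
    """How many of Omega_1, ..., Omega_N contain x, where Omega_n = (0, 1/n**2)."""
    return sum(1 for n in range(1, N + 1) if 0 < x < 1 / n ** 2)

total_measure = sum(1 / n ** 2 for n in range(1, 100000))  # ~ pi^2/6 < infinity
# membership_count(0.01) == 9: only n = 1..9 satisfy n < 1/sqrt(0.01) = 10
```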
16. [Fubini] Let $f:\mathbb{R}^2 → \mathbb{R}$ be an absolutely integrable function. Then there exist absolutely integrable functions $F:\mathbb{R} → \mathbb{R}$ and $G:\mathbb{R} → \mathbb{R}$ s.t. for almost every $x$, $f(x,y)$ is absolutely integrable in $y$ with $F(x) = \int_{\mathbb{R}} f(x,y) dy$ and for almost every $y$, $f(x,y)$ is absolutely integrable in $x$ with $G(y) = \int_{\mathbb{R}} f(x,y) dx$. We also have: $\int_{\mathbb{R}} F(x) dx = \int_{\mathbb{R}^2} f = \int_{\mathbb{R}} G(y) dy$.
Proof: Assume on the contrary that $\exists v \in E \subset \mathbb{R}^n$ s.t. $b_J(v) > 0$ for some increasing $k$-index $J = \{j_1, …, j_k\}$.
Since $b_J$ is continuous (by definition of $k$-form), $\exists h > 0$ s.t. $b_J(x) > 0$ for all $x \in \mathbb{R}^n$ with $|x_i - v_i| < h$ for every $i$. (i.e. inside the cube of side length $2h$ centered at $v$, $b_J > 0$) Let $D' \subset \mathbb{R}^k$ be s.t. $u \in D' \Leftrightarrow |u_r| \leq h$ for every $r$.
Define $\Phi:D' \subset \mathbb{R}^k \rightarrow \mathbb{R}^n$ s.t. $\Phi(u) = v + \Sigma_{r=1}^k u_r e_{j_r}$. (i.e. $\Phi$ is a $k$-surface in $E$ with parameter domain $D'$ and $b_J(\Phi(u)) > 0 \forall u \in D'$ since $|(\Phi(u))_i - v_i| < h$)
Note that $\int_{\Phi} \omega = \int_{D'} b_{\{j_1, …, j_k\}}(\Phi(u)) \frac{\partial(x_{j_1}, …, x_{j_k})}{\partial(u_1, …, u_k)} du = \int_{D'} b_J(\Phi(u)) \det(I) du = \int_{D'} b_J(\Phi(u)) du > 0$. Hence, contradiction! (since $\exists$ increasing $J$ s.t. $b_J(x) \neq 0$ $\Rightarrow$ $\omega \neq 0$ in $E$)
Proof: It suffices to consider $\omega = a(x) dx_{i_1} \wedge … \wedge dx_{i_k}$; the general case follows by applying the same argument to each term and summing.
Note $\omega_{\Phi} = a(\Phi(u)) d\phi_{i_1} \wedge d\phi_{i_2} \wedge … \wedge d\phi_{i_k}$.
Define $\alpha(p, q) = \frac{\partial \phi_{i_p}}{\partial x_q}(u)$. Then $d\phi_{i_p} = \Sigma_{q} \alpha(p, q) du_q$. The $k \times k$ matrix with entries $\alpha(p, q)$ has determinant $J(u)$
Hence, $d\phi_{i_1} \wedge … \wedge d\phi_{i_k} = \Sigma \alpha(1, q_1)\cdot … \cdot \alpha(k, q_k) du_{q_1} \wedge … \wedge du_{q_k} = \Sigma \alpha(1, q_1)\cdot … \cdot \alpha(k, q_k) s(q_1, …, q_k) du_{1} \wedge … \wedge du_{k} = du_{1} \wedge … \wedge du_{k} \Sigma \alpha(1, q_1)\cdot … \cdot \alpha(k, q_k) s(q_1, …, q_k) = du_{1} \wedge … \wedge du_{k} J(u) = J(u) du_{1} \wedge … \wedge du_{k}$
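The permutation sum in the middle step is exactly the expansion of the determinant, which can be checked directly (my own sketch): only injective index tuples $(q_1, …, q_k)$ survive (a repeated $du_q$ wedges to zero), and $s(q_1, …, q_k)$ is the sign of the permutation.

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def wedge_coefficient(a):
    """Sum over injective (q_1..q_k) of a[1][q_1]...a[k][q_k] * s(q_1..q_k),
    i.e. the coefficient of du_1 ^ ... ^ du_k in the wedge expansion."""
    k = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(k))
               for p in permutations(range(k)))

# agrees with the determinant, e.g. det [[2,0,1],[1,3,0],[0,1,4]] = 25
```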
Hence, $\int_{\Phi} \omega = \int_{D} a(\Phi(u))J(u)du = \int_{D} a(\Phi(u))J(u) du_1 \wedge … du_k = \int_{D} a(\Phi(u)) d\phi_{i_1} \wedge … \wedge d\phi_{i_k} = \int_{\Delta} \omega_{\Phi}$
Proof: $\int_{T\Phi} \omega = \int_{\Delta} \omega_{T\Phi} = \int_{\Delta} (\omega_T)_{\Phi} = \int_{\Phi} \omega_T$
Discussion | Link | Remarks |
---|---|---|
1 | disc01.pdf | (TODO: latter part of 4 and 5) |
4 | disc04.pdf | Proved Lemma 7.4.4 (b) to (e) using alternate definition of measurability |
Homework | Link | Revision |
---|---|---|
HW1 | hw1_jianzhi.pdf | hw1_r_jianzhi.pdf |
HW2 | hw2_jianzhi.pdf | |
HW3 | hw3_jianzhi.pdf | hw3_r_jianzhi.pdf |
HW4 | hw4_jianzhi.pdf | |
HW5 | hw5_jianzhi.pdf | |
HW6 | hw6_jianzhi.pdf | hw6_r_jianzhi.pdf |
HW7 | hw7_jianzhi.pdf | hw7_r_jianzhi.pdf |
HW8 | hw8_jianzhi.pdf | |
HW9 | hw9_jianzhi.pdf | |
HW10 | hw10_jianzhi.pdf | hw10_r_jianzhi.pdf |
HW11 | hw11_jianzhi.pdf | |
HW12 | hw12_jianzhi.pdf |
Date | Brief Description |
---|---|
01/18 | Introduction; Outer Measure |
01/20 | Measurable set |
01/25 | Lemmas |
01/27 | Lemmas and Alternative “Measurability” Def |
02/01 | Abstract measure theory, Regularity ($F_\sigma$, $G_\delta$)
02/03 | Slices |
02/08 | Monotone Convergence Theorem, Dominated Convergence Theorem (Pugh) |
02/10 | Fatou's Lemma; Mesisometry (Pugh)
02/15 | Measurable functions; Simple functions (Tao) |
02/17 | Monotone Convergence Theorem (Tao) |
02/22 | |
02/24 | |
03/01 | |
03/03 | |
03/08 | |
03/10 | |
03/15 | |
03/17 | Implicit Function Theorem (Samuel) |
03/22 | 🍃 Spring Break 🍃 |
03/24 | 🍃 Spring Break 🍃 |
03/29 | Inverse Function Theorem (Chloe) |
03/31 | Differential Forms |
04/05 | Stokes' Theorem |
04/07 | Poincaré's Lemma |
04/12 | |
04/14 | |
04/19 | |
04/21 |
help X.X
Theorem: Let $E \subset \mathbb{R}^n \times \mathbb{R}^k$ be measurable. Denote by $E_x := E \cap (\{x\} \times \mathbb{R}^k) \subset \{x\} \times \mathbb{R}^k \approx \mathbb{R}^k$. Suppose $Z = \{x \in \mathbb{R}^n : m_{\mathbb{R}^k}(E_x) \neq 0\}$ is a measure $0$ set in $\mathbb{R}^n$. Then $m(E) = 0$.
Proof: (I took down Prof Peng's proof in class and tried to fill in some missing steps which I didn't catch and some of my own commentary)
Let $\tilde{E} = E \RM (Z \times \mathbb{R}^k)$. Then, since $m(Z \times \mathbb{R}^k) = 0$, we have $m (\tilde{E}) = m(E)$, because these two sets differ by a measure $0$ set. It suffices to show $m (\tilde{E}) = 0$.
WLOG, we can assume $E$ in place of $\tilde{E}$ with $Z = \phi$. Hence, now, $m_{\mathbb{R}^k}(E_x) = 0 $ $\forall x$ (as $Z = \phi$).
Assume $E$ is bounded and $n = 1, k = 1$ (i.e. $E \subset \mathbb{R} \times \mathbb{R}$). Further assume $E \subset [0, 1]^2$ (i.e. $E$ is in the unit square; we make this assumption because we can always scale our $\epsilon$ later).
Know $m_{\mathbb{R}}(E_x) = 0$ $\forall x$. Want to show $\forall \epsilon > 0, m_{\mathbb{R}\times\mathbb{R}}(E) < \epsilon$.
By inner-regularity of $E$ (since $E$ is measurable), we can always find closed $K \subset E$ such that $m(E \RM K) \leq \epsilon/2$. Then, $K$ is bounded, so $K$ is compact. Also, $m_{\mathbb{R}}(K_x) \leq m_{\mathbb{R}}(E_x) = 0$ by monotonicity, hence $m_{\mathbb{R}}(K_x) = 0$.
Claim: We can cover $K$ by boxes of total volume (actually, area in this case since $n = k = 1$) $\leq \epsilon/2$
Proof of Claim: $\forall x \in \mathbb{R}$, if $K_x \neq \phi$, then we can find an open set $V(x) \subset \mathbb{R}$ s.t. $m_{\mathbb{R}}(V(x)) \leq \epsilon/2$ and $V(x)$ contains $K_x$.
Lemma: $\exists$ an enlargement $U(x) \subset \mathbb{R}$ with $x \in U(x)$ s.t. $\pi^{-1}(U(x)) \cap K \subset U(x) \times V(x)$ (i.e. $\forall \tilde{x} \in U(x)$, $V(x) \supset K_{\tilde{x}}$). Note that here $\pi : \mathbb{R}^2 → \mathbb{R}$ is projection onto the first coordinate, so what the claim is saying is that we can always find an interval containing $x$ such that the intersection of the corresponding vertical strip with $K$ is entirely contained in the Cartesian product $U(x) \times V(x)$. Another way of putting it: suppose $K$ is a potato. A slice with the interval $U(x)$ as its width, cut across $\mathbb{R}^n$ in the direction of $\mathbb{R}^k$, goes through the potato; the resulting potato strip is entirely contained in $U(x) \times V(x)$.
Proof of Lemma: Suppose not, i.e. there does not exist $U(x) \subset \mathbb{R}$ containing $x$ with this property. That means for every interval $(x - \frac{1}{n}, x + \frac{1}{n})$, $n \in \mathbb{N}$, $\exists \tilde{x}_n \in (x - \frac{1}{n}, x + \frac{1}{n})$ s.t. $K_{\tilde{x}_n}$ is not contained in $V(x)$. Then $\exists$ a sequence $(\tilde{x}_n, \tilde{y}_n)_n \in K$ s.t. $\text{lim}_{n → \infty} \tilde{x}_n = x$ and $\tilde{y}_n \notin V(x)$. Since this sequence lives in a compact set, $\exists$ a subsequence that converges, i.e. $\exists (\tilde{x}_{n_k}, \tilde{y}_{n_k})_k \in K$ that converges to, say, $(x, y) \in K$. But $\tilde{y}_{n_k} \in V(x)^c$, which is closed, so $y \in V(x)^c$. (Contradiction! since by assumption $K_x \subset V(x)$, so $y \in V(x)$)
Thus, $\forall x \in \mathbb{R}$, $\exists V(x) \supset K_x$ s.t. $V(x)$ is open and $m_{\mathbb{R}}(V(x)) < \epsilon/2$. Also, $\exists U(x) \subset \mathbb{R}$ containing $x$ s.t. $U(x) \times V(x) \supset \pi^{-1}(U(x)) \cap K$.
Hence, $K \subset \bigcup_{x \in \mathbb{R}} (U(x) \times V(x))$. Since $K$ is compact, we can obtain a finite subcover by the Heine-Borel theorem $\Rightarrow K \subset (U(x_1) \times V(x_1)) \cup (U(x_2) \times V(x_2)) \cup … \cup (U(x_N) \times V(x_N))$, $N \in \mathbb{N}$.
Now define $U'_i = U(x_i) \RM (\bigcup^{i-1}_{k=1} U(x_{k}))$. The motivation is to disjointize the cover. Then $K \subset (U'_{1} \times V(x_1)) \cup (U'_{2} \times V(x_2)) \cup … \cup (U'_{N} \times V(x_N))$ with the $U'_i$ pairwise disjoint. This is possible because for any point of $K$, if its first coordinate first appears in $U(x_i)$ for some $i$, then the point lies in $U'_i \times V(x_i)$.
The sets $U'_i$ are contained in the unit interval. Hence, $\Sigma_i m(U'_i \times V(x_i)) = \Sigma_i m(U'_i) \cdot m(V(x_i)) \leq \frac{\epsilon}{2} \Sigma_i m(U'_i) \leq \frac{\epsilon}{2}$. (The last inequality is due to the disjointness of the $U'_i$.) Hence $m(K) \leq \epsilon/2$.
Finally, since $m(K) \leq \frac{\epsilon}{2}$, we have $m(E) \leq m(K) + m(E \RM K) \leq \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$. As $\epsilon > 0$ was arbitrary, $m(E) = 0$ as desired.
Key Ideas: Approximate $E$ from the inside by a compact set $K$. Generate an open covering of $K$ by slices along $\mathbb{R}^n$ in the direction of $\mathbb{R}^k$ such that these slices approximate $K$ really well. Apply Heine-Borel and disjointize.
In progress
Theorem: Let $(f_n)_n$ be a sequence of measurable functions with $f_n : \mathbb{R} → [0, \infty)$ and $(f_n)_n → f$ almost everywhere. Suppose $\exists g:\mathbb{R} → [0, \infty)$ s.t. $|f_n(x)| \leq g(x)$ almost everywhere and $\int g < \infty$. Then $f$ is integrable and $\int f_n → \int f$ as $n → \infty$.
Proof: (in my own words)
Firstly, $\bar{f_n} \leq g$ almost everywhere, thus $U(\bar{f_n}) \subset U(g) \Rightarrow m(U(\bar{f_n})) \leq m(U(g)) = \int g < \infty$ by assumption. Hence the $\bar{f_n}$, and therefore the $f_n$, are all integrable.
Consider $\underline{f_n}$ and $\bar{f_n}$. Since $(\underline{f_n})_n$ is a sequence of measurable functions that converges to $f$ almost everywhere and $\underline{f_{n+1}} \geq \underline{f_{n}}$ (infimum of a smaller set), $\underline{f_n} \uparrow f$. Similarly, $(\bar{f_n})_n$ is a sequence of measurable functions that converges to $f$ almost everywhere and $\bar{f_{n+1}} \leq \bar{f_{n}}$ (supremum of a smaller set), so $\bar{f_n} \downarrow f$.
Applying Monotone Convergence Theorem, $\int \underline{f_n} \uparrow \int f$ and $\int \bar{f_n} \downarrow \int f$.
We also have: $U(\underline{f_n}) \subset U(f_n) \subset \hat{U}(\bar{f_n})$. This is because $\underline{f_n}(x) = \text{inf}\{f_m(x): m \geq n\} \leq f_n(x)$, hence $\forall (x,y) \in U(\underline{f_n})$, $(x, y) \in U(f_n)$. Similarly, $\bar{f_n}(x) = \text{sup}\{f_m(x): m \geq n\} \geq f_n(x)$, hence $\forall (x,y) \in U(f_n)$, $(x, y) \in \hat{U}(\bar{f_n})$.
Since $U(\underline{f_n}) \subset U(f_n) \subset \hat{U}(\bar{f_n})$ and $\int \underline{f_n} \uparrow \int f$ and $\int \bar{f_n} \downarrow \int f$, $\int f_n → \int f$.
A website which I found during my search for DCT: https://www.math3ma.com/blog/dominated-convergence-theorem
Definition
Properties
Suppose otherwise; then $\exists \epsilon > 0$ s.t. for every $\frac{1}{i}$ with $i \in \mathbb{N}$, there exists a cube $Q_i$ containing $p$ with side length less than $\frac{1}{i}$ s.t. $|\frac{m(E \cap Q_i)}{m(Q_i)} - \delta(p, E)| \geq \epsilon$. This sequence of cubes converges to $p$ since the side lengths converge to $0$, so $\text{lim}_{i \rightarrow \infty} (\frac{m(E \cap Q_i)}{m(Q_i)} - \delta(p, E)) = 0$, contradicting the fact that every term has absolute value at least $\epsilon$. (contradiction!)
Theorem [Lebesgue Density]: If $E$ is measurable, then almost every $p \in E$ is a density point of $E$.
Examples:
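A computable illustration (my own, for $E = [0,1] \subset \mathbb{R}$): the density ratios $\frac{m(E \cap Q)}{m(Q)}$ can be written in closed form for intervals $Q = [p-r, p+r]$. Interior points have density $1$, exterior points $0$, and the endpoint $p = 1$ has density $\frac{1}{2}$, which is why the theorem only claims *almost every* point of $E$ is a density point.

```python
def density_ratio(p, r):
    """m(E ∩ [p-r, p+r]) / m([p-r, p+r]) for E = [0, 1]."""
    lo, hi = max(p - r, 0.0), min(p + r, 1.0)
    return max(hi - lo, 0.0) / (2 * r)

# density_ratio(0.5, r) -> 1 as r -> 0 (density point),
# density_ratio(2.0, r) -> 0, and density_ratio(1.0, r) -> 1/2 (endpoint)
```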
Proof: WLOG, assume $E$ is bounded (will justify this later, don't worry). Take $a$ s.t. $0 \leq a < 1$ and consider $E_{a} = \{p \in E | \underline{\delta}(p, E) < a\}$. Want to show $m^{*}(E_a) = 0$. (The reason why we use outer measure is because we do not know a priori that $E_a$ is measurable.)
For every $p \in E_a$, $\underline{\delta}(p, E) < a \Rightarrow$ there exist arbitrarily small cubes $\{Q_\alpha\}$ containing $p$ in which $\frac{m(E \cap Q_{\alpha})}{m(Q_{\alpha})} < a$. (This is the reason why $E_a$ was constructed with strict inequality; otherwise we would be unable to get strict inequality here.) Let $S$ be the set of all such cubes over all $p$. Then $S$ is a Vitali covering of $E_a$, since $\forall p \in E_a, \forall r > 0, \exists Q \in S$ s.t. $p \in Q \subset B_r(p)$.
By the Vitali Covering Lemma, for an arbitrarily chosen $\epsilon > 0$, we can select a countable collection $Q_1, Q_2, Q_3, …$ from $S$ s.t.: (1) $Q_i \cap Q_j = \phi$ for $i \neq j$, (2) $m (\bigcup_{i=1}^{\infty} Q_i) = \Sigma_{i=1}^{\infty} m(Q_i) \leq m^{*}(E_a) + \epsilon$ and (3) $E_a \RM (\bigcup_{i=1}^{\infty} Q_i)$ is a null set.
Thus, we have the following: $m^{*}(E_a) \leq \Sigma_{i=1}^{\infty} m^{*}(E_a \cap Q_i) \leq \Sigma_{i=1}^{\infty} m(E \cap Q_i) = \Sigma_{i=1}^{\infty} \frac{m(E \cap Q_i)}{m(Q_i)}\cdot m(Q_i) < a \Sigma_{i=1}^{\infty} m(Q_i) \leq a(m^{*}(E_a) + \epsilon)$
Note that in the above, the first inequality is due to sub-additivity of outer measure. The second is due to monotonicity of outer measure, where we used $m^{*}(E \cap Q_i) = m(E \cap Q_i)$ since $E \cap Q_i$ measurable.
Hence, $m^*(E_a) < \frac{a \epsilon}{1 - a} \Rightarrow m^*(E_a) = 0$ since $\epsilon$ is chosen arbitrarily. We also demonstrated that $E_a$ is in fact measurable, since it has outer measure $0$.
Now consider the sequence $(a_n)_n$ where $a_n = 1 - 1/n$. Then $m(E_{a_n}) = 0$ $\forall n$ as discussed above. Since $E_{a_i} \subset E_{a_{i+1}}$, the sequence $(m(E_{a_n}))_n$ converges to $m (\bigcup_n E_{a_n})$ by upward measure continuity $\Rightarrow m (\bigcup_n E_{a_n}) = 0$.
Consider $p \in E \RM (\bigcup_n E_{a_n})$. Then $\underline{\delta}(p, E) \geq a_n$ for every $n$, so $\liminf_{Q \downarrow p} \frac{m(E \cap Q)}{m(Q)} = 1 \Rightarrow \lim_{Q \downarrow p} \frac{m(E \cap Q)}{m(Q)} = 1 \Rightarrow \delta(p, E) = 1$ $\forall p \in E \RM (\bigcup_n E_{a_n})$. Thus, almost every point of $E$ is a density point of $E$.
We assumed $E$ is bounded; this is justified because we can always divide $E$ into countably many bounded disjoint measurable pieces. For each piece, the set of non-density points has measure $0$, thus the union of all the non-density points also has measure $0$. Hence, almost all $p \in E$ are density points of $E$.
Corollary: If $E$ is measurable, then for almost every $p \in \mathbb{R}^n$, we have: $\chi_{E}(p) = \text{lim}_{Q \downarrow p} \frac{m(E \cap Q)}{m(Q)}$. Here $\chi_{E}(p)$ refers to the indicator function that returns $1$ if $p \in E$ and $0$ else.
Proof: From above, for almost every $p \in E$, we have $\lim_{Q \downarrow p} \frac{m(E \cap Q)}{m(Q)} = 1 = \chi_{E}(p)$. Since $E$ measurable $\Rightarrow$ $E^c$ measurable, for almost every $p \in E^c$, $\lim_{Q \downarrow p} \frac{m(E^c \cap Q)}{m(Q)} = 1$. The measurability of $E$ implies $m(Q \cap E) + m(Q \cap E^c) = m(Q) \Rightarrow \frac{m(Q \cap E)}{m(Q)} = 1 - \frac{m(Q \cap E^c)}{m(Q)}$. Hence, for almost every $p \in E^c$, $\lim_{Q \downarrow p} \frac{m(Q \cap E)}{m(Q)} = 1 - 1 = 0 = \chi_{E}(p)$. And, we are done.
In this presentation, I will cover Stokes' Theorem including closed forms and exact forms. If I can present the following properly and well, I would be really satisfied.
Update: My presentation notes in PDF stokes_presentation.pdf
Review of concepts from differential forms:
$x = (x_1, …, x_k) \in I^k$, $y = (y_1, …, y_n) \in \mathbb{R}^n$. $\phi:[0,1]^k \rightarrow \mathbb{R}^n$ is a $k$-cell in $\mathbb{R}^n$. $y = \phi(x)$. $\phi_i(x) = \phi_i(x_1, …, x_k) = y_i$.
Geometric interpretation
Theorem [Pugh 36]: Reparametrization of $k$-cell produces the same answer up to a sign change.
Proof: $\int_{\phi \circ T} f dy_{i_1, …, i_k} = \int_{[0,1]^k} f(\phi(T(u))) \frac{\partial \phi_{i_1, …, i_k}}{\partial v}(T(u)) \det DT(u)\, du = \int_{[0,1]^k} f(\phi(v)) \frac{\partial \phi_{i_1, …, i_k}}{\partial v} (-1)^s dv = (-1)^s \int_{\phi} \omega$, using the chain rule for Jacobians in the first equality and the change of variables $v = T(u)$ (with $\det DT$ of constant sign $(-1)^s$) in the second.
Proposition [Pugh 37]: Each $k$-form $\omega$ has a unique expression as a sum of simple $k$-forms with ascending $k$-tuple indices, $\omega = \Sigma f_A dy_A$.
Proof: Every $k$-tuple $(i_1, …, i_k)$ can be rearranged into ascending order (at the cost of a sign). Hence, existence is guaranteed.
Let $A = (i_1, …, i_k)$ be ascending. Fix $y \in \mathbb{R}^n$ and $L:\mathbb{R}^k \rightarrow \mathbb{R}^n$ be s.t. $L(u) = u_1 e_{i_1} + … + u_k e_{i_k}$. For $r > 0$, define function $g_{r, y}(u) = y + rL(u)$, i.e. $g_{r, y}$ sends $[0, 1]^k$ to a $k$-dimensional cube of side length $r$ at $y$.
$f_A(y) = \lim_{r \rightarrow 0} \frac{1}{r^k} \omega(g_{r, y})$, so the coefficients $f_A$ are determined by $\omega$, giving uniqueness.
Wedge Products $\alpha = \Sigma_{i_1, …, i_k} \alpha_{i_1, …, i_k} dy_{i_1, …, i_k}$ $\beta = \Sigma_{j_1, …, j_l} \beta_{j_1, …, j_l} dy_{j_1, …, j_l}$
$\alpha \wedge \beta = \Sigma_{i_1, …, i_k, j_1, …, j_l} \alpha_{i_1, …, i_k} \beta_{j_1, …, j_l} dy_{i_1, …, i_k, j_1, …, j_l}$
$k$-form wedge product with $l$-form gives a $(k+l)$-form
Exterior derivative: if $\omega$ is a $k$-form, then $d\omega$ is a $(k+1)$-form. Think of it as one more differential. For a $0$-form (smooth function) $f$: $df = \frac{\partial f}{\partial x_1} dx_1 + … + \frac{\partial f}{\partial x_n} dx_n$
For a $k$-form $\omega = \Sigma f_{i_1, …, i_k} dy_{i_1, …, i_k}$: $d\omega = \Sigma df_{i_1, …, i_k} \wedge dy_{i_1, …, i_k}$. Intuitively, it measures how the coefficients $f_{i_1, …, i_k}$ change. Example: $d(f dx + g dy) = f_y dy \wedge dx + g_x dx \wedge dy = (g_x - f_y) dx \wedge dy$. Also, $d(d\omega) = 0$ for every $k$-form $\omega$ on $\mathbb{R}^n$.
Pushforward and Pullback. Consider a smooth $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$. Since $\phi: [0,1]^k \rightarrow \mathbb{R}^n$ is a $k$-cell in $\mathbb{R}^n$, we get an induced $k$-cell $T \circ \phi: [0, 1]^k \rightarrow \mathbb{R}^m$. This inducing map, written $T_*: \phi \mapsto T \circ \phi$, is the pushforward, a transformation of $k$-cells.
Pullback is the dual of pushforward. Let $\alpha$ be a $k$-form on $\mathbb{R}^m$, then its pullback to $\mathbb{R}^n$ is a $k$-form on $\mathbb{R}^n$, denoted by $T^*(\alpha)$ that sends each $k$-cell $\phi$ to $\alpha(T \circ \phi)$
i.e. $(T^*(\alpha))(\phi) = \alpha(T \circ \phi) = \alpha(T_*(\phi))$
Definition: A $k$-chain is a linear combination of $k$-cells. i.e. $\Phi = \Sigma_{j=1}^N a_j \phi_j$, where $a_j \in \mathbb{R}$. $\int_{\Phi} \omega = \Sigma_{j=1}^N a_j \int_{\phi_j} \omega$ (i.e. just integrate separately)
Definition: The boundary of a $(k+1)$-cell $\phi$ is the $k$-chain $\partial \phi = \Sigma_{j=1}^{k+1} (-1)^{j+1} (\phi \circ i^{j, 1} - \phi \circ i^{j, 0})$, where $i^{j, 0}(u_1, …, u_k) = (u_1, …, u_{j - 1}, 0, u_j, …, u_k)$ and $i^{j, 1}(u_1, …, u_k) = (u_1, …, u_{j - 1}, 1, u_j, …, u_k)$.
The face maps $i^{j,0}$ and $i^{j,1}$ are $k$-cells in $\mathbb{R}^{k+1}$ because they map the unit cube in $\mathbb{R}^k$, i.e. $[0,1]^k$, to $\mathbb{R}^{k+1}$.
Geometrically, they map to the faces of the unit cube in $\mathbb{R}^{k+1}$, which is why $\partial \phi$ is called the boundary.
Definition: The $j$th dipole of $\phi$ to be $\delta^j \phi = \phi \circ i^{j, 1} - \phi \circ i^{j, 0}$, i.e. the $j$th term or the function representing the $j$th faces, so $\partial \phi = \Sigma_{j=1}^{k+1} (-1)^{j+1} \delta^j \phi$.
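The face maps are easy to write down explicitly (my own sketch): $i^{j,c}$ just inserts the constant $c \in \{0, 1\}$ into the $j$th slot.

```python
def face(j, c, u):
    """i^{j,c}: insert the constant c at (1-indexed) slot j of the tuple u,
    mapping [0,1]^k into the face {x_j = c} of the cube [0,1]^(k+1)."""
    return u[:j - 1] + (c,) + u[j - 1:]

# the boundary of a (k+1)-cell pairs the c = 1 and c = 0 faces with
# alternating signs: the j-th dipole is phi ∘ i^{j,1} - phi ∘ i^{j,0}
```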
Claim: $\delta^j \phi$ is the pushforward of $\delta^j i$; i.e. $\delta^j \phi = \phi_*(\delta^j i)$
Proof: $\delta^j \phi = \phi \circ i^{j,1} - \phi \circ i^{j,0} = \phi_*(i^{j,1}) - \phi_*(i^{j,0}) = \phi_*(i \circ i^{j,1} - i \circ i^{j,0}) = \phi_*(\delta^j i)$
Stokes' Formula for Cubes
If $\omega$ is a $(n - 1)$-form in $\mathbb{R}^n$ (i.e. taking $k = n - 1$) and $i: [0, 1]^n \rightarrow \mathbb{R}^n$ is the identity inclusion $n$-cell in $\mathbb{R}^n$, then $\int_i d\omega = \int_{\partial i} \omega$.
Proof: Write $\omega = \Sigma_{i=1}^{n} f_i(x) dx_1 \wedge … \wedge \hat{dx_i} \wedge … \wedge dx_n$ in standard, ascending form (this is why $\omega$ can be written as the sum of $n$ terms). The exterior derivative is then $d\omega = \Sigma_{i=1}^{n} df_i(x) \wedge dx_1 \wedge … \wedge \hat{dx_i} \wedge … \wedge dx_n = \Sigma_{i=1}^{n} \frac{\partial f_i}{\partial x_i} dx_i \wedge dx_1 \wedge … \wedge \hat{dx_i} \wedge … \wedge dx_n = \Sigma_{i=1}^{n} (-1)^{i - 1} \frac{\partial f_i}{\partial x_i} dx_1 \wedge … \wedge dx_n = (\Sigma_{i=1}^{n} (-1)^{i - 1} \frac{\partial f_i}{\partial x_i}) dx_1 \wedge … \wedge dx_n$.
Hence $\int_{i} d\omega = \Sigma_{i=1}^n (-1)^{i-1} \int_{[0,1]^n} \frac{\partial f_i}{\partial x_i} \cdot 1 \cdot dx_1…dx_n$. It's worth emphasizing that the Jacobian is the identity matrix since it is the inclusion map, so determinant is $1$.
We have now settled one side of the equation. For the other side:
$\int_{\partial i} \omega = \int_{\Sigma_{j=1}^{n} (-1)^{j+1} \delta^j i} \omega = \Sigma_{j=1}^{n} (-1)^{j+1} \int_{\delta^j i} \omega$
Note that $\delta^j i = i \circ i^{j,1} - i \circ i^{j,0} = i^{j,1} - i^{j,0}$ and $i^{j,0}, i^{j,1}: [0,1]^{n-1} \rightarrow \mathbb{R}^n$
Then when we calculate the Jacobians: $\frac{\partial(i^{j,0})_{x_1, …, \hat{x_i}, …, x_n}}{\partial(u_1, …, u_{n-1})} = 1$ if $i=j$ and $0$ otherwise. Similarly, $\frac{\partial(i^{j,1})_{x_1, …, \hat{x_i}, …, x_n}}{\partial(u_1, …, u_{n-1})} = 1$ if $i = j$ and $0$ otherwise. This is because for the $j$th face (which is represented by the $j$th dipole), the $j$th coordinate is fixed at either $0$ or $1$. Deleting the $i$th component for $i \neq j$ will still result in the $j$th component being constant, so Jacobian $= 0$. Otherwise, since it is inclusion, the Jacobian is $1$.
$\int_{\delta^j i} \omega = \int_{\delta^j i} \Sigma_{l=1}^n f_l(x) dx_1 \wedge … \wedge \hat{dx_l} \wedge … \wedge dx_n = \int_{[0,1]^{n-1}} (f_j(i^{j, 1}(u)) - f_j(i^{j, 0}(u))) du_1 \cdots du_{n-1}$ (only the $l = j$ term survives, since the other Jacobians vanish) $= \int_{[0,1]^{n-1}} (f_j(u_1, …, u_{j-1}, 1, u_j, …, u_{n-1}) - f_j(u_1, …, u_{j-1}, 0, u_j, …, u_{n-1})) du_1 \cdots du_{n-1} = \int_{[0,1]^{n-1}} \int_0^1 \frac{\partial f_j(u_1, …, u_{j-1}, y, u_{j}, …, u_{n-1})}{\partial y} dy \, du_1 \cdots du_{n-1} = \int_{[0,1]^n} \frac{\partial f_j}{\partial x_j} dx_1 … dx_n$
In the second last equality, we used the fundamental theorem of calculus to split up the difference of $f_j(u_1, …, u_{j-1}, 1, u_j, …, u_{n-1}) - f_j(u_1, …, u_{j-1}, 0, u_j, …, u_{n-1}) = \int_{0}^{1} \frac{\partial f(u_1, …, u_{j-1}, y, u_j, …, u_{n-1})}{\partial y} dy$ where $y$ refers to $j$th coordinate.
In the last equality, we used Fubini's Theorem, i.e. the order of integration in ordinary multiple integration is irrelevant. Also, $y, u_1, …, u_{n-1}$ are dummy variables that can be relabeled as $x_1, …, x_n$.
Hence, $\int_{\partial i} \omega = \Sigma_{j=1}^n (-1)^{j-1} \int_{\delta^j i} \omega = \Sigma_{j=1}^n (-1)^{j-1} \int_{0}^{1} … \int_{0}^{1} \frac{\partial f_j}{\partial x_j} dx_1 … dx_n = \Sigma_{j=1}^n (-1)^{j-1} \int_{[0,1]^n} \frac{\partial f_j}{\partial x_j} dx_1 … dx_n$, exactly what we obtained for $\int_i d\omega$.
So, $\int_{\partial i} \omega = \int_{i} d\omega$ for cubes.
Stokes' Formula for a General Cell: Let $\omega$ be an $(n-1)$-form in $\mathbb{R}^m$ and $\phi$ an $n$-cell in $\mathbb{R}^m$; then $\int_{\phi} d\omega = \int_{\partial \phi} \omega$.
Proof: $\int_{\phi} d\omega = \int_{\phi \circ i} d\omega = \int_i \phi^* d\omega = \int_i d\phi^*\omega = \int_{\partial i} \phi^* \omega = \int_{\phi_* \partial i} \omega = \int_{\partial \phi} \omega$
The first equality is because $\phi = \phi \circ i$, since $i$ is the identity inclusion map from $[0,1]^n \rightarrow \mathbb{R}^n$. The second equality follows from 43d, since $\int_{T \circ \phi} \alpha = \int_{\phi} T^*(\alpha)$. The third equality follows from 43c, the commutativity of the exterior derivative and the pullback operator: $\phi^* d\omega = d\phi^* \omega$. The fourth equality follows from Stokes' Formula for Cubes. The fifth equality follows from the duality equation $T^*(\alpha)(\phi) = \alpha(T_*(\phi))$, so $\int_{\phi} T^*\alpha = \int_{T_*(\phi)} \alpha$.
The final equality follows from the fact that $\phi_*(\partial i) = \phi_*(\Sigma_{j=1}^{n} (-1)^{j+1} (i \circ i^{j,1} - i \circ i^{j,0})) = \phi_*(\Sigma_{j=1}^{n} (-1)^{j+1} (i^{j,1} - i^{j,0})) = \phi \circ (\Sigma_{j=1}^{n} (-1)^{j+1} (i^{j,1} - i^{j,0})) = \Sigma_{j=1}^{n} (-1)^{j+1} (\phi \circ i^{j,1} - \phi \circ i^{j,0}) = \partial \phi$
Stokes' Formula for Manifolds: Let $\omega$ be an $(m-1)$-form in $\mathbb{R}^n$, and suppose $M \subset \mathbb{R}^n$ divides into $m$-cells diffeomorphic to $[0,1]^m$ and its boundary divides into $(m-1)$-cells diffeomorphic to $[0,1]^{m-1}$; then $\int_{M} d\omega = \int_{\partial M} \omega$.
Stokes' Theorem: $\int_{M} d\omega = \int_{\partial M} \omega$
Take $M = [a, b] \subset \mathbb{R}$ and $\omega = f$ (i.e. a zero-form). Then $\int_{\partial M} \omega = f(b) - f(a)$ (since it is the evaluation of $f$ on the boundary of $M$, which is just $\{a, b\}$), and $d\omega = f' dx$. Hence, $\int_{M} d\omega = \int_a^b f'(x) dx = f(b) - f(a)$, as we know from the FTC.
Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ be smooth. Then $df = f_x dx + f_y dy$. Let $M$ represent a path in $\mathbb{R}^2$ from point $p$ to $q$ (i.e. $M$ is a $1$-cell in $\mathbb{R}^2$); then $\partial M = \{p, q\}$. By Stokes' Theorem, $\int_{M} df = \int_{\partial M} f = f(q) - f(p)$. For any two paths from $p$ to $q$, the boundaries are the same. Hence, we can say the integral of $df = f_x dx + f_y dy$ is path independent, which corresponds with what we know.
Take $\omega = f dx + g dy$. Take $M = D$ a region. Then $\partial M = C$, the curve bounding $D$. Then:
$\int_C f dx + g dy = \int_C \omega = \int_{\partial M} \omega = \int_{M} d\omega = \int_{D} d(f dx + g dy) = \int_{D} f_y dy \wedge dx + g_x dx \wedge dy = \int_{D} (g_x - f_y) dx dy$
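As a numeric sanity check of Green's theorem (my own made-up example, not from the text): take $f = -y$, $g = x$ on the unit disk $D$, so $g_x - f_y = 2$ and both sides should equal $2\pi$. A minimal sketch:

```python
import math

# Green's theorem check for f = -y, g = x on the unit disk D
# (an illustrative example): g_x - f_y = 2, so the area integral is
# 2 * area(D) = 2*pi, and the line integral over the boundary circle
# should agree.

def line_integral(n=100_000):
    # Integrate f dx + g dy over the unit circle x = cos t, y = sin t.
    total = 0.0
    for i in range(n):
        t = 2 * math.pi * (i + 0.5) / n
        x, y = math.cos(t), math.sin(t)
        dxdt, dydt = -math.sin(t), math.cos(t)
        total += (-y) * dxdt + x * dydt
    return total * (2 * math.pi / n)

def area_integral(n=400):
    # Midpoint rule for the double integral of (g_x - f_y) = 2 over D.
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = -1 + (i + 0.5) * h, -1 + (j + 0.5) * h
            if x * x + y * y <= 1:
                total += 2 * h * h
    return total

print(line_integral(), area_integral())  # both close to 2*pi
```

The area integral is slightly off because the midpoint grid only approximates the disk's boundary; the error shrinks as the grid is refined.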
Let $F = (f, g, h)$ be a smooth vector field on $U \subset \mathbb{R}^3$. Then, take $M = D$, then $\partial M = S$. Consider the $2$-form $\omega = f dy \wedge dz + g dz \wedge dx + h dx \wedge dy$.
$\int_{S} f dy \wedge dz + g dz \wedge dx + h dx \wedge dy = \int_{\partial M} \omega = \int_{M} d(f dy \wedge dz + g dz \wedge dx + h dx \wedge dy) = \int_{M} (f_x dx \wedge dy \wedge dz + g_y dy \wedge dz \wedge dx + h_z dz \wedge dx \wedge dy) = \int_{M} (f_x + g_y + h_z) dx \wedge dy \wedge dz = \int_{D} \nabla \cdot F$
We call the LHS the flux. Intuitively, it is a statement about the conservation of flow: for a closed surface, the total flow out of the surface (i.e. the flux) is equal to the integral of the divergence (a measure of sources and sinks) over the whole enclosed region.
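Again as a numeric sanity check (a hypothetical example of my own): take $F = (xy, yz, zx)$ on the unit cube $[0,1]^3$, so $\nabla \cdot F = y + z + x$ and both sides of the divergence theorem equal $3/2$:

```python
def F(x, y, z):
    # Illustrative vector field: div F = y + z + x.
    return (x * y, y * z, z * x)

def flux_through_cube(n=40):
    # Midpoint-rule flux of F through the boundary of [0,1]^3, summed
    # over the three pairs of opposite faces (outward normals).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * h, (j + 0.5) * h
            total += (F(1, u, v)[0] - F(0, u, v)[0]) * h * h  # x = 1 and x = 0 faces
            total += (F(u, 1, v)[1] - F(u, 0, v)[1]) * h * h  # y = 1 and y = 0 faces
            total += (F(u, v, 1)[2] - F(u, v, 0)[2]) * h * h  # z = 1 and z = 0 faces
    return total

def divergence_integral(n=40):
    # Midpoint rule for the triple integral of div F = x + y + z.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = (i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h
                total += (x + y + z) * h ** 3
    return total

print(flux_through_cube(), divergence_integral())  # both close to 1.5
```

Since the integrands here are linear in each variable, the midpoint rule is essentially exact and the two numbers agree to floating-point precision.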
Take $\omega = f dx + g dy + h dz$ and $M = S$ (a surface), then $\partial M = C$ a curve (the boundary of the surface).
$\int_{C} f dx + g dy + h dz = \int_{\partial M} \omega = \int_{M} d\omega = \int_{S} d(f dx + g dy + h dz) = \int_{S} f_y dy \wedge dx + f_z dz \wedge dx + g_x dx \wedge dy + g_z dz \wedge dy + h_x dx \wedge dz + h_y dy \wedge dz = \int_{S} (h_y - g_z) dy \wedge dz + (f_z - h_x) dz \wedge dx + (g_x - f_y) dx \wedge dy$ as desired.
Intuitively, it is saying that the line integral of a loop in a vector field is equal to the flux of the curl of the vector field through the enclosed surface.
Definition: Say a $k$-form $\omega$ is closed if its exterior derivative is $0$. (i.e. $d\omega = 0$)
Definition: Say a $k$-form $\omega$ is exact if it is the exterior derivative of a $(k-1)$-form. (i.e. $\exists \alpha$ s.t. $d(\alpha) = \omega$)
Proposition: Every exact form is closed.
Proof: Let $\omega$ be an exact form, then $\exists \alpha$ s.t. $d(\alpha) = \omega$. Since $d^2 = 0$, $d\omega = d(d(\alpha)) = 0$ as desired.
Motivation: Exact always implies closed. But when does closed imply exact? When can we find an anti-derivative for a closed form $\omega$? i.e. find an $\alpha$ s.t. $\omega = d\alpha$?
Ans: If forms are defined on $\mathbb{R}^n$, then the answer is always, by Poincaré Lemma.
Poincaré Lemma: If $\omega$ is a closed $k$-form ($k > 0$) on $\mathbb{R}^n$, then it is exact.
Motivation: We can show for a $k$-form $\omega$ in $\mathbb{R}^n$, we have an integral operator $L_k$ such that $(L_{k+1}d + dL_k)(\omega) = \omega$. Hence, if $\omega$ is closed, then $d\omega = 0$, hence $dL_k(\omega) = d(L_k(\omega)) = \omega$, so $\omega$ has an anti-derivative. Suffices to show the construction of such $L_k$.
Claim: $\exists L_k$ that maps $k$-form on $\mathbb{R}^n$ to $(k-1)$-form on $\mathbb{R}^n$ with the property that $\forall \omega$ s.t. $\omega$ is a $k$-form, $(L_{k+1}d + dL_k)(\omega) = \omega$.
If such function $L_k$ exists, then since $\omega$ is closed, $d\omega = 0$, so $\omega = (L_{k+1}d + dL_k)(\omega) = L_{k+1}d\omega + dL_k\omega = d(L_k\omega)$, in particular, $\omega$ has an anti-derivative, so it is exact.
Proof of Claim:
Let $\beta$ be a $k$-form on $\mathbb{R}^{n+1}$ instead. Let $(x, t)$ be the representation of a point in $\mathbb{R}^{n+1}$, where $x \in \mathbb{R}^n$ and $t \in \mathbb{R}$. Then $\beta = \Sigma_{i_1, …, i_k} f_{i_1, …, i_k} dx_{i_1, …, i_k} + \Sigma_{j_1, …, j_{k-1}} g_{j_1, …, j_{k-1}} dt \wedge dx_{j_1, …, j_{k-1}}$.
Then $d\beta = \Sigma_{i_1, …, i_k, l} \frac{\partial f_{i_1, …, i_k}}{\partial x_l} dx_l \wedge dx_{i_1, …, i_k} + \Sigma_{i_1, …, i_k} \frac{\partial f_{i_1, …, i_k}}{\partial t} dt \wedge dx_{i_1, …, i_k} + \Sigma_{j_1, …, j_{k-1}, l} \frac{\partial g_{j_1, …, j_{k-1}}}{\partial x_l} dx_l \wedge dt \wedge dx_{j_1, …, j_{k-1}}$ where $1 \leq l \leq n$.
Define operators $N: \Omega^k(\mathbb{R}^{n+1}) \rightarrow \Omega^{k-1}(\mathbb{R}^n)$ (i.e. $N$ maps $k$-forms in $\mathbb{R}^{n+1}$ to $k-1$-forms in $\mathbb{R}^{n}$) as the following:
$N(\beta) = \Sigma_{j_1, …, j_{k-1}}(\int_{0}^{1} g_{j_1, …, j_{k-1}}(x, t)dt) dx_{j_1, …, j_{k-1}}$ i.e. $N$ is defined to ignore the terms in which $dt$ doesn't appear. In the terms where $dt$ appears, it integrates out $t$ and drops the $dt$, reducing the form by one differential. Here, $\beta$, a $k$-form in $\mathbb{R}^{n+1}$, is reduced to a $(k-1)$-form in $\mathbb{R}^n$.
Subclaim: $\forall \beta \in \Omega^k(\mathbb{R}^{n+1})$, $(dN + Nd)(\beta) = \Sigma_{i_1, …, i_k} (f_{i_1, …, i_k}(x, 1) - f_{i_1, …, i_k}(x, 0)) dx_{i_1, …, i_k}$
Proof of Subclaim:
$N(d\beta) = N(\Sigma_{i_1, …, i_k, l} \frac{\partial f_{i_1, …, i_k}}{\partial x_l} dx_l \wedge dx_{i_1, …, i_k} + \Sigma_{i_1, …, i_k} \frac{\partial f_{i_1, …, i_k}}{\partial t} dt \wedge dx_{i_1, …, i_k} + \Sigma_{j_1, …, j_{k-1}, l} \frac{\partial g_{j_1, …, j_{k-1}}}{\partial x_l} dx_l \wedge dt \wedge dx_{j_1, …, j_{k-1}}) = \Sigma_{i_1, …, i_k}(\int_{0}^{1} \frac{\partial f_{i_1, …, i_k}}{\partial t} dt) dx_{i_1, …, i_k} - \Sigma_{j_1, …, j_{k-1}, l}(\int_{0}^{1} \frac{\partial g_{j_1, …, j_{k-1}}}{\partial x_l} dt) dx_l \wedge dx_{j_1, …, j_{k-1}}$
The first term in $d\beta$ is ignored since it has no $dt$. The second and third terms contain $dt$, hence are integrated. The $-$ sign appears due to the shifting of $dt$ forward by one position.
On the other side,
$dN(\beta) = d(N(\beta)) = d(\Sigma_{j_1, …, j_{k-1}}(\int_{0}^{1} g_{j_1, …, j_{k-1}}(x, t)dt) dx_{j_1, …, j_{k-1}}) = \Sigma_{j_1, …, j_{k-1},l} \frac{\partial(\int_{0}^{1} g_{j_1, …, j_{k-1}}(x, t)dt)}{\partial x_l} dx_l \wedge dx_{j_1, …, j_{k-1}} = \Sigma_{j_1, …, j_{k-1}, l}(\int_{0}^{1} \frac{\partial g_{j_1, …, j_{k-1}}}{\partial x_l} dt) dx_l \wedge dx_{j_1, …, j_{k-1}}$
In the last equality, we interchanged the integration and differentiation, because $g$ is smooth so it and its partial derivatives are continuous.
Hence: $(dN + Nd)(\beta) = \Sigma_{i_1, …, i_k}(\int_{0}^{1} \frac{\partial f_{i_1, …, i_k}}{\partial t} dt) dx_{i_1, …, i_k} = \Sigma_{i_1, …, i_k} (f_{i_1, …, i_k}(x, 1) - f_{i_1, …, i_k}(x, 0)) dx_{i_1, …, i_k}$ as desired, where the last equality comes from the Fundamental Theorem of Calculus. (whew!)
Now we define the cone map $\rho(x, t) = tx$. $\rho: \mathbb{R}^{n+1} \rightarrow \mathbb{R}^n$.
Claim: $L = N \circ \rho^*$ is our desired construction i.e. $(Ld + dL)\omega = \omega$
Proof of Claim:
$Ld + dL = N\rho^* d + dN \rho^* = Nd\rho^* + dN\rho^* = (Nd + dN)\rho^*$ where the second equality comes from the commutativity of $\rho^*$ (a pullback) and $d$.
Let $\omega = h dx_{i_1, …, i_k}$ be a simple $k$-form in $\mathbb{R}^n$.
$\rho^*(\omega) = \rho^*(h dx_{i_1, …, i_k}) = (\rho^* h)\, d\rho_{i_1} \wedge … \wedge d\rho_{i_k} = h(tx)\, d(tx_{i_1}) \wedge … \wedge d(tx_{i_k})$
$= h(tx)( (t dx_{i_1} + x_{i_1} dt) \wedge … \wedge (t dx_{i_k} + x_{i_k}dt) ) = h(tx)(t^k dx_{i_1, …, i_k}) + \text{terms with dt}$
The second equality is the rule for pulling back a simple form: $\rho^*$ replaces $h$ by $h \circ \rho$ and each $dx_{i_l}$ by $d\rho_{i_l}$, the differential of the corresponding component of $\rho$; here $\rho_{i_l}(x, t) = t x_{i_l}$, so $d\rho_{i_l} = t\, dx_{i_l} + x_{i_l}\, dt$.
Hence $(Nd + dN) \circ \rho^*(h dx_{i_1, …, i_k}) = (Nd + dN)(h(tx)(t^k dx_{i_1, …, i_k}) + \text{terms with } dt)$; by the subclaim, only the coefficients of the terms without $dt$ affect the result, so this equals $(h(1\cdot x)1^k - h(0\cdot x)0^k)dx_{i_1, …, i_k} = h\, dx_{i_1, …, i_k}$.
Hence $(Ld+dL)(h dx_{i_1, …, i_k}) = (Nd + dN) \circ \rho^*(h dx_{i_1, …, i_k}) = h dx_{i_1, …, i_k}$.
Since $L$ and $d$ are linear, we have $(Ld+dL)(\omega) = \omega$ for any $k$-form $\omega$, hence closed forms on $\mathbb{R}^n$ are exact. (Reminder: $L = N \circ \rho^*$.)
I am very grateful to Professor Morgan Weiler for her YouTube video on Poincaré Lemma, in which she proved Poincaré Lemma inductively and helped me visualize the key steps in the proof: https://www.youtube.com/watch?v=C-LhZ9Tkc2g (she's also previously from Berkeley! )
Corollary 47: If $U$ is diffeomorphic to $\mathbb{R}^n$, then all closed forms on $U$ are exact.
Proof of Corollary 47: Let $T:U \rightarrow \mathbb{R}^n$ be a diffeomorphism and assume $\omega$ closed $k$-form on $U$.
Let $\alpha = (T^{-1})^*\omega$ be a $k$-form on $\mathbb{R}^n$. Since $\omega$ closed, $d\omega = 0$, hence $d\alpha = d( (T^{-1})^*\omega) = (T^{-1})^* d\omega = (T^{-1})^*0 = 0$, so $\alpha$ is closed. By Poincaré Lemma, $\exists (k-1)$-form $\mu$ in $\mathbb{R}^n$ with $\alpha = d\mu$.
Then $dT^*\mu = T^*d\mu = T^*\alpha = T^* \circ (T^{-1})^*\omega = (T^{-1} \circ T)^*\omega = \omega$, hence $\omega$ has an antiderivative $T^*\mu$.
Hence, $\omega$ is exact.
Corollary 48: Locally, closed forms defined on open subsets of $\mathbb{R}^n$ are exact.
Proof of Corollary 48: Locally, an open subset of $\mathbb{R}^n$ is diffeomorphic to $\mathbb{R}^n$: every point has an open ball neighborhood contained in the subset, and an open ball is diffeomorphic to $\mathbb{R}^n$. Now apply Corollary 47.
Corollary 49: If $U \subset \mathbb{R}^n$ is open and star-like (in particular, if $U$ is convex), then closed forms on $U$ are exact. A starlike set $U \subset \mathbb{R}^n$ contains a point $p$ s.t. the line segment from each $q \in U$ to $p$ lies in $U$.
Proof of Corollary 49: Every starlike open sets in $\mathbb{R}^n$ is diffeomorphic to $\mathbb{R}^n$. (Why?)
Corollary 50: A smooth vector field $F$ on $\mathbb{R}^3$ (or on an open set diffeomorphic to $\mathbb{R}^3$) is the gradient of a scalar function if and only if its curl is everywhere zero.
Proof of Corollary 50:
($\Leftarrow$)
Let $F = (f, g, h)$. Define $\omega = f dx + g dy + h dz$. Then $d\omega = f_y dy \wedge dx + f_z dz \wedge dx + g_x dx \wedge dy + g_z dz \wedge dy + h_x dx \wedge dz + h_y dy \wedge dz = (g_x - f_y) dx \wedge dy + (h_y - g_z) dy \wedge dz + (f_z - h_x) dz \wedge dx = 0$
The last equality follows since $\nabla \times F = 0$, so $g_x - f_y = 0$, $h_y - g_z = 0$ and $f_z - h_x = 0$. Hence, $\omega$ is closed and therefore exact, i.e. it has an antiderivative $\phi$. Note $d\phi = \omega \Leftrightarrow \phi_x dx + \phi_y dy + \phi_z dz = f dx + g dy + h dz$, hence $\nabla \phi = F$ as desired.
($\Rightarrow$)
For completeness, let $F = \nabla \phi = [\phi_x, \phi_y, \phi_z]^T$. Then $\nabla \times F = [\phi_{zy} - \phi_{yz}, \phi_{xz} - \phi_{zx}, \phi_{yx} - \phi_{xy}]^T = 0$, where the last equality holds by the equality of mixed partials (Clairaut's theorem).
Corollary 51: A smooth vector field on $\mathbb{R}^3$ (or on an open set diffeomorphic to $\mathbb{R}^3$) has everywhere zero divergence iff it is the curl of some other vector field.
Proof of Corollary 51:
($\Leftarrow$)
Let $G = (A, B, C)$. Define $\omega = A dy \wedge dz + B dz \wedge dx + C dx \wedge dy$. Since $\nabla \cdot G = A_x + B_y + C_z = 0$, $d\omega = A_x dx \wedge dy \wedge dz + B_y dy \wedge dz \wedge dx + C_z dz \wedge dx \wedge dy = (A_x + B_y + C_z) dx \wedge dy \wedge dz = 0$, hence $\omega$ closed and therefore exact. Hence, exists anti-derivative $\alpha = f dx + g dy + h dz$ with $d\alpha = f_y dy \wedge dx + f_z dz \wedge dx + g_x dx \wedge dy + g_z dz \wedge dy + h_x dx \wedge dz + h_y dy \wedge dz = (g_x - f_y) dx \wedge dy + (h_y - g_z) dy \wedge dz + (f_z - h_x) dz \wedge dx = A dy \wedge dz + B dz \wedge dx + C dx \wedge dy = \omega$.
So, $A = (h_y - g_z)$, $B = (f_z - h_x)$ and $C = (g_x - f_y)$. Define $F = (f, g, h)$, then $\nabla \times F = [h_y - g_z, f_z - h_x, g_x - f_y]^T = G$.
($\Rightarrow$)
For completeness, let $F = (f, g, h)$ and $G = \nabla \times F$. Then $G = [h_y - g_z, f_z - h_x, g_x - f_y]^T$ so $\nabla \cdot G = h_{yx} - g_{zx} + f_{zy} - h_{xy} + g_{xz} - f_{yz} = 0$.
We start with developing further the theory of Fourier series for functions of a fixed period $L$, so as to (hopefully) motivate the Discrete Fourier Transform. Then, I will introduce FFT. If there is enough time, I will go through Wavelet Transform and Uncertainty Principle.
Fourier series for functions of fixed period $L$
A good example is to solve Tao 5.5.6.
Let $L > 0$ and $f:\mathbb{R} \rightarrow \mathbb{C}$ be continuous and $L$-periodic. Define $c_n = \frac{1}{L} \int_{[0,L]} f(x)e^{\frac{-2 \pi inx}{L}} dx$.
(a)
Define $g(x) = f(Lx)$. Note that $g(x) \in C(\mathbb{R}/\mathbb{Z}; \mathbb{C})$. Hence, by Fourier's Theorem, $\text{lim}_{N \rightarrow \infty} \| g - \Sigma_{n=-N}^{n=N} \hat{g}(n)e_n \|_2 = 0 \Rightarrow \text{lim}_{N \rightarrow \infty} \| g - \Sigma_{n=-N}^{n=N} \hat{g}(n)e_n \|_2^2 = 0 \Rightarrow \text{lim}_{N \rightarrow \infty} \int_{[0,1]} |g(x) - \Sigma_{n=-N}^{N}\hat{g}(n)e_n(x)|^2 dx = 0$.
Applying a change of variable $x \rightarrow \frac{x}{L}$, we have: $\int_{[0,1]} |g(x) - \Sigma_{n=-N}^{N}\hat{g}(n)e_n(x)|^2 dx = \int_{[0,L]} |g(\frac{x}{L}) - \Sigma_{n=-N}^{N}\hat{g}(n)e_n(\frac{x}{L})|^2 \frac{dx}{L} = \frac{1}{L}\int_{[0,L]} |f(x) - \Sigma_{n=-N}^{N}\hat{g}(n)e_n(\frac{x}{L})|^2 dx$
$\hat{g}(n) = \langle g, e_n \rangle = \int_{[0, 1]} g(x)e^{-2\pi inx} dx = \int_{[0, L]} g(x/L)e^{\frac{-2\pi inx}{L}} \frac{dx}{L} = \frac{1}{L} \int_{[0, L]} f(x)e^{\frac{-2\pi inx}{L}} dx = c_n$
Hence, $\text{lim}_{N \rightarrow \infty} \int_{[0,1]} |g(x) - \Sigma_{n=-N}^{N}\hat{g}(n)e_n(x)|^2 dx = \frac{1}{L} \text{lim}_{N \rightarrow \infty} \int_{[0,L]} |f(x) - \Sigma_{n=-N}^{N}c_n e^{\frac{2\pi inx}{L}}|^2 dx = 0$
This is exactly what we want to show, that $\Sigma_{n=-\infty}^{\infty} c_n e^{\frac{2 \pi inx}{L}}$ converges in $L_2$ to $f$.
(b, c) Subsequently, the proofs for these two parts are entirely analogous to the proofs in Tao, using the same change of variables.
Discrete Fourier Transform
The discrete Fourier transform (DFT) converts a sequence of $N$ numbers $(y_j)_{j=0}^{N-1} = (y_0, …, y_{N-1}) \in \mathbb{C}$ to a new sequence of $N$ numbers $c_0, …, c_{N-1} \in \mathbb{C}$, given by: $c_k = \Sigma_{j=0}^{N-1} y_j e^{\frac{-2 \pi ijk}{N}}$
Think of $(y_j)$ as the values of a function or signal at equally spaced times $x = 0, …, N - 1$. The output $c_k$ encodes the amplitude and phase of $e^{\frac{2 \pi ikx}{N}}$. Key idea: approximate the signal by a linear combination of waves that fit an integer number of periods into the window of length $N$.
Normally, we write the wave equation as $y(x) = \cos(kx) + i \sin(kx) = e^{ikx}$, where $k = \frac{2\pi}{\lambda}$ is the wavenumber and $\lambda$ is the wavelength.
Hence, the wave corresponding to $c_k$, which is $e^{\frac{2 \pi ikx}{N}}$, has wavenumber $\frac{2\pi k}{N}$, thus it has a wavelength of $\lambda = \frac{N}{k}$. It suffices to compute the $c_k$ to find the coefficients of an approximation of the original signal $(y_j)_j$ by a linear combination of these waves.
To recover a signal from its coefficients (or vice versa), it suffices to solve a system of linear equations; hence, introduce the Vandermonde matrix for the roots of unity.
Computing the DFT then amounts to multiplying this Vandermonde matrix with the signal vector.
Remark: This problem is also equivalent to evaluating a polynomial at the roots of unity, the so-called changing from coefficient representation to value representation.
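The definition above can be sketched directly; here is a minimal naive DFT (my own illustrative code, with a made-up test signal `wave`):

```python
import cmath

def dft(y):
    # Naive O(N^2) DFT: c_k = sum_j y_j * e^{-2*pi*i*j*k/N}. This is
    # exactly the Vandermonde matrix of the N-th roots of unity
    # applied to the signal vector.
    N = len(y)
    return [sum(y[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

# A pure wave with one period in the window should put all its
# weight on c_1.
N = 8
wave = [cmath.exp(2j * cmath.pi * j / N) for j in range(N)]
coeffs = dft(wave)
print([round(abs(c), 6) for c in coeffs])  # |c_1| = N, the rest ~ 0
```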
Fast Fourier Transform
A naive implementation of the Discrete Fourier Transform takes $O(N^2)$. However, the Fast Fourier Transform performs it in $O(N \log N)$ by exploiting the symmetry of the roots of unity.
An illustration of the idea: break the polynomial into even and odd terms, $p(x) = p_e(x^2) + x\, p_o(x^2)$, so evaluating at $\pm x$ recycles the same two evaluations $p_e(x^2)$ and $p_o(x^2)$. Use $1$ and $-1$ as an example: $p(1) = p_e(1) + p_o(1)$ and $p(-1) = p_e(1) - p_o(1)$.
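A minimal recursive sketch of this even/odd splitting (my own code, assuming the input length is a power of two):

```python
import cmath

def fft(y):
    # Radix-2 Cooley-Tukey FFT (len(y) must be a power of two): split
    # into even- and odd-indexed coefficients, recurse, then combine.
    # The evaluations at a root of unity w and at -w share the same
    # even/odd sub-evaluations, which is where the savings come from.
    N = len(y)
    if N == 1:
        return list(y)
    even = fft(y[0::2])
    odd = fft(y[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t           # evaluation at w^k
        out[k + N // 2] = even[k] - t  # evaluation at -w^k, recycled
    return out

print(fft([1, 1, 1, 1]))  # matches the naive DFT: [4, 0, 0, 0] up to rounding
```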
Inverse Fourier Transform
$y_j = \frac{1}{N} \Sigma_{k=0}^{N-1} c_k e^{\frac{2 \pi ijk}{N}}$
The trick here is to notice that the inverse matrix is just another Vandermonde matrix with $\bar{\omega}$ instead of $\omega$.
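A quick round-trip sketch of this inverse formula (my own code; the forward transform is repeated so the example is self-contained, and `signal` is arbitrary):

```python
import cmath

def dft(y):
    # Forward transform: c_k = sum_j y_j * e^{-2*pi*i*j*k/N}.
    N = len(y)
    return [sum(y[j] * cmath.exp(-2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

def idft(c):
    # Inverse DFT: y_j = (1/N) sum_k c_k * e^{+2*pi*i*j*k/N} -- the same
    # Vandermonde structure with conjugated roots of unity, scaled by 1/N.
    N = len(c)
    return [sum(c[k] * cmath.exp(2j * cmath.pi * j * k / N) for k in range(N)) / N
            for j in range(N)]

signal = [1, 2, 3, 4]  # made-up signal
recovered = idft(dft(signal))
print([round(v.real, 6) for v in recovered])  # recovers [1.0, 2.0, 3.0, 4.0]
```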
Applications
Many applications come from the ability to encode a signal:
Wavelet Transformation and the Uncertainty Principle
The problem with the Fourier Transform is that it produces a periodic function over the whole space. What if we just want a localized wave? This is the so-called “wavelet”.
A function $\psi:\mathbb{R} \rightarrow \mathbb{C}$ in $L^2(\mathbb{R})$ (the square integrable functions, i.e. $\int_{-\infty}^{\infty} |\psi|^2 < \infty$) is said to be an orthonormal wavelet if its translations and dilations can be used to define a Hilbert basis of $L^2(\mathbb{R})$.
The Heisenberg Uncertainty Principle from physics states: $\Delta E \Delta t \geq \frac{\hbar}{2}$. Translated to signal processing, that becomes $\Delta \omega \Delta t \geq \frac{1}{2}$. Intuitively, it means we cannot resolve both the frequency and the time of a signal to arbitrary precision.
References
Reminder of definitions to self:
Motivation: why consider simplex? The partitioning of sets into simplexes is called triangulation, which plays important role in topology. Also, it allows for the introduction of boundaries.
Theorem [Stokes]: If $\Psi$ is a $k$-chain of class $\mathcal{C}''$ in an open set $V \subset \mathbb{R}^m$ and if $\omega$ is a $(k-1)$-form of class $\mathcal{C}'$ in $V$, then $\int_{\Psi} d\omega = \int_{\partial\Psi} \omega$.
Remark: $k = m = 1$ is Fundamental Theorem of Calculus. $k = m = 2$ is Green's Theorem.
Proof (here we go):
Firstly, we perform some reductions. We know $\Psi = \Sigma \Phi_i$ and $\partial \Psi = \Sigma \partial \Phi_i$. It suffices to show $\int_{\Phi} d\omega = \int_{\partial \Phi} \omega$ for every oriented $k$-simplex $\Phi$ of class $\mathcal{C}''$ in $V$, because then $\int_{\Psi} d\omega = \Sigma_{i=1}^{r} \int_{\Phi_i} d\omega = \Sigma_{i=1}^{r} \int_{\partial \Phi_i} \omega = \int_{\partial \Psi} \omega$ (the middle equality is the result applied to each simplex).
Fix an oriented $k$-simplex $\Phi$ and let $\sigma = [0, e_1, …, e_k]$ be the oriented affine $k$-simplex with parameter domain $Q^k$ which is defined by the identity mapping. (?)
Do some work on LHS: $\int_{T\sigma} d\omega = \int_{\sigma} (d\omega)_T = \int_{\sigma} d(\omega_T)$
Do some work on RHS: $\int_{\partial(T\sigma)} \omega = \int_{T(\partial \sigma)} \omega = \int_{\partial \sigma} \omega_T$
Suffices to show: $\int_{\sigma} d\lambda = \int_{\partial \sigma} \lambda$ for the special simplex and for every $(k-1)$-form $\lambda$ of class $\mathcal{C}'$ in $E$.
…
Definition [exact]: Let $\omega$ be a $k$-form in open set $E \subset \mathbb{R}^n$. If there is a $(k-1)$-form $\lambda \in E$ s.t. $\omega = d\lambda$, then say $\omega$ is exact in $E$.
Definition [closed]: If $\omega$ is of class $\mathcal{C}'$ and $d\omega = 0$, then $\omega$ is closed.
Theorem: Suppose $E \subset \mathbb{R}^n$ convex and open, $f \in \mathcal{C}'(E)$, $p \in \mathbb{Z}$ s.t. $1 \leq p \leq n$ and $\frac{\partial f}{\partial x_j}(x) = 0$ for $p < j \leq n, x \in E$. Then, $\exists F \in \mathcal{C}'(E)$ s.t. $\frac{\partial F}{\partial x_p}(x) = f(x)$ and $\frac{\partial F}{\partial x_j}(x) = 0$ for $p < j \leq n, x \in E$.
Proof:
Theorem [Poincare's Lemma]: If $E \subset \mathbb{R}^n$ convex and open, $k \geq 1$, $\omega$ is a $k$-form of class $\mathcal{C}'$ in $E$ and $d\omega = 0$, then there is a $(k-1)$-form $\lambda$ in $E$ s.t. $\omega = d\lambda$.
Closed forms are exact in convex sets.
Proof:
Theorem: Fix $k$ with $1 \leq k \leq n$. Let $E \subset \mathbb{R}^n$ be an open set in which every closed $k$-form is exact. Let $T$ be a 1-1 $\mathcal{C}''$-mapping of $E$ onto an open set $U \subset \mathbb{R}^n$ whose inverse $S$ is of class $\mathcal{C}''$. Then every closed $k$-form in $U$ is exact in $U$.
Proof:
The Riemann integral of a function $f$ is defined to be the upper Darboux integral ($U(f) = \text{inf}\{U(f, P) | P \text{ partition of } [a, b]\}$) or the lower Darboux integral ($L(f) = \text{sup}\{L(f, P) | P \text{ partition of } [a, b]\}$); the two agree exactly when the function is Riemann integrable. It relies heavily on the notion of a partition of an interval $[a, b]$.
On the other hand, the Lebesgue integral of a function $f$ is defined to be the measure of the undergraph of $f$ (i.e. $m (\mathcal{U}(f))$). It relies heavily on the concept of measure and measurability.
I feel that Lebesgue's integral is the more intuitive way to define the integral, because unlike the Riemann integral, it does not rely heavily on an axis. With the Riemann integral, we run into problems when we take the $\text{sup}$ or $\text{inf}$ over an interval on an axis, which may result in the upper and lower integrals disagreeing; but for the Lebesgue integral, we fundamentally only care about the region under the graph, so integrals like $\int f(x) dx = 0$ where $f(x) = 1 \, \forall x\in \mathbb{Q}$ and $f(x) = 0$ elsewhere can be defined.
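To make the partition-based definition concrete, here is a small sketch (my own example) of upper and lower Darboux sums for the monotone function $f(x) = x^2$ on $[0,1]$, where on each subinterval the inf and sup sit at the left and right endpoints; both sums squeeze toward $1/3$:

```python
def darboux_sums(f, a, b, n):
    # Lower and upper Darboux sums on a uniform partition, valid for a
    # monotone increasing f: the inf on each piece is at the left
    # endpoint and the sup at the right endpoint.
    h = (b - a) / n
    lower = sum(f(a + i * h) * h for i in range(n))
    upper = sum(f(a + (i + 1) * h) * h for i in range(n))
    return lower, upper

lo, up = darboux_sums(lambda x: x * x, 0.0, 1.0, 1000)
print(lo, up)  # both approach 1/3; the gap shrinks like 1/n
```

By contrast, for the rational-indicator function above no refinement helps: every subinterval contains both rationals and irrationals, so $U(f, P) = 1$ and $L(f, P) = 0$ for every partition, and the Riemann integral does not exist.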
I also feel that Lebesgue Theory is richer, so it feels more interesting.
<to be updated after Spring Break, where I will do a formal compilation>
Here I complete the proofs of several lemmas which Tao left as exercises. (Some of the proofs are presented by Prof Peng in class.)
Ex 8.1.1 If $f$ and $g$ are simple functions, their images are finite. Let the image of $f$ be $\{c_1, …, c_N\}$ and the image of $g$ be $\{d_1, …, d_M\}$. Then the image of $f+g$ is a subset of $\{c_i + d_j : 1 \leq i \leq N, 1 \leq j \leq M\}$ which is finite, hence $f+g$ is a simple function. Similarly, the set $\{c \cdot c_1, …, c \cdot c_N\}$ is finite, so $cf$ is also a simple function.
Ex 8.1.2 For each $i \in \{1, …, N\}$, consider the open interval $I_i = (c_i - \epsilon, c_i + \epsilon)$, where $\epsilon$ is chosen small enough that these intervals are pairwise disjoint. Then since $f$ is measurable, $f^{-1}(I_i)$ is measurable. Denote $E_i = f^{-1}(I_i)$. Note that $E_i \cap E_j = \phi$ for $i \neq j$, since $x \in E_i$ and $x \in E_j$ would imply $f(x) = c_i$ and $f(x) = c_j$ (contradiction!). Hence, the $E_i$s are disjoint. Thus $f = \Sigma^{N}_{i=1} c_i \chi_{E_i}$ as desired.
Ex 8.1.3 Consider $f_n(x) := \text{sup}\{\frac{j}{2^n} : j \in \mathbb{Z}^{+} \cup \{0\}, \frac{j}{2^n} \leq \text{min}(f(x), 2^n)\}$ (i.e. $f_n(x)$ is the greatest integer multiple of $2^{-n}$ that exceeds neither $f(x)$ nor $2^n$). Clearly, the $f_n$ are nonnegative and increasing (since the restriction on multiples of $2^{-n}$ is relaxed as $n$ increases). The $f_n$ are also simple functions, since the set of possible values $f_n(x)$ can take on is a subset of $\{\frac{j}{2^n} : j \in \mathbb{Z}^{+} \cup \{0\}, 0 \leq j \leq (2^n)^2 \}$. For each point $x \in \Omega$, $\exists N \in \mathbb{N}$ s.t. $2^N > f(x)$, hence $n > N$ implies $|f(x) - f_n(x)| = f(x) - f_n(x) \leq 2^{-n}$. Hence, $\text{lim}_{n→\infty} |f(x) - f_n(x)| = 0$ by the squeeze lemma. Thus, there exists a sequence of nonnegative, increasing simple functions $(f_n)_n$ that converges pointwise to $f$.
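This construction is easy to sketch numerically (my own illustrative code; the sample function $f$ and the point $x$ are made up):

```python
import math

def f_n(f, x, n):
    # Greatest integer multiple of 2^{-n} exceeding neither f(x) nor 2^n,
    # following the construction in Ex 8.1.3.
    return math.floor(min(f(x), 2.0 ** n) * 2 ** n) / 2 ** n

f = lambda t: t * t + 0.3  # made-up nonnegative sample function
x = 1.7
approx = [f_n(f, x, n) for n in range(1, 12)]
print(approx[0], approx[-1], f(x))  # the f_n(x) increase toward f(x)
```

Each `approx` entry stays below `f(x)` and the sequence is nondecreasing, with the gap bounded by $2^{-n}$ once $2^n > f(x)$, matching the proof above.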
Ex 8.2.1 (a') Note that $0$ as a function minorizes $f$, hence $0 \leq \int_{\Omega} f \leq \infty$. If $f(x) = 0$ a.e., then for any simple function $s$ majorized by $f$, if $s = \Sigma c_i \mathbb{1}_{E_i}$ for some $c_i > 0$, then $E_i \subset Z$ where $Z$ is a measure zero set (by the definition of almost everywhere). Hence each such $E_i$ is a null set, so $\int_{\Omega} s = 0$ and therefore $\int_{\Omega} f = 0$. On the other hand, if $\int_{\Omega} f = 0$, then let $E_n$ be the pre-image of the open interval $(\frac{1}{n}, \infty)$, $n \in \mathbb{N}$. Then $E_n$ is measurable (since $f$ is a measurable function), and $E_n$ must have measure $0$: otherwise, the simple function $g = \frac{1}{n}\mathbb{1}_{E_n}$ minorizes $f$ and has integral $\frac{1}{n} m(E_n) > 0$, contradicting $\int_{\Omega} f = 0$. Since this holds true $\forall n$, the pre-image of $(0, \infty)$ must have measure $0$ (by upward measure continuity), hence $f = 0$ a.e.
(b') For any simple function $s$ that is majorized by $f$, $cs$ is also majorized by $cf$ and vice-versa. Hence, $\int_{\Omega}cf = c\int_{\Omega}f$.
(c') Any simple function $s$ that is majorized by $f$ is also majorized by $g$. Hence $\int_{\Omega}f \leq \int_{\Omega}g$ simply because the latter is the $\text{sup}$ of a bigger set.
(d') Let $Z = \{x | f(x) \neq g(x)\}$, therefore $Z$ is a zero set (by definition of a.e.). Let $Z^{c}$ denote $\Omega \setminus Z$. Then $\forall s$ simple function, $\int s = \int_{Z^c} s$. Then $\int f = \int_{Z^c} f = \int_{Z^c} g = \int g$.
(e') Similar to (c'), any simple function that is majorized by $f_{X_{\Omega'}}$ is also majorized by $f$. Hence $\int_{\Omega'} f = \int_{\Omega} f_{\chi_{\Omega'}} \leq \int_{\Omega} f$.
Ex 8.2.2 By definition, for any simple function $s$ that minorizes $f$ and $t$ that minorizes $g$, $s + t$ (which is also a simple function) minorizes $f + g$. Thus, let $\epsilon > 0$, then let $\int_{\Omega}f = A$ and $\int_{\Omega}g = B$, then $\exists s, t$ simple functions s.t. $s$ minorizes $f$ and $t$ minorizes $g$ and $\int_{\Omega}s \geq A - \frac{\epsilon}{2}$ and $\int_{\Omega}t \geq B - \frac{\epsilon}{2}$. Thus, $\int_{\Omega}(f+g) \geq A + B - \epsilon$ $\forall \epsilon>0$. Thus, $\int_{\Omega}(f+g) \geq \int_{\Omega}f + \int_{\Omega}g$.
Ex 8.2.3 Consider the partial sum $s_N = \Sigma_{n=1}^{N}g_n$. Then $s_N$ is nonnegative and $(s_n)_n$ is a sequence of increasing functions. By Monotone Convergence Theorem, $\int_{\Omega} \text{sup}s_n = \text{sup} \int_{\Omega} s_n$. Note $\text{sup}s_n = \text{lim}_{n → \infty} \Sigma_{i=1}^{n}g_i = \Sigma_{i=1}^{\infty}g_i$. Also, $\int_{\Omega} s_n = \int_{\Omega} \Sigma_{i=1}^{n}g_i = \Sigma_{i=1}^{n} \int_{\Omega} g_i$ (interchange of addition and integration), hence $\text{sup} \int_{\Omega} s_n = \Sigma_{i=1}^{\infty} \int_{\Omega} g_i$. Thus $\int_{\Omega} \Sigma_{i=1}^{\infty}g_i = \Sigma_{i=1}^{\infty} \int_{\Omega} g_i$, as desired.
Ex 8.2.4 Consider the function $\Sigma_{n=1}^{\infty}f_n$. The function $s = \chi_{[1, 2)}$ minorizes it, therefore, $\int_{\Omega} \Sigma_{n=1}^{\infty}f_n \geq 1$. However, $\int_{\mathbb{R}} f_n = 0$ (actually, it might not even be defined in this case, since no simple function minorizes it). Thus, $\int_{\Omega} \Sigma_{n=1}^{\infty}f_n \neq \Sigma_{n=1}^{\infty} \int_{\mathbb{R}}f_n$.
Ex 8.2.5 Firstly, note that the set $E = \{x \in \Omega: f(x) = +\infty\}$ is measurable. Let $E_n$ be the preimage of $(n, \infty)$, then the $E_n$ are all measurable (since $f$ is measurable). Thus, since $E_n \downarrow E$, $E$ is measurable by downward measure continuity. Suppose otherwise, that the set $E$ has nonzero measure. Then consider the simple function $s_n = n\cdot \chi_E$. Then, $\forall n \in \mathbb{N}$, $s_n$ minorizes $f$. Hence, $\int_{\Omega} f \geq \int_{E} f \geq \int_{E} s_n = n \cdot m(E)$. If $m(E)$ is nonzero, $\text{lim}_{n → \infty} n \cdot m(E) = \infty$. (contradiction!) Hence, $f$ must be finite a.e.
Ex 8.2.6 Consider the indicator functions $\chi_{\Omega_n}$.
Ex 8.2.7 HW5
Ex 8.2.8
Ex 8.2.9 HW5
Ex 8.2.10 HW5
Ex 8.3.1 $|\int_{\Omega} f| = |\int_{\Omega} f^+ - \int_{\Omega} f^-| \leq |\int_{\Omega} f^+| + |\int_{\Omega} f^-| = \int_{\Omega} f^+ + \int_{\Omega} f^- = \int_{\Omega} (f^+ + f^-) = \int_{\Omega} |f|$ as desired.
The first equality is by definition. The first inequality is by the triangle inequality of $\mathbb{R}$.
Ex 8.3.2 HW6
Ex 8.3.3 HW6
One of the main reasons why I took this course is to gain a deeper understanding of probability, so here are some of my attempts in making this connection.
Preliminaries: The Borel sets on $\mathbb{R}$, denoted $\mathcal{B}(\mathbb{R})$, form the $\sigma$-field on $\mathbb{R}$ generated by all open intervals of the form $(a, b)$ with $a < b$.
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. $\mathcal{F}$ is a $\sigma$-algebra denoting a collection of subsets of $\Omega$, i.e. $\mathcal{F}$ contains the empty set and is closed under complement and countable union.
The sets in $\mathcal{F}$ are the measurable sets. $\mathbb{P}: \mathcal{F} \rightarrow [0, 1]$ is a probability measure, which satisfies all the axioms of a measure together with $\mathbb{P}[\Omega] = 1$.
A random variable $X$ is a measurable function $X: \Omega \rightarrow \mathbb{R}$, i.e. $X^{-1}(B) = \{\omega \in \Omega: X(\omega) \in B\} \in \mathcal{F}$ $\forall B \in \mathcal{B}(\mathbb{R})$.
Let $(X_n)_n$ be a sequence of random variables. For $\omega \in \Omega$, the sequence of random variables acting on this $\omega$ gives a sequence of real numbers: $(X_n(\omega))_n$. Say $(X_n)_n$ converges almost surely to $X$ if $\mathbb{P}[\{\omega: \text{lim}_{n \rightarrow \infty} X_n(\omega) = X(\omega)\}] = 1$.
Borel-Cantelli Lemmas
https://www.colorado.edu/amath/sites/default/files/attached-files/almost_sure_conv.pdf
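Here is a quick Monte Carlo sketch of my own (not from the linked notes) illustrating the first Borel–Cantelli lemma: if $\sum_n \mathbb{P}(A_n) < \infty$, then almost surely only finitely many of the $A_n$ occur. The choice $\mathbb{P}(A_n) = 1/n^2$ (with the $A_n$ independent) is illustrative, since $\sum 1/n^2 = \pi^2/6 < \infty$:

```python
import random

random.seed(0)

def occurrences(n_max=10_000):
    """Count how many events A_n (n <= n_max), with P(A_n) = 1/n^2 and the
    A_n independent, occur along one simulated sample path."""
    return sum(1 for n in range(1, n_max + 1) if random.random() < 1.0 / n**2)

counts = [occurrences() for _ in range(200)]
# counts stay small across all sample paths (A_1 always occurs since
# P(A_1) = 1, and the expected count is about 1.64), consistent with
# only finitely many A_n occurring almost surely
```

Contrast with the second Borel–Cantelli lemma: for independent events with $\mathbb{P}(A_n) = 1/n$, the sum diverges and infinitely many $A_n$ occur almost surely, so the counts would grow without bound as `n_max` increases.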
- Tao Section 3: Here I attempt to write a concise summary/key idea of Tao Section 3 without all the (why?)
Say a set $A$ is a coset of $\mathbb{Q}$ if $A = x + \mathbb{Q}$ for some $x \in \mathbb{R}$.
Claim 1: $x + \mathbb{Q}$ and $y + \mathbb{Q}$ are either disjoint or entirely the same
Claim 2: Every coset has a non-empty intersection with $[0, 1]$
Consider the set of all cosets, and from each of them pick a representative $x_A$ such that $x_A \in [0, 1]$ (possible by Claim 2; this uses the axiom of choice, since there are uncountably many cosets — each coset is countable, yet the cosets partition the uncountable set $\mathbb{R}$). Construct $E = \{x_A\}$. Construct $X = \bigcup_{q \in \mathbb{Q} \cap [-1, 1]} (q + E)$.
The key idea here is that, due to this construction of $X$, $[0, 1] \subset X \subset [-1, 2]$, hence we can bound the outer measure of $X$ by monotonicity (if it exists).
On one hand, any element $x \in X$ can be expressed as $x = q + e$ where $q \in \mathbb{Q} \cap [-1, 1]$ and $e \in E \subset [0, 1]$; hence $x \in [-1, 2]$, so $m^*(X) \leq 3$ by monotonicity.
On the other hand, for any $x \in [0, 1]$, $x \in x + \mathbb{Q}$, so $\exists y \in E \cap [0, 1]$ s.t. $x = y + q$ where $q \in \mathbb{Q}$. (in particular, this $y$ is the representative of the coset that $x$ belongs to) Then, since $|x - y| \leq 1$, $q \in [-1, 1]$, so $x \in X$. Hence, since $[0, 1] \subset X$, $m^*(X) \geq 1$ by monotonicity.
This yields $1 \leq m^*(X) \leq 3$.
The nail in the coffin comes from the assumption that $m^*(\cdot)$ satisfies countable additivity: since $X = \bigcup_{q \in \mathbb{Q} \cap [-1, 1]} (q + E)$, and $\mathbb{Q} \cap [-1, 1]$ is countable (because $\mathbb{Q}$ is countable, so is its intersection with any other set), we would have $m^*(X) = \sum_q m^*(q + E)$. (*) But by the translational invariance property of outer measure, $m^*(q + E) = m^*(E)$ for every $q \in \mathbb{Q} \cap [-1, 1]$. Hence, the RHS sums the same value $m^*(E)$ countably many times. Thus, the RHS takes on either the value $0$ or $\infty$, corresponding to $m^*(E) = 0$ or $m^*(E) > 0$ respectively. Either way, it does not fall in the bounds $1 \leq m^*(X) \leq 3$.
(*) Note that only here did we use Claim 1. It shows that for two rational numbers $q_1, q_2 \in [-1, 1]$, the two sets $q_1 + E$ and $q_2 + E$ are disjoint. Suppose otherwise, then $\exists x_A, x_B \in E$ s.t. $q_1 + x_A = q_2 + x_B$, then $x_A - x_B = q_2 - q_1 \in \mathbb{Q}$ which implies $x_A, x_B$ belong to the same coset! This is impossible since only one representative is chosen from each coset by construction.
The failure of finite additivity follows as well (but non-trivially, in my honest opinion). By countable subadditivity applied to $X = \bigcup_{q \in \mathbb{Q} \cap [-1, 1]} (q + E)$, we get $m^*(X) \leq \sum_q m^*(q + E) = \sum_q m^*(E)$. Since $m^*(X) \geq 1$, we can eliminate the case where $m^*(E) = 0$, for otherwise $m^*(X) = 0$.
Since we now know $m^*(E)$ takes a positive value, say $\epsilon > 0$, we can upset finite additivity by taking a finite set of rational numbers $J \subset \mathbb{Q} \cap [-1, 1]$ with $\lceil 4/\epsilon \rceil$ elements. Since $\bigcup_{q \in J}(q+E) \subset \bigcup_{q\in \mathbb{Q} \cap [-1,1]}(q+E) = X$, monotonicity gives $m^*(\bigcup_{q \in J}(q+E)) \leq m^*(X)$. But if finite additivity held, $m^*(\bigcup_{q \in J}(q+E)) = \Sigma_{q \in J}m^*(q + E) = |J|\,m^*(E) \geq (4/\epsilon)\epsilon = 4 > 3 \geq m^*(X)$. (Contradiction!)
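To collect the two contradictions in one place (a restatement of the argument above, writing $\epsilon = m^*(E) > 0$ and $J \subset \mathbb{Q} \cap [-1, 1]$ with $|J| = \lceil 4/\epsilon \rceil$):
$$ 1 \leq m^*(X) \leq 3, \quad \text{yet countable additivity would force} \quad m^*(X) = \sum_{q \in \mathbb{Q} \cap [-1, 1]} m^*(E) \in \{0, \infty\}, $$
$$ \text{and finite additivity would force} \quad m^*(X) \geq \sum_{q \in J} m^*(q + E) = |J|\, m^*(E) \geq \frac{4}{\epsilon} \cdot \epsilon = 4 > 3. $$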
Proof (I followed the proof given in Princeton Measure Theory but filled in some in-between steps):
Let $\{B_i\}$ denote the collection of boxes of the form $(0, 1/2^k)^d$ translated by integer multiples of $1/2^k$ in each of the $d$ coordinates (over all $k$). Then, for any point $u \in U$, since $U$ is open, $\exists r > 0$ s.t. $B_r(u) \subset U$. We can always pick a power of $2$ such that $\frac{1}{2^k} < \frac{r}{2\sqrt{d}}$, so the level-$k$ dyadic box containing $u$ fits inside $B_r(u) \subset U$. Hence, $u$ is either contained in some $B_i \subset U$, or $u$ has a coordinate that is exactly an integer multiple of $\frac{1}{2^k}$ (i.e. $u$ lies on the boundary grid).
Let $P$ be the union of all planes on which one coordinate is an integer multiple of $\frac{1}{2^k}$ for some $k$. This collection of planes is countable (since the dyadic rationals are countable). We have proven before that we can cover a plane with a countable collection of open boxes of arbitrarily small total volume, so each plane is a zero set.
Just for completeness, take the plane $x_1 = 0$. Consider open boxes centered at the origin, with each dimension other than $x_1$ having length $2^k$ and the $x_1$-dimension having length $\epsilon/(2^k)^{d}$. Enumerating $k = 1, 2, 3, \ldots$ covers the whole plane, and the total volume of all the open boxes used is $\sum_k (2^k)^{d-1} \cdot \epsilon/(2^k)^d = \sum_k \epsilon/2^k = \epsilon$.
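A one-line numeric check of the volume bookkeeping in this covering (my own sketch; the values of $d$ and $\epsilon$ are arbitrary illustrative choices):

```python
# box k has (d-1) sides of length 2^k and one side of length eps/(2^k)^d,
# so vol(box k) = eps/2^k, and summing over k = 1, 2, ... gives eps
d, eps = 3, 0.5
total = sum((2**k) ** (d - 1) * (eps / (2**k) ** d) for k in range(1, 60))
assert abs(total - eps) < 1e-9
```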
Hence, from the collection $\{B_i\}$, we choose the boxes that fit completely inside the open set $U$ as described above, and denote their union by $B$. Then we have the following relationship: $B \subset U \subset B \cup P$. By monotonicity and subadditivity of outer measure, $m^*(B) \leq m^*(U) \leq m^*(B \cup P) \leq m^*(B) + m^*(P) = m^*(B)$, using $m^*(P) = 0$. Hence $m^*(U) = m^*(B)$. This shows that every open set $U$ can be written as a countable union of almost disjoint open boxes.
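The same dyadic scheme is easy to see in dimension $1$. Below is a small sketch of my own (not from the text) that greedily decomposes an open interval into maximal dyadic intervals $[j/2^k, (j+1)/2^k)$; the total length of the chosen pieces approaches the length of the interval as the level grows:

```python
import math

def dyadic_decomposition(a, b, max_level=14):
    """Greedily decompose the open interval (a, b) into maximal dyadic
    intervals [j/2^k, (j+1)/2^k), coarsest levels first."""
    pieces = []
    for k in range(1, max_level + 1):
        step = 2.0 ** -k
        j = math.ceil(a / step)          # first level-k interval with lo >= a
        while (j + 1) * step <= b:       # keep only intervals inside (a, b)
            lo, hi = j * step, (j + 1) * step
            # skip intervals already inside a coarser chosen piece
            if not any(plo <= lo and hi <= phi for plo, phi in pieces):
                pieces.append((lo, hi))
            j += 1
    return pieces

pieces = dyadic_decomposition(0.1, 0.9)
total = sum(hi - lo for lo, hi in pieces)
# total approaches 0.9 - 0.1 = 0.8 as max_level grows; the pieces are
# pairwise disjoint because distinct dyadic intervals either nest or are
# disjoint, and nested ones are skipped
```

The "nest or are disjoint" property of dyadic intervals is exactly what makes the chosen boxes almost disjoint in the proof above.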
Proof (I followed the proof given in Princeton Measure Theory but filled in some in-between steps):
Since $K \cap F = \emptyset$ and $F$ is closed, for any $x \in K$, $\exists \delta_x > 0$ s.t. $d(x, F) > 3\delta_x$. (Suppose otherwise that no such $\delta_x$ exists; then for every $n$ we can find $f_n \in F$ s.t. $d(x, f_n) < \frac{1}{n}$. Then $f_n \to x$, and since $F$ is closed, $x \in F$. Contradiction, as $x \in K$!)
Since $\bigcup_{x \in K} B_{2\delta_x}(x)$ covers $K$ and $K$ is compact, there exists a finite subcover $\bigcup_{i=1}^{n} B_{2\delta_{x_i}}(x_i)$.
Pick $\delta = \min_i \delta_{x_i}$. Then, for any $x \in K$ and $y \in F$: $x$ lies in $B_{2\delta_{x_j}}(x_j)$ for some index $j$ of the finite subcover, so $|y - x| \geq |y - x_j| - |x_j - x| > 3\delta_{x_j} - 2\delta_{x_j} = \delta_{x_j} \geq \delta$. Hence, $\text{dist}(F, K) \geq \delta > 0$.
Proof (I came up with this by myself; please forgive any errors)
Suppose $f$ is continuous at $x_0$, i.e. for every sequence $(x_n)$ that converges to $x_0$, we have $(f(x_n))_n → f(x_0)$. Suppose, for contradiction, that for some open set $V \subset Y$ with $f(x_0) \in V$, there does not exist an open $U \subset X$ containing $x_0$ with image $f(U) \subset V$. Since $V$ is open, it suffices to consider the smaller set $V' = B_r(f(x_0)) \subset V$ for some $r > 0$. Consider the open balls $U_n = B_{1/n}(x_0)$ in $X$ with $n \in \mathbb{N}$. None of these balls maps entirely into $V'$, so for each $n$ we can pick a point $x'_n \in U_n$ with $f(x'_n) \notin V'$. This gives us a sequence $(x'_n)_n$. It is clear that $(x'_n)_n → x_0$ since $\lim_{n→\infty}\frac{1}{n} = 0$, hence by continuity of $f$ at $x_0$, $\lim_{n→\infty}f(x'_n) = f(x_0)$. But this is impossible, since the values $f(x'_n)$ all lie outside $V' = B_r(f(x_0))$. (Contradiction!)
Now let's consider the converse: for every open set $V \subset Y$ with $f(x_0) \in V$, $\exists U \subset X$ open containing $x_0$ s.t. $f(U) \subset V$. Suppose otherwise that $f$ is not continuous at $x_0$. Then $\exists (x'_n)_n$ s.t. $(x'_n) → x_0$ but $f(x'_n)$ does not converge to $f(x_0)$. Then there exist $\delta > 0$ and a subsequence of $(x'_n)_n$, say $(y'_n)_n$, such that $d_Y(f(x_0), f(y'_n)) \geq \delta$ for all $n$ (for if otherwise, the sequence $(f(x'_n))_n$ converges to $f(x_0)$). Consider the open ball $B_{\delta/2}(f(x_0))$. By assumption, $\exists U \subset X$ open containing $x_0$ such that $f(U) \subset B_{\delta/2}(f(x_0))$. This means none of the points $y'_n$ falls in $U$ (for if otherwise its image would fall in $B_{\delta/2}(f(x_0))$). Since $U$ is an open set containing $x_0$, $(y'_n)_n$ cannot converge to $x_0$. Contradiction, since $(y'_n)_n$ is a subsequence of $(x'_n)_n$ and so must converge to $x_0$ too.
A metric space is sequentially compact if and only if it is compact.
Lemma If the metric space $(X, d)$ is compact and an open cover of $X$ is given, then $\exists \delta > 0$ s.t. every subset of $X$ with diameter less than $\delta$ is contained in some member of its cover. (Wiki)
If subspace of a metric space, say $K$, is compact/sequentially compact, then for any open cover $\{U_i\}$ of $K$, $\exists \delta > 0$, s.t. for $\forall x \in K$, $B_\delta(x) \subset U_\alpha$ for some $\alpha$. (Prof Peng's notes)
Key idea of proof: Prove by contradiction. Suppose otherwise; then $\forall n \in \mathbb{N}$, $\exists x_n$ s.t. $B_{1/n}(x_n)$ is not contained within any single open set $U_i$. By sequential compactness, pass to a convergent subsequence with limit point $x = \lim_{k → \infty} x_{n_k}$, then apply the triangle inequality at $x$ (which lies in some $U_i$ together with a small ball around it).
- $\delta$ is referred to as the Lebesgue number of this cover. Intuitively, it's like a breathing space for the compact set within the cover.
- Diameter is the supremum of the distance between pairs of points
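To make the "breathing space" intuition concrete, here is a tiny numeric sketch of my own (the cover is an arbitrary illustrative choice). For a cover of $[0,1]$ by open intervals, the Lebesgue number in the ball form above is $\inf_{x \in [0,1]} \max_i r_i(x)$, where $r_i(x)$ is the radius of the largest open ball around $x$ that fits inside $U_i$:

```python
# cover of the compact set [0, 1] by two open intervals (illustrative choice)
cover = [(-0.1, 0.6), (0.4, 1.1)]

def largest_radius(x, cover):
    """Max over cover elements of the radius of the largest open ball
    around x contained in that single element (negative if x is outside)."""
    return max(min(x - a, b - x) for a, b in cover)

xs = [i / 10_000 for i in range(10_001)]       # grid approximation of [0, 1]
delta = min(largest_radius(x, cover) for x in xs)
# delta ≈ 0.1 for this cover: every ball of radius < 0.1 around a point
# of [0, 1] lies inside a single cover element
```

The infimum is attained at the "worst" points ($x = 0$, $x = 0.5$, $x = 1$ here), where the breathing room inside the best single cover element is smallest.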
If $A$ and $B$ are measurable, then $A \RM B$ is measurable.
Idea by @selenajli on Discord: $B$ is measurable $\Rightarrow$ $\mathbb{R}^n \RM B$ is measurable (complement) $\Rightarrow$ $(\mathbb{R}^n \RM B) \cap A = A \RM B$ is measurable (intersection of two measurable sets).
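The set identity in the middle step can be sanity-checked on finite sets (a toy stand-in for subsets of $\mathbb{R}^n$; my own illustration):

```python
# (complement of B) ∩ A = A \ B, checked with finite sets standing in for
# subsets of R^n; the complement is taken inside a finite "universe" U
U = set(range(10))
A, B = {1, 2, 3, 4}, {3, 4, 5}
assert (U - B) & A == A - B   # holds for any A, B ⊆ U
```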
✅ What are all the -morphisms? Are there intuitive pictures so that I can have a better idea? See definitions
Banach-Tarski paradox (Tao 163) ✅ Lebesgue number (Pugh)
From David's non-measurable presentation rehearsal: why are these two definitions equivalent to the definition we know from Tao?
$m^*(P) = \inf\{m(G) \mid P \subset G, G \text{ open}\}$
$m^*(P) = \sup\{m(G) \mid P \supset G, G \text{ closed}\}$
Lipschitz constant
Lipschitz condition
locally Lipschitz
Homeomorphism and diffeomorphism: are they connected? It seems they preserve measurability. ✅ A diffeomorphism $T: U → V$ is a meseomorphism, i.e. preserves measurability. Prove by $n$-dimensional mean value theorem (TODO!!!!) (Pugh Ex 6.23)
https://adebray.github.io/lecture_notes/m381c_notes.pdf
Egoroff's Theorem (the not special case; special case was HW5 Tao 8.2.10)
Professor Wodzicki's notes on differential forms from Math 53 (!): https://math.berkeley.edu/~wodzicki/H185.S11/podrecznik/2forms.pdf
Brouwer's fixed point theorem
Hairy Ball Theorem