Questions can be found at the very bottom.
$\mathbb{N}$ is the set of natural numbers $\{1, 2, 3, \dots\}$. Key properties of $\mathbb{N}$ are that $1 \in \mathbb{N}$ and $n \in \mathbb{N} \implies n + 1 \in \mathbb{N}$. This naturally leads to the idea of mathematical induction, which allows us to prove statements for all $n \in \mathbb{N}$.
Mathematical induction works as follows: given a proposition $P_n$, prove the base case $P_1$ (or some other starting place) and then show that $P_k \implies P_{k+1}$. This shows that $P_n$ is true for all $n \in \mathbb{N}$ (or a subset, if your base case is $n = 4$ for instance).
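As a standard worked example (mine, not from the original notes), induction proves the closed form $\sum_{i=1}^n i = \frac{n(n+1)}{2}$: the base case is $\sum_{i=1}^1 i = 1 = \frac{1 \cdot 2}{2}$, and for the inductive step, $$\sum_{i=1}^{k+1} i = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2},$$ which is exactly $P_{k+1}$.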
Integers $\mathbb{Z}$ extend $\mathbb{N}$ to include $0$ and negative numbers. Rationals $\mathbb{Q}$ are ratios of integers.
Property (rational root theorem): If $r = \frac{c}{d} \in \mathbb{Q}$ ($c$ and $d$ coprime) and $r$ satisfies $\sum_{i=0}^{n} c_i x^i = 0$ with $c_i \in \mathbb{Z}$, $c_n \ne 0$, $c_0 \ne 0$, then $d$ divides $c_n$ and $c$ divides $c_0$.
A natural corollary to this can be found by taking the contrapositive, stating that if $d$ does not divide $c_n$ or $c$ does not divide $c_0$, then $\frac{c}{d}$ cannot satisfy the equation. This allows us to enumerate all possible rational solutions to equations of that form.
Example: prove $\sqrt{2} \notin \mathbb{Q}$. $\sqrt{2}$ solves $x^2 - 2 = 0$. By the rational root theorem, the only possible rational roots are $\pm 1, \pm 2$. Plugging these in, we can clearly see that none of them solve the equation. Thus the equation has no rational solution, and so $\sqrt{2}$ cannot be rational.
The maximum of a set $S \subset \mathbb{R}$ is $\alpha \in S$ such that $\alpha \ge \beta$ $\forall \beta \in S$. Minimum is defined similarly. Maxima and minima need not exist. However, they must exist for finite, nonempty subsets of $\mathbb{R}$. Upper and lower bounds are defined similarly, but now $\alpha$ need not be in $S$. These also need not exist, e.g. $S = \mathbb{R}$ has no upper/lower bounds.
We define the least upper bound of $S$ to be $\sup{S}$ and the greatest lower bound to be $\inf{S}$. Once again, these need not exist. However, we assume the Completeness Axiom, which states that if $S$ is nonempty and bounded from above, $\sup{S}$ exists, and likewise for $\inf{S}$.
This allows us to prove the Archimedean Property, which states $a, b > 0 \implies \exists n \in \mathbb{N} \text{ s.t. } na > b$. A quick proof sketch: assume for sake of contradiction that $a, b > 0$ but $\forall n \in \mathbb{N}$, $na \le b$. The set $\{na\}$ is bounded above, so the Completeness Axiom gives $\alpha = \sup_n{na}$; but then $\alpha - a$ is not an upper bound, so $na > \alpha - a$ for some $n$, giving $(n+1)a > \alpha$, a contradiction.
A sequence is a function $\mathbb{N} \to \mathbb{R}$, typically denoted as $a_n$. We define the limit of a sequence to be the number $\alpha \in \mathbb{R}$ if $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n > N$, we have $|a_n - \alpha| < \epsilon$.
An important theorem is that all convergent sequences are bounded (but not all bounded sequences are convergent!!). Proof sketch: fix an $\epsilon$, which gives us an $N$ for which $n > N \implies a_n$ is within $\epsilon$ of the limit $\alpha$. Then the sequence is bounded by the larger of $|\alpha| + \epsilon$ and the maximum of $|a_n|$ for $n \le N$, which is a maximum over a finite set.
The limit of a sum, product, or quotient equals the sum, product, or quotient of the limits (assuming the individual limits exist, and for quotients, that the limit of the denominator is nonzero).
Note that $a_n \ne 0$ does not imply $\lim a_n \ne 0$. Example: $a_n = 1/n$, which is $\ne 0$ $\forall n$, but $\lim a_n = 0$.
We define $\lim a_n = +\infty$ if $\forall M > 0, \exists N > 0$ such that $a_n > M$ $\forall n > N$.
For example, $a_n = n + \sqrt{n} \sin n$ goes to $+\infty$ because $n + \sqrt{n} \sin n \ge n - \sqrt{n}$. Informally, because $n$ grows faster than $\sqrt{n}$, $n - \sqrt{n}$ goes to infinity.
Monotone sequences: if a sequence is increasing (resp. decreasing) and bounded, then it is convergent. Proof sketch: consider the set $S$ of all $a_n$. By the Completeness Axiom, $\sup$ (resp. $\inf$) exists, so there is a sequence element greater than $\sup(S) - \epsilon$. Monotonicity implies that all elements after that satisfy the $\epsilon$ bound.
Given a sequence $a_n$, define $S_N = \sup \{a_n | n \ge N\}$. Notably, $N < M \implies S_N \ge S_M$, so $S_N$ is decreasing and has a (possibly infinite) limit. We define $\lim \sup a_n = \lim_{N\to \infty} S_N = \lim_{N\to\infty}\left( \sup_{n \ge N} a_n \right)$
Example: $a_n = 3 + (-1)^n = \{2, 4, 2, 4, \dots\}$. $\lim\sup{a_n} = 4$, $\lim\inf{a_n} = 2$.
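As a quick numerical sanity check (my own sketch, not part of the original notes; the truncation index is an arbitrary assumption, since a finite cutoff only approximates the sup/inf of an infinite tail), we can compute the tail sups and infs $S_N$ from the definition above:

```python
# Approximate S_N = sup {a_n : n >= N} (and the matching inf) for
# a_n = 3 + (-1)^n by truncating the infinite tail at a large cutoff.
# For this eventually-periodic sequence the truncation is exact.
def a(n):
    return 3 + (-1) ** n

CUTOFF = 10_000  # arbitrary truncation point (assumption)

def tail_sup(N):
    return max(a(n) for n in range(N, CUTOFF))

def tail_inf(N):
    return min(a(n) for n in range(N, CUTOFF))

# The tail sups/infs are already constant, so lim sup = 4, lim inf = 2.
print(tail_sup(1), tail_inf(1))      # 4 2
print(tail_sup(500), tail_inf(500))  # 4 2
```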
Some properties of these: $$\lim\inf(a_n) \le \lim\sup(a_n),$$ since the inf of a tail is at most its sup and nonstrict inequalities are preserved under limits. For any subsequence, $$\lim\sup s_{n_k} \le \lim\sup s_n.$$ Proof sketch: use the fact that a subset has a smaller (or equal) $\sup$ compared to its superset.
Cauchy sequence: $a_n$ is a Cauchy sequence if $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n, m > N$, we have $|a_n - a_m| < \epsilon$.
Cauchy sequence $\iff a_n$ converges. Proof sketch for reverse direction: take the $N$ for bound $\epsilon / 2$, then use the triangle inequality.
Proof sketch for forward direction: first show $(a_n)$ converges iff $\lim\sup a_n = \lim\inf a_n$ (both finite). Then show that $\lim\sup a_n$ and $\lim\inf a_n$ exist and are equal, taking advantage of the fact that Cauchy implies bounded.
Recursive sequences: if $S_n = f(S_{n-1})$ for continuous $f$, and the limit exists and equals $\alpha$, then $\alpha = f(\alpha)$.
Example: If $S_n = \frac{S_{n-1}^2 + 5}{2 S_{n-1}}$ with $S_1 = 5$, then $\lim_{n\to\infty} S_n = \alpha \implies \frac{\alpha^2 + 5}{2\alpha} = \alpha$, which means $\alpha = \pm \sqrt{5}$. Be careful though, because the initial condition matters. Given our initial condition, we can bound $S_n$ between $\sqrt{5}$ and $5$ inclusive for all relevant $n$ using induction (which, together with monotonicity, also gives convergence), so $\sqrt{5}$ is the correct answer.
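A minimal numerical sketch of this example (my own, not from the notes): iterating the recursion from $S_1 = 5$ settles at $\sqrt{5}$ quickly, consistent with the fixed-point analysis.

```python
import math

# Iterate S_n = (S_{n-1}^2 + 5) / (2 * S_{n-1}) starting from S_1 = 5.
# (This happens to be Newton's method for x^2 - 5 = 0, so it converges fast.)
s = 5.0
for n in range(2, 10):
    s = (s * s + 5) / (2 * s)
    print(n, s)

print("sqrt(5) =", math.sqrt(5))  # 2.23606797749979
```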
Subsequences: If $s_n$ is a sequence, let $n_k$ be a strictly increasing sequence in $\mathbb{N}$. Then we can define a new sequence $t_k = s_{n_k}$, called a subsequence of $s_n$.
An important property of subsequences is that even if a sequence does not converge to a value, a subsequence of that sequence may. Specifically, let $(s_n)$ be any sequence. $(s_n)$ has a subsequence converging to $t$ iff $\forall \epsilon > 0$, the set $A_{\epsilon} = \{ n \in \mathbb{N} | |s_n-t| < \epsilon\}$ is infinite. In other words, there are infinitely many elements of $s_n$ within the epsilon bound of $t$.
Forward direction proof sketch: just use the definition of convergence. Reverse direction proof sketch: Consider taking $\epsilon$ smaller and smaller and taking an element from each $\epsilon$ shrinkage to create the convergent subsequence (make sure that each element has a greater index than the one before it).
Every sequence has a monotone subsequence. Proof sketch: Define a dominant term to be $s_n$ such that $\forall m > n$, $s_n > s_m$. Either there are infinitely many dominant terms (then dominant terms form a monotone subsequence), or there are finitely many. In the second case, then after the last dominant term, each element has some element greater than it with a higher index. Use this to form a monotone subsequence.
This theorem allows us to prove that every bounded sequence has a convergent subsequence, because a monotone subsequence must exist and it will be bounded, implying that it is convergent.
Given a sequence $(s_n)$, we say $t \in \mathbb{R}$ is a subsequence limit if there exists a subsequence that converges to $t$. For example, $\lim\inf s_n$ and $\lim\sup s_n$ are subsequence limits. Proof sketch: use the definition of $\lim\sup$ and $\lim\inf$, specifically that each is a limit of $\sup$ (resp. $\inf$) values of the tail. Use the limit to produce an $\epsilon$ bound on the $\sup$ of tails for large enough $n$. Then show that this implies infinitely many elements of the original sequence within that $\epsilon$ bound.
For a bounded sequence, having exactly one subsequence limit is necessary and sufficient for convergence.
Closed subsets: $S \subset \mathbb{R}$ is closed if for all convergent sequences in $S$, the limit also belongs to $S$.
Example: $(0, 1)$ is not closed, because the sequence $a_n = 1/(n+1)$ lies in the set and converges to $0$, which is not in the set.
An extremely important result is that for positive sequences, $$\lim\inf \frac{s_{n+1}}{s_n} \le \lim\inf \left(s_n\right)^{1/n} \le \lim\sup \left(s_n\right)^{1/n} \le \lim\sup \frac{s_{n+1}}{s_n}.$$ Proof sketch for the last inequality (the first is similar): let $L = \lim\sup \frac{s_{n+1}}{s_n}$. If $s_{n+1}/s_n \le L + \epsilon$ for all $n \ge N$, then $s_{N+k}/s_N \le (L + \epsilon)^k$. So for $n > N$, we have $s_n^{1/n} \le \left( s_N (L+\epsilon)^{n-N} \right)^{1/n}$. Take $\lim\sup$ of both sides, and the right side can be massaged to show $\lim\sup(s_n)^{1/n} \le L + \epsilon$.
A metric space is a set $S$ and a function $d: S \times S \to \mathbb{R}_{\ge 0}$. The function $d$ must satisfy nonnegativity, $d(x,y) = 0 \iff x = y$, symmetry, and the triangle inequality.
We can generalize sequences to use $S$ instead of $\mathbb{R}$. Specifically, if $s_n$ is a sequence in $S$, then we define $s_n$ to be Cauchy if $\forall \epsilon > 0, \exists N > 0$ s.t. $\forall n,m > N$, $d(s_n, s_m) < \epsilon$. Also, we say $s_n$ converges to $s \in S$ if $\forall \epsilon > 0$, $\exists N > 0$ s.t. $\forall n > N$, $d(s_n, s) < \epsilon$. Unlike the real number case, these two notions are not equivalent in general: convergence always implies Cauchy, but the converse holds only in complete spaces (defined next).
We call a metric space $(S, d)$ complete if every Cauchy sequence has a limit in $S$. For example, $\mathbb{Q}$ is not complete, but $\mathbb{R}$ is.
The Bolzano-Weierstrass Theorem states that every bounded sequence in $\mathbb{R}$ has a convergent subsequence.
A topology on a set $S$ is a collection of open subsets such that $S, \varnothing$ are open; (possibly infinite) unions of open sets are open; and finite intersections of open sets are open. We can define a topology on a metric space by declaring the balls $B_r(p) = \{ x \in S | d(p,x) < r \}$ (and their unions) to be open.
We define $E \subset S$ to be closed iff $E^c$ is open, that is, $\forall x \notin E$, $\exists \delta > 0$ such that $B_\delta(x) \cap E = \varnothing$.
We define the closure of a set $E \subset S$ to be $$\bar{E} = \bigcap \{ F | F \subset S \text{ closed, } E \subset F \}.$$ We similarly define the interior to be the union of all open subsets of $E$. The boundary of $E$ is the set difference between the closure and the interior.
We define the set of limit points $E'$ to be all such points $p$ where $\forall \epsilon > 0, \exists q \in E, q \ne p$, such that $d(p, q) < \epsilon$. Very importantly, $E \cup E' = \bar{E}$. (It's also possible to *define* $\bar E$ this way as Rudin does)
An important property relating closedness and sequences: $E \subset S$ is closed $\iff$ for every convergent sequence $(x_n)$ in $E$ with limit $x \in S$, we have $x \in E$. The proofs of both directions use contradiction.
We define an open cover of $E \subset S$ to be a collection of open sets $\{G_\alpha\}_{\alpha\in A}$ such that $E \subset \bigcup _\alpha G_\alpha$. From there, we can define compactness. A set $K \subset S$ is compact if every open cover of $K$ has a finite subcover.
For example, if $K$ is finite, we can produce a finite subcover by, for each $x \in K$, picking a $G_x$ such that $x \in G_x$. Then the subcover is finite.
The Heine-Borel Theorem states that a subset of $\mathbb{R}^n$ is compact iff it is closed and bounded.
Lemma: a closed subset of a compact set is compact.
Sequential compactness is an alternate characterization of compactness: $X$ is sequentially compact if every sequence of points in $X$ has a subsequence converging to a point in $X$. For metric spaces, sequential compactness and compactness are equivalent.
A series is an infinite sum of a sequence: $\sum _{n=1}^\infty a_n$. We define convergence of a series as convergence of its partial sums $S_n = \sum_{j=1}^n a_j$.
We say that a series satisfies the Cauchy Condition if $\forall \epsilon > 0,$ $\exists N > 0$ such that $\forall m \ge n > N$, $\left| \sum_{j=n}^m a_j \right| < \epsilon$. Satisfying the Cauchy Condition is equivalent to the sequence of partial sums being Cauchy, which is equivalent to convergence of the series.
If the series converges, then $\lim a_n = 0$. Note that the converse is not true: the harmonic series $\sum 1/n$ diverges even though $1/n \to 0$.
To determine whether series converge, we can use comparison tests.
Comparison test: $\sum_n a_n \text{ converges and } |b_n| \le a_n \implies \sum_n b_n \text{ converges}$. Proof sketch: show that the partial sums of $\sum_n b_n$ are Cauchy. Relatedly, we say that $\sum_n b_n$ converges absolutely if $\sum_n |b_n|$ converges.
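A quick illustration (standard example, not from the original notes): since $$\left|\frac{\sin n}{n^2}\right| \le \frac{1}{n^2} \quad \text{and} \quad \sum_n \frac{1}{n^2} \text{ converges},$$ the comparison test shows $\sum_n \frac{\sin n}{n^2}$ converges absolutely.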
A classic example of a series that converges but does not converge absolutely is the alternating harmonic series $\sum (-1)^{n+1}/n$.
For the following tests, we recall that
$$\lim\inf \frac{s_{n+1}}{s_n} \le \lim\inf \left(s_n\right)^{1/n} \le \lim\sup \left(s_n\right)^{1/n} \le \lim\sup \frac{s_{n+1}}{s_n}.$$
Root Test: Let $\alpha = \lim\sup |a_n|^{1/n}$.
1. If $\alpha > 1$, then the series diverges.
2. If $\alpha < 1$, then the series converges absolutely.
3. If $\alpha = 1$, then the series could converge or diverge.
Ratio Test: Let $\alpha = \lim\sup \left| \frac{a_{n+1}}{a_n} \right|$.
1. If $\alpha < 1$, the series converges absolutely.
2. If $\lim\inf \left| \frac{a_{n+1}}{a_n} \right| > 1$, the series diverges.
3. Otherwise (e.g. when both limits equal $1$), the test is inconclusive.
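A standard worked example (mine, not from the notes): for $\sum_n \frac{n}{2^n}$, $$\left|\frac{a_{n+1}}{a_n}\right| = \frac{n+1}{2n} \to \frac{1}{2} < 1,$$ so the series converges absolutely. The inconclusive case is real: $\sum 1/n$ and $\sum 1/n^2$ both have ratio limit $1$, yet the first diverges and the second converges.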
Alternating Series Test: For alternating series whose terms have nonincreasing absolute values and tend to $0$, the series converges.
Example: $\sum (-1)^n \frac{1}{\sqrt{n}}$ converges since terms tend to 0 and $\frac{1}{\sqrt n}$ is nonincreasing.
Integral Test: If $f$ is continuous, positive, and decreasing with $f(n) = a_n$, then the integral $\int_1^\infty f(x)\,dx$ converges iff the series converges.
I already LaTeX'd notes on continuity, which can be found here: https://rpurp.com/2021-03-02.pdf
Now let's examine the link between compactness and continuity. Specifically, if $f$ is a continuous map from $X$ to $Y$, then $f(E) \subset Y$ is compact if $E$ is compact. This can be proven by sequential compactness: a quick proof sketch is that for a sequence in $f(E)$, we can pick preimages to get a sequence in $E$. Since $E$ is compact, there's a subsequence that converges to a value in $E$. Since continuity preserves limits, the image of that subsequence converges in $Y$ as well.
A corollary to this is that if $f: X \to \mathbb{R}$ is continuous and $E \subset X$ is compact, then $\exists p, q \in E$ such that $f(p) = \sup f(E)$ and $f(q) = \inf f(E)$.
However, a preimage of a compact set may not be compact (e.g. for a constant function $f: \mathbb{R} \to \mathbb{R}$, the preimage of a compact singleton is all of $\mathbb{R}$).
Uniform Continuity
$f: X \to Y$ is uniformly continuous if $\forall \epsilon > 0$, $\exists \delta > 0$ such that $\forall p,q \in X$, $d_X(p, q) < \delta \implies d_Y(f(p), f(q)) < \epsilon$. Compared with the normal definition of continuity, we notice that we must select a $\delta$ that works for *all* points. For regular continuity, we can choose a $\delta$ per-point.
Theorem: For $f: X \to Y$, if $f$ is continuous and $X$ is compact, then $f$ is uniformly continuous.
Example: $f(x) = 1/x$ is continuous on $(0, \infty)$ but not uniformly continuous there. But if we restrict the domain to a compact interval like $[1, 2]$, $f$ becomes uniformly continuous.
Interesting theorem: Given $f: X \to Y$ continuous with $X$ compact, and $f$ a bijection, then $f^{-1}: Y \to X$ is continuous. Proof sketch: show the preimage under $f^{-1}$ of an open set (i.e. its forward image under $f$) is open, using compactness.
Connectedness
We define connectedness as follows: $X$ is connected iff the only subsets of $X$ that are both open and closed are $X$ and $\varnothing$.
An alternate definition is that $X$ is not connected iff $\exists U, V \subset X$ nonempty and open with $U \cap V = \varnothing$ and $X = U \coprod V$. Equivalently, $\exists S \subset X$, $\varnothing \ne S \ne X$, such that $S$ is both open and closed; then $X = S \coprod S^c$.
Example: $X = [0,1] \cup [2, 3]$. Then let $S = [0,1]$ and $S^c = [2,3]$.
Theorem: Continuous functions preserve connectedness.
Theorem: $E \subset \mathbb{R} \text{ connected } \iff \forall x,y \in E, x < y, [x,y] \subset E$.
Rudin's definition of connectedness: $S$ is connected iff it cannot be written as $A \cup B$ with $A$ and $B$ nonempty and $\bar A \cap B = A \cap \bar B = \varnothing$.
Intermediate value theorem: This falls almost directly out of the fact that continuous functions preserve connectedness. If $f: [a,b] \to \mathbb{R}$ is continuous, and $f(a) < f(b)$, then $\forall y \in (f(a), f(b)), \exists x \in (a,b)$ s.t. $f(x) = y$.
Discontinuities: Given the left and right-hand limits of $f$ at $x_0$, if $f(x_0) = f(x_0^+) = f(x_0^-)$, we say the function is continuous at $x_0$.
If both left and right-hand limits exist, but they disagree with each other or with the value of the function, we call this a simple discontinuity. If one of the one-sided limits doesn't exist at all (the classic example is $\sin(1/x)$ around $x=0$), then we call it a discontinuity of the second kind.
Monotonicity and Continuous Functions: If $f: (a,b) \to \mathbb{R}$ is monotone increasing, then $f$ has at most countably many discontinuities, all of them simple.
We say that a sequence of functions $f_n$ converges pointwise to $f$ if $\forall x \in X$ we have $\lim f_n(x) =f(x)$.
Example: $f_n(x) = 1 + \frac{1}{n} \cdot \sin x$. Then $f_n \to f$ pointwise where $f(x) = 1$.
An extremely useful object is the bump function, which we typically denote as
$$\phi(x) = \begin{cases} 0 & x < 0 \\ 2x & x \in [0, 1/2] \\ 2 - 2x & x \in [1/2, 1] \\ 0 & x > 1 \end{cases}$$
(a "tent" rising from $0$ at $x=0$ to $1$ at $x=1/2$ and back to $0$ at $x=1$).
For example, $f_n(x) = \phi(x-n)$ converges pointwise to 0.
We can see here that pointwise convergence does not preserve integrals: each $f_n$ has integral $1/2$, yet the pointwise limit $0$ has integral $0$. Also, pointwise convergence of $f_n$ doesn't imply pointwise convergence of $f_n'$.
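A small numerical sketch of this phenomenon (my own, not from the notes; `phi`, the `integral` midpoint-rule helper, and the step counts are my choices, and the integrator is only an approximation):

```python
# Tent "bump" from the notes: 0 outside [0, 1], rising to 1 at x = 1/2.
def phi(x):
    if 0 <= x <= 0.5:
        return 2 * x
    if 0.5 < x <= 1:
        return 2 - 2 * x
    return 0.0

def integral(f, a, b, steps=100_000):
    """Crude midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# f_n(x) = phi(x - n): at any fixed x, f_n(x) = 0 once n > x, so the
# pointwise limit is 0 -- yet every f_n has integral 1/2.
for n in [1, 5, 20]:
    print(n, phi(3.0 - n), integral(lambda x: phi(x - n), n, n + 1))
```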
A sequence of vectors in $\mathbb{R}^k$ converges componentwise iff it converges in the $d_2$ (Euclidean) sense. (Actually, this holds for any norm.)
We can also have uniform convergence of functions. We say that $f_n$ converges to $f$ uniformly if $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n > N$, $\forall x \in X$, we have $|f_n(x)-f(x)| < \epsilon$.
We can also express uniform convergence using the equivalent notion of uniformly Cauchy: $\forall \epsilon > 0$, $\exists N > 0$ s.t. $\forall n,m > N$ and $\forall x \in X$, $|f_n(x) -f_m(x)| < \epsilon$.
We also can talk about series of functions $\sum_{n=1}^\infty f_n$. We say that this converges uniformly to $f$ if the partial sums $F_N = \sum_{n=1}^N f_n$ converge uniformly to $f$.
A test we can apply to see whether a series of functions converges uniformly is the Weierstrass M-test: if $M_n \in \mathbb{R}$ satisfy $M_n \ge \sup_{x\in X} |f_n(x)|$ and $\sum_{n=1}^\infty M_n$ converges, then $\sum_{n=1}^{\infty} f_n$ converges uniformly. A proof sketch: bound the tails $\left| \sum_{j=n}^m f_j(x) \right| \le \sum_{j=n}^m M_j$ using the triangle inequality, so the partial sums are uniformly Cauchy.
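A standard instance (not from the original notes): on $X = \mathbb{R}$, take $f_n(x) = \frac{\sin(nx)}{n^2}$. Then $M_n = \frac{1}{n^2} \ge \sup_x |f_n(x)|$ and $\sum_n \frac{1}{n^2}$ converges, so $\sum_n \frac{\sin(nx)}{n^2}$ converges uniformly on all of $\mathbb{R}$.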
Unlike pointwise convergence, uniform convergence preserves continuity.
Theorem (Dini): If $K$ is compact, and $f_n: K \to \mathbb{R}$ satisfy the following conditions:
1. $f_n$ is continuous
2. $f_n\to f$ pointwise, $f$ continuous
3. $f_n(x) \ge f_{n+1}(x)$ for all $x \in K$
Then $f_n \to f$ uniformly.
We define the derivative of $f$ as $f'(x) = \lim_{t\to x} \frac{f(t) -f(x)}{t-x}$.
Differentiability implies continuity. This can be verified by multiplying the above by $t-x$ and using limit rules.
Chain rule: If $h(t) = g(f(t))$, then $h'(x) = g'(f(x))f'(x)$.
We say f has a local maximum at $p \in X$ if $\exists \delta > 0$ such that $f(q) \le f(p)$ for all $q \in X$ with $d(p,q) < \delta$.
If $f$ has a local maximum at $x \in (a,b)$, and if $f'(x)$ exists, then $f'(x) = 0$. This can be shown by noting the difference quotient is $\ge 0$ from the left and $\le 0$ from the right, forcing $f'(x) = 0$.
Generalized MVT: If $f$ and $g$ are continuous on $[a,b]$ and differentiable on $(a,b)$ then $\exists x \in (a,b)$ such that $$[f(b) - f(a)]g'(x) = [g(b)-g(a)]f'(x)$$
If we let $g(x) = x$, we get the regular MVT: $$f(b) - f(a) = (b-a) f'(x)$$ for some $x \in (a,b)$.
Intermediate Value Theorem for Derivatives: If $f$ is differentiable on $[a,b]$ and $f'(a) < \lambda < f'(b)$ then $\exists x \in (a,b)$ such that $f'(x) = \lambda$.
Proof sketch: Let $g(t) = f(t) - \lambda t$. Then $g'$ goes from negative to positive, so $g$ attains its minimum at some interior point $x$, where $g'(x) = 0$, i.e. $f'(x) = \lambda$.
Corollary: If $f$ is differentiable on $[a,b]$, $f'$ cannot have simple discontinuities, only discontinuities of the second kind.
L'Hospital's Rule: Suppose $f$ and $g$ are real and differentiable on $(a,b)$, and $g'(x) \ne 0$ for all $x\in (a,b)$. Suppose also that $\frac{f'(x)}{g'(x)} \to A$ as $x \to a$.
If $f(x) \to 0$ and $g(x) \to 0$ as $x \to a$, or if $g(x) \to \infty$ as $x \to a$, then $$\lim_{x\to a} \frac{f(x)}{g(x)} = A.$$
The proof is quite involved and is included as a question later.
Taylor's Theorem
Let $$P(t) = \sum_{k=0}^{n-1} \frac{f^{(k)}(\alpha)}{k!}(t-\alpha)^k.$$ Then there exists a point $x$ between $\alpha$ and $\beta$ such that $$f(\beta) = P(\beta) + \frac{f^{(n)}(x)}{n!}(\beta - \alpha)^n.$$
This is essentially a different way to generalize the mean value theorem.
This proof will also be included as a question.
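A small worked instance (my own check, not from the notes): take $f(t) = e^t$, $\alpha = 0$, $\beta = 1$, $n = 2$, so $P(t) = 1 + t$. The theorem then promises some $x \in (0,1)$ with $$e = 2 + \frac{e^x}{2},$$ i.e. $e^x = 2(e - 2) \approx 1.437$, which indeed holds at $x = \ln(2(e-2)) \approx 0.363 \in (0,1)$.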
Unless otherwise stated, assume $f$ is bounded and bounds of all integrals are from $a$ to $b$.
We define a partition $P$ of $[a,b]$ as a finite set of points $x_0, x_1, \dots, x_n$ such that $a = x_0 \le x_1 \le \dots \le x_n = b$, and define $\delta x_i = x_i - x_{i-1}$ for convenience.
We further define $M_i = \sup_{x \in [x_{i-1}, x_i]} f(x)$, $m_i = \inf_{x \in [x_{i-1}, x_i]} f(x)$, $U(P,f) = \sum_{i=1}^n M_i \delta x_i$, and $L(P,f) = \sum_{i=1}^n m_i \delta x_i$.
We define the upper and lower Riemann integrals of $f$ to be
$ \overline { \int } f dx = \inf_P U(P, f) $ and $ \underline {\int } f dx = \sup_P L(P, f)$, where the $\inf$ and $\sup$ range over all partitions $P$.
If the upper and lower integrals are equal, we say that $f$ is Riemann-integrable on $[a,b]$ and write $f \in \mathcal{R}$.
Since $\exists m, M$ s.t. $m \le f(x) \le M$, we have for all $P$, $$m(b-a) \le L(P,f) \le U(P,f) \le M(b-a).$$ So $L(P,f)$ and $U(P,f)$ are bounded, so the upper and lower integrals are defined for every bounded function $f$.
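A minimal computational sketch of these definitions (my own, not from the notes; the uniform partition, the helper `upper_lower`, and $f(x) = x^2$ on $[0,1]$ are my choices, with monotonicity letting us read off $M_i$ and $m_i$ at interval endpoints):

```python
# Upper and lower sums U(P, f), L(P, f) for f(x) = x^2 on [0, 1] with a
# uniform partition of n intervals. Since f is increasing on [0, 1], the
# sup on [x_{i-1}, x_i] is f(x_i) and the inf is f(x_{i-1}).
def f(x):
    return x * x

def upper_lower(n):
    xs = [i / n for i in range(n + 1)]  # partition points x_0, ..., x_n
    U = sum(f(xs[i]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    L = sum(f(xs[i - 1]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    return U, L

# U - L -> 0 as the partition refines, and both pinch the integral 1/3.
for n in [10, 100, 1000]:
    U, L = upper_lower(n)
    print(n, L, U, U - L)
```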
We now generalize a bit. Consider a partition $P$ and let $\alpha$ be a monotonically increasing function on $[a,b]$. We write $\delta \alpha_i = \alpha (x_i) - \alpha (x_{i-1})$.
We define $U(P,f, \alpha) = \sum_{i=1}^n M_i \delta\alpha_i$ and $L(P,f,\alpha) = \sum_{i=1}^n m_i \delta \alpha_i$.
We define $ \overline { \int } f d\alpha = \inf_P U(P, f, \alpha) $ and $ \underline {\int } f d\alpha = \sup_P L(P, f, \alpha)$.
If they are equal, we have a Riemann-Stieltjes integral $\int f d\alpha$.
We define $P^*$ to be a refinement of $P$ if $P \subset P^*$. The common refinement of $P_1$ and $P_2$ is defined to be $P_1 \cup P_2$.
Theorem: $L(P, f, \alpha) \le L(P^*, f, \alpha)$ and $U(P^*, f, \alpha) \le U(P,f, \alpha)$.
Proof sketch: To start, assume $P^*$ contains one more point $x^*$ than $P$, and consider the $\inf f(x)$ over the two intervals "induced" by $x^*$'s inclusion. Clearly both are at least $m_i$ (corresponding to the original interval containing $x^*$); then induct on the number of extra points.
Theorem: $ \overline { \int } f d\alpha = \inf_P U(P, f, \alpha) \ge \underline {\int } f d\alpha = \sup_P L(P, f, \alpha)$. Proof sketch: $L(P_1, f, \alpha) \le U(P_2, f, \alpha)$ via the common refinement. Fix $P_2$ and take $\sup$ over $P_1$, then take $\inf$ over all $P_2$.
Theorem 6.6: $f\in \mathcal{R}(\alpha)$ on $[a,b]$ iff $\forall \epsilon > 0$ $\exists P$ such that $U(P,f,\alpha) - L(P,f,\alpha) < \epsilon$.
If direction: $L \le \underline{ \int } \le \overline {\int} \le U$, so $0 \le \overline { \int }- \underline { \int } < \epsilon$ for every $\epsilon > 0$; hence they are equal and $f \in \mathcal{R}(\alpha)$.
Only if direction: If $f \in \mathcal{R}(\alpha)$, $\exists P_1, P_2$ s.t. $U(P_2, f, \alpha) - \int f d\alpha < \epsilon/2$ and $\int f d\alpha - L(P_1, f, \alpha) < \epsilon/2$.
Take the common refinement $P$ of $P_1$ and $P_2$, and show $U(P, f, \alpha) \le L(P, f, \alpha) + \epsilon$.
Fun facts about 6.6:
1. If the condition holds for some $P$ and some $\epsilon$, then it holds with the same $\epsilon$ for any refinement of $P$.
2. If $s_i, t_i \in [x_{i-1},x_i]$, then $\sum_{i=1}^n |f(s_i) - f(t_i)| \delta \alpha_i < \epsilon$.
3. If $f$ is integrable and the hypotheses of 2 hold, then $$\left| \sum_i f(t_i) \delta\alpha_i - \int f d\alpha \right| < \epsilon.$$
Theorem: Continuity implies integrability. Proof: use the fact that a continuous function on a compact interval is uniformly continuous.
Theorem: Monotonicity of $f$ together with continuity of $\alpha$ implies integrability.
Theorem: If $f$ has only a finite number of discontinuities and $\alpha$ is continuous wherever $f$ is discontinuous, then $f$ is integrable.
Theorem: Suppose $f \in \mathcal {R}(\alpha)$ on $[a,b]$, $m \le f\le M$, $\phi$ continuous on $[m, M]$, and $h(x) = \phi(f(x))$ on $[a,b]$. Then $h \in \mathcal{R}(\alpha)$ on $[a,b]$.
Properties of the integral (that you didn't learn in Calculus BC): if $|f| \le M$, then $$\left| \int f d\alpha \right| \le M[\alpha(b) - \alpha(a)].$$
$$\int f d(\alpha_1 + \alpha_2) = \int f d\alpha_1 + \int f d\alpha_2.$$
$f$,$g$ integrable implies $fg$ integrable. Proof sketch: let $\phi(t) = t^2$ and use identity $4fg = (f+g)^2 - (f-g)^2$.
$$\left| \int f d\alpha \right| \le \int |f| d\alpha$$. Proof sketch: let $\phi(t) = |t|$.
Given unit step function $I$, if $\alpha(x) = I(x-s)$ then $\int f d\alpha = f(s)$ (This is quite similar to using the Dirac delta function in signal processing).
Relating sums and integrals: given nonnegative $c_n$ with $\sum_n c_n$ convergent, a sequence of points $s_n$, and $f$ continuous, if $\alpha(x) = \sum_{n=1}^\infty c_n I(x-s_n)$ then $\int f \, d\alpha = \sum_{n=1}^\infty c_n f(s_n)$.
Relating Derivatives and Integrals: Assume $\alpha'$ is Riemann integrable. Then $f \in \mathcal{R}(\alpha)$ $\iff$ $f\alpha'$ is Riemann integrable. In that case $\int f d\alpha = \int f(x) \alpha'(x) dx$.
Proof sketch: Use Theorem 6.6. Then use mean value theorem to obtain points $t_i$ such that $\delta \alpha_i = \alpha'(t_i) \delta x_i$.
Change of Variable Theorem: If $\varphi$ is a strictly increasing continuous function that maps $[A,B]$ onto $[a,b]$, $\alpha$ is monotonically increasing on $[a,b]$, and $f\in \mathcal R(\alpha)$ on $[a,b]$, define $\beta(y) = \alpha(\varphi(y))$ and $g(y) = f(\varphi(y))$. Then $\int _A^B g d\beta = \int_a^b f d\alpha$.
Integration and Differentiation.
If $F(x) = \int_a^x f(t)dt$, then $F$ continuous on $[a,b]$, and if $f$ is continuous at $x_0$, $F$ is differentiable at $x_0$, and $F'(x_0) = f(x_0)$.
Proof sketch: Suppose $|f(t)| \le M$. For $x < y$: $|F(y) - F(x)| = \left| \int_x^y f(t) dt \right| \le M(y-x)$. So we can bound $|F(y) - F(x)|$ by $\epsilon$ by making $|y-x|$ small enough; therefore $F$ is uniformly continuous.
If $f$ is continuous at $x_0$, then given $\epsilon$ choose $\delta$ such that $|f(t) - f(x_0)| < \epsilon$ if $|t-x_0| < \delta$ and $a \le t \le b$. Hence, for $x_0 - \delta < s \le x_0 \le t < x_0 + \delta$, $\left| \frac{F(t) -F(s)}{t-s} - f(x_0) \right| = \left| \frac{1}{t-s} \int_s^t [f(u) - f(x_0)] du\right| < \epsilon$. Therefore, $F'(x_0) = f(x_0)$.
Fundamental Theorem of Calculus: If $f$ is integrable and $\exists F$ such that $F' = f$, then $\int_a^b f dx = F(b) - F(a)$.
1: Prove there is no rational number whose square is 12 (Rudin 1.2).
A: Such a number would satisfy $x^2 - 12 = 0$. By the rational root theorem, we can enumerate the possible rational solutions $x = \pm 1, \pm 2, \pm 3, \pm 4, \pm 6, \pm 12$. It can be verified that none of these satisfy the equation, so the equation has no rational solution, and so no rational number has square 12.
2. Why is $S = (0, \sqrt 2]$ open in $\mathbb Q$?
A: For every point $x \in S$ we can construct a ball (in $\mathbb{Q}$) entirely contained within $S$. This works because $\sqrt 2 \notin \mathbb Q$: every $x \in S$ satisfies $x < \sqrt 2$ strictly, leaving room for a ball on both sides.
3. Construct a bounded set of real numbers with exactly 3 limit points.
A: $\{1, 1/2, 1/3, 1/4, \dots\} \cup \{2, 1+ 1/2, 1 + 1/3, 1+1/4, \dots\} \cup \{3, 2 + 1/2, 2+ 1/3, 2 + 1/4, \dots\}$ with limit points $0$, $1$, $2$.
4. Why is the interior open?
A: It is defined to be the union of all open subsets of $E$, and unions of open sets are open.
5. Does convergence of $|s_n|$ imply that $s_n$ converges?
A: No. Consider $s_n = (-1)^n$, i.e. $-1, 1, -1, 1, \dots$. $|s_n|$ converges to $1$ but $s_n$ does not converge.
6. Rudin 4.1: Suppose $f$ is a real function which satisfies $$\lim_{h\to 0} [f(x+h)-f(x-h)] = 0$$ for all $x \in \mathbb{R}$. Is $f$ necessarily continuous?
A: No: a function with a removable discontinuity still passes the test; e.g. $f(x) = 0$ for $x \ne 0$ and $f(0) = 1$ satisfies the condition at every $x$, yet is not continuous at $0$. The point is that the symmetric difference $f(x+h) - f(x-h)$ never involves the value $f(x)$ itself.
7. What's an example of a continuous function with a discontinuous derivative?
A: Consider $f(x) = |x|$. The corner at $x= 0$ has different left and right-hand derivatives of $-1$ and $1$, respectively. This implies the derivative does not exist at $x=0$, and $f'$ (a step function on $x \ne 0$) has a type-1 discontinuity there.
8. What's an example of a derivative with a type-2 discontinuity?
A: An example would be $f(x) = x^2 \sin(1/x)$ with $f(0) := 0$. Here $f'(0) = 0$ (by the difference quotient), but away from zero $f'(x) = 2x\sin(1/x) - \cos(1/x)$, which has a type-2 discontinuity at $x=0$. (Source: https://math.stackexchange.com/questions/292275/discontinuous-derivative)
9: 3.3: Let $C$ be s.t. $|C| < 1$. Show $C^n \to 0$ as $n \to \infty$.
A: Assume WLOG $C$ is positive (this extends naturally to the negative case with a bit of finagling). The limit $L$ exists since the sequence is decreasing and bounded below. Using the recursion $C^{n+1} = C \cdot C^n$ we get $L = CL$, which implies $L = 0$.
10: Can differentiable functions converge uniformly to a non-differentiable function?
A: Yes! Consider $f_n(x) = \sqrt { x^2 + \frac{1}{n} }$. Each $f_n$ is clearly differentiable, and $f_n \to |x|$ uniformly (since $0 \le f_n(x) - |x| \le 1/\sqrt{n}$), but the limit has a non-differentiable "kink" at $0$.
11: 3.5: Let $S$ be a nonempty subset of $\mathbb{R}$ which is bounded above. If $s = \sup S$, show there exists a sequence $(x_n)$ in $S$ which converges to $s$.
A: Consider shrinking $\epsilon$ bounds. By definition, $[s - \epsilon, s]$ must contain a point of $S$; otherwise $s - \epsilon$ would be a smaller upper bound than the supremum! Thus we can build a sequence by starting with an epsilon bound of, say, $1$ and sampling a point within it, then shrinking the epsilon bound to $1/2$, $1/4$, $1/8$, etc.
12: Show that $f(x) = 0 $ if $x$ is rational and $1$ if $x$ is irrational has no limit anywhere.
A: Use the fact that both $\mathbb{Q}$ and its complement are dense in $\mathbb{R}$. Given an $\epsilon < 1/2$, no matter what $\delta$ you pick you will always get both rational and irrational numbers within the $\delta$ neighborhood, so the function takes both values $0$ and $1$ there, which exceeds the $\epsilon$ bound.
13: What exactly does it mean for a function to be convex?
A: A function is convex if $\forall x,y$ in its domain and $0 \le \alpha \le 1$ we have $$f(\alpha x + (1-\alpha) y) \le \alpha f(x) + (1-\alpha) f(y).$$
14. 5.2: Show that the value of $\delta$ in the definition of continuity is not unique.
A: Given some satisfactory $\delta > 0$, we can just choose $\delta / 2$. The set of inputs allowed by $\delta / 2$ is a subset of those allowed by the old $\delta$, so clearly all of them still satisfy $|f(x) - f(c)| < \epsilon$ as required for continuity.
15: Why is the wrong statement of the Fundamental Theorem of Calculus, as presented in lecture, flawed?
A: The reason is that the derivative of a function is not necessarily Riemann integrable.
16: What function is Stieltjes integrable but not Riemann integrable?
A: Take the function that equals the indicator of the rationals on $0\le x \le 1$ and $0$ elsewhere. This is obviously not Riemann integrable, but we can choose $\alpha$ to be constant on $[0,1]$ (i.e. assigning no weight to that part) to make it Stieltjes integrable. https://math.stackexchange.com/questions/385785/function-that-is-riemann-stieltjes-integrable-but-not-riemann-integrable
17: Why do continuous functions on a compact metric space $X$ achieve their $\sup$ and $\inf$?
A: $f(X)$ is compact, which implies that it contains its $\sup$ and $\inf$.
18. What's a counterexample to the converse of the Intermediate Value Theorem?
A: Consider the piecewise function $f(x) = x$ for $0 \le x \le 5$ and $f(x) = x - 5$ for $5 < x \le 10$. It is discontinuous at $x = 5$, yet it still attains every value between $f(0) = 0$ and $f(10) = 5$.
19: Sanity check: why is $f(x) = x^2$ continuous at $x = 3$?
A: $\lim_{x\to 3} x^2 = 9 = f(3)$ (Definition 3). Proving with Definition 1 is annoying but you can see the proof in the problem book.
20: Prove the corollary to the 2nd definition of continuity: $f: X \to Y$ is continuous iff $f^{-1}(C)$ is closed in $X$ for every closed set $C$ in $Y$.
A: A set is closed iff its complement is open. We can then use the fact that $f^{-1}(E^c) = [f^{-1}(E)]^c$ for every $E \subset Y$. (Basically, imagine the 2nd definition but with the complement of everything.)
21: What's a function that is not uniformly continuous but is continuous?
A: A simple example is $f(x) = x^2$ on $\mathbb{R}$, which is clearly continuous, but for a given $\epsilon$, finding a single value of $\delta$ that works at every point is impossible since the function gets arbitrarily steep.
22: If we restrict $x^2$ to the domain $[0,1]$, why does it become uniformly continuous?
A: This is because we restricted the domain to a compact set!
23: Give an example where $E'$ is a subset of $E$.
A: Consider the classic set $E = \{0\} \cup \{1, 1/2, 1/3, 1/4, \dots\}$. Then $E' = \{0\} \subset E$.
24: Give an example where it's a superset of $E$.
A: Consider $E = (0, 1)$ then $E' = [0, 1]$.
25: Give an example where it's neither.
A: Consider $E = \{1, 1/2, 1/3, 1/4, \dots \}$. $E' = \{0\}$. But this is neither subset nor superset of $E$.
26: Why is continuity defined at a point and uniform continuity defined on an interval?
A: Continuity allows you to pick a different $\delta$ for each point, which is why it makes sense at a single point. Uniform continuity requires an entire set to share the same value of $\delta$.
27: What is a smooth function? Give an example of one.
A: A smooth function is infinitely differentiable. All polynomials are smooth.
28: Prove differentiability implies continuity.
A: Differentiability implies $\lim_{t \to x} \frac{f(t) - f(x)}{t-x} = f'(x)$ exists. Now multiply by $(t-x)$ and use the product rule for limits: $\lim_{t\to x} \left[ \frac{f(t) -f(x)}{t-x}\cdot(t-x) \right] = f'(x) \cdot 0 = 0$, which means $\lim_{t\to x} f(t) = f(x)$.
29: Use L'Hospital's Rule to evaluate $\lim_{x\to \infty} \frac{x}{e^x}$.
A: We clearly see that the limits of the top and bottom go to infinity, so we can use L'Hospital's Rule. Taking derivative of top and bottom, we get $1/e^x$ which goes to $0$.
30: 5.1 Prove that if $c$ is an isolated point in $D$, then $f$ is automatically continuous at $c$.
A: Briefly, no matter what $\epsilon$ is chosen, we can choose a small enough neighborhood of $c$ that it contains no other point of $D$. Then every $x \in D$ in the neighborhood (i.e., just $c$) has $|f(x)-f(c)| = 0 < \epsilon$.