math104-s21:s:ryotainagaki:problems

Miscellaneous Problems/Questions for Math 104 Final Guide

Conceptual and concrete questions, things I didn't get the first time, and things that caught my eye:

Note: For any function $f$, $X$ denotes the metric space containing the domain of $f$ and $Y$ the metric space containing the range of $f$.

1. How exactly does continuity preserve compactness?

Short answer: A combination of the definition of compactness and the third definition of continuity.

Long answer: Here's the proof, where $S$ is a compact set in the domain of $f$. Let $\{A_i\}$ be an open cover of the set $f(S)$. We know that $f(S) \subseteq \cup A_i$ and therefore $S \subseteq \cup f^{-1}(A_i)$. By the third definition of continuity, each $f^{-1}(A_i)$ is open, so the $f^{-1}(A_i)$ form an open cover of $S$. Since $S$ is compact, we only need finitely many of the $f^{-1}(A_i)$ to cover $S$. Call the ones we pulled out $f^{-1}(A_1), …, f^{-1}(A_n)$.

Now, $S \subseteq f^{-1}(A_1) \cup … \cup f^{-1}(A_n)$. Apply $f$ and some set theory theorems to get that $f(S) \subseteq f(f^{-1}(A_1)) \cup … \cup f(f^{-1}(A_n)) \subseteq A_1 \cup … \cup A_n$. Recalling that $\{A_i\}$ was an arbitrary open cover of $f(S)$, we know that every open cover of $f(S)$ has a finite subcover. Thus, $f(S)$ is compact by the definition of compactness.

2. (From midterm 2) Is the set $(-2, 2) \bigcap \mathbb{Q}$ connected?

Answer: No. Consider that $[(-2, \sqrt{2})\cap \mathbb{Q}] \cup [(\sqrt{2}, 2)\cap \mathbb{Q}] = (-2, 2) \cap \mathbb{Q}$, since $\sqrt{2} \not\in \mathbb{Q}$. These two sets are nonempty, disjoint, and open in $(-2, 2) \cap \mathbb{Q}$, so by the definition of a disconnected set, $(-2, 2) \cap \mathbb{Q}$ is not a connected set.

3. (From midterm 2) Give an example (if there is one) of a subset of $\mathbb{Q}$ that is open with respect to the metric $d(x, y) = |x - y|$, but is nonetheless compact.

Answer: There is no nonempty set that is open in $\mathbb{Q}$ and compact. Suppose such a set $S$ exists. Since $S$ is compact, it is a closed, bounded subset of $\mathbb{R}$, so it MUST contain all of its limit points. But a nonempty set that is open in $\mathbb{Q}$ contains some interval $(a, b) \cap \mathbb{Q}$, and such a set has irrational limit points. An irrational limit point cannot belong to $S \subseteq \mathbb{Q}$, a contradiction.

4. Give an example of subset of $\mathbb{Q}$ that is an infinite set but is compact.

Answer: Consider the set $\{1\} \cup \{1 - \frac{1}{n}: n \in \mathbb{N}\}$ (note that every element is rational). It is a closed subset of $\mathbb{R}$ because its only limit point is $1$, which belongs to the set. It is also bounded, as all elements in the set have absolute value less than or equal to 1. Thus the set is compact in $\mathbb{R}$ and consequently compact in $\mathbb{Q}$.

5. Give an example of a sequence that is Cauchy in the metric space $(M, d)$ of all bounded functions $f: \mathbb{R} \to \mathbb{Q}$, where $d$ is defined by $$d(x, y) = \sup_{t \in \mathbb{R}}{|x(t) - y(t)|},$$ but is not convergent in that metric space.

Answer: Consider the sequence of constant functions $f_n(x) = (1 + \frac{1}{n})^n$ for all $x \in \mathbb{R}$. Now, we know $f_n \in M$ since $(1 + \frac{1}{n})^n$ is rational. We also know that $(1 + \frac{1}{n})^n \to e$ and therefore, by definition of convergence, $\forall \epsilon > 0, \exists N: \forall n > N, |(1 + \frac{1}{n})^n - e| < \epsilon$. Since $(1 + \frac{1}{n})^n$ is a convergent sequence in $\mathbb{R}$, it is Cauchy: $\forall \epsilon > 0, \exists N : \forall n, m > N, |(1 + \frac{1}{n})^{n} - (1 + \frac{1}{m})^{m}| < \epsilon$.

Now consider that $f_n(x) = (1 + \frac{1}{n})^n$ regardless of what $x$ is. Thus we can reuse the Cauchy condition from the previous paragraph together with the definition of the metric $d$: $\forall \epsilon > 0, \exists N : \forall n, m > N, d(f_n, f_m)= \sup_{x \in \mathbb{R}}|f_n(x) - f_m(x)| = |(1 + \frac{1}{n})^{n} - (1 + \frac{1}{m})^{m}| < \epsilon$. Thus $f_n$ is a Cauchy sequence in $M$.

However, go back to what $\lim_{n \to \infty} f_n = f$ would have to be: $f(x) = e$ for all $x$, and $e$ is NOT a rational number, which makes $f \not\in M$. Thus, $f_n$ is a Cauchy sequence in $M$ that does not converge to an element in $M$. This is a relatively simple example, though thoroughly explained, proving that $(M, d)$ is not a complete metric space.
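As a quick numerical sanity check (a floating-point sketch, not part of the proof), we can watch the constant values $(1 + \frac{1}{n})^n$ approach $e$ and the pairwise distances shrink:

```python
import math

# The constant functions f_n(x) = (1 + 1/n)^n take a single rational value,
# so d(f_n, f_m) = |(1 + 1/n)^n - (1 + 1/m)^m|.
def f_n_value(n: int) -> float:
    return (1 + 1 / n) ** n

# The values approach e, so pairwise distances shrink (the sequence looks Cauchy)...
print(abs(f_n_value(10**5) - math.e))                 # small
print(abs(f_n_value(10**5) - f_n_value(2 * 10**5)))   # small
# ...but the limit function is the constant e, which is irrational, so it lies outside M.
```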

6. Prove that $f(x, y) = \frac{x+y}{x^2 + y^2}$ is continuous at $(x, y) = (1, 1)$. (Spin-off of another professor's problems.)

This involves some clever inequalities and a fair amount of scratch work.

$$\left|\frac{x + y}{x^2 + y^2} - \frac{2}{2}\right| = \left|\frac{x+y-x^2-y^2}{x^2+y^2}\right| = \left|\frac{x(1- x) + y(1-y)}{x^2 + y^2}\right|$$

We intend to prove the continuity through the $\epsilon$-$\delta$ definition. So we want to find $\delta > 0$ such that $d(\langle x , y\rangle, \langle 1 , 1\rangle) < \delta$ implies $|f(x, y) - f(1, 1)| < \epsilon$.

From $d(\langle x, y\rangle, \langle 1, 1\rangle) < \delta$ (Euclidean metric), we can say that $|y-1| < \delta$ and $|x-1| < \delta$.

Using clever inequalities and upper bounds (and assuming our $\delta$ will be at most $\frac{1}{2}$): from $|x - 1| < \delta$ and $|y - 1| < \delta$ we get $|x|, |y| < 1 + \delta$, $|1-x|, |1-y| < \delta$, and $x, y > 1 - \delta > 0$, so $x^2 + y^2 > 2(1-\delta)^2$. Therefore

$$\left|\frac{x(1-x) + y(1-y)}{x^2 + y^2}\right| \leq \frac{(1+\delta)\delta + (1+\delta)\delta}{2(1-\delta)^2} = \frac{(1+\delta)\delta}{(1-\delta)^2}.$$

With $\delta \leq \frac{1}{2}$ we have $1 + \delta \leq \frac{3}{2}$ and $(1-\delta)^2 \geq \frac{1}{4}$, so the bound is at most $\frac{3/2}{1/4}\delta = 6\delta$. Thus taking $\delta = \min(\frac{1}{2}, \frac{\epsilon}{6})$ guarantees $|f(x, y) - f(1, 1)| \leq 6\delta \leq \epsilon$.

In other words, we know $\forall \epsilon > 0, \exists \delta > 0: \forall \langle x, y\rangle \in \mathbb{R}^2, (\sqrt{(x - 1)^2 + (y - 1)^2} < \delta) \to (|f(x, y) -f(1, 1)| < \epsilon)$.

Thus by the $\epsilon$-$\delta$ definition of continuity, we know that $f(x, y)$ is continuous at $(1, 1)$.
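As a numerical sanity check (a sketch, not a proof), we can sample points within $\delta$ of $(1, 1)$ and confirm that $f$ stays near $f(1, 1) = 1$; the grid of offsets below is an arbitrary choice:

```python
import itertools

# f(x, y) = (x + y) / (x^2 + y^2); f(1, 1) = 1.
def f(x, y):
    return (x + y) / (x**2 + y**2)

delta = 1e-3
worst = 0.0
# Sample a small grid of offsets whose Euclidean distance from (1, 1) is < delta.
for dx, dy in itertools.product([-7e-4, 0.0, 7e-4], repeat=2):
    if (dx**2 + dy**2) ** 0.5 < delta:
        worst = max(worst, abs(f(1 + dx, 1 + dy) - 1.0))
print(worst)  # on the order of delta, far below any reasonable epsilon
```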

7. Sanity check: prove that if $f_n \to f$ uniformly on $X$ and $f_n$ is continuous on $X$, then $f$ is continuous.

Consider the definition of continuity: $\forall x \in X$, $\forall \epsilon > 0, \exists \delta > 0: (x_1 \in B_{\delta}(x)) \to d_Y(f_n(x_1), f_n(x)) < \epsilon/3$. We fix $x$ and $\epsilon$ for the purposes of this problem.

Now consider from the fact that $f_n \to f$ uniformly that

  1. $\exists N > 0: \forall n > N, (x \in X) \to d_Y(f_n(x), f(x)) < \epsilon/3$.
  2. $\exists N_2 > 0: \forall n > N_2, (x_1 \in X) \to d_Y(f_n(x_1), f(x_1)) < \epsilon/3$.

Now, taking $n > \max(N, N_2)$, we combine the last two paragraphs to say that $\exists \delta > 0: \forall x_1 \in B_{\delta}(x), d_Y(f(x_1),f_n(x_1))+ d_Y(f_n(x_1), f_n(x)) + d_Y(f_n(x), f(x)) < \epsilon$. Now, using the triangle inequality, we no longer need to worry about $f_n$ and we get that

$$\exists \delta > 0: \forall x_1 \in B_{\delta}(x), d_Y(f(x_1), f(x)) < \epsilon$$.

Recalling that $x$ and $\epsilon$ were fixed arbitrarily (and thus can be any point of $X$ and any positive number), we conclude that $\forall x \in X, \forall \epsilon > 0, \exists \delta > 0: \forall x_1 \in B_{\delta}(x), d_Y(f(x_1), f(x)) < \epsilon$.

By definition of continuity, $f$ is continuous on $X$.
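A concrete illustration of the theorem (my own hypothetical example, not from class): $f_n(x) = \sqrt{x^2 + 1/n}$ is continuous and converges uniformly to $f(x) = |x|$, since $0 \leq \sqrt{x^2 + 1/n} - |x| \leq \sqrt{1/n}$ for every $x$; the limit $|x|$ is indeed continuous, as the theorem promises:

```python
import math

# f_n(x) = sqrt(x^2 + 1/n) is continuous; the gap to |x| is bounded by sqrt(1/n)
# uniformly in x, so convergence is uniform and the limit |x| must be continuous.
def f_n(x, n):
    return math.sqrt(x * x + 1.0 / n)

n = 10**6
grid = [i / 100 - 50 for i in range(10001)]          # sample points in [-50, 50]
sup_gap = max(f_n(x, n) - abs(x) for x in grid)      # largest observed gap
print(sup_gap <= math.sqrt(1.0 / n) + 1e-12)         # gap bounded uniformly in x
```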

8. (From Midterm 1 Question 7) For any $t \in \mathbb{R}$, show that the sequence $t_n = 2^{-n}\sum_{k=0}^{n}\lfloor 2^k t \rfloor$ is convergent, and find its limit.

Answer: We are given the hint that $$0 \leq x - \lfloor x \rfloor \leq 1$$. I modify that hint to obtain that $$x - 1 \leq \lfloor x \rfloor \leq x$$. We fix t.

This means that $$2^{-n}\sum_{k=0}^{n}(t2^k - 1) \leq t_n = 2^{-n}\sum_{k=0}^{n}\lfloor t2^k \rfloor \leq 2^{-n}\sum_{k=0}^{n}t2^k$$.

Then, $$2^{-n}t\sum_{k=0}^{n}(2^k) - 2^{-n}(n+1) = 2^{-n} t(2^{n+1}-1) - \frac{n+1}{2^n} \leq t_n = 2^{-n}\sum_{k=0}^{n}\lfloor t2^k \rfloor \leq 2^{-n}t\sum_{k=0}^{n}2^k = t2^{-n}(2^{n+1}-1)$$.

In other words, $$2t - \frac{t}{2^n} - \frac{n+1}{2^n} \leq t_n \leq 2t - \frac{t}{2^n}$$.

Apply the sandwich theorem: $t_n$ is squeezed between two convergent sequences, and since $\frac{t}{2^n} \to 0$ and $\frac{n+1}{2^n} \to 0$, both bounding sequences converge to $2t$.

Therefore, $t_n$ converges and $\lim t_n = 2t$. (And since $t$ was fixed arbitrarily early in this proof, this shows that $\lim t_n = 2t$ regardless of what $t$ is.)
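A numerical sanity check of the limit (floating point, so only a spot-check):

```python
import math

# t_n = 2^{-n} * sum_{k=0}^{n} floor(2^k t); the error bounds above say
# |t_n - 2t| shrinks roughly like (n + 2) / 2^n.
def t_n(t: float, n: int) -> float:
    return sum(math.floor((2**k) * t) for k in range(n + 1)) / 2**n

t = 0.7
print(abs(t_n(t, 40) - 2 * t))  # tiny: consistent with the limit 2t
```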

9. Does the Taylor series of a function always converge to the function?

Answer: Generally, no. Although this came up in class, I wanted to make it clearer through a simpler example (credit to a page by Thomas Vogel, Associate Professor Emeritus at Texas A&M University).

Consider the function $f(x) = |x|$ and its Taylor series about $x = 1$. We have $f(1) = 1$, $f'(1) = 1$, and $\forall n \in \{2, 3, 4, …\}, f^{(n)}(1) = 0$. Thus the Taylor series about $x = 1$ is $1 + 1\cdot(x-1) = x$. However, $f$ is obviously not equal to its Taylor series (for instance, at $x = -1$).

If you want to look a bit further, consider the idea of non-analytic, smooth functions. These include some of the fun examples from the April 13 lecture.
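A tiny computational illustration of the same point (the helper names are mine):

```python
# Taylor series of f(x) = |x| about x = 1 is T(x) = 1 + 1*(x - 1) = x.
# It converges everywhere, yet only agrees with f on [0, infinity).
def f(x):
    return abs(x)

def taylor_about_1(x):
    return x  # 1 + (x - 1); all higher-order terms vanish

print(f(2.0) == taylor_about_1(2.0))     # agree for x >= 0
print(f(-1.0) == taylor_about_1(-1.0))   # disagree: |-1| = 1 but T(-1) = -1
```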

10. Conceptual: What strategy is used in compactness proofs?

Answer: Start from an arbitrary open cover (don't assume it is finite) and show that it can be reduced to a finite subcover using some given, magic property defined in the prompt of the problem. Also, you can try to use the fun fact that a continuous map preserves compactness.

The rest seems a bit too difficult to recognize as a hard and fast pattern.

11. Problem 5 on midterm 2: given a continuous function $f: \mathbb{Q} \to \mathbb{R}$, is there always a way to extend the function to a continuous function $\mathbb{R} \to \mathbb{R}$?

Answer: No. Consider $f(x) = 1$ if $x$ is less than some irrational number $r$, and $f(x) = -1$ if $x$ is greater than $r$. Then $f$ is a continuous function $f: \mathbb{Q} \to \mathbb{R}$: we do not need to worry about what happens at $r$ since $r \not\in \mathbb{Q}$, and $f$ is constant (hence continuous) on each of $(-\infty, r) \cap \mathbb{Q}$ and $(r, \infty) \cap \mathbb{Q}$. Now suppose this function were somehow extended to be continuous on $\mathbb{R}$. A sequence $q_n$ consisting of rational numbers less than $r$ converging to $r$ would force $\lim_{n \to \infty}f(q_n) = 1$, while a sequence consisting of rational numbers greater than $r$ converging to $r$ would force the limit $-1$. In this case $\lim_{x \to r}f(x)$ would not exist, and hence the extension would not be continuous.

12. Really a Sanity check: Does closed and bounded always equate to compact?

Answer: No. Consider the solution to Problem 2 on Midterm 2.

13. (From Yu-Wei Fan's Practice Problems for the Final) Given $0 < a_1 < 1$ and $a_{n+1} = \cos(a_n)$, prove that $(a_n)$ converges.

Answer: First, $a_n \in [0, 1]$ for all $n$, by induction. The base case is already covered by what is given. In the inductive step, since $0 \leq a_n \leq 1 \leq \pi/2$, we know that $a_{n+1} = \cos(a_n) \in [\cos(1), 1] \subseteq [0, 1]$.

Note, however, that the sequence is NOT monotone in general (for example, $\cos(0.5) \approx 0.88 > 0.5$), so we cannot simply invoke the monotone convergence theorem. Instead, use the mean value theorem: for $x, y \in [0, 1]$ there is a $c$ between them with $|\cos(x) - \cos(y)| = |\sin(c)||x - y| \leq \sin(1)|x - y|$, and $\sin(1) < 1$. So $\cos$ is a contraction on $[0, 1]$.

Applying this repeatedly, $|a_{n+1+k} - a_{n+k}| \leq \sin(1)^k|a_{n+1} - a_n|$, and summing the resulting geometric series shows that $(a_n)$ is a Cauchy sequence. Since $\mathbb{R}$ is complete, $(a_n)$ MUST converge (to the unique fixed point of $\cos$ in $[0, 1]$).
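A numerical illustration: the iteration settles at the unique fixed point $a = \cos(a)$ (the so-called Dottie number, approximately $0.739085$), even though the iterates oscillate around it rather than decreasing monotonically:

```python
import math

# Iterate a_{n+1} = cos(a_n) from a_1 = 0.5; the contraction argument predicts
# convergence to the unique solution of a = cos(a).
a = 0.5
for _ in range(200):
    a = math.cos(a)
print(a)                      # approx 0.739085...
print(abs(a - math.cos(a)))   # essentially zero: a is (numerically) the fixed point
```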

14. Give an example of a well known set that is both open and closed.

Answer: The empty set is both open and closed with respect to $\mathbb{R}$ and the usual metric $d(x, y) = |x - y|$. We already know trivially from class that the empty set is open. It is also closed because its complement, $\mathbb{R}$, is an open set. Why? Around any real number I can draw an open interval (or more precisely an open ball), and that interval is still a subset of the set of real numbers.

15. Review: Prove VERY precisely that $P(x) = x^5 + x + 1$ has no rational roots.

Answer: By the rational zeros theorem, the only candidates for rational zeros of $P(x)$ are of the form $c/d$ where $c$ is an integer divisor of the constant term and $d$ is an integer divisor of the leading coefficient (the coefficient of $x^5$). Since both of those equal $1$, the only candidates for rational zeros of $P(x)$ are $1$ and $-1$.

Now we compute $P(1) = 1^5 + (1) + 1 = 3 \neq 0$ and $P(-1) = (-1)^5 + (-1) + 1 = -1 \neq 0$. In this case we know that all of the candidates for rational zeros of $P(x)$ are not zeros of $P(x)$.

Thus, we conclude that $P(x)$ has no rational zeros (its real roots are all irrational).
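A quick check of the candidate list with exact rational arithmetic:

```python
from fractions import Fraction

# Rational zeros theorem for P(x) = x^5 + x + 1: a rational root c/d in lowest
# terms must have c | 1 (constant term) and d | 1 (leading coefficient),
# so the only candidates are 1 and -1.
def P(x):
    return x**5 + x + 1

candidates = [Fraction(1), Fraction(-1)]
print([P(c) for c in candidates])  # neither value is 0, so no rational roots
```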

16. Prove that an infinite union of closed sets need not be a closed set.

Answer: Here is a counterexample to the statement that an arbitrary union of closed sets is always a closed set. Consider the set $S = \bigcup_{n=1}^{\infty}\{1/n\}$. Each $\{1/n\}$ individually is a closed set. However, $S$ has $0$ as a limit point: the sequence $x_n = \frac{1}{n}$ lies in $S$, satisfies $x_n \neq 0$, and approaches zero. Yet $0 \not\in S$. Thus $S$ is not closed.

17. (From Midterm 1) Find $\lim{(\frac{1}{2^n} + \frac{1}{3^n})^{\frac{1}{n}}}$

Consider that $(\frac{1}{2^n} + \frac{1}{3^n})^{\frac{1}{n}} = (\frac{1}{2^n}(1 + (\frac{2}{3})^n))^{\frac{1}{n}} = \frac{1}{2}(1 + (\frac{2}{3})^n)^{\frac{1}{n}}$.

Since $1 \leq 1 + (\frac{2}{3})^n \leq 2$, we can note that $$\frac{1}{2}(1)^{\frac{1}{n}} \leq (\frac{1}{2^n} + \frac{1}{3^n})^{\frac{1}{n}} \leq \frac{1}{2}(2)^{\frac{1}{n}}$$.

We are given that $\forall a > 0, \lim a^{\frac{1}{n}} = 1$. Use that and the sandwich theorem to obtain that, $$\lim \frac{1}{2}(1)^{\frac{1}{n}} = 1/2 \leq \lim{(\frac{1}{2^n} + \frac{1}{3^n})^{\frac{1}{n}}} \leq \lim \frac{1}{2}(2)^{\frac{1}{n}} = 1/2$$.

Therefore $\lim{(\frac{1}{2^n} + \frac{1}{3^n})^{\frac{1}{n}}} = \frac{1}{2}$.

This problem was included since I did not get this problem on the first try during the midterm 1 (which seemed hard for practically everyone).
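A numerical sanity check of the limit:

```python
# (2^{-n} + 3^{-n})^{1/n} should approach 1/2 as n grows.
def s(n: int) -> float:
    return (2.0**-n + 3.0**-n) ** (1.0 / n)

print(abs(s(200) - 0.5))  # tiny: the (2/3)^n correction term dies off quickly
```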

18. (Source: UC Berkeley Past Exam Archive from the Math Department). Prove that if $f$ is continuous and $\lim_{x \to \infty} f(x) = 0 = \lim_{x \to -\infty} f(x)$, then $f$ is uniformly continuous.

19. Prove that if $A$ is a connected subset of $X$ and $f: X \to Y$ is a continuous map, then $f(A)$ is connected.

Answer: This is quite an important proof, so I may add a detail or two not presented in the proof in class, while not going into as much detail as the Rudin book does.

Let's consider $h(x) = f(x)$ where $h: A \to f(A)$, and suppose $f(A)$ is not connected. We may want to recall some details about the induced topology: a set $S$ is open in $A$ iff $\exists T \subseteq X: A \cap T = S$ where $T$ is open in $X$. Recall that open and closed are relative to the metric space. Since $f(A)$ is not connected, there exist nonempty disjoint sets $S_1, S_2 \subseteq f(A)$, open in $f(A)$, such that $f(A) = S_1 \cup S_2$. Now, $$h^{-1}(f(A)) = A = h^{-1}(S_1) \cup h^{-1}(S_2),$$ and $h^{-1}(S_1), h^{-1}(S_2)$ are disjoint, nonempty, and open in $A$ (by the third definition of continuity we went over in class).

It is important to know why $h^{-1}(S_1)$ and $h^{-1}(S_2)$ are disjoint. Suppose they are not. Then there exists a point $x \in h^{-1}(S_1) \cap h^{-1}(S_2)$. Therefore, by the definition of the preimages, we know that $h(x) \in S_1$ and $h(x) \in S_2$. This would contradict how $S_1, S_2$ are disjoint.

In any case, because $A$ can be written as a union of two nonempty disjoint open sets, we can say that $A$ is NOT connected.

Thus we have shown, by proof by contrapositive, that if $A \subseteq X$ is a connected subset of $X$ and $f$ is a continuous map $X \to Y$, then $f(A)$ must be a connected subset of $Y$.

20. (From Dr. Ian Charlesworth's Fall 2020 midterm) Give an example of an infinite subset of $\mathbb{R}$ with no limit points. (I just found the result so important yet so simple…)

Answer: Here's an easy but important example: $\mathbb{N}$. The set of natural numbers is an infinite set (by definition, there are infinitely many natural numbers) that is sparse: the separation between any two distinct numbers in the set has a positive infimum, namely $1$. Suppose $\mathbb{N}$ had a limit point $p$. Then, by definition, there is an associated sequence $(p_n)$ in $\mathbb{N}$ such that $p_n \neq p$ and $p_n \to p$; that is, $\forall \epsilon > 0, \exists N > 0: \forall n > N, |p_n - p| < \epsilon$. Now ask what happens when $0 < \epsilon < 1/2$: since distinct natural numbers are at least $1$ apart, at most one natural number $k$ can satisfy $|k - p| < \epsilon$, so the tail of $(p_n)$ would have to be constantly equal to some $k \neq p$, which cannot converge to $p$. We have to conclude that no such limit point exists.

Because of this result, we can also conclude interesting facts such as how $\mathbb{N}$ is a non-open, closed set.

21. What is meant when a set is countable? How do I prove that a set is uncountable?

Answer: Great question, relevant to CS 70. Although it may not be directly relevant to Math 104, it may come up in later courses and shows up at the beginning of Rudin Chapter 2.

A set is countable if a) it has a finite number of elements, or b) there is a one-to-one correspondence between the set and the natural numbers (equivalently, the integers or the rationals, since those sets are themselves countable).

We can prove that a set is uncountable through the process of Cantor's diagonalization. In that process, we suppose (for contradiction) that there is an enumeration of all elements in the set. Then we change the $i$th bit, digit, element, or subcomponent of the $i$th object in the list. That way, we have created an element that is in the set but not in the enumeration, and hence proven that no enumeration can capture all of the elements. It's a neat proof by contradiction for those interested in set theory and CS theory.
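A toy illustration of the diagonal construction on a finite list (the strings below are arbitrary):

```python
# Flip the i-th bit of the i-th string to build a string that differs
# from every string in the "enumeration" in at least one position.
strings = ["0110", "1010", "0001", "1111"]

diagonal = "".join("1" if s[i] == "0" else "0" for i, s in enumerate(strings))
print(diagonal)             # differs from strings[i] in position i
print(diagonal in strings)  # never in the list, by construction
```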

22. Comments on Rolle's theorem problems? What are possible strategies for solving problems that require some creative use of Rolle's Theorem?

In Rolle's theorem problems, the task is often to show that there exists an $x \in (a, b)$ where a bizarre expression involving continuous functions and derivatives is $0$. For these problems: recall that exponential factors are very beneficial; remember the product and chain rules; and, if possible, change and combine functions so that you have a function $F$ such that the equation $F' = 0$ can be simplified into that bizarre expression. Also, try to find $x_1, x_2$ where $F(x_1) = F(x_2)$. Other than that, it is quite hard to identify any hard and fast tricks.

23. Prove that if $f$ is differentiable, then $f$ is continuous.

Answer: Suppose $f$ is differentiable over $\mathbb{R}$. For fixed $x_0 \in \mathbb{R}$, $\lim_{h \to x_0}\frac{f(x_0) -f(h)}{x_0 - h}$ exists and is a finite quantity.

We can use this fact to deduce that the difference quotient is bounded near $x_0$: $\exists M > 0, \exists \eta > 0$ such that $0 < |x_0 - h| < \eta$ implies $|\frac{f(x_0) -f(h)}{x_0 - h}| \leq M$.

Then $|f(x_0) - f(h)| \leq M|x_0 - h|$ for such $h$. Now, what kind of $\delta$ guarantees that $|f(x_0) - f(h)| \leq M|x_0 - h| < \epsilon$ where $h \in (x_0 - \delta, x_0 + \delta)$? Since $M|x_0 - h| < M\delta$ on that interval, we can take $\delta = \min(\eta, \epsilon / M)$, so that $M|x_0 - h| < M\delta \leq \epsilon$.

Thus we have found that $\forall \epsilon > 0, \exists \delta > 0: \forall h \in \mathbb{R}, (|x_0 - h| < \delta) \to (|f(x_0) -f(h)| < \epsilon)$. In other words, $f$ is continuous at $x_0$. Recall $x_0$ is a fixed real number.

In sum, we have proved that if $f$ is differentiable at every $x_0 \in \mathbb{R}$, then it is continuous at every $x_0 \in \mathbb{R}$.

24. Problem 10 on https://ywfan-math.github.io/104s21_final_practice.pdf

Answer: One of the easiest ways to go about this is to first calculate what $a_{n+1}^2$ ends up converging to.

Consider that $(a_{n+1})^2 = (\sqrt{a_n^2+\frac{1}{2^n}})^2 = a_n^2+\frac{1}{2^n}$.

Considering that $a_1=1$, I may as well show by induction that $a_n^2 = \sum_{i=0}^{n-1}2^{-i}$.

The base case is already fulfilled: $a_1^2 = \sum_{i=0}^{0}2^{-i} = 1$.

Now the inductive step: $a_{n+1}^2 = a_n^2 + \frac{1}{2^{n}} = \sum_{i=0}^{n-1}2^{-i} + \frac{1}{2^{n}} = \sum_{i=0}^{(n+1)-1}2^{-i}$.

Because $a_n^2 = \sum_{i=0}^{n-1}2^{-i} = 2 - 2^{1-n}$, we know that $(a_{n})^2 \to 2$ as $n \to \infty$. This makes $a_{n}^2$ bounded, and consequently $a_{n}$ is bounded.

Furthermore, we know that $a_{n+1} \geq a_{n}$ because $a_{n+1}^2 = a_n^2 + \frac{1}{2^n} \geq a_n^2$ and all the terms are nonnegative.

Since $a_n$ is bounded and monotonically increasing, we know that $a_n$ is convergent.
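A numerical check of this answer (writing the recurrence as $a_{n+1} = \sqrt{a_n^2 + 1/2^n}$, matching the computation above, so that $a_n^2 = 2 - 2^{1-n}$ and $a_n \to \sqrt{2}$):

```python
import math

# Iterate a_{n+1} = sqrt(a_n^2 + 1/2^n) starting from a_1 = 1;
# the closed form a_n^2 = 2 - 2^{1-n} predicts a_n -> sqrt(2).
a = 1.0
for n in range(1, 60):
    a = math.sqrt(a * a + 2.0**-n)
print(abs(a - math.sqrt(2)))  # tiny
```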

25. What is one strategy of proving that a metric space $M$ is complete?

Answer: Oftentimes, there is going to be another metric space $M_2$ in play, possibly the domain of the elements in the set or a superspace of the space of interest. In that case, we can relate the metric of $M$ to the metric of $M_2$, and prove that a Cauchy sequence of elements of $M$ converges using what we know about $M_2$. (It's kind of hard to word, but [[https://ywfan-math.github.io/104s21_final_practice.pdf|try Problem 4 in this set of problems to see what I mean]].) Then try to prove that the value to which the sequence converged actually satisfies the inherent properties of an element in $M$.

26. Why is $d(x, y) = (x - y)^2$ not a metric of $\mathbb{R}$?

Answer: Consider $x = 0, y = 1/3, z = 1$. We know that if $d$ is a metric of $\mathbb{R}$, then $d(x, z) \leq d(x, y) + d(y, z)$ by the triangle inequality. However, note that $d(x, z) = (1-0)^2=1$. That is greater than $d(x, y) + d(y, z) = (1/3 -0)^2 + (1 - 1/3)^2 = 1/9 + 4/9 = 5/9$. Thus the third axiom of metric spaces is violated.
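We can verify this counterexample with exact rational arithmetic:

```python
from fractions import Fraction

# d(x, y) = (x - y)^2 breaks the triangle inequality at x = 0, y = 1/3, z = 1.
def d(x, y):
    return (x - y) ** 2

x, y, z = 0, Fraction(1, 3), 1
print(d(x, z))             # 1
print(d(x, y) + d(y, z))   # 1/9 + 4/9 = 5/9 < 1
```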

27. (From Ross) Why is $d(x, y) = \sum_{i=1}^{\infty}|x_i - y_i|$ not a metric on $S_{\mathbb{R}}$, the set of all bounded real sequences?

Answer: Suppose that $x_n = 1$ and $y_n = 0$ for all $n$ (both bounded sequences). Then we end up with $d(x, y) = \sum_{i=1}^{\infty}|1 - 0| = \infty$. This should not happen, since a metric is supposed to always return a (finite) nonnegative real number, and infinity is NOT a real number.

28. Prove that for any nonempty bounded subset $S$ of reals, I can create a monotonically increasing sequence that converges to $\sup S$. (Something I did not get when I did HW 4.)

Answer: Consider the following construction. If $\sup S \in S$, take the constant sequence $s_n = \sup S$. Otherwise, pick each $s_n$ to be an element $x \in S$ such that $\sup S - \frac{1}{n} < x < \sup S$; such an $x$ is guaranteed to exist for each $n$ since $\sup S$ is the minimum upper bound of $S$. To make the sequence monotonically increasing, additionally require $s_{n+1} \geq s_n$; this is possible because $s_n < \sup S$, so elements of $S$ exist above $\max(s_n, \sup S - \frac{1}{n+1})$.

We know that $s_n$ converges to $\sup S$ by the sandwich theorem. We know $\sup S - \frac{1}{n} \leq s_n \leq \sup S$. Taking limits we get $\lim (\sup S - \frac{1}{n}) = \sup S \leq \lim s_n \leq \sup S$ and therefore $\lim s_n = \sup S$.

29. Sanity check: Given a bounded sequence $(s_n)$, prove that $S_N = \sup\{s_n: n \geq N\}$ is a monotonically decreasing sequence.

Answer: Suppose not: $\exists m \in \mathbb{N}: S_m < S_{m+1}$, where $S_m = \sup\{s_n: n \geq m \}$ and $S_{m+1} = \sup\{s_n: n \geq m+1 \}$. Then, by the definition of the supremum of a set, we can find an element $x \in \{s_n: n \geq m+1 \}$ such that $\sup\{s_n: n \geq m \} < x \leq \sup\{s_n: n \geq m+1 \}$. But $x \in \{s_n: n \geq m \}$ because $\{s_n: n \geq m+1 \} \subset \{s_n: n \geq m \}$. And since $x > \sup\{s_n: n \geq m \}$, the quantity $\sup\{s_n: n \geq m \}$ would fail to be an upper bound of $\{s_n: n \geq m \}$, which contradicts the definition of supremum.

30.(From Dr. Ian Charlesworth's Fall 2020 Final) Suppose that $f: \mathbb{R} \to \mathbb{R}$ and that f is differentiable. Now consider $g(x) = f(|x|)$. Show that $g$ is differentiable at 0 iff $f'(0) = 0$. * Consider for this problem $abs(x) = |x|$. Answer:

First I will show that if $g$ is differentiable at 0, then $f'(0) = 0$. Suppose that $g$ is differentiable at 0 but $f'(0) \neq 0$. Consider the right-hand derivative of $g$: applying the chain rule with $g(x) = f(abs(x))$, $g'_{+}(0) = abs'_{+}(0)f'(0) = f'(0)$. Consider the left-hand derivative of $g$: $g'_{-}(0) = abs'_{-}(0)f'(0) = -f'(0)$. Because $f'(0) \neq 0$, we get $g'_{-}(0) \neq g'_{+}(0)$, so $g$ is not differentiable at 0, a contradiction.

Now I show that $f'(0) = 0$ would mean that $g$ is differentiable at $x = 0$. Consider $g'_{+}(0) = \lim_{h \to 0^+}\frac{f(|h|) -f(|0|)}{h - 0}$. Since we are considering a right-hand limit, $h > 0$, so this equals $\lim_{h \to 0^+}\frac{f(h) -f(0)}{h - 0}$. This is a quantity we already know: it is equal to $f'(0) = 0$. Therefore $g'_{+}(0) = 0$. Now consider the left-hand derivative, keeping in mind that our $h$ is going to be negative by definition of the left-hand derivative being a left-hand limit: $$g'_{-}(0) = \lim_{h \to 0^-}\frac{f(|h|) -f(0)}{h - 0} = \lim_{h \to 0^-}\frac{f(-h) - f(0)}{h - 0}.$$ Since $\lim_{h \to 0^+}\frac{f(h) - f(0)}{h - 0} = 0$, we know $\forall \epsilon > 0, \exists \delta > 0: (h \in (0, \delta)) \to (|\frac{f(h) - f(0)}{h} - 0| < \epsilon)$.

Consider a variable change $h_2 = -h$. That way $$g'_{-}(0) = \lim_{h \to 0^-}\frac{f(-h) - f(0)}{h - 0} = \lim_{h_2 \to 0^+}\frac{f(h_2) - f(0)}{-h_2 - 0}.$$ Recall $$\forall \epsilon > 0, \exists \delta > 0: (h_2 \in (0, \delta)) \to \left(\left|\frac{f(h_2) - f(0)}{h_2} - 0\right| < \epsilon\right).$$ Since $|\frac{f(h_2) -f(0)}{-h_2} - 0| = |\frac{f(h_2) - f(0)}{h_2} - 0|$, the same statement holds with $-h_2$ in the denominator. By the definition of limit, $\lim_{h_2 \to 0^+}\frac{f(h_2) - f(0)}{-h_2} = 0$, and therefore $$g'_{-}(0) = \lim_{h \to 0^-}\frac{f(|h|) -f(0)}{h - 0} = 0.$$

Because we found out that $g'_{-}(0) = g'_{+}(0) = 0$, we can say that $g$ is differentiable at $x = 0$.
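A numerical illustration of both directions (my own toy choices of $f$):

```python
# One-sided difference quotients of g(x) = f(|x|) at 0, for two choices of f.
def one_sided_quotients(f, h=1e-7):
    g = lambda x: f(abs(x))
    right = (g(h) - g(0.0)) / h
    left = (g(-h) - g(0.0)) / (-h)
    return left, right

# f(x) = x^2 has f'(0) = 0: both one-sided quotients are near 0, so g is differentiable.
print(one_sided_quotients(lambda x: x * x))
# f(x) = x has f'(0) = 1: the quotients are -1 and +1, so g(x) = |x| is not
# differentiable at 0.
print(one_sided_quotients(lambda x: x))
```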

31. Sanity check: Create (if possible) a metric space M and a sequence such that the sequence converges to a point in metric space M but is not a Cauchy Sequence.

Answer: Such a sequence does not exist, since a convergent sequence is always a Cauchy sequence. See Lemma 10.9 from Ross.

32. (Credits to Midterm 2) In less than 3 sentences and in less than 3 minutes, formally prove or disprove: given $A \subseteq B$ is bounded and f is a continuous map $f: B \to Y$, then $f(A)$ is bounded.

Answer: On the spot in a timed exam, this may seem like a hard, deceptive problem. The given statement is false. To show that, consider the metric space $((0, 1), d(x, y) = |x - y|)$, the subset $U = (0, 1)$, and $f(x) = \ln(x)$. We know that $U$ is bounded, but $f(U) = (-\infty, 0)$ by the properties of the natural log, and that is NOT bounded. Thus, just because $A$ is bounded doesn't mean that $f(A)$ is bounded.

33. Suppose that $(a_n)_n$ is a sequence in a metric space $(M, d)$ which converges to a limit $a \in M$. Prove that $K = \{a_n \mid n \in \mathbb{N}\} \cup \{a\}$ is compact.

Answer: Firstly, consider an arbitrary open cover $\{C_{\alpha}\}$ of $K$. Since $\{C_{\alpha}\}$ is an open cover of $K$, there exists an open set $S$ in the open cover that contains $a$. Since $S$ contains $a$ and $S$ is open, we know that $\exists r > 0: \{x \in M: d(x, a) < r\} \subseteq S$. Because $a_n \to a$, we know that $\exists N \in \mathbb{N}: \forall n > N, d(a_n, a) < r$. This means that $\{a_n: n > N\} \subseteq \{x \in M: d(x, a) < r\} \subseteq S$. Now consider the first $N$ points of $a_n$: to cover the finite set $\{a_n: n \in \{1, 2, … , N\}\}$, one needs at most $N$ sets from $\{C_{\alpha}\}$. Therefore, there exists a finite number of sets that cover the union of a) $\{a\}\cup \{a_n: n > N\}$ (the single set $S$ suffices) and b) the finite set $\{a_n: n \in \{1, … , N\}\}$. Thus, there exists a finite subcover of $\{C_{\alpha}\}$. By definition of compactness, $K$ is compact.

34. Problem 5 on Practice Midterm on https://ywfan-math.github.io/104s21_midterm2_practice.pdf

Answer: The key idea for this problem is to use that the function of interest utilizes the distance function.

I will do some scratch work…

Now, let $x, y \in X$. (Note I did not confine $x$ and $y$ to an open interval centered at some fixed $x_0$; the estimate below is global, which is exactly what uniform continuity requires.)

Consider that $|f(x) - f(y)| = |\inf\{d(x, z): z \in X\} - \inf\{d(y, z): z \in X\}|$. WLOG, let $x$ be such that $$\inf\{d(x, z): z \in X\} - \inf\{d(y, z): z \in X\} \geq 0,$$ so the absolute value can be dropped.

For any $z \in X$, the triangle inequality gives $d(x, z) \leq d(x, y) + d(y, z)$. Taking the infimum over $z$ on both sides (with $d(x, y)$ held constant), $$\inf\{d(x, z): z \in X\} \leq d(x, y) + \inf\{d(y, z): z \in X\}.$$

Rearranging, and recalling the WLOG choice above, $$|f(x) - f(y)| = \inf\{d(x, z): z \in X\} - \inf\{d(y, z): z \in X\} \leq d(x, y).$$ (C)

Now, what happens when $d(x, y) < \delta = \epsilon$, where $\epsilon$ is a fixed positive real? In that case, inequality (C) yields $|f(x) - f(y)| < \epsilon$.

Thus, we have found that for any $\epsilon > 0$ there exists $\delta > 0$ (namely $\delta = \epsilon$) such that for $x, y \in X$, $(d(x, y) < \delta) \to (|f(x) - f(y)| < \epsilon)$.

By definition of uniform continuity, we have found that $f$ is uniformly continuous on $X$. QED.
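A numerical spot-check of the key Lipschitz inequality $|f(x) - f(y)| \leq d(x, y)$, using a hypothetical finite set $A$ in place of the set from the problem:

```python
import random

# Distance-to-a-set function f(x) = inf{ d(x, z) : z in A } is 1-Lipschitz.
# A here is an arbitrary finite sample set; the inequality holds for any A.
A = [0.0, 2.5, 7.1]

def f(x):
    return min(abs(x - z) for z in A)

random.seed(0)
pairs = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1000)]
ok = all(abs(f(x) - f(y)) <= abs(x - y) + 1e-12 for x, y in pairs)
print(ok)  # the Lipschitz bound holds at every sampled pair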

35. (From Dr. Ian Charlesworth's Fall 2020 final) Suppose that $F \subseteq \mathbb{R}^2$. If it is the case that $\forall R > 0$, $\{(x, y) \in F: x^2 + y^2 \leq R^2\}$ is compact, show that $F$ is closed.

Answer: Suppose that $\forall R > 0, \{(x, y) \in F: x^2 + y^2 \leq R^2\}$ is compact but $F$ is not closed. Since $F$ is not closed, there exists a limit point $p$ of $F$ such that $p \not\in F$. Since $p$ is a limit point of $F$, let $p_n$ be a sequence of points of $F$ that converges to $p$. Since $p_n$ converges, it is bounded. Because the hypothesis holds for every $R > 0$ and $p_n$ is bounded, we can make $R$ big enough that all the points of $p_n$ are included in $S = \{(x, y) \in F: x^2 + y^2 \leq R^2\}$ (and $p$ lies within radius $R$ of the origin too, since $p_n \to p$). Now $p_n$ is a sequence of points in $S$ that does not converge to a point in $S$, because $p \not\in F$ and thus $p \not\in S$. Thus $S$ is NOT closed, and hence $S$ is not compact. A contradiction is reached, since the problem statement claims that $\forall R > 0, \{(x, y) \in F: x^2 + y^2 \leq R^2\}$ is compact.

Thus we have shown that if $F \subseteq \mathbb{R}^2$ and $\forall R > 0$ the set $\{(x, y) \in F: x^2 + y^2 \leq R^2\}$ is compact, then $F$ is closed.

36. (From Dr. Ian Charlesworth's Midterm for Fall 2020) A metric space $(M, d)$ is said to have property TB if for every $r > 0$ there is a finite list of points $x_1, \ldots, x_n \in M$ so that for any $y \in M$, we have $d(x_i, y) < r$ for at least one $i$. Show that if a set $M$ is compact, then $M$ has property TB.

Answer: Fix $r > 0$. For each $x \in M$, consider the open ball $B_r(x)$ centered at $x$. The collection $\{B_r(x): x \in M\}$ is an open cover of $M$, since every $y \in M$ lies in its own ball $B_r(y)$. Since $M$ is compact, finitely many of these balls suffice: there exist $x_1, \ldots, x_n \in M$ such that $M \subseteq B_r(x_1) \cup \ldots \cup B_r(x_n)$. Now, given any $y \in M$, we know $y \in B_r(x_i)$ for at least one $i$, and by the definition of the open ball this means $d(x_i, y) < r$. Thus we have produced a finite list of points $x_1, \ldots, x_n \in M$ such that $\forall y \in M, \exists i: d(x_i, y) < r$.

Thus, by the definition of property TB, if $M$ is compact then $M$ has property TB.
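The proof above is non-constructive (compactness hands us the finite subcover), but the greedy procedure below illustrates property TB concretely on a bounded sample in $\mathbb{R}^2$; the sample points and the radius $r$ are arbitrary choices:

```python
import random

# Illustration of property TB (total boundedness) on a bounded sample in R^2:
# greedily pick centers until every sample point is within r of some center.
random.seed(0)
r = 0.25
points = [(random.random(), random.random()) for _ in range(500)]  # in [0,1]^2

def dist(p, q):
    return ((p[0] - q[0])**2 + (p[1] - q[1])**2) ** 0.5

centers = []
for p in points:
    if all(dist(p, c) >= r for c in centers):
        centers.append(p)  # p is not yet r-close to any chosen center

# every point is now within r of some center, and there are finitely many centers
print(len(centers))
```

The loop terminates with few centers because the chosen centers are pairwise at least $r$ apart and the sample is bounded, mirroring why a compact set admits a finite $r$-net.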

37. (From Ian Charlesworth's fall 2020 Final) A check on definitions: Show that $S = \mathbb{R}^2 \setminus \{0\}$ is a connected set.

Answer: I don't think we have encountered too much on connectedness, so I believe this should be used as a teaching moment. Without getting too much into the details, suppose for contradiction that $S$ is not connected. I must show that if $S = U \cup V$ where $U$ and $V$ are disjoint open sets, then one of $U$ and $V$ must be the empty set. Suppose not: consider open, nonempty, disjoint $U, V$ such that $S = U \cup V$, so every element of $S$ is in exactly one of $U$ or $V$. Now consider the boundary between $U$ and $V$ in $S$; such a boundary exists since $U$ and $V$ are nonempty. Since $U$ and $V$ are open in $S$, neither contains its boundary points. Let $x$ be a boundary point lying in $S$. Then $x$ is in neither $U$ nor $V$ but is still in $S$, contradicting $S = U \cup V$. (This argument is informal; the usual rigorous route is to show $S$ is path-connected, since any two points of $\mathbb{R}^2 \setminus \{0\}$ can be joined by a path avoiding the origin, and path-connected sets are connected.)

Therefore, by proof by contradiction, $\mathbb{R}^2 \setminus \{0\}$ is a connected set.

38. Prove from ONLY first principles and definitions that $f_n(x) = \frac{1}{nx}$ converges uniformly on $(1, \infty)$, where (for the purposes of this problem) each $f_n$ is defined on $(1, \infty)$.

Answer: Consider that pointwise $\lim_{n \to \infty}f_n(x) = \lim_{n \to \infty} \frac{1}{nx} = 0 = f(x)$ on $(1, \infty)$. Now also note that $|f_n(x) - 0| = |\frac{1}{nx}| < \frac{1}{n}$ for any $x > 1$. Now we can ask ourselves whether the following holds: $\forall \epsilon > 0, \exists N > 0: \forall n> N, (|\frac{1}{n} - 0| < \epsilon)$. Yes, it holds: let $N$ be any number greater than $\frac{1}{\epsilon}$.

Ultimately we can say that $\forall \epsilon > 0, \exists N > 0: \forall n> N, (x \in (1, \infty)) \to (|f_n(x) - 0| < \epsilon)$, with $N$ independent of $x$. Thus $f_n \to f$ uniformly by the definition of uniform convergence.
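The point of the proof is that the bound $\frac{1}{n}$ does not depend on $x$; this can be spot-checked numerically (the grid of $x$ values and $\epsilon = 10^{-3}$ are arbitrary choices):

```python
# For f_n(x) = 1/(n x) on (1, inf), sup over x > 1 of |f_n(x)| is at most 1/n,
# so any N > 1/eps works uniformly in x. Spot-check on a grid of x values.
eps = 1e-3
N = int(1 / eps) + 1                         # N > 1/eps
xs = [1 + k * 0.1 for k in range(1, 1000)]   # sample points in (1, 100.9]
worst = max(1 / ((N + 1) * x) for x in xs)   # f_{N+1} evaluated on the grid
print(worst < eps)  # True: the bound holds uniformly on the sampled grid
```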

39. (From Yu-Wei Fan's Practice Problems) Let f and g be continuous functions on [a, b] that are differentiable on (a, b) . Suppose that f(a) = f(b) = 0 . Prove that there exists $x \in (a, b)$ such that $g'(x)f(x) + f'(x) = 0 $.

Answer: This one took me a lot of time to figure out, since it is kind of hard to arrive at a derivative that looks like $g'(x)f(x) + f'(x)$. It turned out that I needed to use an exponential function.

Consider the function $h(x) = f(x)e^{g(x)}$. Since $f, g$ are continuous on $[a, b]$ and $e^x$ is continuous on $\mathbb{R}$ we know that $h(x)$ is also continuous (A). Furthermore, since $f, g$ are differentiable on $(a, b)$ and $e^x$ is differentiable on $\mathbb{R}$, we know that $h(x) = f(x)e^{g(x)}$ is differentiable on $(a, b)$ (B).

Consider that $h(a) = f(a)e^{g(a)} = 0$ and $h(b) = f(b)e^{g(b)} = 0$ (C).

By (A), (B), and (C), Rolle's theorem tells us that $\exists x \in (a, b): h'(x) = 0$.

From the previous, we can obtain that $\exists x \in (a, b): f'(x)e^{g(x)}+f(x)g'(x)e^{g(x)} = 0$.

Since $e^x > 0$ for all real $x$, we can divide both sides of the previous equation by $e^{g(x)}$ to get that $\exists x \in (a, b): f'(x)+f(x)g'(x) = 0$.

And there it was. Finding out that there was an exponential involved was a bit of a pain in the butt.
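To see the result in action, here is a numerical illustration with the sample choices $f(x) = \sin(\pi x)$ and $g(x) = x^2$ on $[0, 1]$ (so $f(0) = f(1) = 0$): the function $F(x) = f'(x) + f(x)g'(x)$ must vanish somewhere in $(0, 1)$, which we detect as a sign change.

```python
import math

# Sample instance of problem 39: f(x) = sin(pi x), g(x) = x^2 on [0, 1],
# so f(0) = f(1) = 0. The claim is that F(x) = f'(x) + f(x) g'(x)
# vanishes somewhere in (0, 1); we locate a sign change numerically.
F = lambda x: math.pi * math.cos(math.pi * x) + math.sin(math.pi * x) * 2 * x

xs = [i / 1000 for i in range(1, 1000)]
sign_change = any(F(a) * F(b) <= 0 for a, b in zip(xs, xs[1:]))
print(sign_change)  # True: F changes sign, so a root exists in (0, 1)
```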

40. What is sequential compactness? Why is it useful?

Answer: A set $K$ is sequentially compact iff for every sequence of points $(p_n)$ in $K$, there exists a subsequence that converges to a point in $K$.

We CAN prove that sequential compactness is equivalent to compactness (for subsets of metric spaces); you can take that for granted, or I may leave it as an exercise.

We can say that if $K$ is compact (say $K \subseteq \mathbb{R}^k$), then it is sequentially compact, since $K$ is closed and bounded. Because $K$ is bounded, any sequence in $K$ has a convergent subsequence (Bolzano-Weierstrass). Such subsequences MUST converge to a point in $K$ because $K$ is closed.

An interesting proof of the other direction of the proof is given in this page.

Using the equivalence of sequential compactness and compactness, we can prove compactness of sets such as intersections more easily through this alternate characterization.

41. Prove that the intersection of finitely many compact sets $K_1, K_2, \ldots, K_n$, call it $K = \bigcap_{i=1}^{n} K_i$, is still a compact set.

Answer: We know that since $K_1, K_2, \ldots, K_n$ are all compact, they are all bounded, and hence $K$ is bounded. Thus every sequence of points in $K$ has a convergent subsequence (Bolzano-Weierstrass). Now, since $K$ is closed (because each $K_i$ is closed and the finite intersection $\bigcap_{i=1}^{n}K_i$ of closed sets is therefore closed), any such convergent subsequence must converge to a point in $K$. Therefore, for every sequence of points in $K$, there exists a subsequence that converges to a point in $K$. By the definition of sequential compactness, $K$ is sequentially compact and is therefore compact.

42. (From Dr. Ian Charlesworth's Fall 2020 Final): Create an open cover of the open ball $B_5(0, 1) = \{ (x, y) \in \mathbb{R}^2: x^2 + (y - 1)^2 < 5^2\}$ that does not have a finite subcover. (The inequality here must be strict: with $\leq$ the set would be closed and bounded, hence compact, and every open cover would have a finite subcover.)

Answer: The purpose of this problem is more illustrative than challenging: it exemplifies, in a visualizable manner, the idea of a finite subcover. Consider the open cover $\{C\}$ consisting of the nested open balls $C_{n} = \{(x, y) \in \mathbb{R}^2: x^2 + (y-1)^2 < (5 - \frac{5}{n+1})^2\}$ for $n = 1, 2, 3, \ldots$. Their union is all of $B_5(0, 1)$, since the radii $5 - \frac{5}{n+1}$ increase to $5$. For this problem, I took some inspiration from the bad open cover $\{(0, 1 - \frac{1}{n+1})\}$ of $(0, 1)$.

One way to prove that $\{C\}$ has no finite subcover is to show that the union of any finite subcollection of $\{C\}$ misses points of $B_5(0, 1)$.

Now, suppose that $\{C\}$ does have a finite subcover, say $C_{n_1}, \ldots, C_{n_m}$ with (without loss of generality, to make this more manageable) $n_1 < \ldots < n_m$. Since the $C_n$ are nested, $C_{n_1} \cup \ldots \cup C_{n_m} = C_{n_m}$. Now consider the points in $\{(x, y) \in \mathbb{R}^2: (5 - \frac{5}{n_m+1})^2 \leq x^2 + (y-1)^2 < 5^2\}$; these points belong to $B_5(0, 1)$ but not to $C_{n_m}$. Therefore, the finite subcollection does not contain all points of $B_5(0, 1)$. Contradiction has been reached.

Therefore, I have given an example of an open cover of $B_5(0, 1)$ that does NOT have a finite subcover. This also shows us that the open ball $B_5(0, 1)$ is not compact.

43. Prove that the function $f(x) = x^2 + 4$ is integrable on $[0, 2]$ with respect to $\alpha(x) = \lfloor 2^x \rfloor$. Then find the integral.

Answer: We can prove that $f(x)$ is integrable with respect to $\alpha$ on $[0, 2]$ since $f$, being a polynomial, is continuous on $[0, 2]$ and $\alpha$ is monotonically increasing. (Refer to Theorem 6.8.)

Things get interesting when we try to evaluate the integral. Consider the upper and lower sums for an arbitrary partition $P$ of $[0, 2]$. Because $\alpha$ is a step function, the only subintervals $[x_{i-1}, x_i]$ that matter are those on which $\alpha$ jumps, i.e., those containing a point where $2^x$ passes an integer value; there $\Delta\alpha_i \geq 1$, and elsewhere $\Delta\alpha_i = 0$. On $[0, 2]$ the points where $2^x$ is an integer are $0, 1, \log_2(3), 2$, but the jump of $\alpha$ at $x = 0$ happens from the left of $0$ (we have $\alpha(0) = 1$ and $\alpha \equiv 1$ on $[0, 1)$), so it lies outside the interval and contributes nothing. The jump points that matter are $1, \log_2(3), 2$.

As we make our partitions fine (so that each of $1, \log_2(3), 2$ lies in its own small subinterval), we get $U(f, P, \alpha) \to f(1) + f(\log_2(3)) + f(2)$ and $L(f, P, \alpha) \to f(1) + f(\log_2(3)) + f(2)$.

Recalling that $L(f, P, \alpha) \leq L(f, \alpha) \leq U(f, \alpha) \leq U(f, P, \alpha)$, we conclude that $L(f, \alpha) = U(f, \alpha) = f(1) + f(\log_2(3)) + f(2)$ and therefore $$\int_{x=0}^{x=2}f \, d\alpha = f(1) + f(\log_2(3)) + f(2) = 5 + \left((\log_2 3)^2 + 4\right) + 8 = 17 + (\log_2 3)^2.$$
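A quick numerical sanity check (not a proof): evaluate a Riemann-Stieltjes sum over a fine uniform partition of $[0, 2]$ with midpoint tags. Because $\alpha(0) = 1$ already on $[0, 2]$, only the jumps at $1$, $\log_2 3$, and $2$ contribute, giving $17 + (\log_2 3)^2 \approx 19.51$. The partition size is an arbitrary choice.

```python
import math

# Numerically evaluate the Riemann-Stieltjes integral of f(x) = x^2 + 4
# on [0, 2] against alpha(x) = floor(2^x), via a fine uniform partition.
f = lambda x: x**2 + 4
alpha = lambda x: math.floor(2**x)

N = 200_000
xs = [2 * i / N for i in range(N + 1)]
# Riemann-Stieltjes sum with midpoint tags: sum f(midpoint) * delta(alpha)
S = sum(f((xs[i-1] + xs[i]) / 2) * (alpha(xs[i]) - alpha(xs[i-1]))
        for i in range(1, N + 1))

expected = 17 + math.log2(3)**2
print(S, expected)  # both approximately 19.51
```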

44. Does $f_N(x) = \sum_{n=0}^{N} \frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}$ converge uniformly on $(1, \infty)$? Support your claim with a proof.

Answer: Consider that $\forall x \in (1, \infty)$, $\left|\frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}\right| \leq \frac{1}{n^2 + 1} \cdot 1 = \frac{1}{n^2 + 1}$, since $x > 1$ gives $x^2n^2 + 1 \geq n^2 + 1$ and $\sqrt{1/x} < 1$.

We know that $\sum_{n=0}^{\infty}\frac{1}{n^2+1}$ converges by comparison with the p-series $\sum \frac{1}{n^2}$. Thus, by the Weierstrass M-test, $f_N(x) = \sum_{n=0}^{N} \frac{1}{x^2n^2 + 1}\sqrt{\frac{1}{x}}$ converges uniformly on $(1, \infty)$ to some function $f$.
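The bound above is independent of $x$, which is exactly what the M-test needs; here is a quick spot-check over several $n$ and $x$ values (arbitrary samples):

```python
# Spot-check the M-test bound from problem 44: for x > 1,
# |1/(x^2 n^2 + 1) * sqrt(1/x)| <= 1/(n^2 + 1), a bound independent of x.
term = lambda n, x: (1 / (x**2 * n**2 + 1)) * (1 / x) ** 0.5

ok = all(
    abs(term(n, x)) <= 1 / (n**2 + 1)
    for n in range(0, 50)
    for x in [1.001, 1.5, 2.0, 10.0, 1e6]
)
print(ok)  # True
```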

45. (From Dr. Ian Charlesworth's Fall 2020 Midterm) Let $A, B \subset \mathbb{R}$, where $A$ is bounded from above and $B$ is bounded from below. Let $A - B = \{s - t: s \in A, t \in B\}$. Prove that $\sup{(A - B)} = \sup A - \inf B$.

Answer: Consider arbitrary $s_0 \in A, t_0 \in B$ and look at $s_0 - t_0$. By the definition of supremum, $\sup A \geq s_0$, and therefore $\sup A - t_0 \geq s_0 - t_0$. Likewise, $t_0 \geq \inf B$, so $-t_0 \leq -\inf B$. Therefore $$\sup A - \inf B \geq \sup A - t_0 \geq s_0 - t_0.$$

Therefore, I can say that $\forall x \in (A - B), x \leq \sup A - \inf B$; that is, $\sup A - \inf B$ is an upper bound of $A - B$.

Next, I need to show that $\sup A - \inf B$ is the LEAST upper bound, i.e., that no number smaller than $\sup A - \inf B$ is an upper bound of $A - B$. Fix $\epsilon > 0$; I will find $d_2 \in (A - B)$ with $d_2 > \sup A - \inf B - \epsilon$.

By the definition of supremum, there exists $s_1 \in A$ with $s_1 > \sup A - \epsilon/2$ (otherwise $\sup A - \epsilon/2$ would be a smaller upper bound of $A$). By the definition of infimum, there exists $t_1 \in B$ with $t_1 < \inf B + \epsilon/2$. Then $d_2 = s_1 - t_1 \in (A - B)$ and $$d_2 = s_1 - t_1 > \left(\sup A - \frac{\epsilon}{2}\right) - \left(\inf B + \frac{\epsilon}{2}\right) = \sup A - \inf B - \epsilon.$$

Since $\epsilon > 0$ was arbitrary, no number strictly below $\sup A - \inf B$ is an upper bound of $A - B$. Therefore, $\sup A - \inf B$ is the least upper bound, and $\sup (A - B) = \sup A - \inf B$.
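For finite sets, where sup and inf are just max and min, the identity can be checked directly; the sets below are arbitrary examples:

```python
# Quick illustration of sup(A - B) = sup A - inf B on finite sets,
# where sup/inf reduce to max/min.
A = {0.5, 1.25, 3.0}
B = {-2.0, 0.75, 4.5}
A_minus_B = {s - t for s in A for t in B}
print(max(A_minus_B))        # 5.0
print(max(A) - min(B))       # 5.0
```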

46. Prove that given $f$ is Riemann integrable on $[a, b]$, $|\int_{a}^{b}fdx| \leq \int_{a}^{b}|f|dx$

Answer: The proof for this one is simple. Given $\int_{a}^{b}f\,dx$, by the definition of absolute value there exists $c = \pm 1$ such that $c\int_{a}^{b}f\,dx = \left|\int_{a}^{b}f\,dx\right|$. Also by the definition of absolute value, $cf(x) \leq |f(x)|$ for all $x$ (and $|f|$ is integrable because of Theorem 6.11). Therefore, using linearity and monotonicity of the integral, $\left|\int_{a}^{b}f(x)dx\right| = c\int_{a}^{b}f(x)dx = \int_{a}^{b}cf(x)dx \leq \int_{a}^{b}|f(x)|dx$.

47. Find the interval of convergence of $$f(x) = \sum_{n=0}^{\infty}\frac{x^{n}}{(n+1)^n}$$.

Answer: By the ratio test, the series converges absolutely at $x$ when $\lim_{n \to \infty}\left|\frac{\frac{x^{n+1}}{(n+2)^{n+1}}}{\frac{x^n}{(n+1)^n}}\right| < 1$. Compute $$\lim_{n \to \infty}\left|\frac{x(n+1)^{n}}{(n+2)^{n+1}}\right| = \lim_{n \to \infty} \frac{|x|}{n+2}\cdot \left(\frac{n+1}{n+2}\right)^n = \lim_{n \to \infty} \frac{|x|}{n+2}\cdot \frac{1}{\lim_{n \to \infty} \left(\frac{n+2}{n+1}\right)^n}.$$

Use L'Hopital's Rule to compute $\ln{\left(\lim_{n \to \infty} \left(\frac{n+2}{n+1}\right)^n\right)} = \lim_{n \to \infty} \frac{\ln{\left(1 + \frac{1}{n+1}\right)}}{\frac{1}{n}} = \lim_{n \to \infty} \frac{-\frac{1}{(n+1)^2}\cdot\frac{1}{1 + \frac{1}{n+1}}}{-\frac{1}{n^2}} = \lim_{n \to \infty} \frac{n^2}{(n+1)^2}\cdot\frac{1}{1 + \frac{1}{n+1}} = 1 \cdot 1 = 1$. This leaves us with $\lim_{n \to \infty} \left(\frac{n+2}{n+1}\right)^n = e$.

Therefore, by the multiplication rule for limits, the ratio in question tends to $0 < 1$ for every fixed $x$ (the first factor $\frac{|x|}{n+2}$ tends to $0$ while the second tends to the finite limit $\frac{1}{e}$). Thus $f(x)$ converges for all $x \in \mathbb{R}$: the interval of convergence is $(-\infty, \infty)$.
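As a numerical illustration (with the arbitrary choice $x = 100$), the consecutive-term ratios $|a_{n+1}/a_n|$ of $\sum x^n/(n+1)^n$ indeed shrink toward $0$, consistent with an infinite radius of convergence; working with logarithms avoids float overflow for large $n$:

```python
import math

# Ratios of consecutive terms of sum x^n/(n+1)^n for a fixed large x.
x = 100.0
log_term = lambda n: n * (math.log(x) - math.log(n + 1))   # log of |x^n/(n+1)^n|
ratio = lambda n: math.exp(log_term(n + 1) - log_term(n))  # |a_{n+1}/a_n|

ratios = [ratio(n) for n in (10, 100, 1000, 10000)]
print([round(r, 4) for r in ratios])  # decreasing toward 0
```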

48. Prove that if $I = [a, b]$ is a closed and bounded interval and $f$ is a continuous real function on $I$, then $f$ is uniformly continuous on $I$.

Answer: We are given that $f$ is a continuous function. That means that for each $x \in I$ and each $\epsilon > 0$, $\exists \delta_{x} > 0: \forall y \in [a, b], (|y - x| < \delta_x) \to (|f(y) - f(x)| < \epsilon/2)$.

For each $x$ (with $\epsilon$ fixed), we can construct the interval $I_{x} = (x -\delta_{x}/2, x +\delta_{x}/2) \cap [a, b]$, which is open in $[a, b]$ and satisfies $\forall y \in I_{x}, |f(x) - f(y)| < \epsilon/2$.

We know that the collection of all $I_{x}$ is an open cover of $I$. Since $I$ is closed, bounded, and hence compact by Heine-Borel, finitely many of the $I_{x}$'s, say $I_{x_1}, I_{x_2}, \ldots, I_{x_n}$, form an open cover of $I$.

Now consider $p_1, p_2 \in [a, b]$ such that $|p_1 - p_2| < \delta := \min{\{\delta_{x_i}/2: i \in \{1, 2, \ldots, n \}\}}$, where $\delta_{x_i}$ is the $\delta$ associated with the center $x_i$ of $I_{x_i}$. Since the $I_{x_i}$ cover $[a, b]$, we have $p_1 \in I_{x_i}$ for some $i$, i.e., $|p_1 - x_i| < \delta_{x_i}/2$. Then $$|p_2 - x_i| \leq |p_2 - p_1| + |p_1 - x_i| < \delta_{x_i}/2 + \delta_{x_i}/2 = \delta_{x_i}.$$

In that case, by the choice of $\delta_{x_i}$ (continuity at $x_i$), $|f(p_1) - f(x_i)| < \epsilon/2$ and $|f(p_2) - f(x_i)| < \epsilon/2$. By the triangle inequality, $|f(p_1) - f(p_2)| \leq |f(p_1) - f(x_i)| + |f(x_i) - f(p_2)| < \epsilon$.

Thus we can say that, $\forall \epsilon > 0, \exists \delta > 0, \forall p_1, p_2 \in [a, b], (|p_1 - p_2| < \delta) \to (|f(p_1) - f(p_2)| < \epsilon)$.

Thus, given that $f$ is continuous on $[a, b]$, we know that $f$ is uniformly continuous on $[a, b]$.

49. (From Ian Charlesworth's Fall 2020 Final; an illustrative question) Without invoking change of variables, show that $h(x) = f(\frac{x}{C})$ is integrable on $[C, 2C]$ (where $C > 0$) and find $\int_{x=C}^{x = 2C}f(\frac{x}{C})dx$. We are given that $f$ is integrable on $[1, 2]$, i.e., $\int_{x=1}^{x=2}f(x)dx$ exists.

Answer: This really tests our understanding of the definition of integrability. Since $f$ is integrable on $[1, 2]$, we know that $\forall \epsilon > 0, \exists P: U(f, P) - L(f, P) < \epsilon$. Recall that $$U(f, P) = \sum_{i=1}^{n}\sup_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1})$$ and $$L(f, P) = \sum_{i=1}^{n}\inf_{x \in [x_{i-1}, x_i]} f(x) (x_i - x_{i-1}).$$ Now consider that for each partition $P = \{1 = x_0 < x_1 < x_2 < \ldots < x_n = 2\}$ of $[1, 2]$ we can create a partition of $[C, 2C]$ by setting $P^* = \{C = Cx_0 < Cx_1 < \ldots < Cx_n = 2C\}$. Now consider $$U(h, P^*) = \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1})= \sum_{i=1}^{n}\sup_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1})$$ and $$L(h, P^*) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} h(x) (Cx_i - Cx_{i-1}) = \sum_{i=1}^{n}\inf_{x \in [Cx_{i-1}, Cx_i]} f(x/C) (Cx_i - Cx_{i-1}).$$

Notice that because of the $x/C$ used in $h(x)$, we have $\sup_{x \in [Cx_{i-1}, Cx_{i}]}f(x/C) = \sup_{x \in [x_{i-1}, x_{i}]}f(x)$ (and likewise for the infimum). Therefore, $$U(h, P^*) = \sum_{i=1}^{n}\sup_{x \in [x_{i-1}, x_i]} f(x) (Cx_i - Cx_{i-1}) = C\, U(f, P)$$ and $$L(h, P^*) = \sum_{i=1}^{n}\inf_{x \in [x_{i-1}, x_i]} f(x) (Cx_i - Cx_{i-1}) = C\, L(f, P).$$

Therefore, $$U(h, P^*) - L(h, P^*) = C(U(f, P) - L(f, P)) < C\epsilon.$$

In order to make $U(h, P^*) - L(h, P^*) < \epsilon$, we can first find $P = \{1 = x_0 < x_1 < \ldots < x_n = 2\}$ such that $U(f, P) - L(f, P) < \epsilon/C$ and then take $P^* = \{C = Cx_0 < Cx_1 < \ldots < Cx_n = 2C\}$; that way $U(h, P^*) - L(h, P^*) = C(U(f, P) - L(f, P)) < C \cdot \frac{\epsilon}{C} = \epsilon$.

Thus, $\forall \epsilon > 0, \exists P*$ (partition of $[C, 2C]$) such that $U(h, P*) - L(h, P*) < \epsilon$. Thus $h$ is integrable on $[C, 2C]$.

We have shown that $U(h, P^*) = C\, U(f, P)$ and $L(h, P^*) = C\, L(f, P)$. We know that $\inf_P U(f, P) = U(f)$. Since every partition of $[C, 2C]$ arises as $P^* = \{Cx_0 < Cx_1 < \ldots < Cx_n\}$ for exactly one partition $P$ of $[1, 2]$, I know $U(h) = \inf_{P^*} U(h, P^*) = \inf_P C\, U(f, P) = C\, U(f)$. Likewise, we know that $\sup_P L(f, P) = L(f)$, so $L(h) = \sup_{P^*} L(h, P^*) = \sup_P C\, L(f, P) = C\, L(f)$. Since $L(f) = U(f) = \int_{x=1}^{x=2}f(x)dx$ by the definition of integrability, we conclude that $L(h) = U(h) = C \int_{x=1}^{x=2}f(x)dx$; that is, $\int_{x=C}^{x=2C} f(x/C)\,dx = C\int_{x=1}^{x=2}f(x)\,dx$.
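A numerical sanity check with sample choices $f(x) = x^2$ and $C = 3$ (so the integral over $[C, 2C]$ should equal $C\int_1^2 x^2\,dx = 3 \cdot \frac{7}{3} = 7$); the midpoint Riemann sum and the partition size are arbitrary implementation choices:

```python
# Check the scaling identity from problem 49 for sample f(x) = x^2, C = 3:
# integral of f(x/C) over [C, 2C] equals C times the integral of f over [1, 2].
C = 3.0
f = lambda x: x**2

def riemann(g, a, b, n=100_000):
    # midpoint Riemann sum of g over [a, b]
    w = (b - a) / n
    return sum(g(a + (i + 0.5) * w) for i in range(n)) * w

lhs = riemann(lambda x: f(x / C), C, 2 * C)
rhs = C * riemann(f, 1.0, 2.0)
print(lhs, rhs)  # both approximately 7.0
```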

50. Give an example of a function that is Stieltjes Integrable but not Riemann Integrable.

Answer: I went to tutoring earlier this week and figured this out…

Consider the function $f$ on $[-1, 1]$ defined by $f(x) = 1$ if $x \in \mathbb{Q} \cup [0, 1]$, and $f(x) = 0$ otherwise.

Consider the step function defined by $\alpha(x) := 0$ when $x < 1/2$ and $\alpha(x) := 1$ when $x \geq 1/2$.

We can integrate the function with respect to $\alpha$: since $f \equiv 1$ on a neighborhood of $\frac{1}{2}$ (a neighborhood contained in $[0, 1]$), $f$ is continuous at $\frac{1}{2}$, and $\int_{x=-1}^{x=1}f(x)d\alpha = f(\frac{1}{2}) \cdot (\text{jump of } \alpha \text{ at } \tfrac{1}{2}) = f(\frac{1}{2}) = 1$. (There's a theorem in Rudin / April 22nd's notes concerning the behaviour of step-function integrators.) So $f$ is integrable with respect to $\alpha$ on $[-1, 1]$.

However, $f$ is NOT Riemann integrable over $[-1, 1]$, because of its behaviour on $[-1, 0]$: every subinterval of $[-1, 0]$ contains rationals (where $f = 1$) and irrationals (where $f = 0$), so the upper integral over $[-1, 0]$ is 1 whereas the lower integral over that interval is 0, and the upper and lower integrals of $f$ over $[-1, 1]$ cannot agree.

51. (From MIT OpenCourseWare Math 18.100B Course) True or False? If $f_n : [a, b] \to \mathbb{R}$ is a sequence of almost everywhere continuous functions, and $f_n \to f$ converges uniformly, then the limit $f$ is almost everywhere continuous. (I wanted an explanation with the specific theorem numbers from Rudin.)

Answer: I kind of do not like how the term “almost everywhere continuous” is used in this problem; it just sounds imprecise. Take it to mean that each $f_n$ is bounded and has only a finite number of discontinuities on $[a, b]$. Then each $f_n$ is Riemann integrable on $[a, b]$ by Theorem 6.10 (taking $\alpha(x) = x$, which is continuous and increasing for all $x \in \mathbb{R}$). Because of that and by Theorem 7.16 in Rudin, $f$ is integrable as well. Be careful, though: integrability does NOT force $f$ to have only finitely many discontinuities. (There does not seem to be a direct theorem in chapter 6 of Rudin for this; the sharp statement is Lebesgue's criterion, which says a bounded function is Riemann integrable iff its set of discontinuities has measure zero, and that set can be infinite.) What we CAN say directly is that $f$ is continuous at every point where all the $f_n$ are continuous, since uniform limits preserve continuity at a point, and the exceptional set is a countable union of finite sets.

One thing that I was rightfully confused about is whether “$f$ has only a finite number of discontinuities” holds if and only if “$f$ is Riemann integrable.” It does not, and $f(x) = \frac{1}{x^2}$ illustrates one subtlety: it is continuous at every point except $x = 0$, where $\lim_{x \to 0} f(x) = \infty$, so it has only one discontinuity on $[-a, b]$ (let $a, b$ be any positive numbers of your choosing), yet $\int_{x= -a}^{x = b}f(x)dx$ diverges to infinity (not a real number). The catch is that Theorem 6.10 requires $f$ to be BOUNDED; $1/x^2$ is unbounded, so the Riemann integral is not even defined for it on $[-a, b]$. In the other direction, a bounded Riemann integrable function can have infinitely many (for instance, countably many) discontinuities.

Sources for the Questions and Inspiration: Past midterms and/or finals from Dr. Peng Zhou, Ian Charlesworth, Sebastian Eterovic, Yu-Wei Fan, Dr. Charles Rycroft, MIT OpenCourseWare (MIT's Math 18.100 courses), summer 2018 Math 104, spring 2017 Math 104, Principles of Mathematical Analysis by Rudin, A Problem Book in Real Analysis, and Elementary Analysis by Ross.

math104-s21/s/ryotainagaki/problems.txt · Last modified: 2022/01/11 10:57 by pzhou