Anton's Notebook

Posting a list of questions and practice final solution sketches here. Will add more as time goes on.

Practice Final Solution Sketches

Problems from https://ywfan-math.github.io/104s21_final_practice.pdf.

Solutions here vary in how detailed they are; they could be full solutions or only guiding steps.

Problem 1

(a)

We only need to consider the interval $[0,1]$: $\cos(x)$ is positive on $[-1,0]$ while $x \leq 0$ there, and everything outside $[-1,1]$ is outside the range of $\cos(x)$. We consider $f(x) = \cos(x) - x$ on $[0,1]$. $f$ is strictly decreasing on this interval, since $f'(x) = -\sin(x) - 1$ is always negative, so $f$ has at most one zero. Then use the IVT (with $f(0) = 1 > 0$ and $f(1) = \cos(1) - 1 < 0$) to argue that $f$ has exactly one zero.
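
As a quick numerical sanity check on (a) (purely illustrative, not part of the proof), a bisection sketch locates the unique zero of $f(x) = \cos(x) - x$ on $[0,1]$:

```python
import math

# Quick numerical sanity check (not part of the proof): bisection on
# f(x) = cos(x) - x over [0, 1]. f is strictly decreasing there, so the
# sign change found by bisection locates the unique fixed point of cos.
def f(x):
    return math.cos(x) - x

lo, hi = 0.0, 1.0            # f(0) = 1 > 0, f(1) = cos(1) - 1 < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid             # root lies to the right of mid
    else:
        hi = mid             # root lies to the left of mid

print((lo + hi) / 2)         # ~0.7390851332, the unique solution of cos(x) = x
```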

(b)

The sequence is clearly bounded, since $\cos(x)$ is bounded. Then show that $x_{n+1} \geq x_n$ using the results from part (a). Since the sequence is bounded and monotone, it must be convergent.

To show divergence of the series, we show that $\lim x_n > 0$ from the result earlier in part (b). A series can only converge if its terms tend to $0$, so the series must be divergent.

Problem 2

A step function consists of a finite number of constant pieces, so it must be bounded, say by $M$. So, if we replace $f$ with a step function, then on each constant piece $[c,d]$, where the step function takes some constant value $c_i$ with $|c_i| \leq M$, we have $\left|\int_c^d c_i \sin(nx)\,dx\right| = \frac{|c_i|}{n}\,|\cos(nc) - \cos(nd)| \leq \frac{2M}{n} \to 0$. Summing over the finitely many pieces proves the first part of the hint.

The second part of the hint follows from the definition of the integral (via lower sums). By definition, we know there is some partition $P$ such that $\int_a^b f(x)\,dx - L(f, P) < \epsilon$. We define our step function $S$ from $L(f, P)$: on each subinterval of $P$, let $S$ equal the infimum of $f$ over that subinterval. Note that $S(x) \leq f(x)$, since we defined it from the lower bound.

For the final part: $$0 \leq \int_a^b (f(x) - S(x))\,dx < \epsilon$$ Since $f - S \geq 0$ and $|\sin(nx)| \leq 1$: $$\left|\int_a^b (f(x) - S(x))\sin(nx)\,dx\right| \leq \int_a^b (f(x) - S(x))\,dx < \epsilon$$ By the triangle inequality and the first part of the hint (applied to $S$), taking $n \to \infty$: $$\limsup_{n \to \infty} \left|\int_a^b f(x)\sin(nx)\,dx\right| \leq \epsilon$$ This holds for all $\epsilon > 0$, so we get our desired result.
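
As an illustration of the statement (with a sample choice of $f$ of my own and simple numerical quadrature, not a proof), the integrals do shrink as $n$ grows:

```python
import math

# Illustration only: numerically estimate the integral of f(x)*sin(n*x) over
# [0, 1] for a sample bounded integrable f, and watch it shrink as n grows.
def f(x):
    return x * x + 1.0       # any bounded integrable function works here

def integral_f_sin(n, steps=200_000):
    # simple midpoint rule on [0, 1]
    h = 1.0 / steps
    return sum(f((k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
               for k in range(steps)) * h

for n in (1, 10, 100, 1000):
    print(n, integral_f_sin(n))   # values tend toward 0 as n increases
```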

Problem 3

(a)

Compute the determinant explicitly, and then conclude that the resulting function is continuous/differentiable, since sums and products of continuous/differentiable functions are continuous/differentiable.

(b)

Show that $F(a) = F(b) = 0$, then use Rolle's Theorem or MVT to show the desired result.

(c)

Use $h(x) = 1$ and part (b) to get the result.

Problem 4

(a)

We show that the Cauchy condition in $B(X)$ (with the sup metric) is exactly the definition of being uniformly Cauchy. Uniformly Cauchy implies uniform convergence to some limit function, and the uniform limit of bounded functions is bounded, so the limit lies in $B(X)$ and $B(X)$ is complete.

(b)

Consider the complement of the set: the discontinuous functions. Take such a function $f$ and focus on one discontinuity, meaning that for some $x$, there is some $\epsilon > 0$ such that no $\delta > 0$ satisfies $x' \in (x - \delta, x + \delta) \implies f(x') \in (f(x) - \epsilon, f(x) + \epsilon)$. We then take the open ball of radius $\epsilon/3$ around $f$, so for every function $g$ in the open ball and every point $x'$, $g(x') \in (f(x') - \epsilon/3, f(x') + \epsilon/3)$. Combining this with the discontinuity of $f$ at $x$ (via the triangle inequality), we can show that for $\epsilon' = \epsilon/3$, $g$ is discontinuous at $x$. So, an open ball around a discontinuous function is completely contained in the set of discontinuous functions, and the set of discontinuous functions is open.

Therefore, the set of continuous functions is closed.

Easier way: consider that a set is closed iff it contains all its limit points, and consider a sequence of continuous functions that uniformly converges.

(c)

Say we have a Cauchy sequence in the closed subset. Since we are in a complete metric space, the Cauchy sequence converges to some point of the metric space. Since a closed set contains all its limit points, that limit must lie inside the closed subset. Therefore, a closed subset of a complete metric space is complete. See https://math.stackexchange.com/questions/244661/showing-that-if-a-subset-of-a-complete-metric-space-is-closed-it-is-also-comple.

Problem 5

Following the hint, we can assume our cover is finite, since $[a,b]$ is closed and bounded and therefore compact; the open-cover definition of compactness lets us pass to a finite subcover.

Then, we can take a set of $n$ open intervals and reduce it to $n-1$ intervals by taking the union of two intervals, since the union of two open intervals gives us an open interval.

Using an inductive argument, we get 1 open interval that covers the entire space, which gives us a contradiction.

Problem 6

Set $\epsilon = 1$. By def. of uniform continuity, we can choose a $\delta$ that satisfies $|x-y| < \delta \implies |f(x)-f(y)| < \epsilon$.

Now, consider some $x \geq 1$, and let $d = \left \lfloor{\frac{x-1}{\delta/2}+1}\right \rfloor$. Intuitively, $d$ is at least the number of steps of length $\delta/2$ needed to get from $1$ to $x$. Using the above inequality and the triangle inequality, we show that $|f(x) - f(1)| < d \leq \frac{x-1}{\delta/2} + 1$, as spelled out below.
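
One way to spell out that step: with the points $t_i = \min(1 + i\delta/2,\ x)$ for $i = 0, \ldots, d$ (so $t_0 = 1$, $t_d = x$, and consecutive points are less than $\delta$ apart), $$|f(x) - f(1)| \leq \sum_{i=1}^{d} |f(t_i) - f(t_{i-1})| < d \cdot \epsilon = d.$$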

Using the reverse triangle inequality (https://math.stackexchange.com/questions/214067/triangle-inequality-for-subtraction/214074), $|f(x)| < \frac{x-1}{\delta/2} + 1 + |f(1)|$.

We can rewrite the above form into $|f(x)| < ax + b$, and we can show that there exists some $M$ such that $Mx \geq ax + b$ for $x \geq 1$. So, we have $|f(x)| < Mx \implies |f(x)|/x < M$, as desired.

Problem 7

(a)

Let the term inside the series be $s_n$. Note that $|s_n| \leq e^{-nx}$.

For $x \in (0, \infty)$, use the comparison test and sum of geometric series to show that $x \in E$.

For $x \in (-\infty, 0]$, show that $s_n \not\to 0$, so the series does not converge (by the divergence test), and $x \not\in E$.

So, $E = (0, \infty)$.
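
Concretely, for $x > 0$ the comparison is with a convergent geometric series: $$\sum_{n=1}^{\infty} |e^{-nx}\cos(nx)| \leq \sum_{n=1}^{\infty} e^{-nx} = \frac{e^{-x}}{1 - e^{-x}} < \infty.$$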

(b)

It does not converge uniformly. Let $f_n(x) = \sum_{k=1}^n e^{-kx}\cos(kx)$. $$\lim_{n \to \infty} \sup_x |f_n - f| = \lim_{n \to \infty} \sup_x \left|\sum_{k=n+1}^\infty e^{-kx}\cos(kx)\right|$$ To be continued. The intuition is that as $x \to 0^+$, the term inside the sup blows up, so our sequence of partial sums does not converge uniformly.

Problem 8

(a)

Rudin 4.29. The intuition (for the right-hand limit, with $f$ increasing) is to show that the right-hand limit equals $\inf_{t \in (x, x+\delta)} f(t)$. We build a sequence $x_n \in (x, x+\delta)$ that converges to $x$ from the right so that $f(x_n)$ converges to $\inf_{t \in (x, x+\delta)} f(t)$.

(b)

Rudin 4.30. The intuition is that at every discontinuity we can pick a rational number strictly between $f(x-)$ and $f(x+)$; these intervals are disjoint for different discontinuities, so this gives an injection from the set of discontinuities into the rationals, and therefore the set of discontinuities is countable.

Problem 9

(a)

It is easy to show that for a fixed $x$ and $y \neq 0$, the function is continuous, as a quotient of continuous functions. Then, we just show that for any fixed $x$, $\lim_{y \to 0} f(x, y) = 0$, so our function is continuous. Do this by using L'Hopital's rule for $x=0$ or by evaluating the limit directly.

(b)

Take the limit, by fixing $x=y$.

Problem 10

We can show through an inductive argument that $a_n = \sqrt{1 + \sum_{k=1}^{n-1} 1/2^k}$. From this, it is clear that when we take the limit as $n \to \infty$, $a_n \to \sqrt{2}$, since the geometric series sums to $1$.
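
Filling in that computation: $$\sum_{k=1}^{n-1} \frac{1}{2^k} = 1 - \frac{1}{2^{n-1}}, \qquad \text{so} \qquad a_n = \sqrt{2 - \frac{1}{2^{n-1}}} \longrightarrow \sqrt{2}.$$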

Problem 11

Say $P(x)$ has distinct roots $r_1 < r_2 < \cdots < r_j$, with multiplicities $m_1, \ldots, m_j$. We must have $\sum m_i = n$.

We can factor $P(x) = \prod_i (x-r_i)^{m_i}$. By the product rule, for any $i$ with $m_i>1$, $P'(x)$ has the root $r_i$ with multiplicity $m_i - 1$. So far, we have found $\sum_i (m_i - 1) = n-j$ roots of $P'$ (counted with multiplicity), leaving us with $j-1$ roots left to find.

Now we consider the intervals $(r_i, r_{i+1})$ for $i = 1, \ldots, j-1$. Since $P(r_i) = P(r_{i+1}) = 0$, by Rolle's Theorem (a special case of the MVT, after checking conditions), there is at least one point $r'$ in $(r_i, r_{i+1})$ that satisfies $P'(r') = 0$. Since we have $j-1$ such intervals, and we have at most $j-1$ roots left to find, there is a root in each one of these intervals.

We have found all the roots of $P'$, and they are all real numbers, so we're done.

Problem 12

We want to show $$\lim_{n \to \infty} \sup_x |\sum_{k=n+1}^\infty (-1)^kf_k(x)| = 0$$

The conditions for the Alternating Series Test are met, so we can say $|\sum_{k=n+1}^\infty (-1)^kf_k(x)| \leq f_n(x)$ for all $x$. So, we can rewrite the limit: $$\lim_{n \to \infty} \sup_x |\sum_{k=n+1}^\infty (-1)^kf_k(x)| \leq \lim_{n \to \infty} \sup_x |f_n(x)| = 0$$ $$\lim_{n \to \infty} \sup_x |\sum_{k=n+1}^\infty (-1)^kf_k(x)| = 0$$ This is what we wanted, so the series of functions converges uniformly.

Problem 13

Let $\epsilon > 0$. By uniform continuity ($f$ is continuous on the compact interval $[0,1]$), we can choose a $\delta > 0$ that satisfies $|x-y| < \delta \implies |f(x)-f(y)| < \epsilon$. We take $n$ large enough that $1/n < \delta$, implying that $|f((k+1)/n) - f(k/n)| < \epsilon$. For even $n$, this gives (see the pairing spelled out below): $$\left|\frac{1}{n} \sum_{k=1}^n (-1)^k f(k/n)\right| < \epsilon / 2 < \epsilon$$ For odd $n$: $$\left|\frac{1}{n} \sum_{k=1}^n (-1)^k f(k/n)\right| < \epsilon \cdot \frac{n-1}{2n} + \frac{|f(1)|}{n}$$ For the odd case, we can take $n$ even larger so that the right-hand side is less than $\epsilon$. This means that for large enough $n$, $\left|\frac{1}{n} \sum_{k=1}^n (-1)^k f(k/n)\right| < \epsilon$. By the definition of a limit, the limit equals 0, as desired.
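
Spelling out the even case (where the $\epsilon/2$ comes from), the terms pair up: $$\left|\frac{1}{n}\sum_{k=1}^{n} (-1)^k f(k/n)\right| = \frac{1}{n}\left|\sum_{j=1}^{n/2} \Big(f(2j/n) - f\big((2j-1)/n\big)\Big)\right| < \frac{1}{n} \cdot \frac{n}{2} \cdot \epsilon = \frac{\epsilon}{2}.$$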

Problem 14

We consider the function $g(x) = f(x) - f(x + T/2)$, which is continuous. Show that $g(0) = -g(T/2)$, and then use IVT.

Problem 15

Following the hint, $f'(0) = a_1 + 2a_2 + \ldots + na_n$. We then show that assuming $|f'(0)| > 1$ leads to a contradiction of the original inequality. We can use the limit definition of the derivative for this.

Problem 16

False.

Let $A = \{1/\sqrt{p} \vert p \text{ is prime}\}$. Define $f(x) = 1$ when $x \in A$, and $f(x)=0$ otherwise.

Clearly, if we take $y_n = 1/\sqrt{p_n}$, where $p_n$ is the $n$th prime, $lim y_n = 0$, yet $lim f(y_n) = 1$, so $\lim_{x \to 0} f(x) \neq 0$.

We just have to prove that $f$ satisfies the original requirement on $f$. We do this by showing that for every real $r \neq 0$, the sequence $x_n = r/n$ contains at most one element of $A$. Proceed by contradiction, and say $\{x_n\}$ contains $y_a$ and $y_b$ for distinct $a, b$. Then for some $m, n$: $$y_a = r/m, \quad y_b = r/n$$ $$y_a/y_b = n/m$$ $$\sqrt{p_b/p_a} = n/m$$ The LHS is irrational and the RHS is rational, so we have a contradiction. Since $\{x_n\}$ contains at most one element of $A$, $\lim_n f(x_n) = 0$ for all real $r$. We are done.

Problem 17

Consider $f(x)e^{g(x)}$, which is continuous and differentiable on the same intervals. We take its derivative and apply MVT, and we are done.

Problem 18

(a)

Let partition $P_n = \{0, 1/n, 2/n, \ldots, n/n\}$. Note that $mesh(P_n) = 1/n$ and $L(f, P_n) \leq R_n \leq U(f, P_n)$.

The idea now is to use Ross 32.7. Let $\epsilon > 0$, and since $f$ is integrable, there exists a $\delta > 0$ that satisfies $mesh(P) < \delta \implies U(f, P) - L(f, P) < \epsilon$ for all partitions $P$. We choose $n$ large enough such that $mesh(P_n) < \delta$, so that we get the inequalities: $U(f, P_n) - \epsilon < R_n \leq U(f, P_n)$ and $L(f, P_n) \leq R_n < L(f, P_n) + \epsilon$. This gives us:

$$\int_0^1 f(x)\,dx - \epsilon \leq U(f, P_n) - \epsilon < R_n < L(f, P_n) + \epsilon \leq \int_0^1 f(x)\,dx + \epsilon,$$

using $L(f, P_n) \leq L(f) = U(f) \leq U(f, P_n)$, where $L(f) = U(f) = \int_0^1 f(x)\,dx$ by integrability. Since $\epsilon$ was arbitrary, $\lim R_n = \int_0^1 f(x)\,dx$, as desired.
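
As an illustration (assuming $R_n$ is the right-endpoint Riemann sum $\frac{1}{n}\sum_{k=1}^n f(k/n)$ suggested by the partition $P_n$ above, and using a sample $f$ of my own choosing), $R_n$ approaches the integral numerically:

```python
import math

# Illustration only, assuming R_n is the right-endpoint Riemann sum
# (1/n) * sum_{k=1}^{n} f(k/n) suggested by the partition P_n above.
# For an integrable f on [0, 1], R_n should approach the integral as n grows.
def f(x):
    return math.exp(x)           # sample continuous (hence integrable) function

def riemann_sum(n):
    return sum(f(k / n) for k in range(1, n + 1)) / n

exact = math.e - 1               # integral of e^x over [0, 1] is e - 1
for n in (10, 100, 1000, 10000):
    r = riemann_sum(n)
    print(n, r, abs(r - exact))  # the error shrinks as n increases
```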

(b)

The indicator function of the rationals on $[0,1]$: every point $k/n$ is rational, so $R_n = 1$ for all $n$ and $\lim R_n$ exists, yet the function is not Riemann integrable.

Question List

1. How do you use the rational root theorem to prove that something is irrational?

Say $b$ is our number. We show that $b$ is a root of some polynomial $P(x)$ with integer coefficients, i.e., $P(b) = 0$. We then use the rational root theorem to list all possible rational roots of $P(x)$, and show that $b$ is none of those rational roots; therefore, $b$ is irrational. Alternatively, we could plug the possible rational roots into $P(x)$ and show that none of them work; since $b$ is a root, $b$ cannot be rational.
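
For example: $\sqrt{2}$ is a root of $P(x) = x^2 - 2$. By the rational root theorem, any rational root of $P$ would have to be $\pm 1$ or $\pm 2$, and none of these are roots; since $\sqrt{2}$ is a root, it must be irrational.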

2. Why is the Completeness Axiom useful?

According to the Completeness Axiom, any nonempty set that is bounded above has a least upper bound, meaning that the sup exists. This is the property that distinguishes the reals from the rationals: constructions of the reals are built precisely so that this axiom holds, ensuring that there are no “gaps” in the real line.

3. How do we use inf/sup in proofs?

Assuming that a set is bounded above/below, say we have an upper bound $x$ of a set $S$. Then, since $\sup(S)$ is the least upper bound for $S$, we know $x \geq \sup(S)$ by definition of sup. Similar logic applies to inf (with lower bounds and the greatest lower bound).

4. What is the general strategy to prove that a sequence converges to a certain number?

Say our sequence is $x_n$ and our number is $x$. We fix a certain $\epsilon$, and we try to find an explicit $N$ in terms of $\epsilon$ such that $n > N$ implies $|x_n - x| < \epsilon$.

5. How to show that a sequence doesn't converge?

As above, say our sequence is $x_n$ and the candidate limit is $x$. We fix a certain $\epsilon$, and show that there is no $N$ such that $n > N$ implies $|x_n - x| < \epsilon$. Equivalently, we can show that there are an infinite number of $x_n$ that satisfy $|x_n - x| \geq \epsilon$. (To show the sequence does not converge at all, this must hold for every candidate limit $x$.)

6. What is the difference between a sequence and a set?

In a sequence, order matters, and we can have repetition of elements. Meanwhile, in a set, order does not matter, and we cannot have repeated elements.

7. What is the intuition behind a Cauchy sequence?

By the definition of a Cauchy sequence, for all $\epsilon > 0$, there exists $N$ such that $a, b > N$ implies $|x_a - x_b| < \epsilon$. Intuitively, this means that as the sequence goes on, its terms get arbitrarily close to one another: the amplitude of its oscillations dies out and goes to 0.

8. How do we show that a recursive sequence converges, and how do we find what it converges to?

Usually, we will show that the recursive sequence is monotonically increasing/decreasing by showing that $x_{n+1} \geq x_{n}$ or $x_{n+1} \leq x_{n}$. Then we will show it is bounded, i.e., $|x_n| \leq M$ for all $n$. Since we have shown our sequence is monotone and bounded, it must converge. To find what it converges to, we take the limit on both sides of the recurrence and solve for $\lim x_n$.
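
For example (a standard one, not tied to a specific problem here): if $x_1 = 0$ and $x_{n+1} = \sqrt{2 + x_n}$, one shows the sequence is increasing and bounded above by $2$, so it converges; taking limits in the recurrence gives $L = \sqrt{2 + L}$, i.e., $L^2 - L - 2 = 0$, so $L = 2$.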

9. How do we show there exists a sequence that converges to a certain number?

We basically work backwards from the definition of a limit. Say our desired number is $x$, and we want to construct a sequence from the set $S$. We then show that for all $\epsilon > 0$, the intersection $S \cap (x - \epsilon, x + \epsilon)$ is non-empty. By doing this, we can construct a sequence that converges to $x$ by letting $\epsilon_n = 2^{-n}$, and then taking $x_n \in S \cap (x - \epsilon_n, x + \epsilon_n)$. We can then show through the definition of a limit that this sequence converges to $x$.

10. What does it mean to be dense on a set?

By definition, a subset $Y$ of a set $X$ is dense on $X$ if every point in $X$ is contained in $Y$ or is a limit point of $Y$. In other words, for all $x \in X$, we can construct a sequence from $Y$ that converges to $x$. As an example, the rationals are dense on the reals.

11. What is the intuition behind lim sup/inf?

lim sup is essentially the limit of the sup of the tail end of a sequence. In other words, it measures the least upper bound of the tail of the sequence, ignoring all initial behavior. Similar logic applies for lim inf. Note that $\sup_{n > N} x_n$ is monotone decreasing in $N$ and $\inf_{n > N} x_n$ is monotone increasing in $N$, so these limits always exist (possibly $\pm\infty$).

12. Why does the “epsilon of room” trick work?

The “epsilon of room” trick is used to show $a=b$ by showing $|b-a| < \epsilon$ for all $\epsilon > 0$. We can prove that this works by contradiction. Say $b \neq a$, so $|b-a| > 0$. We take $\epsilon = \frac{|b-a|}{2}$, and the inequality $|b-a| < \epsilon$ fails. Therefore, we have a contradiction, and $a = b$.

13. What are alternative ways to phrase the definition of lim sup?

We use the “epsilon of room” trick, and the definition of a limit. We consider $\limsup(x_n) + \epsilon$, where $\epsilon > 0$. There must exist an $N$ such that $n > N$ implies $x_n < \limsup(x_n) + \epsilon$. If this didn't happen, there would be infinitely many $n$ with $x_n \geq \limsup(x_n) + \epsilon$, so $\sup_{n > N} x_n \geq \limsup(x_n) + \epsilon$ for every $N$, contradicting the definition of lim sup.

14. When can we have a subsequence converge to a certain number?

We can have a subsequence converge to a certain number $x$ if for any $\epsilon > 0$, there are an infinite number of $x_n$ that satisfy $|x_n - x| < \epsilon$. The difference between this and the sequence actually converging is that we only need an infinite number of $x_n$, instead of the stronger requirement all $x_n$ for $n>N$.

15. How do we find the set of subsequential limits for a certain sequence?

To find the set of subsequential limits for a certain sequence, we apply the above condition to all points in the real numbers. Usually, the sequence we are given is bounded, so we only need to consider numbers in the closed interval $[inf(x_n), sup(x_n)]$.

16. What does being Cauchy on a metric space mean?

It takes the same idea as a Cauchy sequence, except that we replace the absolute value sign with the distance function for that metric space. Formally, this means that for a specific sequence, for all $\epsilon > 0$, there is an $N$ such that $a, b > N$ implies $d(x_a, x_b) < \epsilon$.

17. What does completeness mean?

The definition of completeness is that in a metric space, all Cauchy sequences in that metric space are convergent to a point inside that metric space. Intuitively, this means that there are no “gaps” in the metric space, similar to what the Completeness Axiom implies.

18. Alternative ways of showing that set is closed?

By Proposition 13.9 in Ross, a set is closed iff it contains all its limit points. So, we just need to show that the set contains all its limit points. We can use this to easily show that a set is not closed, by showing that one of its limit points is not in the set. For example, the set of rationals is not closed in the reals, since we can construct a sequence of rationals that converges to an irrational number.

19. What does being sequentially compact mean?

If a set $E$ is sequentially compact, then every sequence in $E$ has a subsequence that converges to a point in $E$. In a metric space, this definition is equivalent to the open cover definition of compactness, so if we want to prove compactness, we may opt to use this definition instead.

20. How do you prove that a set is open?

We take an arbitrary point $x$ in the set, and then find an $\epsilon > 0$ such that the interval $(x - \epsilon, x + \epsilon)$ (or, in a general metric space, the open ball of radius $\epsilon$ around $x$) is contained in the set. This satisfies the definition of an open set.

21. What are the ways we can prove a series converges, using the limit definition?

We can find an explicit formula for the partial sums, and then directly take the limit and find what the series converges to. We can also use the Cauchy convergence criterion: for all $\epsilon > 0$, there exists an $N$ such that $n \geq m > N$ implies $\left|\sum_{k=m}^{n} x_k\right| < \epsilon$.

22. Ways to prove discontinuity?

We can use the definitions of continuity and show that the function violates one of them. For a specific point $x$, we can use the epsilon-delta definition and show that for some specific $\epsilon$, we cannot find any $\delta$ that satisfies the definition. We can also use the sequential definition, by finding two sequences $a_n$ and $b_n$ that both converge to $x$ but with $\lim f(a_n) \neq \lim f(b_n)$.

23. What is the difference between a discontinuity of the first and second kind?

In the first kind, both the left-hand and right-hand limits exist at the point, but we either have $f(x-) \neq f(x+)$, or we have $f(x-) = f(x+)$ and $f(x) \neq f(x-)$. In the second kind, at least one of the one-sided limits does not exist.

24. What are the necessary conditions for the IVT to be valid?

For IVT to be valid on a certain interval $I$, the function $f$ must be continuous on the entire interval $I$.

25. What is the intuition behind uniform continuity?

By definition, uniform continuity means that for all $\epsilon > 0$, there is some $\delta$ (depending only on $\epsilon$, not on the point) such that $|x-y| < \delta$ implies $|f(x) - f(y)| < \epsilon$. Intuitively, the same $\delta$ works at every point of the interval, so the function cannot change too quickly anywhere; a bounded “slope” is one common way to guarantee this. This is why $x^2$ is not uniformly continuous on $\mathbb{R}$: its slope grows without bound as $x$ goes to infinity, so no single $\delta$ works for all points.

26. Why does continuity on a compact set imply uniform continuity over that set?

We can prove this by contradiction. Say there is some $\epsilon$ for which the definition of uniform continuity fails. Then there exist points $x$ and $y$ arbitrarily close to each other with $|f(x)-f(y)| \geq \epsilon$. Intuitively, this goes against the definition of continuity (compactness lets us extract convergent subsequences to make this rigorous), so we must have that $f$ is uniformly continuous. For a rigorous proof, see https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor_theorem or Ross 19.2. This theorem is used to show that continuous functions are Darboux integrable.

27. What is Volterra's function?

Volterra's function is a function that is differentiable everywhere with a bounded derivative, yet its derivative is not Riemann integrable. A construction can be found in the wiki article: https://en.wikipedia.org/wiki/Volterra%27s_function.

28. What is an example of a sequence of continuous functions that converges to a discontinuous function?

One such example is $f_n(x) = x^n$ on the interval $[0,1]$, which converges to the function with $f(x) = 0$ for $0 \leq x < 1$ and $f(1) = 1$. Note that this sequence only converges pointwise.

29. Why does a sequence of continuous functions that converges uniformly converge to a continuous function?

We can use an epsilon-delta proof to show this. As an intuitive argument: fix $\epsilon > 0$ and pick $n$ large enough that $f_n$ is within $\frac{\epsilon}{3}$ of $f$ everywhere (uniform convergence). Since $f_n$ is continuous at $x$, there is a $\delta$ such that $|y - x| < \delta$ implies $|f_n(y) - f_n(x)|<\frac{\epsilon}{3}$. We then use the triangle inequality to finish the epsilon-delta proof.
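
Written out, the triangle-inequality step is: $$|f(y) - f(x)| \leq |f(y) - f_n(y)| + |f_n(y) - f_n(x)| + |f_n(x) - f(x)| < \tfrac{\epsilon}{3} + \tfrac{\epsilon}{3} + \tfrac{\epsilon}{3} = \epsilon.$$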

30. What is the difference between pointwise and uniform convergence?

Intuitively, pointwise convergence only means that at each fixed point, $f_n(x)$ converges to $f(x)$ (not necessarily at the same rate across points). For uniform convergence, the whole graph “collapses” onto the limit function at once. By definition, pointwise convergence means $\lim_n f_n(x) = f(x)$ for all $x$ in the interval; uniform convergence means $\lim_n \sup_x|f_n(x) - f(x)| = 0$.

31. What are ways to show a set $S$ is disconnected?

This follows from the two definitions of connectedness. We can show that the set has a subset that is both open and closed that isn't the empty set or $S$. We can also show that the set is a disjoint union of two non-empty open sets.

32. What is an induced metric?

An induced metric on a subset $S$ of a metric space $X$ is obtained by restricting the distance function of $X$ to $S$. Specifically, $d_S(x,y) = d_X(x,y)$ for $x, y \in S$.

33. How does the change of variables theorem relate to u-substitution?

According to the change of variables theorem, $\int_{A}^{B} g d\beta = \int_{a}^{b} f d\alpha$, where $\beta(y) = \alpha(\psi(y))$ and $g(y) = f(\psi(y))$, with $\psi$ strictly increasing. If we take $\alpha = x$ and $\beta = \psi$, we get $\int_{A}^{B} g d\psi = \int_{a}^{b} f dx$. After applying Ross 6.17, we have $\int_{A}^{B} g(\psi(x)) \psi'(x) dx = \int_{a}^{b} f dx$, which is the same form as u-substitution. However, we do have the requirement that $\psi$ is strictly increasing.
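
A worked instance of the correspondence (with my own choice of $\psi$): take $\psi(x) = x^2$ on $[0,2]$, which is strictly increasing there, and $f(u) = \cos u$. Then $$\int_0^2 \cos(x^2)\, 2x \, dx = \int_0^4 \cos u \, du = \sin 4,$$ which is exactly the u-substitution $u = x^2$.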

34. What is the difference between the Riemann and Darboux integral?

The Darboux integral takes the infimum of the upper sums and the supremum of the lower sums over all possible partitions. The Riemann integral, on the other hand, considers all possible Riemann sums: it states that for each $\epsilon > 0$, there is a $\delta$ such that every Riemann sum for a partition with mesh size less than $\delta$ differs from the value of the integral by at most $\epsilon$. By Ross Theorem 32.9, Darboux integration and Riemann integration are equivalent.

35. What are the conditions necessary for the mean value theorem?

For the mean value theorem to be valid on an interval $[a,b]$, the function $f$ must be continuous on $[a,b]$ and differentiable on $(a,b)$.

36. What are the conditions necessary to use L'Hopital's rule?

For L'Hopital's rule to be valid, we need the limit $\lim_{x \to s} \frac{f'(x)}{g'(x)} = L$ to exist (possibly $\pm\infty$), and we need $f$ and $g$ to be differentiable in a neighborhood of $s$ (with $g'(x) \neq 0$ there). We also need the quotient $\frac{f(x)}{g(x)}$ to be of indeterminate form $\frac{0}{0}$, or for $|g(x)|$ to go to $\infty$.

37. Conditions necessary for Taylor's Theorem?

If the remainder term involves the $n$th derivative, we just need the function $f$ to be differentiable $n$ times on the interval $(a,b)$. Note that for $n = 1$, we recover the Mean Value Theorem.

38. What is the Weierstrass M-Test?

The Weierstrass M-Test states that if we have a convergent series $\sum M_k$, with $M_k \geq 0$, and a sequence of functions such that $|g_k(x)| \leq M_k$ for all $x$ in a set $S$, then $\sum g_k$ converges uniformly on $S$. We can use this theorem to prove that an “annoying” sequence of functions converges uniformly, such as $f_n(x) = \sum_{k=0}^{n} 4^{-k}\sin(kx)$.
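
Applying the test to that example: with $g_k(x) = 4^{-k}\sin(kx)$, we have $|g_k(x)| \leq 4^{-k} =: M_k$ for every real $x$, and $\sum_k 4^{-k}$ is a convergent geometric series, so the series converges uniformly on all of $\mathbb{R}$.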

39. Why does L'Hopital's rule work when $|g(x)|$ goes to $\infty$, even when we don't have an indeterminate form?

Even though we don't have an indeterminate form, this works out in the proof, shown in Ross 30.2. Note that for something like $\lim_{x \to \infty} \frac{\sin{x}}{x}$, the limit $\lim_{x \to s} \frac{f'(x)}{g'(x)}$ doesn't even exist, so we cannot use L'Hopital's rule in the first place.

40. How do we find the radius of convergence?

For a power series $\sum_{n} c_n z^n$, we find the radius of convergence as follows. Let $\alpha = \limsup_{n \to \infty} |c_n|^{1/n}$; the radius of convergence is then given by $R = 1/\alpha$. For any $z$ satisfying $|z| < R$, the power series will converge; note that if $|z|=R$, we don't know what will happen and we will have to employ another test, such as the alternating series test. Also note that $R=0$ if $\alpha = \infty$, and $R=\infty$ if $\alpha = 0$. For a proof, we can just use the root test for absolute convergence.
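
A quick worked example: for $\sum_n 2^n z^n$, $\alpha = \limsup_{n \to \infty} |2^n|^{1/n} = 2$, so $R = 1/2$; the series converges for $|z| < 1/2$ and diverges for $|z| > 1/2$.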

41. Why do we have $a+b \geq 2\sqrt{ab}$?

This is the AM-GM inequality for two terms (with $a, b \geq 0$). Note that we have equality when $a=b$. We prove this as follows. Trivially, $(\sqrt{a}-\sqrt{b})^2 \geq 0$. Expanding this and rearranging terms gives $a + b \geq 2\sqrt{ab}$, which is our desired inequality. Setting $a=b$ turns $(\sqrt{a}-\sqrt{b})^2 \geq 0$ into $0 \geq 0$ with equality throughout, which shows why we have equality.
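
Written out: $$(\sqrt{a} - \sqrt{b})^2 = a - 2\sqrt{ab} + b \geq 0 \iff a + b \geq 2\sqrt{ab}.$$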

42. What is an example of a power series where the radius of convergence is 0?

We want to find $c_n$ such that $\alpha = \limsup_{n \to \infty} |c_n|^{1/n}$ diverges. We can simply do this by choosing $c_n = n^n$, in which case $\alpha = \limsup_{n \to \infty} n = \infty$, so $R=0$. So, one example of this power series is $\sum (nx)^n$.

43. What is an example of a smooth function whose Taylor series does not converge to that function?

We can consider the smooth function $f(x) = e^{-1/x^2}$ for $x > 0$ and $f(x)=0$ for $x \leq 0$. We showed in class that this is continuous, and it is also smooth since every derivative from the right at $x = 0$ is $0$, as $e^{-1/x^2}$ shrinks faster than any polynomial as $x \to 0^+$. As such, its Taylor series about $x=0$ is just $P(x) = 0$, which definitely is not $f(x)$ for $x > 0$.

44. What does “zero measure” mean when talking about the Lebesgue criterion for Riemann integration?

By definition, a set $S$ has zero measure when, for all $\epsilon > 0$, $S$ can be contained in the union of open balls $U_1, U_2, \ldots$, such that $\sum vol(U_k) < \epsilon$. Here, $vol(U_k)$ is the volume of the open ball $U_k$ (its length, in $\mathbb{R}$). Any countable set has zero measure, so for a function with a countable (in particular, finite) number of discontinuities, its set of discontinuities has zero measure. See http://math.uchicago.edu/~may/REU2013/MeasureZero.pdf for more info.
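
For instance, a countable set $\{x_1, x_2, \ldots\}$ can be covered by the open intervals $U_k = (x_k - \epsilon 2^{-k-2},\ x_k + \epsilon 2^{-k-2})$, whose total length is $$\sum_{k=1}^{\infty} vol(U_k) = \sum_{k=1}^{\infty} \epsilon\, 2^{-k-1} = \frac{\epsilon}{2} < \epsilon.$$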

45. How can a continuous function map a bounded set to an unbounded set?

We consider the function $f(x) = 1/x$ on the interval $(0, \infty)$, noting that $f$ is continuous on this interval. $f$ maps the bounded interval $(0,1)$ to the unbounded interval $(1, \infty)$, so this is an example of a continuous function mapping a bounded set to an unbounded set. Note that the bounded set cannot be closed: if it were closed (and bounded), it would be compact, and continuous functions map compact sets to compact sets, which are bounded.

46. Why is the set $[0,1] \cap \mathbb{Q}$ not compact while $[0,1]$ is? (MT2, Q1, (4))

$[0,1]$ is compact since it is closed and bounded. However, $[0,1] \cap \mathbb{Q}$ is bounded but not closed. To see why, consider its complement, which includes the irrationals in $[0,1]$. Any open ball around such an irrational contains rationals in $[0,1]$, by density of the rationals in $\mathbb{R}$. Therefore, the complement of $[0,1] \cap \mathbb{Q}$ is not open, so $[0,1] \cap \mathbb{Q}$ is not closed, and hence it cannot be compact.

47. Why is the set $\{0\} \cup \{1/n | n \in \mathbb{N}\}$ compact? (MT2, Q1, (5))

First of all, we can see clearly that it is bounded, so we just need to show that it is closed. We consider its complement. For any point in $\mathbb{R} - (\{0\} \cup \{1/n | n \in \mathbb{N}\})$, we can find an open ball around it that doesn't intersect $\{0\} \cup \{1/n | n \in \mathbb{N}\}$. Therefore, the set's complement is open, and the set itself is closed. Since it is closed and bounded in $\mathbb{R}$, it is compact. Also note that compactness is intrinsic: since this set is compact as a subset of $\mathbb{R}$, it is compact as a subset of any metric space containing it.

48. What are the conditions necessary to use the alternating series test?

We must have a series of the form $\sum (-1)^{n+1} a_n$, where $a_n \geq 0$ is monotonically decreasing with $\lim a_n = 0$. Then the series converges.

49. When should we use the root or ratio test when determining when a series converges/diverges?

Usually when $a_n$ is in the form of $\alpha^n$, it is most obvious to use the root test since this exponent of $n$ will cancel out most easily. Whenever we have factorials or if the denominator and numerator are not raised to the power of $n$, it is usually a bit simpler to use the ratio test instead. If the results are inconclusive from either of these tests, we should consider the comparison test.

50. How do we know when a function $f$ is Riemann integrable with respect to $\alpha$?

If $f$ is continuous, it is automatically integrable. For a continuous $\alpha$, any monotone increasing or decreasing $f$ is integrable. Also, for any $f$ with a countable number of discontinuities, with $\alpha$ continuous where $f$ has discontinuities, $f$ is integrable. For more info on this last case, see http://www.math.ncku.edu.tw/~rchen/Advanced%20Calculus/Lebesgue%20Criterion%20for%20Riemann%20Integrability.pdf, which shows that a function we encountered in HW is Riemann integrable.

51. What is a step function?

In short, it is a piecewise function that has a finite number of constant pieces. See https://en.wikipedia.org/wiki/Step_function for more info.

52. How do we prove that a metric space is complete?

By the definition, we must consider an arbitrary Cauchy sequence in that metric space. We then show that the Cauchy sequence converges to some candidate limit (constructed, for example, pointwise or in a larger space), using a proof similar to the proof that all Cauchy sequences of real numbers are convergent (Ross 10.11). Finally, we show that this limit is contained in the metric space, so the sequence converges inside the space.

53. How do we prove a one-sided limit $f(x+)$?

To keep things simple, consider the right-hand limit. From the definition, we show that for all $\epsilon > 0$, there is some $\delta > 0$ such that $t \in (x, x + \delta) \implies |f(t) - f(x+)| < \epsilon$.

ඞ ඞ ඞ ඞ ඞ