math104-s21:s:ryanpurpura

Ryan Purpura's Final Notes

Questions can be found at the very bottom.

Number systems

$\mathbb{N}$ is the set of natural numbers $\{1, 2, 3, \dots\}$. Key properties of $\mathbb{N}$ are that $1 \in \mathbb{N}$ and $n \in \mathbb{N} \implies n + 1 \in \mathbb{N}$. This naturally leads to the idea of mathematical induction, which allows us to prove statements for all of $\mathbb{N}$.

Mathematical induction works as follows: given a proposition $P_n$, prove the base case $P_1$ (or some other starting place) and then show that $P_k \implies P_{k+1}$. This shows that $P_n$ is true for all of $\mathbb{N}$ (or a subset, if your base case is $n = 4$, for instance).

Integers $\mathbb{Z}$ extend $\mathbb{N}$ to include $0$ and negative numbers. Rationals $\mathbb{Q}$ are ratios of integers.

Property (rational root theorem): if $r = \frac{c}{d} \in \mathbb{Q}$ ($c$ and $d$ coprime) and $r$ satisfies $\sum_{i=0}^{n} c_i x^i = 0$ with $c_i \in \mathbb{Z}$, $c_n \ne 0$, $c_0 \ne 0$, then $d$ divides $c_n$ and $c$ divides $c_0$.

A natural corollary follows by taking the contrapositive: if $d$ does not divide $c_n$ or $c$ does not divide $c_0$, then $\frac{c}{d}$ cannot satisfy the equation. This allows us to enumerate all possible rational solutions to equations of that form.

Example: prove $\sqrt{2} \notin \mathbb{Q}$. $\sqrt{2}$ solves $x^2 - 2 = 0$. By the rational root theorem, the only possible rational roots are $\pm 1, \pm 2$. Plugging these in, we can clearly see that none of them solve the equation. Thus the equation has no rational solution, and so $\sqrt{2}$ cannot be rational.
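The candidate enumeration above can be carried out mechanically; a small illustrative sketch (not part of the proof itself):

```python
# Rational root theorem applied to x^2 - 2 = 0: any rational root c/d in
# lowest terms must have c dividing the constant term -2 and d dividing
# the leading coefficient 1, leaving only +-1 and +-2 as candidates.
from fractions import Fraction

candidates = [Fraction(c, 1) for c in (1, -1, 2, -2)]
rational_roots = [r for r in candidates if r * r - 2 == 0]
print(rational_roots)  # [] -- no candidate works, so sqrt(2) is irrational
```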

Sets and sequences

The maximum of a set $S \subset \mathbb{R}$ is $\alpha \in S$ such that $\alpha \ge \beta$ $\forall \beta \in S$. The minimum is defined similarly. Maxima and minima need not exist; however, they must exist for finite, nonempty subsets of $\mathbb{R}$. Upper and lower bounds are defined similarly, but now $\alpha$ need not lie in $S$. These also need not exist, e.g. $S = \mathbb{R}$ has no upper/lower bounds.

We define the least upper bound of $S$ to be $\sup S$ and the greatest lower bound to be $\inf S$. Once again, these need not exist. However, we assume the Completeness Axiom, which states that if $S$ is nonempty and bounded from above, $\sup S$ exists, and likewise for $\inf S$.

This allows us to prove the Archimedean Property, which states $a, b > 0 \implies \exists n \in \mathbb{N} \text{ s.t. } na > b$. A quick proof sketch: assume for the sake of contradiction that $a, b > 0$ but $\forall n \in \mathbb{N}$, $na \le b$. The set $\{na\}$ is bounded above, so use the completeness axiom to show that $\sup_n na$ exists but is not an upper bound, a contradiction.

A sequence is a function $\mathbb{N} \to \mathbb{R}$, typically denoted $a_n$. We define the limit of a sequence to be the number $\alpha \in \mathbb{R}$ such that $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n > N$, we have $|a_n - \alpha| < \epsilon$.
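The $\epsilon$-$N$ definition can be made concrete numerically; a small sketch for $a_n = 1/n$ with limit $0$ (the witness $N = \lceil 1/\epsilon \rceil$ is our choice, one of many that work):

```python
import math

# For a_n = 1/n and limit 0, the witness N = ceil(1/eps) works:
# n > N implies 1/n <= 1/(N+1) < eps.
def witness_N(eps):
    return math.ceil(1 / eps)

for eps in (0.1, 0.01, 0.001):
    N = witness_N(eps)
    # spot-check the next thousand terms past N
    assert all(abs(1 / n - 0) < eps for n in range(N + 1, N + 1001))
print("definition verified for sampled epsilons")
```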

An important theorem is that all convergent sequences are bounded (but not all bounded sequences are convergent!). Proof sketch: fix an $\epsilon$, which gives us an $N$ for which $n > N \implies a_n$ is within $\epsilon$ of the limit. Then the sequence is bounded either by the $\epsilon$ bound or by the maximum of $|a_n|$ for $n \le N$, which is a maximum over a finite set.

Limits of sums, products, and quotients are equal to the sums, products, and quotients of the limits (assuming the individual limits exist and, for quotients, taking care that the limit of the denominator is not $0$).

Note that $a_n \ne 0$ does not imply $\lim a_n \ne 0$. Example: $a_n = 1/n$, which is $\ne 0$ $\forall n$ but $\lim a_n = 0$.

We define $\lim a_n = +\infty$ if $\forall M > 0, \exists N > 0$ such that $a_n > M$ $\forall n > N$.

For example, $a_n = n + \sqrt{n} \sin n$ goes to $+\infty$ because $n + \sqrt{n} \sin n \ge n - \sqrt{n}$. Informally, because $n$ grows faster than $\sqrt{n}$, $n - \sqrt{n}$ goes to infinity.

Monotone sequences: if a sequence is increasing (resp. decreasing) and bounded, then it is convergent. Proof sketch: consider the set $S$ of all $a_n$. By the completeness axiom, $\sup$ (resp. $\inf$) exists, so there is a sequence element greater than $\sup(S) - \epsilon$. Monotonicity implies that all elements after it satisfy the $\epsilon$ bound.

Lim sup, Lim inf, Cauchy Sequences

Given a sequence $a_n$, define $S_N = \sup \{a_n \mid n \ge N\}$. Notably, $N < M \implies S_N \ge S_M$, so $S_N$ is decreasing and has a (possibly infinite) limit. We define $\limsup a_n = \lim_{N\to\infty} S_N = \lim_{N\to\infty} \left( \sup_{n \ge N} a_n \right)$.

Example: $a_n = 3 + (-1)^n = 2, 4, 2, 4, \dots$. $\limsup a_n = 4$, $\liminf a_n = 2$.
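The tail suprema $S_N$ can be seen concretely; a quick numeric sketch for this example (finite tails stand in for the true sup):

```python
# a_n = 3 + (-1)^n: every tail {a_n : n >= N} contains both 2s and 4s,
# so the tail sups are all 4 and the tail infs are all 2.
def a(n):
    return 3 + (-1) ** n

terms = [a(n) for n in range(1, 1001)]
tail_sups = [max(terms[N:]) for N in range(100)]
tail_infs = [min(terms[N:]) for N in range(100)]
print(tail_sups[0], tail_infs[0])  # 4 2
```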

Some properties of these: $\liminf a_n \le \limsup a_n$, since nonstrict inequalities are preserved under limits. For any subsequence, $\limsup s_{n_k} \le \limsup s_n$. Proof sketch: use the fact that subsets have smaller (or equal) $\sup$ compared to the superset.

Cauchy sequence: $a_n$ is a Cauchy sequence if $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n, m > N$, we have $|a_n - a_m| < \epsilon$.

$a_n$ is Cauchy $\iff a_n$ converges. Proof sketch for the reverse direction: consider the $N$ for the bound $\epsilon/2$, then use the triangle inequality.

Proof sketch for the forward direction: first show $(a_n)$ converges iff $\limsup a_n = \liminf a_n$. Then show that $\limsup a_n$ and $\liminf a_n$ exist and are equal, taking advantage of the fact that Cauchy implies bounded.

Recursive sequences: if $S_n = f(S_{n-1})$ with $f$ continuous, then if the limit exists and equals $\alpha$, we have $\alpha = f(\alpha)$.

Example: if $S_n = \frac{S_{n-1}^2 + 5}{2 S_{n-1}}$ with $S_1 = 5$, then $\lim_{n\to\infty} S_n = \alpha \implies \frac{\alpha^2 + 5}{2\alpha} = \alpha$, which means $\alpha = \pm\sqrt{5}$. Be careful, though, because the initial condition matters. Given our initial condition, we can bound $S_n$ between $\sqrt{5}$ and $5$ inclusive for all relevant $n$ using induction, which implies $\sqrt{5}$ is the correct answer.
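Iterating the recursion makes the choice between $\pm\sqrt{5}$ visible; a quick numeric sketch (this recursion is Newton's iteration for $x^2 = 5$):

```python
import math

# S_n = (S_{n-1}^2 + 5) / (2 S_{n-1}) with S_1 = 5 stays in [sqrt(5), 5]
# and converges to the fixed point alpha = sqrt(5).
s = 5.0
for _ in range(20):
    s = (s * s + 5) / (2 * s)
print(s)  # ~2.23606797749979 = sqrt(5)
```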

Subsequences: if $s_n$ is a sequence, let $n_k$ be a strictly increasing sequence in $\mathbb{N}$. Then we can define a new sequence $t_k = s_{n_k}$, called a subsequence of $s_n$.

An important property of subsequences is that even if a sequence does not converge to a value, a subsequence of that sequence may. Specifically, let $(s_n)$ be any sequence. $(s_n)$ has a subsequence converging to $t$ iff $\forall \epsilon > 0$, the set $A_\epsilon = \{ n \in \mathbb{N} \mid |s_n - t| < \epsilon \}$ is infinite. In other words, there are infinitely many elements of $s_n$ within the epsilon bound of $t$.

Forward direction proof sketch: just use the definition of convergence. Reverse direction proof sketch: take $\epsilon$ smaller and smaller and take an element from each shrinking $\epsilon$-neighborhood to create the convergent subsequence (making sure that each element has a greater index than the one before it).

Every sequence has a monotone subsequence. Proof sketch: define a dominant term to be $s_n$ such that $\forall m > n$, $s_n > s_m$. Either there are infinitely many dominant terms (then the dominant terms form a monotone subsequence), or there are finitely many. In the second case, after the last dominant term, each element has some element greater than it with a higher index. Use this to form a monotone subsequence.

This theorem allows us to prove that every bounded sequence has a convergent subsequence, because a monotone subsequence must exist and it will be bounded, implying that it is convergent.

Given a sequence $(s_n)$, we say $t \in \mathbb{R}$ is a subsequence limit if there exists a subsequence that converges to $t$. For example, $\liminf s_n$ and $\limsup s_n$ are subsequence limits. Proof sketch: use the definitions of $\limsup$ and $\liminf$, specifically that each is a limit of $\sup$ (resp. $\inf$) values of the tail. Use the limit to produce an $\epsilon$ bound on the $\sup$ of tails for large enough $n$. Then show that this implies infinitely many elements of the original sequence within that $\epsilon$ bound.

Having exactly one subsequence limit is necessary and sufficient for a bounded sequence to be convergent.

Closed subsets: $S \subset \mathbb{R}$ is closed if for all convergent sequences in $S$, the limit also belongs to $S$.

Example: $(0, 1)$ is not closed, because the sequence $a_n = 1/n$ converges to $0$, which is not in the set.

An extremely important result is that for positive sequences, $$\liminf \frac{s_{n+1}}{s_n} \le \liminf \left(s_n\right)^{1/n} \le \limsup \left(s_n\right)^{1/n} \le \limsup \frac{s_{n+1}}{s_n}.$$ Proof sketch for the last inequality (the first is similar): use the fact that if $s_{n+1}/s_n \le L + \epsilon$ for $n \ge N$, then $s_{N+k}/s_N \le (L + \epsilon)^k$. For $n > N$, we have $s_n^{1/n} \le \left( s_N (L + \epsilon)^{n-N} \right)^{1/n}$. Take $\limsup$ of both sides, and the right side can be massaged to show $\limsup s_n^{1/n} \le L + \epsilon$.
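A numeric sketch of the inequality, using a sequence we chose for illustration whose ratios oscillate but whose roots settle down:

```python
# For s_n = 2^(n + (-1)^n), the ratios s_{n+1}/s_n alternate between
# 1/2 and 8, while the roots s_n^(1/n) = 2^(1 + (-1)^n / n) converge to 2,
# so liminf ratio (1/2) <= lim root (2) <= limsup ratio (8).
def s(n):
    return 2.0 ** (n + (-1) ** n)

ratios = [s(n + 1) / s(n) for n in range(1, 200)]
roots = [s(n) ** (1.0 / n) for n in range(1, 200)]
print(min(ratios), max(ratios), roots[-1])  # 0.5 8.0 ~1.993
```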

Metric Spaces

A metric space is a set $S$ and a function $d: S \times S \to \mathbb{R}_{\ge 0}$. The function $d$ must satisfy nonnegativity, $d(x,y) = 0 \iff x = y$, symmetry, and the triangle inequality.

We can generalize sequences to take values in $S$ instead of $\mathbb{R}$. Specifically, if $s_n$ is a sequence in $S$, then we define $s_n$ to be Cauchy if $\forall \epsilon > 0, \exists N > 0$ s.t. $\forall n, m > N$, $d(s_n, s_m) < \epsilon$. Also, we say $s_n$ converges to $s \in S$ if $\forall \epsilon > 0$, $\exists N > 0$ s.t. $\forall n > N$, $d(s_n, s) < \epsilon$. Convergent always implies Cauchy, but unlike the real number case, the two notions need not be equivalent in a general metric space.

We call a metric space $(S, d)$ complete if every Cauchy sequence has a limit in $S$. For example, $\mathbb{Q}$ is not complete, but $\mathbb{R}$ is.

The Bolzano-Weierstrass Theorem states that every bounded sequence in $\mathbb{R}$ has a convergent subsequence.

A topology on a set $S$ is a collection of open subsets such that $S, \varnothing$ are open; (possibly infinite) unions of open subsets are open; and finite intersections of open sets are open. We can define a topology on a metric space by defining the balls $B_r(p) = \{ x \in S \mid d(p,x) < r \}$ to be open.

We define $E \subset S$ to be closed iff $E^c$ is open, that is, $\forall x \notin E$, $\exists \delta > 0$ such that $B_\delta(x) \cap E = \varnothing$.

We define the closure of a set $E \subset S$ to be $\bar{E} = \bigcap \{ F \mid F \subset S \text{ closed}, E \subset F \}$. We similarly define the interior to be the union of all open subsets of $E$. The boundary of $E$ is the set difference between the closure and the interior.

We define the set of limit points $E'$ to be all points $p$ where $\forall \epsilon > 0, \exists q \in E, q \ne p$, such that $d(p, q) < \epsilon$. Very importantly, $E \cup E' = \bar{E}$. (It's also possible to *define* $\bar{E}$ this way, as Rudin does.)

An important property relating closedness and sequences: $E \subset S$ is closed $\iff$ for every convergent sequence $x_n$ in $E$, the limit $x = \lim x_n$ lies in $E$. The proofs for both directions use contradiction.

Compactness

We define an open cover of $E \subset S$ to be a collection of open sets $\{G_\alpha\}_{\alpha\in A}$ such that $E \subset \bigcup_\alpha G_\alpha$. From there, we can define compactness. A set $K \subset S$ is compact if for any open cover of $K$, there exists a finite subcover.

For example, if $K$ is finite, we can produce a finite subcover by picking, for each $x \in K$, a $G_x$ from the cover such that $x \in G_x$. Then the subcover is finite.

The Heine-Borel Theorem states that a subset of $\mathbb{R}^n$ is compact iff it is closed and bounded.

Lemma: a closed subset of a compact set is compact.

Sequential compactness is an alternate characterization of compactness, which states that $X$ is sequentially compact if every sequence of points in $X$ has a subsequence converging to a point in $X$.

Series

A series is an infinite sum of a sequence: $\sum_{n=1}^\infty a_n$. We define convergence as the convergence of the partial sums $S_n = \sum_{j=1}^n a_j$.

We say that a series satisfies the Cauchy condition if $\forall \epsilon > 0, \exists N > 0$ such that $\forall m \ge n > N$, $\left| \sum_{j=n}^m a_j \right| < \epsilon$. A series satisfying the Cauchy condition is equivalent to the sequence of partial sums being Cauchy, which is equivalent to the convergence of both.

If the series converges, then $\lim a_n = 0$. Note that the converse is not necessarily true: the harmonic series $\sum 1/n$ has terms tending to $0$ but diverges.

To determine whether series converge, we can use comparison tests.

Comparison test: $\sum_n a_n$ converges and $|b_n| \le a_n \implies \sum_n b_n$ converges. Proof sketch: show that $\sum_n b_n$ is Cauchy. Relatedly, we say that $\sum_n b_n$ converges absolutely if $\sum_n |b_n|$ converges.

A classic example of a convergent series that does not converge absolutely is the alternating harmonic series.
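A numeric sketch: the alternating partial sums settle (toward $\ln 2$, a standard fact), while the absolute-value partial sums keep growing:

```python
import math

# Partial sums of sum (-1)^{n+1} / n converge (to ln 2), but the
# partial sums of the absolute values (the harmonic series) keep growing.
alt, absolute = 0.0, 0.0
for n in range(1, 100001):
    alt += (-1) ** (n + 1) / n
    absolute += 1 / n
print(alt, absolute)  # ~0.6931 and ~12.09 (still climbing)
```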

For the following tests, we recall that

$$\liminf \frac{s_{n+1}}{s_n} \le \liminf \left(s_n\right)^{1/n} \le \limsup \left(s_n\right)^{1/n} \le \limsup \frac{s_{n+1}}{s_n}.$$

Root Test: let $\alpha = \limsup |a_n|^{1/n}$.

1. If $\alpha > 1$, then the series diverges.

2. If $\alpha < 1$, then the series converges absolutely.

3. If $\alpha = 1$, then the series could converge or diverge.

Ratio Test: let $\alpha = \limsup \left| \frac{a_{n+1}}{a_n} \right|$.

1. If $\alpha < 1$, the series converges absolutely.

2. If $\liminf \left| \frac{a_{n+1}}{a_n} \right| > 1$, the series diverges.
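A sketch of the ratio test in action on $\sum_n x^n/n!$, which converges for every $x$ (its sum is $e^x$):

```python
import math

# Ratio test for a_n = x^n / n!: |a_{n+1}/a_n| = |x|/(n+1) -> 0 < 1,
# so the series converges absolutely for every x; its sum is e^x.
def partial_exp(x, terms=60):
    return sum(x ** n / math.factorial(n) for n in range(terms))

print(partial_exp(1.0))  # ~2.718281828459045 = e
```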

Alternating Series Test: for "nonincreasing" (i.e. the absolute values of terms are nonincreasing) alternating series where the terms tend to $0$, the series converges.

Example: $\sum (-1)^n \frac{1}{\sqrt{n}}$ converges since the terms tend to $0$ and $\frac{1}{\sqrt{n}}$ is nonincreasing.

Integral Test: if $f$ is continuous, positive, and decreasing with $f(n) = a_n$, then convergence of the integral $\int_1^\infty f(x)\,dx$ is equivalent to convergence of the series.

Continuity and Uniform Convergence

I already LaTeX'd notes on continuity, which can be found here: https://rpurp.com/2021-03-02.pdf

Now let's examine the link between compactness and continuity. Specifically, if $f$ is a continuous map from $X$ to $Y$ and $E \subset X$ is compact, then $f(E) \subset Y$ is compact. This can be proven by sequential compactness: a quick proof sketch is that for a sequence in $f(E)$, we can pick preimages to get a sequence in $E$. Since $E$ is compact, there's a subsequence that converges to a value in $E$. Since continuity preserves limits, its image converges in $f(E)$ as well.

A corollary to this is that if $f: X \to \mathbb{R}$ is continuous and $E \subset X$ is compact, then $\exists p, q \in E$ such that $f(p) = \sup f(E)$ and $f(q) = \inf f(E)$.

However, a preimage of a compact set may not be compact.

Uniform Continuity

$f: X \to Y$ is uniformly continuous if $\forall \epsilon > 0$, $\exists \delta > 0$ such that $\forall p, q \in X, d_X(p, q) < \delta \implies d_Y(f(p), f(q)) < \epsilon$. Compared with the normal definition of continuity, we notice that we must select a $\delta$ that works for *all* points. For regular continuity, we can choose a $\delta$ per point.

Theorem: for $f: X \to Y$, if $f$ is continuous and $X$ is compact, then $f$ is uniformly continuous.

Example: $f(x) = 1/x$ is continuous on its domain but not uniformly continuous on $(0, 1)$. But if we restrict the domain to a compact interval $[1, 2]$, $f$ becomes uniformly continuous.
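On $[1, 2]$ there is even an explicit modulus: $|1/p - 1/q| = |p - q|/(pq) \le |p - q|$ since $pq \ge 1$, so $\delta = \epsilon$ works for every pair of points at once. A quick randomized spot-check (the sampling scheme is just for illustration):

```python
import random

# On [1, 2], |1/p - 1/q| = |p - q| / (p q) <= |p - q| because p q >= 1,
# so a single delta = eps works uniformly for all pairs of points.
random.seed(0)
eps = 1e-3
for _ in range(10000):
    p = random.uniform(1, 2)
    q = random.uniform(max(1.0, p - eps), min(2.0, p + eps))  # |p - q| <= eps
    assert abs(1 / p - 1 / q) < eps
print("uniform continuity check passed")
```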

Interesting theorem: given $f: X \to Y$ continuous with $X$ compact, and $f$ a bijection, then $f^{-1}: Y \to X$ is continuous. Proof sketch: show the preimage of an open set under $f^{-1}$ is open.

Connectedness

We define connectedness as follows: $X$ is connected iff the only subsets of $X$ that are both open and closed are $X$ and $\varnothing$.

An alternate definition is that $X$ is not connected iff $\exists U, V \subset X$ nonempty and open with $U \cap V = \varnothing$ and $X = U \coprod V$. Equivalently, $\exists S \subset X, \varnothing \ne S \ne X$ such that $S$ is both open and closed; then $X = S \coprod S^c$.

Example: $X = [0,1] \cup [2, 3]$. Then let $S = [0,1]$ and $S^c = [2,3]$.

Theorem: continuous functions preserve connectedness.

Theorem: $E \subset \mathbb{R}$ is connected $\iff \forall x, y \in E$ with $x < y$, $[x,y] \subset E$.

Rudin's definition of connectedness: $S$ is connected iff it cannot be written as $A \cup B$ with $A, B$ nonempty and $\bar A \cap B = A \cap \bar B = \varnothing$.

Intermediate value theorem: this falls almost directly out of the fact that continuous functions preserve connectedness. If $f: [a,b] \to \mathbb{R}$ is continuous, and $f(a) < f(b)$, then $\forall y \in (f(a), f(b)), \exists x \in (a,b)$ s.t. $f(x) = y$.
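The IVT is what makes bisection root-finding work; a minimal sketch:

```python
# Bisection: if f is continuous with f(a) < 0 < f(b), the intermediate
# value theorem guarantees a root in (a, b); halving the interval while
# keeping a sign change traps it.
def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    while b - a > tol:
        mid = (a + b) / 2
        if fa * f(mid) <= 0:
            b = mid              # sign change stays in [a, mid]
        else:
            a, fa = mid, f(mid)  # sign change stays in [mid, b]
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)  # ~1.4142135623 = sqrt(2)
```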

Discontinuities: given the left- and right-hand limits of $f$ at $x_0$, if $f(x_0) = f(x_0^+) = f(x_0^-)$, we say the function is continuous at $x_0$.

If both left- and right-hand limits exist, but they disagree with each other or with the value of the function, we call this a simple discontinuity. If one of the one-sided limits doesn't exist at all (the classic example is $\sin(1/x)$ around $x = 0$), we call it a discontinuity of the second kind.

Monotonicity and continuous functions: if $f: (a,b) \to \mathbb{R}$ is monotonically increasing, then $f$ has at most countably many discontinuities.

Convergence of Functions

We say that a sequence of functions $f_n$ converges pointwise to $f$ if $\forall x \in X$ we have $\lim f_n(x) = f(x)$.

Example: $f_n(x) = 1 + \frac{1}{n} \sin x$. Then $f_n \to f$ pointwise, where $f(x) = 1$.

An extremely useful object is the bump function, which we typically denote as $$\phi(x) = \begin{cases} 0 & x < 0 \\ 2x & x \in [0, 1/2] \\ 2 - 2x & x \in [1/2, 1] \\ 0 & x > 1 \end{cases}$$

For example, $f_n(x) = \phi(x - n)$ converges pointwise to $0$.
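A direct sketch of $\phi$ and the traveling bumps $f_n(x) = \phi(x - n)$:

```python
# The tent ("bump") function: 0 outside [0, 1], rising to phi(1/2) = 1.
def phi(x):
    if x < 0 or x > 1:
        return 0.0
    return 2 * x if x <= 0.5 else 2 - 2 * x

# f_n(x) = phi(x - n): at any fixed x, the bump has moved past x once
# n > x, so f_n(x) = 0 eventually and f_n -> 0 pointwise.
x = 0.7
values = [phi(x - n) for n in range(1, 10)]
print(phi(0.25), phi(0.5), values)
```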

We can see here that pointwise convergence does not preserve integrals: each $f_n$ above has integral $1/2$, but the pointwise limit has integral $0$. Also, pointwise convergence of $f_n$ doesn't imply pointwise convergence of $f_n'$.

For vectors, converging coordinatewise is equivalent to converging in the $d_2$ sense (actually, in any norm).

We can also have uniform convergence of functions. We say that $f_n$ converges to $f$ uniformly if $\forall \epsilon > 0$, $\exists N > 0$ such that $\forall n > N$, $\forall x \in X$, we have $|f_n(x) - f(x)| < \epsilon$.

We can also express uniform convergence using the equivalent notion of uniformly Cauchy: $\forall \epsilon > 0$, $\exists N > 0$ s.t. $\forall n, m > N$, $\forall x \in X$, $|f_n(x) - f_m(x)| < \epsilon$.

We also can talk about series of functions $\sum_{n=1}^\infty f_n$. We say that this converges uniformly to $f$ if the partial sums $F_N = \sum_{n=1}^N f_n$ converge uniformly to $f$.

A test we can apply to see whether a series of functions converges uniformly is the Weierstrass M-test: if $M_n \in \mathbb{R}$ satisfy $M_n \ge \sup_{x\in X} |f_n(x)|$ and $\sum_{n=1}^\infty M_n$ converges, then $\sum_{n=1}^{\infty} f_n$ converges uniformly. A proof sketch: bound the tail of the series using the triangle inequality and the tail of $\sum M_n$.
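A sketch with $f_n(x) = \sin(nx)/n^2$ and $M_n = 1/n^2$, where the tail bound $\sum_{n>N} M_n < 1/N$ holds at every $x$ simultaneously:

```python
import math

# Weierstrass M-test example: |sin(n x) / n^2| <= 1/n^2 = M_n, and
# sum 1/n^2 converges, so the function series converges uniformly.
def F(x, terms):
    return sum(math.sin(n * x) / n ** 2 for n in range(1, terms + 1))

# The tail past N is bounded by sum_{n > N} 1/n^2 < 1/N, independent of x.
for x in (0.0, 1.0, 2.5):
    assert abs(F(x, 100) - F(x, 10000)) < 1 / 100
print("uniform tail bound verified at sampled points")
```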

Unlike pointwise convergence, uniform convergence preserves continuity.

Theorem: If KK is compact, and fn:KRf_n: K \to \mathbb{R} with the following conditions:

1. fnf_n is continuous

2. fnff_n\to f pointwise, ff continuous

3. fn(x)>fn+1(x)f_n(x) > f_{n+1}(x)

Then fnff_n \to f uniformly.

Derivatives

We define the derivative of $f$ as $f'(x) = \lim_{t\to x} \frac{f(t) - f(x)}{t - x}$.

Differentiability implies continuity. This can be verified by multiplying the difference quotient by $t - x$ and using limit rules.

Chain rule: if $h(t) = g(f(t))$, then $h'(x) = g'(f(x)) f'(x)$.

We say $f$ has a local maximum at $p \in X$ if $\exists \delta > 0$ such that $f(q) \le f(p)$ for all $q \in X$ with $d(p, q) < \delta$.

If $f$ has a local maximum at $x \in (a,b)$, and if $f'(x)$ exists, then $f'(x) = 0$. This can be shown by bounding the one-sided difference quotients above and below by $0$.

Generalized MVT: if $f$ and $g$ are continuous on $[a,b]$ and differentiable on $(a,b)$, then $\exists x \in (a,b)$ such that $[f(b) - f(a)] g'(x) = [g(b) - g(a)] f'(x)$.

If we let $g(x) = x$, we get the regular MVT: $f(b) - f(a) = (b - a) f'(x)$ for some $x \in (a,b)$.

Intermediate Value Theorem for Derivatives: if $f$ is differentiable on $[a,b]$ and $f'(a) < \lambda < f'(b)$, then $\exists x \in (a,b)$ such that $f'(x) = \lambda$.

Proof sketch: let $g(t) = f(t) - \lambda t$. $g'$ goes from negative to positive, so somewhere in the interior $g$ attains its minimum and $g'(x) = 0$.

Corollary: if $f$ is differentiable on $[a,b]$, $f'$ cannot have simple discontinuities, only discontinuities of the second kind.

L'Hospital's Rule: suppose $f$ and $g$ are real and differentiable on $(a,b)$, and $g'(x) \ne 0$ for all $x \in (a,b)$. Let $\lim_{x\to a} \frac{f'(x)}{g'(x)} = A$ if it exists.

If $f(x) \to 0$ and $g(x) \to 0$ as $x \to a$, or if $g(x) \to \infty$ as $x \to a$, then $\lim_{x\to a} \frac{f(x)}{g(x)} = A$.

The proof is quite involved and is included as a question later.

Taylor's Theorem

Let $P(t) = \sum_{k=0}^{n-1} \frac{f^{(k)}(\alpha)}{k!}(t - \alpha)^k$. Then there exists a point $x$ between $\alpha$ and $\beta$ such that $f(\beta) = P(\beta) + \frac{f^{(n)}(x)}{n!}(\beta - \alpha)^n$.

This is essentially a different way to generalize the mean value theorem.

This proof will also be included as a question.
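A numeric sketch of the remainder form with $f = \exp$, $\alpha = 0$, $\beta = 1$, $n = 5$: the remainder must equal $e^x/5!$ for some $x \in (0,1)$, hence lie between $1/5!$ and $e/5!$:

```python
import math

# Taylor's theorem for exp at alpha = 0, beta = 1 with n = 5 terms:
# f(1) - P(1) = exp(x)/5! for some x in (0, 1), so it lies in (1/5!, e/5!).
n = 5
P = sum(1 / math.factorial(k) for k in range(n))  # exp^(k)(0) = 1 for all k
remainder = math.e - P
lo, hi = 1 / math.factorial(n), math.e / math.factorial(n)
print(lo < remainder < hi)  # True
```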

Integrals

Unless otherwise stated, assume $f$ is bounded and the bounds of all integrals are from $a$ to $b$.

We define a partition $P$ of $[a,b]$ as a finite set of points $x_0, x_1, \dots, x_n$ such that $a = x_0 \le x_1 \le \dots \le x_n = b$, and define $\delta x_i = x_i - x_{i-1}$ for convenience.

We further define $M_i = \sup_{x \in [x_{i-1}, x_i]} f(x)$, $m_i = \inf_{x \in [x_{i-1}, x_i]} f(x)$, $U(P,f) = \sum_{i=1}^n M_i \delta x_i$, and $L(P,f) = \sum_{i=1}^n m_i \delta x_i$.

We define the upper and lower Riemann integrals of $f$ to be

$\overline{\int} f\,dx = \inf_P U(P, f)$ and $\underline{\int} f\,dx = \sup_P L(P, f)$.

If the upper and lower integrals are equal, we say that $f$ is Riemann-integrable on $[a,b]$ and write $f \in \mathcal{R}$.

Since $\exists m, M$ s.t. $m \le f(x) \le M$, we have for all $P$, $m(b-a) \le L(P,f) \le U(P,f) \le M(b-a)$. So $L(P,f)$ and $U(P,f)$ are bounded, and the upper and lower integrals are defined for every bounded function $f$.

We now generalize a bit. Consider a partition $P$ and let $\alpha$ be a monotonically increasing function on $[a,b]$. We write $\delta \alpha_i = \alpha(x_i) - \alpha(x_{i-1})$.

We define $U(P, f, \alpha) = \sum_{i=1}^n M_i \delta\alpha_i$ and $L(P, f, \alpha) = \sum_{i=1}^n m_i \delta\alpha_i$.

We define $\overline{\int} f\,d\alpha = \inf_P U(P, f, \alpha)$ and $\underline{\int} f\,d\alpha = \sup_P L(P, f, \alpha)$.

If they are equal, we have the Riemann-Stieltjes integral $\int f\,d\alpha$.
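The upper and lower sums are easy to compute directly; a sketch for $f(x) = x$ with $\alpha(x) = x$ on $[0,1]$ (for this increasing $f$, the sup and inf on each subinterval sit at its endpoints):

```python
# Upper and lower sums U(P, f, alpha) and L(P, f, alpha) on a uniform
# partition of [0, 1], for f(x) = x with alpha(x) = x (the Riemann case).
def darboux_sums(f, alpha, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = L = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        d_alpha = alpha(x1) - alpha(x0)
        U += f(x1) * d_alpha  # f increasing: sup on [x0, x1] is f(x1)
        L += f(x0) * d_alpha  # ... and inf is f(x0)
    return U, L

U, L = darboux_sums(lambda x: x, lambda x: x, 0.0, 1.0, 1000)
print(L, U)  # ~0.4995 and ~0.5005, squeezing the integral 1/2
```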

We define $P^*$ to be a refinement of $P$ if $P \subset P^*$. The common refinement of $P_1$ and $P_2$ is defined to be $P_1 \cup P_2$.

Theorem: $L(P, f, \alpha) \le L(P^*, f, \alpha)$ and $U(P^*, f, \alpha) \le U(P, f, \alpha)$.

Proof sketch: to start, assume $P^*$ contains one more point $x^*$ than $P$, and consider the $\inf f(x)$ over each of the two intervals "induced" by $x^*$'s inclusion. Clearly both are at least $m_i$ (corresponding to the original interval containing $x^*$).

Theorem: $\overline{\int} f\,d\alpha = \inf_P U(P, f, \alpha) \ge \underline{\int} f\,d\alpha = \sup_P L(P, f, \alpha)$. Proof sketch: $L(P_1, f, \alpha) \le U(P_2, f, \alpha)$ via the common refinement. Fix $P_2$ and take $\sup$ over $P_1$, then take $\inf$ over all $P_2$.

Theorem 6.6: $f \in \mathcal{R}(\alpha)$ on $[a,b]$ iff $\forall \epsilon > 0$, $\exists P$ such that $U(P, f, \alpha) - L(P, f, \alpha) < \epsilon$.

If direction: $L \le \underline{\int} \le \overline{\int} \le U$, so $0 \le \overline{\int} - \underline{\int} < \epsilon$ for every $\epsilon$, so they are equal and $f \in \mathcal{R}(\alpha)$.

Only-if direction: if $f \in \mathcal{R}(\alpha)$, $\exists P_1, P_2$ s.t. $U(P_2, f, \alpha) - \int f\,d\alpha < \epsilon/2$ and $\int f\,d\alpha - L(P_1, f, \alpha) < \epsilon/2$.

Take the common refinement $P$. Show $U(P, f, \alpha) \le L(P, f, \alpha) + \epsilon$.

Fun facts about 6.6:

1. If it holds for some $P$ and some $\epsilon$, then it holds with the same $\epsilon$ for any refinement of $P$.

2. If $s_i, t_i \in [x_{i-1}, x_i]$, then $\sum_{i=1}^n |f(s_i) - f(t_i)|\, \delta\alpha_i < \epsilon$.

3. If $f$ is integrable and the hypotheses of (2) hold, then $\left| \sum_i f(t_i)\, \delta\alpha_i - \int f\,d\alpha \right| < \epsilon$.

Theorem: continuity implies integrability. Proof: use the fact that a continuous function on $[a,b]$ is uniformly continuous.

Theorem: monotonicity of $f$ and continuity of $\alpha$ imply integrability.

Theorem: if $\alpha$ is continuous wherever $f$ is discontinuous, and $f$ has only a finite number of discontinuities, then $f$ is integrable.

Theorem: suppose $f \in \mathcal{R}(\alpha)$ on $[a,b]$, $m \le f \le M$, $\phi$ continuous on $[m, M]$, and $h(x) = \phi(f(x))$ on $[a,b]$. Then $h \in \mathcal{R}(\alpha)$ on $[a,b]$.

Properties of the integral (that you didn't learn in Calculus BC): $\left| \int f\,d\alpha \right| \le M[\alpha(b) - \alpha(a)]$.

$\int f\,d(\alpha_1 + \alpha_2) = \int f\,d\alpha_1 + \int f\,d\alpha_2$.

$f$, $g$ integrable implies $fg$ integrable. Proof sketch: let $\phi(t) = t^2$ and use the identity $4fg = (f+g)^2 - (f-g)^2$.

$\left| \int f\,d\alpha \right| \le \int |f|\,d\alpha$. Proof sketch: let $\phi(t) = |t|$.

Given the unit step function $I$, if $\alpha(x) = I(x - s)$ for $s \in (a,b)$ and $f$ is continuous at $s$, then $\int f\,d\alpha = f(s)$. (This is quite similar to using the Dirac delta function in signal processing.)

Relating sums and integrals: given nonnegative $c_n$ with $\sum_n c_n$ convergent, a sequence $s_n$ in $(a,b)$, and $f$ continuous, if $\alpha(x) = \sum_{n=1}^\infty c_n I(x - s_n)$ then $\int f\,d\alpha = \sum_{n=1}^\infty c_n f(s_n)$.

Relating derivatives and integrals: assume $\alpha'$ is Riemann integrable. Then $f \in \mathcal{R}(\alpha) \iff f\alpha'$ is Riemann integrable, and in that case $\int f\,d\alpha = \int f(x) \alpha'(x)\,dx$.

Proof sketch: use Theorem 6.6. Then use the mean value theorem to obtain points $t_i$ such that $\delta\alpha_i = \alpha'(t_i)\, \delta x_i$.

Change of Variable Theorem: if $\varphi$ is a strictly increasing continuous function that maps $[A,B]$ onto $[a,b]$, $\alpha$ is monotonically increasing on $[a,b]$, and $f \in \mathcal{R}(\alpha)$ on $[a,b]$, define $\beta(y) = \alpha(\varphi(y))$ and $g(y) = f(\varphi(y))$. Then $\int_A^B g\,d\beta = \int_a^b f\,d\alpha$.

Integration and Differentiation.

If $F(x) = \int_a^x f(t)\,dt$, then $F$ is continuous on $[a,b]$, and if $f$ is continuous at $x_0$, then $F$ is differentiable at $x_0$ with $F'(x_0) = f(x_0)$.

Proof sketch: suppose $|f(t)| \le M$. For $x < y$: $|F(y) - F(x)| = \left| \int_x^y f(t)\,dt \right| \le M(y - x)$. So we can bound $|F(y) - F(x)|$ by $\epsilon$ since we can make $|y - x|$ as small as we want; therefore $F$ is uniformly continuous.

If $f$ is continuous at $x_0$, then given $\epsilon$ choose $\delta$ such that $|f(t) - f(x_0)| < \epsilon$ if $|t - x_0| < \delta$ and $a \le t \le b$. Hence for $s \le x_0 \le t$ with $t - s < \delta$, $\left| \frac{F(t) - F(s)}{t - s} - f(x_0) \right| = \left| \frac{1}{t-s} \int_s^t [f(u) - f(x_0)]\,du \right| < \epsilon$. Therefore, $F'(x_0) = f(x_0)$.

Fundamental Theorem of Calculus: if $f$ is integrable and $\exists F$ such that $F' = f$, then $\int_a^b f\,dx = F(b) - F(a)$.
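A numeric sanity check of the theorem with $f(x) = 3x^2$ and antiderivative $F(x) = x^3$ (midpoint rule standing in for the Riemann integral):

```python
# Fundamental theorem check: a midpoint-rule approximation of the
# integral of f(x) = 3 x^2 over [0, 2] should match F(2) - F(0) = 8
# for the antiderivative F(x) = x^3.
def midpoint_integral(f, a, b, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

approx = midpoint_integral(lambda x: 3 * x * x, 0.0, 2.0)
print(approx)  # ~8.0
```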

Final Questions

1: Prove there is no rational number whose square is 12 (Rudin 1.2).

A: Such a number would satisfy $x^2 - 12 = 0$. By the rational root theorem, we can enumerate the possible rational solutions $x = \pm\{1, 2, 3, 4, 6, 12\}$. It can be verified that none of these satisfy the equation, so the equation has no rational solution, and no rational number has a square of 12.

2. Why is $S = (0, \sqrt{2}]$ open in $\mathbb{Q}$?

A: For every point in $S$ we can construct a ball (in $\mathbb{Q}$) entirely contained within $S$. This works even near the right end because $\sqrt{2} \notin \mathbb{Q}$: every rational point of $S$ is at positive distance from $\sqrt{2}$.

3. Construct a bounded set of real numbers with exactly 3 limit points.

A: $\{1, 1/2, 1/3, 1/4, \dots\} \cup \{2, 1 + 1/2, 1 + 1/3, 1 + 1/4, \dots\} \cup \{3, 2 + 1/2, 2 + 1/3, 2 + 1/4, \dots\}$, with limit points $0$, $1$, $2$.

4. Why is the interior open?

A: It is defined to be the union of all open subsets of $E$, and a union of open sets is open.

5. Does convergence of $|s_n|$ imply that $s_n$ converges?

A: No. Consider $-1, 1, -1, 1, \dots$. $|s_n|$ converges to $1$ but $s_n$ does not converge.

6. Rudin 4.1: Suppose $f$ is a real function which satisfies $\lim_{h\to 0} [f(x+h) - f(x-h)] = 0$ for all $x \in \mathbb{R}$. Is $f$ necessarily continuous?

A: No: a function with a removable discontinuity still passes the test. For example, $f(x) = 0$ for $x \ne 0$ and $f(0) = 1$ satisfies the condition at every $x$, since the symmetric limit never looks at the value $f(x)$ itself.

7. What's an example of a continuous function with a discontinuous derivative?

A: Consider f(x)=xf(x) = |x|. The corner at x=0x= 0 has different left and right-hand derivatives of 1-1 and 11, respectively. This implies the derivative does not exist at x=0x=0, and a type-1 discontinuity exists there.

8. What's an example of a derivative with a type-2 discontinuity?

A: An example is $f(x) = x^2 \sin(1/x)$ with $f(0) := 0$. For $x \ne 0$, the derivative is $f'(x) = 2x \sin(1/x) - \cos(1/x)$, which has a type-2 discontinuity at $x = 0$: the $\cos(1/x)$ term oscillates, so neither one-sided limit exists. (Source: https://math.stackexchange.com/questions/292275/discontinuous-derivative)

9: 3.3: Let CC be s.t. C<1|C| < 1. Show Cn0C^n \to 0 as nn \to \infty.

A: Assume WLOG that $0 < C < 1$ (the general case follows since $|C^n| = |C|^n$). The limit $L$ exists since the sequence is decreasing and bounded below. Using the recursion $C^{n+1} = C \cdot C^n$, we get $L = CL$, which implies $L = 0$ since $C \ne 1$.

10: Can differentiable functions converge uniformly to a non-differentiable function?

A: Yes! Consider $f_n(x) = \sqrt{x^2 + \frac{1}{n}}$. It is clearly differentiable, and it converges to $|x|$ uniformly (the sup distance is at most $1/\sqrt{n}$). The limit develops a "kink" at $0$ that makes it non-differentiable.

11: 3.5: Let $S$ be a nonempty subset of $\mathbb{R}$ which is bounded above. If $s = \sup S$, show there exists a sequence $(x_n)$ in $S$ which converges to $s$.

A: Consider shrinking $\epsilon$ bounds. By definition, $[s - \epsilon, s]$ must contain a point of $S$; otherwise $s - \epsilon$ would be a better upper bound than the supremum! Thus we can build a sequence by starting with an epsilon bound of, say, $1$ and sampling a point within it, and then shrinking the epsilon bound to $1/2$, $1/4$, $1/8$, etc.

12: Show that f(x)=0f(x) = 0 if xx is rational and 11 if xx is irrational has no limit anywhere.

A: Use the fact that both the rationals and the irrationals are dense in the reals. Given an ϵ < 1/2, no matter what δ you pick, the interval (x - δ, x + δ) contains both rational and irrational numbers, so the function takes both values 0 and 1 within that interval; no candidate limit can be within ϵ of both.

13: What exactly does it mean for a function to be convex?

A: A function f is convex if for all x, y in its domain and all α with 0 ≤ α ≤ 1 we have f(αx + (1-α)y) ≤ αf(x) + (1-α)f(y).
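A spot-check of this inequality for the sample function f(x) = x² (a random sampling sketch, not a proof — convexity of x² follows from f(αx+(1-α)y) ≤ αf(x)+(1-α)f(y) ⟺ α(1-α)(x-y)² ≥ 0):

```python
import random

# Check the chord-above-graph inequality for f(x) = x^2 on random samples.
random.seed(0)
f = lambda x: x * x
ok = True
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    a = random.random()
    ok &= f(a * x + (1 - a) * y) <= a * f(x) + (1 - a) * f(y) + 1e-9
print(ok)  # True
```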

14. 5.2: Show that the value of δ\delta in the definition of continuity is not unique.

A: Given some satisfactory δ > 0, we can simply choose δ/2 instead. The set of inputs allowed by δ/2 is a subset of those allowed by the old δ, so every such x still satisfies the |f(x) - f(c)| < ϵ required for continuity.

15: Why is the flawed statement of the Fundamental Theorem of Calculus presented in lecture wrong?

A: Because the derivative of a function is not necessarily Riemann integrable, so the hypothesis that f' is integrable cannot be dropped.

16: What function is Stieltjes integrable but not Riemann integrable?

A: Take the piecewise function that equals the rational indicator function for 0 ≤ x ≤ 1 and 0 elsewhere. This is clearly not Riemann integrable, but we can choose the integrator α to be constant from 0 to 1 (i.e., assigning no weight to that part) to make it Stieltjes integrable. https://math.stackexchange.com/questions/385785/function-that-is-riemann-stieltjes-integrable-but-not-riemann-integrable

17: Why do continuous functions on a compact metric space X achieve their sup and inf?

A: f(X) is compact, since the continuous image of a compact set is compact; a compact subset of the reals is closed and bounded, so it contains its sup and inf.

18. What's a counterexample to the converse of the Intermediate Value Theorem?

A: Consider the piecewise function f(x) = x for 0 ≤ x < 5 and f(x) = x - 5 for 5 ≤ x ≤ 10. On [0, 10] it attains every value between f(0) = 0 and f(10) = 5, yet it is discontinuous at x = 5.
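A quick numeric sketch of this piecewise function (taking the second branch at the seam, so f(5) = 0), showing the endpoint values and the jump:

```python
# Counterexample to the converse of the IVT: all intermediate values
# between f(0) = 0 and f(10) = 5 are attained, but f jumps at x = 5.
def f(x):
    return x if x < 5 else x - 5

print(f(0), f(10))        # 0 5
print(f(4.999999), f(5))  # values straddle a jump of size ~5
```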

19: Sanity check: why is f(x)=x2f(x) = x^2 continuous at x=3x = 3?

A: limx3x2=9=f(3)\lim_{x\to 3} x^2 = 9 = f(3) (Definition 3). Proving with Definition 1 is annoying but you can see the proof in the problem book.
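For the Definition 1 (ϵ–δ) route, the key bookkeeping is |x² - 9| = |x + 3|·|x - 3| ≤ 7|x - 3| once |x - 3| ≤ 1, so δ = min(1, ϵ/7) works. A grid-based spot-check of that bound (illustration only):

```python
# Verify |x^2 - 9| <= 7|x - 3| on the window [2, 4], i.e. |x - 3| <= 1.
xs = [2 + 2 * i / 1000 for i in range(1001)]
print(all(abs(x * x - 9) <= 7 * abs(x - 3) + 1e-12 for x in xs))  # True
```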

20: Prove the corollary to the 2nd definition of continuity: f: X → Y is continuous iff f^{-1}(C) is closed in X for every closed set C in Y.

A: A set is closed iff its complement is open. We can then use the fact that $f^{-1}(E^c) = [f^{-1}(E)]^c$ for every $E \subset Y$. (Basically, take the complement of everything in the 2nd definition.)

21: What's a function that is not uniformly continuous but is continuous?

A: A simple example is f(x) = x², which is clearly continuous, but for a given ϵ no single δ works for every x, since the function steepens without bound.
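To see why no single δ suffices, note that (x + δ)² - x² = 2xδ + δ², which grows without bound in x even for a tiny fixed δ. A numeric sketch:

```python
# With delta fixed, the variation of x^2 over a width-delta step grows with x,
# eventually exceeding any fixed epsilon.
delta = 1e-3
jumps = [(x + delta) ** 2 - x ** 2 for x in (1, 10, 100, 1000)]
print(jumps)  # roughly 0.002, 0.02, 0.2, 2.0
```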

22: If we restrict x2x^2 to the domain [0,1][0,1], why does it become uniformly continuous?

A: This is because we restricted the domain to a compact set!

23: Give an example where E', the set of limit points of E, is a subset of E.

A: Consider the classic set $\{0\} \cup \{1, 1/2, 1/3, 1/4, \dots\}$. Then E' = \{0\}, a subset of E.

24: Give an example where it's a superset of EE.

A: Consider E=(0,1)E = (0, 1) then E=[0,1]E' = [0, 1].

25: Give an example where it's neither.

A: Consider E={1,1/2,1/3,1/4,}E = \{1, 1/2, 1/3, 1/4, \dots \}. E={0}E' = \{0\}. But this is neither subset nor superset of EE.

26: Why is continuity defined at a point and uniform continuity defined on an interval?

A: Plain continuity lets you pick a different δ for each point, so it makes sense at a single point. Uniform continuity requires one δ (per ϵ) to work simultaneously at every point, so it is a property of a whole set or interval.

27: What is a smooth function, and what's an example of one?

A: A smooth function is infinitely differentiable. All polynomials are smooth.
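One way to see that polynomials are smooth: differentiating a degree-d polynomial d+1 times yields the zero polynomial, and every intermediate derivative is again a polynomial, hence differentiable. A small coefficient-list sketch:

```python
# Represent a polynomial by its coefficient list: coeffs[i] is the x^i term.
def derivative(coeffs):
    """Return the coefficient list of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

p = [5, 0, -3, 2]  # 5 - 3x^2 + 2x^3, degree 3
for _ in range(4):
    p = derivative(p)
print(p)  # [] -- the zero polynomial after deg+1 differentiations
```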

28: Prove differentiability implies continuity.

A: Differentiability means lim_{t→x} (f(t) - f(x))/(t - x) exists and equals f'(x). Multiply by (t - x), whose limit as t → x is 0, and apply the product rule for limits: lim_{t→x} (f(t) - f(x)) = lim_{t→x} (f(t) - f(x))/(t - x) · (t - x) = f'(x) · 0 = 0, which means lim_{t→x} f(t) = f(x), i.e., continuity at x.

29: Use L'Hospital's Rule to evaluate limxxex\lim_{x\to \infty} \frac{x}{e^x}.

A: The numerator and denominator both tend to infinity, so L'Hospital's Rule applies. Differentiating top and bottom gives 1/e^x, which goes to 0; hence the original limit is 0.
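A numeric check of the same limit (illustration, not a proof):

```python
import math

# x / e^x tends to 0, matching the L'Hospital computation 1/e^x -> 0.
vals = [x / math.exp(x) for x in (1, 10, 50)]
print(vals)  # decreasing rapidly toward 0
```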

30: 5.1 Prove that if cc is an isolated point in DD, then ff is automatically continuous at cc.

A: Briefly, no matter what ϵ is chosen, we can choose a neighborhood small enough that c is the only point of D inside it. Then every x ∈ D in the neighborhood (namely x = c) satisfies |f(x) - f(c)| = 0 < ϵ.

math104-s21/s/ryanpurpura.txt · Last modified: 2022/01/11 10:57 by pzhou