Here you will find review material for this semester's final.
Topics of the class/Review Site will be organized as follows:
1. Sets and sequences. (Ch 1- 10 in Ross)
2. Subsequences, Limsup, Liminf. (Ch 10 - 12 in Ross)
3. Topology (Ch 13 in Ross, Chapter 2 in Rudin)
4. Series (Ch 14)
5. Continuity, Uniform Continuity, Uniform Convergence (Chapter 4, 7 in Rudin)
6. Derivatives (Chapter 5 in Rudin)
7. Integration (Chapter 6 in Rudin) Click here to access my webpage of exercises
Preface
This website exists so that a) I can review and organize my knowledge of basic real analysis and b) it can help other students in the class rehash the material.
In order to review the material more kinesthetically, I have decided to type out this project myself on the course webpage rather than cobbling together photos and notes. I have also given my takes on 51 important questions that come to mind on this webpage of exercises. These are meant to partially compensate for the more theoretical nature of lectures and readings.
Please enjoy.
Section 1: Sets and Sequences
Supremums and Infimums
Most of us learned very early on what a maximum or minimum element of a set is. Such concepts seem clear cut; however, things get confusing when we introduce certain open sets (e.g. (-5, 5)).
Trick question: What is the maximum element of (-5, 5)? Answer: there is no maximum.
Likewise, there's no minimum.
However, we know that (-5, 5) has a supremum, which is defined as the least upper bound of the set: here it is 5. Likewise, (-5, 5) has an infimum, which is defined as the greatest lower bound of the set: here it is -5.
More mathematical way of putting this (supremum is sup and infimum is inf):
sup S = x ⟺ (∀s∈S, s ≤ x) ∧ (∀y < x, ∃s′∈S: y < s′ ≤ x)
inf S = x ⟺ (∀s∈S, s ≥ x) ∧ (∀y > x, ∃s′∈S: y > s′ ≥ x)
E.g.
sup{x⋅y : x,y∈{1,2,3,4}} = 16, since {x⋅y : x,y∈{1,2,3,4}} = {1,2,3,4,6,8,9,12,16}. All elements of the set are less than or equal to 16, making 16 an upper bound. Moreover, for any z less than 16 there is an element of the set greater than z and at most 16 (namely 16 itself). This makes 16 the least upper bound of the set.
inf{x⋅y : x,y∈{1,2,3,4}} = 1. All elements of the set are greater than or equal to 1, making 1 a lower bound. Moreover, for any z greater than 1 there is an element of the set less than z (namely 1 itself). This makes 1 the greatest lower bound of the set.
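For a finite set, the supremum is just the maximum element and the infimum just the minimum, so the example above is easy to sanity-check. A minimal sketch in Python (the variable names are my own):

```python
# Build S = {x*y : x, y in {1, 2, 3, 4}} and check sup S and inf S.
# For a finite set, the least upper bound is the maximum element and
# the greatest lower bound is the minimum element.
S = {x * y for x in {1, 2, 3, 4} for y in {1, 2, 3, 4}}

sup_S = max(S)
inf_S = min(S)

print(sorted(S))     # [1, 2, 3, 4, 6, 8, 9, 12, 16]
print(sup_S, inf_S)  # 16 1
```

For infinite sets, of course, `max`/`min` may not exist even though sup/inf do; that is the whole point of the definitions above.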
E.g.
Let S=(−12,1)∪(2,14)
sup S = 14, since all elements of the set are less than or equal to 14, and for any number x less than 14 there exists an element of the set greater than x and less than or equal to 14; hence 14 is the least upper bound of the set.
inf S = −12. All elements of the set are greater than or equal to −12, making −12 a lower bound. Moreover, for any z greater than −12 there is always an element of the set less than z (take a point of (−12, 1) close enough to −12). This makes −12 the greatest lower bound of the set.
Remarks I would like to make: infS=−∞ iff S is not bounded from below and supS=∞ iff S is not bounded from above.
Useful Results and Miscellaneous Items from the First Week of Class:
From Homework 1: sup{s+u:s∈S,u∈U}=supS+supU.
From homework: inf(S+T)=infS+infT
Remark: not necessarily true that inf(S−T)=infS−infT
Archimedean Property (denseness of Q). Given a,b∈R with a ≠ b, we know that ∃q∈Q: q is between a and b.
Rational Zeros Theorem (not the Fundamental Theorem of Algebra): given a polynomial function P(x) = c_n x^n + c_{n−1} x^{n−1} + … + c_1 x + c_0 whose coefficients are integers, we know that any rational zero of P(x) can be written in the form
q_1/q_2, where q_1, q_2 are integer factors of c_0 and c_n respectively.
Sequences
A sequence is an ordered list of numbers/terms. If a sequence (x_n) in a metric space M converges to a limit l, then we know that
∀ϵ>0, ∃N>0: ∀n>N, d(x_n, l) < ϵ. As a sidenote, d is the metric of the metric space M. (We'll go over metric spaces in Section 3, but when dealing with R^n, d is the usual Cartesian/Euclidean distance function.)
Furthermore, a sequence (x_n) is defined to be Cauchy iff ∀ϵ>0, ∃N>0: ∀n,m>N, d(x_n, x_m) < ϵ. An interesting property to note: a sequence of real numbers is Cauchy iff it converges in R (this is the completeness of the real numbers). However, note that in a general metric space not all Cauchy sequences are convergent.
Definition: (limfn=∞)⟺(∀M>0:∃N>0:∀n>N,fn>M).
Definition: (limfn=−∞)⟺(∀M<0:∃N>0:∀n>N,fn<M).
E.g. x_n = (4n+n²)/(n⁴+1) converges to 0. We can take the limit: lim_{n→∞} x_n = lim_{n→∞} (4n+n²)/(n⁴+1) = 0, since the degree of the denominator exceeds that of the numerator.
E.g. x_n = n diverges: as n increases to infinity, x_n blows up to infinity, so x_n does not approach a real number.
E.g. x_n = −∑_{i=1}^n 1/i diverges to −∞: as n increases to infinity, x_n decreases without bound, since it is commonly known that the harmonic series ∑_{i=1}^n 1/i does not approach a real number.
E.g. x_n = (−1)^n diverges: since x_n just oscillates between 1 and −1, it does not approach a real number.
Useful Results (given both fn and gn converge)
Addition Rule lim(fn+gn)=limfn+limgn
Multiplication Rule: lim(f_n g_n) = lim f_n ⋅ lim g_n. (Note the addition rule extends to finite sums and this rule to finite products.)
Division Rule: lim(f_n/g_n) = lim f_n / lim g_n, provided lim g_n ≠ 0 (and each g_n ≠ 0).
Monotonically Increasing and Decreasing
Although it may be redundant, I thought that it would be best to establish the definitions of monotonically increasing and decreasing. This will come in handy in the next subsection.
A sequence x_n is said to be monotonically increasing iff x_{n+1} ≥ x_n for all n.
A sequence x_n is said to be monotonically decreasing iff x_{n+1} ≤ x_n for all n.
For instance, x_n = e^n monotonically increases, as x_{n+1} = e^{n+1} = (e)(e^n) = e⋅x_n ≥ x_n (since x_n ≥ 0 by the nature of the exponential function).
E.g. x_n = e^{−n} monotonically decreases, as x_{n+1} = e^{−(n+1)} = (1/e)e^{−n} = (1/e)x_n ≤ x_n (since x_n ≥ 0 by the nature of the exponential function).
Section 2: Subsequences, Limsup, Liminf
We define limsup s_n = lim_{N→∞} (sup{s_n : n > N}) and liminf s_n = lim_{N→∞} (inf{s_n : n > N}).
One important point to note is that, as functions of N, sup{s_n : n > N} and inf{s_n : n > N} are, respectively, monotonically decreasing and increasing, so these limits always exist (possibly ±∞). Furthermore, it is very important to note that limsup s_n = liminf s_n = lim s_n iff lim s_n exists.
E.g. Consider x_n = (−1)^n. We know that limsup x_n = 1 because x_n alternates between −1 and 1, with 1 occurring for even n and −1 for odd n, so every tail of the sequence has supremum 1. By the same reasoning, liminf x_n = −1. Since limsup ≠ liminf, the sequence diverges.
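One can build intuition by approximating the limsup and liminf numerically: take the sup and inf of a long tail of the sequence. A small sketch (the truncation point is an arbitrary finite stand-in for infinity):

```python
# Approximate limsup/liminf of x_n = (-1)^n by taking the sup and inf
# of a long tail {x_n : n >= N}; for this sequence every tail has
# supremum 1 and infimum -1.
def x(n):
    return (-1) ** n

N, M = 100, 10000  # tail start, and a finite truncation of the tail
tail = [x(n) for n in range(N, M)]
limsup_approx = max(tail)
liminf_approx = min(tail)
print(limsup_approx, liminf_approx)  # 1 -1
```

Since the tail values never settle, max and min of every tail disagree, mirroring limsup ≠ liminf.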
Useful results:
Given that a_n is a sequence of nonzero real numbers, liminf|a_{n+1}/a_n| ≤ liminf|a_n|^{1/n} ≤ limsup|a_n|^{1/n} ≤ limsup|a_{n+1}/a_n|.
Given that lim|a_{n+1}/a_n| = L, we know that lim|a_n|^{1/n} = L.
Subsequences
We define a subsequence (s_{n_k}) of (s_n) as the sequence of terms s_{n_1}, s_{n_2}, … whose indices satisfy n_k < n_{k+1} for all k∈N.
Important result: x_n → x iff every subsequence (x_{n_k}) of (x_n) satisfies x_{n_k} → x.
Another useful limit theorem: for every bounded sequence (x_n) (i.e. |x_n| ≤ M for some M), there exists a convergent subsequence of (x_n). This is called the Bolzano–Weierstrass theorem.
E.g. Let x_n = (−1)^n. Although we definitely know that x_n doesn't converge, the sequence is bounded (i.e. |x_n| ≤ 1). We can also exhibit a convergent subsequence: x_{2k} → 1, because x_{2k} = (−1)^{2k} = 1.
Useful Result: Given a sequence of nonzero real numbers s_n, we know liminf|s_{n+1}/s_n| ≤ liminf|s_n|^{1/n} ≤ limsup|s_n|^{1/n} ≤ limsup|s_{n+1}/s_n|. We also know from this theorem that lim|s_n|^{1/n} = lim|s_{n+1}/s_n| whenever lim|s_{n+1}/s_n| exists (and that includes when the limit is ±∞).
E.g. Consider x_n = 4^{1/n}. Note that x_n = 4^{1/n} = |4|^{1/n} = |a_n|^{1/n} with a_n = 4. Without using any complicated identities, lim|a_{n+1}/a_n| = lim|4/4| = 1. Applying the consequence from the previous "Useful Result", we get that lim|a_n|^{1/n} = lim 4^{1/n} = 1.
Section 3: Topology
In order to generalize and make more precise the discussion on limits and later functions, we need to create a language with which to discuss domain, range, and sets. In pure math, not everything is in the set of real numbers. Therefore, we introduce the concepts of metric spaces, balls, and generalized notions of compactness and open sets.
Metric Spaces
For most of our math lives up to Math 53, we have dealt with Cartesian space. We learned that the distance between two points in 3D space is d(x,y) = √((x₀−y₀)² + (x₁−y₁)² + (x₂−y₂)²). Then in Math 54 (or in Physics 89) we met linear spaces in terms of vectors, functions, polynomials, and all the familiar constructs we could think of. Now, we use an even more generalized idea of a space and distance.
A pair (A, d) is a metric space iff the metric d satisfies the following for all x, y, z ∈ A:
1. d(x,y) ≥ 0
2. d(x,y) = 0 ⟺ x = y
3. d(x,y) = d(y,x)
4. d(x,y) + d(y,z) ≥ d(x,z) (the triangle inequality)
A set S is said to be open in (A, d) iff
∀s∈S, ∃r>0: {x∈A : d(x,s) < r} ⊆ S. As a note, {x∈A : d(x,s) < r} = B_r(s), where B_r(s) is called the open ball of radius r around the point s.
A limit point s of S is a point such that we can construct a sequence of points in S that are not equal to s but nonetheless converge to s.
The closure of a set S is defined to be the set of all limit points of S union S itself.
Using this idea, a set S is said to be closed iff all limit points of S are in S.
E.g. Consider the space (−1,1) with metric d defined by d(x,y) = 1 if x ≠ y and d(x,y) = 0
if x = y (the discrete metric). Denote our metric space by (M, d). It can be established that d is a valid metric.
This is because:
∀x,y∈(−1,1),d(x,y)≥0. Also note by definition of d that d(x,y)=0 iff x=y.
By definition of the function we know that d(x,y) = d(y,x): when x = y, d(x,y) = 0 = d(y,x), and when x ≠ y, d(x,y) = 1 = d(y,x).
Finally, the triangle inequality holds in this metric space. Suppose it did not. Then ∃x,y,z∈(−1,1): d(x,y) + d(y,z) < d(x,z). The only way this is possible, given how this metric is set up, is if d(x,y) + d(y,z) = 0 and d(x,z) = 1. In that case we know d(x,y) = 0 = d(y,z), since our metric can never spit out a negative value. Thus we have to conclude x = y and y = z, and therefore x = z. However, this leads us to say that d(x,z) = 0 ≠ 1, which is a contradiction.
In this space we can also say that [0, 0.5] is open in (M, d): for any x∈[0,0.5], ∃r>0: {y∈M : d(y,x) < r} ⊆ [0,0.5]. Specifically, we can make r any number less than 1. That way, regardless of what x∈M is, we know that {y∈M : d(x,y) < r} = {x} ⊆ [0,0.5].
Interestingly, we can also prove that [0,0.5] is closed in (M,d). Writing E = [0,0.5], we need to show E = Ē = E ∪ E′ (we denote by E′ the set of limit points of E).
Consider that E has no limit points! To see this, take an arbitrary p∈M and any sequence p_n → p. The definition of the limit tells us that ∀ϵ>0, ∃N: ∀n>N, d(p_n, p) < ϵ. Now consider what happens when 0 < ϵ < 1. In that case we know that ∀n>N, p_n = p, because of how the metric d is defined: the only way for d(p_n, p) < 1 is for p_n = p, so that d(p_n, p) = 0. Since every sequence converging to p is eventually equal to p, p can never be a limit point.
Therefore E′ = ∅, so Ē = E, and thus [0,0.5] is closed. This illustrates that just because a set is open does not necessarily mean that it is not closed.
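The four metric axioms for this discrete metric can be checked mechanically on a sample of points. A minimal sketch (the sample of points is my own choice):

```python
from itertools import product

# The discrete metric from the example: d(x, y) = 0 if x == y, else 1.
def d(x, y):
    return 0 if x == y else 1

pts = [-0.9, -0.5, 0.0, 0.25, 0.5]  # a sample of points from (-1, 1)

# Check all four metric axioms on every triple from the sample.
for x, y, z in product(pts, repeat=3):
    assert d(x, y) >= 0                  # nonnegativity
    assert (d(x, y) == 0) == (x == y)    # zero distance exactly when equal
    assert d(x, y) == d(y, x)            # symmetry
    assert d(x, y) + d(y, z) >= d(x, z)  # triangle inequality
print("all four axioms hold on the sample")
```

Of course, a finite check is not a proof; the case analysis in the text is what actually establishes the axioms for all points.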
Useful results on Open Sets and Closed Sets
A set is open iff its complement is closed.
Given that ⋃Si is a union of open sets, ⋃Si is open.
Given that ⋂Si is the intersection of closed sets, ⋂Si is closed.
Given that ⋃Si is a union of finitely many closed sets, ⋃Si is closed.
Given that ⋂Si is the intersection of finitely many open sets, ⋂Si is open.
Induced Topology
Sometimes, we want to create our own metric space and transfer over properties. How these properties transfer can be summarized by a diagram from lecture (credits to Peng Zhou).
In that diagram there are two ways to create an induced topology: we can either obtain the topology from an induced metric or from the open sets of the ambient space itself.
One key idea to note (and I quote from the course notes): given A ⊆ S where (S, d) is a metric space, we can equip A with an induced metric. Then E ⊆ A is open in A iff ∃ an open set E₂ ⊆ S such that E = E₂ ∩ A. Since the complement of an open set in the ambient space is closed, we can use this theorem to tell a lot, topologically, about the set.
Other Basic Topology Definitions
A set is countable iff there exists a one-to-one correspondence (bijection) between the elements of the set and the set of natural numbers.
An open ball (or neighborhood as referred to in Rudin), about point x and with radius r is B(r;x)={y∈M:d(y,x)<r} where (M, d) is the ambient metric space of x.
p is a limit point of S (a subset of M, whose ambient metric space is (M,d)) iff ∀r>0, ∃p₂∈S with p₂ ≠ p such that p₂∈B(r; p).
S is bounded iff ∃r:∀p,q∈S,d(p,q)≤r.
S is dense in X iff every element in X is in S or is a limit point of S.
S is a perfect set iff S is closed and S does not have any isolated points.
diam(S), the diameter of S is given to be diam(S)=supx,y∈Sd(x,y).
E.g.: The set {1/n : n∈N} has limit point 0, since lim 1/n = 0 and therefore ∀ϵ>0, ∃N∈N: ∀n≥N, d(1/n, 0) = |1/n − 0| < ϵ. Therefore every ball B(ϵ; 0) contains points of the set different from 0, which is exactly the definition of a limit point.
Compactness
In a very abstract way, we can think of compactness as the mathematical way of saying controlled, contained, or small; to get a gist of what compactness means, one may take in some of the interesting analogies/metaphors/layman concepts expressed in this article from Scientific American.
Compactness is a tool that is hard to master but is VERY useful once you get it. For instance, we can link compactness to uniform continuity of a function, to the compactness of the image of a compact set under a continuous function, to preserving the openness of a set, and even to convergence of subsequences. Because compactness is related to so many other ideas, understanding it very thoroughly is important if you want to continue in analysis at the undergraduate and graduate levels.
An open cover of set S is a collection of open sets {Cα} such that S⊆⋃Cα.
A set S is defined to be compact iff every open cover {Cα} of S has a finite subcover.
E.g. we can say that (0,1) is not a compact set. This is a classic example. Consider the open cover C = {(1/(n+2), 1/n) : n∈N}; its union contains (0,1), and C has infinitely many elements. However, no finite subcollection of C covers (0,1): any finite subcollection has a smallest left endpoint 1/(N+2) > 0, so the points of (0, 1/(N+2)] are left uncovered. Thus C has no finite subcover, and (0,1) is not compact.
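The "smallest left endpoint" argument can be made concrete: pick any finite subfamily and exhibit an uncovered point. A sketch using exact rational arithmetic (the particular subfamily and witness are my own):

```python
from fractions import Fraction

# The cover C = {(1/(n+2), 1/n) : n in N} of (0, 1): any finite subfamily
# indexed by F has smallest left endpoint 1/(max(F)+2) > 0, so points of
# (0, 1) below that endpoint are left uncovered.
def covered(x, F):
    return any(Fraction(1, n + 2) < x < Fraction(1, n) for n in F)

F = range(1, 50)                   # a finite subfamily: n = 1, ..., 49
witness = Fraction(1, max(F) + 3)  # a point below every left endpoint

print(covered(Fraction(1, 2), F))  # True: 1/2 lies in (1/3, 1)
print(covered(witness, F))         # False: no interval in F reaches it
```

Whatever finite F you choose, a witness just below 1/(max(F)+2) escapes it, which is exactly why no finite subcover exists.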
One shortcut applicable to compactness when dealing with real numbers is that a subset of R is compact iff that subset is closed and bounded; this is called the Heine–Borel theorem. Notice that we can only apply this to subsets of R (more generally, of R^n), not to arbitrary metric spaces.
However, while one can be sure that a compact set is closed and bounded in any metric space, the converse can fail. For instance, consider the set [−1,1]∩Q as a subset of the metric space Q. The set is closed in Q (indeed [−1,1]∩Q = E ∪ E′ within Q, the union of its points and its limit points in Q) and clearly bounded, yet it is not compact: a sequence of rationals in the set converging to an irrational such as √2/2 has no subsequence converging to a point of the set.
More Useful Results for Compactness
Sequential Compactness = Compactness (in metric spaces). Sequential compactness specifically means that given any sequence of points s_n in the set S, we know that ∃ a subsequence s_{n_k} that converges to a point in S.
We also know that given f a continuous map from X → Y and S a compact subset of X, the image f(S) is a compact subset of Y.
Closed subsets of a compact set are compact.
Given that {Cα} is a collection of compact sets such that the intersection of any finite number of sets in the collection is nonempty, we know that ⋂Cα is nonempty.
Connectedness
We know that A is connected iff we cannot write A as a union of two disjoint nonempty open sets.
E.g. (Based on a problem from midterm 2.) Q is NOT a connected set: we can write it as a union of two disjoint nonempty open sets, (−∞, √2)∩Q and (√2, ∞)∩Q.
E.g. On the other hand, R is a connected set: any attempt to write R as a union of two disjoint nonempty open sets inevitably fails. (Perhaps an exercise to prove.)
Section 4: Series
This section proceeds in a way similar to how we think of sequences; after all, we are working with indices and hence a countable number of terms. We can think of ∑_{i=0}^∞ s_i = lim_{N→∞} S_N, where S_N = ∑_{i=0}^N s_i is the Nth partial sum. Similar terminology and ideas apply, such as a form of the sandwich theorem and the idea of convergence. However, this concept also ties into the properties of Taylor expansions later on, and there are some additional, or rephrased, rules that need to be kept in mind when examining the convergence/divergence of sums.
E.g. Let ∑_{i=0}^∞ 2^{−i}|sin i|. We know that |sin i| ≤ 1 by definition of the sine function; this makes 0 ≤ 2^{−i}|sin i| ≤ 2^{−i}. Also ∑_{i=0}^∞ 2^{−i} = 1/(1 − 1/2) = 2, as we learned from high school math. Since ∑_{i=0}^∞ 2^{−i} converges and 0 ≤ 2^{−i}|sin i| ≤ 2^{−i}, we use the comparison test to conclude that ∑_{i=0}^∞ 2^{−i}|sin i| converges.
E.g. Let ∑_{i=1}^∞ 2^i/i. Notice that 2^i (1/i) ≥ 1/i ≥ 0. Furthermore, ∑_{i=1}^∞ 1/i is a well known divergent series (harmonic). Thus we know that ∑_{i=1}^∞ 2^i/i diverges by the comparison test.
Another useful test we can use is the ratio test:
Let ∑_{i=0}^∞ a_i, where each a_i is nonzero. The series converges if limsup|a_{n+1}/a_n| < 1. The series diverges if liminf|a_{n+1}/a_n| > 1. Otherwise the test is inconclusive.
E.g. Let ∑_{n=1}^∞ n²/4^n. We know limsup|a_{n+1}/a_n| = lim|a_{n+1}/a_n| = lim ((n+1)²/4^{n+1})/(n²/4^n) = lim (1/4)((n+1)/n)² = 1/4 < 1.
Thus ∑_{n=1}^∞ n²/4^n converges.
E.g. Let ∑_{n=1}^∞ 2^n/n. We know liminf|a_{n+1}/a_n| = lim|a_{n+1}/a_n| = lim (2^{n+1}/(n+1))/(2^n/n) = lim 2n/(n+1) = 2 > 1. Thus ∑_{n=1}^∞ 2^n/n diverges.
And then the root test:
Given ∑n=0∞an, if limsup∣an∣n1<1, then we know that the series converges. Otherwise if limsup∣an∣n1>1, the series diverges.
E.g. Consider ∑_{n=0}^∞ (1/3)^n. We know that limsup|(1/3)^n|^{1/n} = 1/3 < 1, and hence the series converges.
E.g. Consider ∑_{n=0}^∞ (4/3)^n. We know that limsup|(4/3)^n|^{1/n} = 4/3 > 1, and hence the series diverges.
And consider the Integral Test:
Given ∑_{n=0}^∞ a_n with f(n) = a_n, where f is continuous, monotonically decreasing, and nonnegative on [0,∞): if ∫₀^∞ f(x)dx converges then the series converges, and if ∫₀^∞ f(x)dx diverges then the series diverges.
E.g. ∑_{n=0}^∞ 1/(n+1). We know that f(x) = 1/(x+1) monotonically decreases on x∈[0,∞) and is nonnegative on that interval as well. Hence we apply the integral test: ∫₀^∞ f(x)dx = [ln(x+1)]₀^∞ = ∞. Thus ∑_{n=0}^∞ 1/(n+1) diverges.
E.g. ∑_{n=0}^∞ 1/(n+1)². We know that f(x) = 1/(x+1)² monotonically decreases on x∈[0,∞) and is nonnegative on that interval as well. Hence we apply the integral test: ∫₀^∞ f(x)dx = [−1/(x+1)]₀^∞ = 1. Thus ∑_{n=0}^∞ 1/(n+1)² converges.
Speaking of the integral test, from it we can derive the p-series test. This test states that given the series ∑_{n=1}^∞ 1/n^p: if p ≤ 1 the series diverges, and if p > 1 the series converges.
Finally there is the alternating series test:
Given ∑_{n=1}^∞ (−1)^n a_n where a_n > 0, lim a_n = 0, and a_n monotonically decreases, we know that ∑_{n=1}^∞ (−1)^n a_n converges.
If ∑_{n=1}^∞ a_n converges, the series ∑_{n=1}^∞ (−1)^n a_n is said to converge absolutely. (In general, a series ∑ b_n converges absolutely iff ∑|b_n| converges.)
E.g. Consider ∑_{n=1}^∞ (−1)^n 1/(n²+n). We know that a_{n+1} = 1/((n+1)²+(n+1)) = 1/(n²+3n+2) ≤ 1/(n²+n) = a_n and lim a_n = lim 1/(n²+n) = 0. By the alternating series test, ∑_{n=1}^∞ (−1)^n 1/(n²+n) converges. Furthermore, the series converges absolutely, as 0 ≤ |a_n| = 1/(n²+n) ≤ 1/n² and ∑_{n=1}^∞ 1/n² converges by the p-series test (p = 2), so the comparison test applies.
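This example has a pleasant bonus: since 1/(n(n+1)) = 1/n − 1/(n+1), the absolute partial sums telescope to 1 − 1/(N+1), so absolute convergence can be verified in closed form. A small sketch:

```python
# The alternating series from the example: a_n = 1/(n^2 + n). Its partial
# sums settle quickly, and since 1/(n(n+1)) = 1/n - 1/(n+1) telescopes,
# the absolute partial sums equal 1 - 1/(N+1) and stay below 1.
def S(N):
    return sum((-1) ** n / (n * n + n) for n in range(1, N + 1))

def S_abs(N):
    return sum(1.0 / (n * n + n) for n in range(1, N + 1))

print(S(1000))                    # the alternating sum, essentially settled
print(S_abs(1000), 1 - 1 / 1001)  # matches the telescoped closed form
```

The boundedness of S_abs is exactly the absolute convergence claimed above.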
Some interesting things you can do with these series include computing their exact sums. This can be done through things like Fourier series or common Maclaurin expansions. You may have seen these in some hard Math 1B exams.
Section 5: Continuity
This section of the course picks up from the 1–2 weeks in AP Calc BC that were dedicated to continuity. Back then, the domain of a real function was the real line, and we knew that f is continuous at x₀ iff lim_{x→x₀⁺} f(x) = lim_{x→x₀⁻} f(x) = f(x₀). This made things very easy to prove. We also took for granted what continuity gives us, like the intermediate value theorem and the fact that a differentiable function is continuous. Now, once we start to deal with different metrics and less physically intuitive spaces (e.g. even hard-to-draw 3D space), we need to consider many more dimensions and nuances of continuity. This requires redefining/generalizing from scratch the limit of a function and then formalizing the many behaviors of a continuous function.
Limits and The Many Ways to Say "Continuous"
We can say that a function f has a limit l at p iff for every sequence p_n → p with p_n ≠ p, lim f(p_n) = l. I have to emphasize the "for every p_n → p" here, since this is what makes it harder to show that a limit exists than to show that a limit doesn't exist. You may have learned this from doing Math 53 (Multivariable Calc) limits.
In calculus class we learned that a function f is continuous at p iff lim_{x→p} f(x) = f(p). f is continuous (implicitly, over its domain S) if ∀p∈S, lim_{x→p} f(x) = f(p).
Another way of saying this is that ∀pn:pn→p,limn→∞f(pn)=f(p).
Other statements equivalent to continuity of f:X→Y, given x (element in metric space (X, d)):
The epsilon–delta definition: ∀ϵ>0, ∃δ>0: ∀x₂∈X, d_X(x₂, x) < δ → d_Y(f(x₂), f(x)) < ϵ.
Given an arbitrary open V ⊆ Y, f⁻¹(V) is open in X. (Note: f⁻¹(V) = {x∈X : f(x)∈V}.)
One thing to also note is how compactness is preserved through continuous functions. More precisely: given X2⊆X is compact, we know that f(X2) is also compact. This is proven in Problem 1 of my problems page.
Furthermore, from The Problem Book in Real Analysis we know f is not uniformly continuous iff ∃ϵ>0 and sequences x_n, y_n with |x_n − y_n| < 1/n such that |f(x_n) − f(y_n)| ≥ ϵ.
Be careful to not confuse uniform continuity with mere continuity!
Uniform continuity vs regular continuity
This is a TIGHTER condition when compared to regular continuity.
E.g. f(x) = 1/x on (0,1). (Probably a simple example from the Problem Book for Real Analysis.)
We know that f(x) = 1/x is continuous (this can be a problem for another time). However, we also know that f(x) = 1/x is NOT uniformly continuous over (0,1). Take ϵ = 1/2 and the sequences x_n = 1/n, y_n = 1/(2n) in (0,1): we have |x_n − y_n| = 1/(2n) < 1/n, yet |f(x_n) − f(y_n)| = |n − 2n| = n > 1/2 = ϵ.
On the other hand, we know that the exponentially decaying function f(x) = e^{−x} is uniformly continuous on [1,∞). Specifically, one can notice that |e^{−x} − e^{−y}| ≤ (1/e)|x − y| on [1,∞) (extra challenge: prove this Lipschitz bound using the mean value theorem or some other means). Thus, given ϵ>0, we can make δ = ϵ. That way, if |x − y| < δ, then |e^{−x} − e^{−y}| ≤ (1/e)|x − y| < δ = ϵ. Thus we can say that ∀ϵ>0, ∃δ>0: (|x−y| < δ) ⟶ (|f(x) − f(y)| < ϵ), so f on [1,∞) is uniformly continuous.
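The witness-sequence criterion for 1/x is easy to watch numerically: the inputs x_n, y_n collapse together while the outputs fly apart. A quick sketch:

```python
# Witnesses showing f(x) = 1/x is not uniformly continuous on (0, 1):
# x_n = 1/n and y_n = 1/(2n) get arbitrarily close together, yet the
# output gap |f(x_n) - f(y_n)| = n grows without bound.
def f(x):
    return 1.0 / x

for n in (10, 100, 1000):
    xn, yn = 1.0 / n, 1.0 / (2 * n)
    print(n, abs(xn - yn), abs(f(xn) - f(yn)))  # inputs close, outputs far
```

No single δ can tame every pair (x_n, y_n) at once, which is exactly the failure of uniform continuity.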
Some useful results:
If S is a compact subset of the domain of a continuous function f, we know that f is uniformly continuous on S. (Recall that compact subsets of a metric space are automatically closed and bounded.)
Uniform Convergence
Another interesting topic of continuity seems almost the same as the first parts of this course regarding sequences. However, we first change things by using functions as the elements of our sequence f_n. Secondly, uniform convergence requires the limit function f to satisfy ∀ϵ>0, ∃N: ∀n>N, d_Y(f_n(x), f(x)) < ϵ for every x in the domain of f simultaneously (the same N works for all x).
Looks familiar? Yes. Is this exactly the same as saying that f_n → f pointwise? NO!
After all, consider the definition of pointwise convergence, which we have learned before:
∀x∈X, ∀ϵ>0, ∃N>0: ∀n>N, d_Y(f_n(x), f(x)) < ϵ. The definitions look similar, but the quantifier order DIFFERS: in pointwise convergence N may depend on x, while in uniform convergence one N must work for every x at once.
E.g. of a sequence that converges pointwise but not uniformly: consider f_n(x) = 1 − x²/n² on R. We know f_n(x) → f(x) = 1 pointwise. However, it is not the case that ∀ϵ>0, ∃N: ∀n>N, |f_n(x) − f(x)| < ϵ for all x.
Suppose it were the case: then ∀ϵ>0, ∃N: ∀n>N, |1 − x²/n² − 1| = x²/n² < ϵ for all x in R. But at x = n this quantity equals n²/n² = 1, and this applies for every n. Hence, if I make ϵ = 1/2, it is not true that ∃N: ∀n>N, |1 − x²/n² − 1| < 1/2 for all x in R. Thus a contradiction has been reached.
In other words, f_n(x) = 1 − x²/n² → f(x) = 1 pointwise but not uniformly.
E.g. of a uniformly convergent sequence: consider f_n(x) = x/(1 + 4^{−n}) on [0,1]. We know that f_n → f pointwise, where f(x) = x. We can prove that f_n → f uniformly on [0,1]: |x/(1 + 4^{−n}) − x| = |x|⋅4^{−n}/(1 + 4^{−n}) = |x|/(4^n + 1) ≤ 1/(4^n + 1), which converges to 0 as n increases, regardless of x∈[0,1]. Thus ∀ϵ>0, ∃N: ∀n>N, |f_n(x) − f(x)| < ϵ for all x. By definition of uniform convergence, f_n → f uniformly. (The restriction to a bounded interval matters: on all of R the error |x|/(4^n + 1) is unbounded in x.)
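The contrast between the two examples is visible in the worst-case error: a single witness point x = n keeps the first error at 1 forever, while the second error (restricted to [0,1], as in the example above) dies out. A sketch, with the grid as my own stand-in for the interval:

```python
# Compare the two modes of convergence: for f_n(x) = 1 - x^2/n^2 the
# witness x = n keeps |f_n(x) - 1| equal to 1 for every n, while for
# g_n(x) = x/(1 + 4^{-n}) on [0, 1] the worst-case error dies out.
def sup_norm(g, f, xs):
    return max(abs(g(x) - f(x)) for x in xs)

grid_01 = [i / 1000.0 for i in range(1001)]  # grid on [0, 1]

for n in (5, 10, 20):
    bad_witness = abs((1 - n * n / n ** 2) - 1.0)  # |f_n(n) - 1|, always 1
    good = sup_norm(lambda x: x / (1 + 4.0 ** (-n)), lambda x: x, grid_01)
    print(n, bad_witness, good)
```

Uniform convergence is precisely the statement that this worst-case (sup-norm) error tends to 0.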
Some useful results regarding uniform convergence:
lim_{n→∞} sup{|f_n(x) − f(x)| : x∈X} = 0 ⟺ f_n → f uniformly.
If |f_n(x)| ≤ M_n ∀x∈X, n∈N and ∑_{i=0}^∞ M_i converges, we know that the series of functions ∑_{i=0}^∞ f_i(x) converges uniformly (to f(x) = ∑_{i=0}^∞ f_i(x)). This is the Weierstrass M-Test.
(∀ϵ>0,∃N>0:∀m,n>N,∣fm(x)−fn(x)∣<ϵ,∀x∈X)⟺fn→f uniformly. This is the Cauchy Criterion for uniform convergence.
Important results for Continuity
Given that A is a connected subset of X the domain of continuous function f:X→Y, we know that f(A) also has to be a connected set.
Given that C is a compact subset of X and given continuous f : X → Y, we know that f is uniformly continuous on C.
The Intermediate Value Theorem: Given f continuous on [a,b], a<b, and y in between f(a),f(b), we know that there exists x∈(a,b) such that f(x)=y.
E.g. (Credits to Chris Rycroft's Practice Spring 2012 Math 104 Final) :
We would like to show that ∀n∈N,p(x)=x2n+1−4x+1 has at least three zeroes.
This problem may seem daunting, but we can use the intermediate value theorem + the properties of odd exponents.
Consider the following, using the fact that 2n+1 is odd and at least 3. We obtain:
p(−3) = (−3)^{2n+1} − 4(−3) + 1 ≤ (−3)³ + 12 + 1 = −14 < 0
p(−1) = (−1)^{2n+1} − 4(−1) + 1 = −1 + 4 + 1 = 4 > 0
p(1) = (1)^{2n+1} − 4(1) + 1 = 1 − 4 + 1 = −2 < 0
p(2) = (2)^{2n+1} − 4(2) + 1 ≥ (2)³ − 8 + 1 = 1 > 0
We can now use the intermediate value theorem to say that ∃x₁∈(−3,−1): p(x₁) = 0, ∃x₂∈(−1,1): p(x₂) = 0, and ∃x₃∈(1,2): p(x₃) = 0. This is since a) p, a polynomial function, is continuous on R, and in particular on [−3,−1], [−1,1], and [1,2], and b) 0 lies strictly between p(−3) and p(−1), between p(−1) and p(1), and between p(1) and p(2).
We can furthermore declare that x₁, x₂, x₃ are pairwise distinct, because these x's belong to disjoint open intervals.
Therefore, I can say that there are at least 3 points x₁, x₂, x₃ in R such that p(x₁) = p(x₂) = p(x₃) = 0. QED.
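Bisection is the constructive face of the intermediate value theorem: repeatedly halving an interval whose endpoints have opposite signs homes in on a zero. A sketch locating the three roots above in the case n = 1:

```python
# Bisection -- the constructive face of the IVT -- locates the three sign
# changes of p(x) = x^{2n+1} - 4x + 1 found above, shown here with n = 1.
def p(x):
    return x ** 3 - 4 * x + 1

def bisect(f, a, b, iters=60):
    # assumes f(a) and f(b) have opposite signs on [a, b]
    for _ in range(iters):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # sign change in the left half
            b = m
        else:                 # otherwise it is in the right half
            a = m
    return (a + b) / 2

roots = [bisect(p, a, b) for a, b in [(-3, -1), (-1, 1), (1, 2)]]
print(roots)  # one zero per interval, pairwise distinct
```

Each halving step is itself a tiny IVT application, which is why the method cannot escape the bracketing interval.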
Section 6: Derivatives
Definition of derivative and differentiability:
We know that f is differentiable iff, ∀x∈R, lim_{h→0} (f(x+h) − f(x))/h exists. That limit is known as the derivative of f at x, denoted f′(x).
An alternate, equivalent definition: f is differentiable at x iff lim_{h→x} (f(x) − f(h))/(x − h) exists.
Basic E.g. Prove that f(x) = x² is differentiable. We know that lim_{h→x} (f(x) − f(h))/(x − h) = lim_{h→x} (x² − h²)/(x − h) = lim_{h→x} (x + h) = 2x. Hence f is differentiable with derivative 2x.
Basic E.g. Prove that f(x) = |x| is NOT differentiable. Try x = 0: lim_{h→0} (f(0) − f(h))/(0 − h) = lim_{h→0} (−|h|)/(−h) = lim_{h→0} |h|/h. From a common example, lim_{h→0⁺} |h|/h = 1 ≠ −1 = lim_{h→0⁻} |h|/h. Thus lim_{h→0} |h|/h does not exist, and f(x) = |x| is not differentiable at 0.
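The two examples can be probed with one-sided difference quotients: for x² both sides agree, while for |x| at 0 they disagree permanently. A minimal sketch:

```python
# One-sided difference quotients: for f(x) = x^2 both sides agree (near 2x),
# but for f(x) = |x| at 0 the right side gives +1 and the left side -1,
# so the two-sided limit -- the derivative -- does not exist there.
def dq(f, x, h):
    return (f(x + h) - f(x)) / h

sq = lambda t: t * t
print(dq(sq, 3.0, 1e-6), dq(sq, 3.0, -1e-6))  # both near 6.0 = 2x
print(dq(abs, 0.0, 1e-6))   # 1.0
print(dq(abs, 0.0, -1e-6))  # -1.0
```

Shrinking h further only sharpens the picture: the |x| quotients stay locked at ±1 for every h ≠ 0.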
Mean Value Theorem
The most well known form of the Mean Value Theorem tells us that if f is real and continuous on [x₁,x₂] and differentiable on (x₁,x₂), then ∃x∈(x₁,x₂): f(x₂) − f(x₁) = (x₂ − x₁)f′(x). We've seen a lot of this in Math 1A type calculus.
E.g. Consider the function f(x) = x sin x − x and the equation 1 = x cos x + sin x. In an interesting twist, we can show that this equation has a solution. Consider the interval [0, π/2]: f(0) = 0⋅sin 0 − 0 = 0 and f(π/2) = (π/2)sin(π/2) − π/2 = 0. Since f is continuous on [0, π/2] and differentiable on (0, π/2), the mean value theorem gives ∃x∈(0, π/2): f′(x)(π/2 − 0) = f(π/2) − f(0) = 0. Thus for some x∈(0, π/2), f′(x) = x cos x + sin x − 1 = 0, i.e. ∃x: 1 = x cos x + sin x.
Speaking of the previous problem, the special case of the mean value theorem where f(x₁) = f(x₂), so that f′(x) = 0 for some x in between, is known as Rolle's theorem; that is exactly what we used above.
Nonexample. Consider the function f(x) = |x|. We know that f(1) = f(−1) and that f is continuous on R. However, we cannot conclude that ∃x∈(−1,1): f′(x)(1 − (−1)) = f(1) − f(−1) = 1 − 1 = 0; indeed no such point exists, since f′(x) = ±1 wherever it is defined, and you cannot draw a horizontal line tangent to the curve of f. The theorem does not apply because f is not differentiable on all of (−1,1): there is a cusp at x = 0 (the left-hand derivative is −1 and the right-hand derivative is +1).
A more generalized form of the mean value theorem is given as follows:
Given real functions f, g continuous over the interval [x₁,x₂] and differentiable over (x₁,x₂), we know that ∃x∈(x₁,x₂): (f(x₂) − f(x₁))g′(x) = (g(x₂) − g(x₁))f′(x).
Intermediate Value Theorems for Derivatives
In order to make an analog of the intermediate value theorem for derivatives, we make the following statement:
Given that f is differentiable over [a,b], f′(a) < f′(b), and μ∈(f′(a), f′(b)), we know that ∃x₀∈(a,b): f′(x₀) = μ.
One important thing to note here is that this does not follow from the intermediate value theorem for continuous functions, since f′ need not be continuous. Rather, it originates from the idea of global minima. The proof is very simple:
Consider g(x) = f(x) − μx. We know g′(a) = f′(a) − μ < 0 and g′(b) = f′(b) − μ > 0. Now, since [a,b] is compact, g attains a global minimum on [a,b]. It cannot be at a or b: g′(a) < 0 means g decreases just to the right of a, and g′(b) > 0 means g decreases just to the left of b. Therefore the global minimum of g must occur at some x₀∈(a,b), where it is also a local minimum. Thus g′(x₀) = 0, i.e. f′(x₀) = μ.
L'Hopital's Rule
Given that f, g : (a,b) → R are differentiable with g′(x) ≠ 0 for x near c∈(a,b), and one of the following holds:
lim_{x→c} f(x) = lim_{x→c} g(x) = 0
lim_{x→c} g(x) = ±∞
then we can write lim_{x→c} f(x)/g(x) = lim_{x→c} f′(x)/g′(x), provided the right-hand limit exists.
The proof for this theorem is actually quite long; one part of the proof is in the April 8th notes for this course and the rest is to be completed in the problem sheet I created.
This theorem can be quite useful.
E.g. Spin off From Chris H. Rycroft's Spring 2012 final.
Consider f(x) = x/(e^x − 1) for x ≠ 0, and let f(0) = c for some constant c. What should c be to make f continuous at x = 0?
We can make sure that limx→0f(x)=f(0), by one definition of continuity.
In other words, make c = f(0) = lim_{x→0} f(x) = lim_{x→0} x/(e^x − 1) (we encounter a 0/0 situation, so we use L'Hopital) = lim_{x→0} 1/e^x = 1.
Thus make c=1; that way f is continuous at 0.
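The removable singularity is easy to probe numerically: evaluate the ratio at points shrinking toward 0 from both sides. A quick sketch:

```python
import math

# Numerical check of the L'Hopital computation lim_{x->0} x/(e^x - 1) = 1:
# approaching the removable singularity from either side, the ratio
# drifts toward 1, matching the value c = 1 chosen above.
def f(x):
    return x / (math.exp(x) - 1.0)

for x in (0.1, 0.01, 0.001):
    print(x, f(x), f(-x))  # both one-sided values head to 1
```

A numerical table is no substitute for the L'Hopital argument, but it is a cheap way to catch an algebra slip.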
Taylor's Theorem
Given a function f whose (n−1)st derivative f^{(n−1)} is continuous on the interval [a,b] (with f^{(n)} existing on (a,b)), and given α, β∈[a,b], we know that f(β) = ∑_{i=0}^{n−1} (f^{(i)}(α)/i!)(β − α)^i + (f^{(n)}(x)/n!)(β − α)^n for some x between α and β. Notice that if I make n = 1, Taylor's theorem reduces exactly to the mean value theorem.
E.g. Bound f(x) = sin x − x on [π/2, 3π/2]. We know that x is the 1st order Taylor polynomial of g(x) = sin x about 0 (g(0) = 0, g′(0) = 1). By Taylor's theorem with n = 2, for each x there is some x₂ between 0 and x with sin x − x = (g″(x₂)/2!)x².
We can provide a rough bound first and foremost.
This is done by maximizing the absolute values of g″(x₂)/2! and of x². We know that |g″(x₂)/2| ≤ 1/2 because g″ is a (negated) sine function, and x² ≤ (3π/2)² because x∈[π/2, 3π/2]. Thus we can bound f(x) = sin x − x by |f(x)| ≤ (1/2)(3π/2)² = 9π²/8.
The Taylor series of f about x₀ (whose error bound can be found by applying Taylor's theorem) is defined by
∑_{n=0}^∞ (f^{(n)}(x₀)/n!)(x − x₀)^n. The set of points around x₀ where the Taylor series of f about x₀ converges is called the interval of convergence I. The distance |sup I − x₀| from the center of the interval of convergence to its boundary is defined as the radius of convergence.
E.g. Consider the Taylor series of ln(x+1) about x = 0, which is ∑_{n=0}^∞ (−1)^n x^{n+1}/(n+1). We can find where it converges with the ratio test: lim |(x^{n+2}/(n+2)) / (x^{n+1}/(n+1))| = |x| lim (n+1)/(n+2) = |x|, which is less than 1 exactly when |x| < 1.
So the radius of convergence of the Taylor series of ln(x+1) about x = 0 is 1, and the series certainly converges for x∈(−1,1).
Now, to pin down the true interval of convergence, we need to examine what the Taylor series does (a) at x = x₀ − (radius of convergence) and (b) at x = x₀ + (radius of convergence).
Consider (a). The Taylor series at x = −1 equals ∑_{n=0}^∞ (−1)^n(−1)^{n+1}/(n+1) = ∑_{n=0}^∞ (−1)^{2n+1}/(n+1) = −∑_{n=0}^∞ 1/(n+1). We know that ∑_{n=0}^∞ 1/(n+1) does NOT converge, by the integral test. (See the section on series for why.)
Consider (b). The Taylor series at x = 1 equals ∑_{n=0}^∞ (−1)^n(1)^{n+1}/(n+1) = ∑_{n=0}^∞ (−1)^n/(n+1). This series DOES converge, by the alternating series test, since 1) a_{n+1} = 1/(n+2) ≤ 1/(n+1) = a_n and 2) lim a_n = lim 1/(n+1) = 0.
Thus we can conclude that the taylor series of ln(x+1) about 0 converges on (−1,1].
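The endpoint behavior can be watched directly through partial sums: at x = 1 they crawl toward ln 2 (the well-known value of this alternating series), while at x = −1 they are minus the harmonic partial sums and sink without bound. A sketch:

```python
import math

# Partial sums of the Taylor series of ln(1+x) about 0 at the endpoints:
# at x = 1 they converge (slowly) to ln 2, while at x = -1 they equal
# minus the harmonic partial sums and diverge to -infinity.
def T(x, N):
    return sum((-1) ** n * x ** (n + 1) / (n + 1) for n in range(N + 1))

print(T(1.0, 10000), math.log(2))  # close to ln 2 = 0.693...
print(T(-1.0, 10000))              # large and negative, still sinking
```

The slow crawl at x = 1 reflects the alternating series error bound: the error after N terms is at most the next term, about 1/(N+2).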
Section 7: Integrals
From Math 1A, we have all learned that the (loose, practical) inverse of differentiation is integration. We have also learned that integrals are really just limits of sums of many rectangles. In this part of analysis, we formalize and generalize this intuition even further.
New vocabulary:
Partition of [a,b]: denoted by P, a finite set of points a = x_0 < x_1 < ⋯ < x_{n+1} = b in the interval, including a and b.
Riemann Integrals
The Riemann integral is defined as follows:
∫_a^b f(x) dx = lim_{n→∞} ∑_{i=0}^n f(s_i) Δx_i, where Δx_i = x_{i+1} − x_i for consecutive partition points x_i < x_{i+1}, and where s_i is a point in [x_i, x_{i+1}]. Here x_0, x_1, …, x_n, x_{n+1} are the points of the partition P of [a,b], and x_0 and x_{n+1} are a and b, respectively.
The upper Riemann sum is defined to be U(f,P) = ∑_{i=0}^n sup_{x∈[x_i, x_{i+1}]} f(x) Δx_i. Similarly, the lower Riemann sum is defined to be L(f,P) = ∑_{i=0}^n inf_{x∈[x_i, x_{i+1}]} f(x) Δx_i.
A function is said to be Riemann integrable on [a,b] iff “upper integral” = U(f) = inf_P U(f,P) = sup_P L(f,P) = L(f) = “lower integral”, where the inf's and sup's are over all possible partitions P of [a,b]. In that case
inf_P U(f,P) = sup_P L(f,P) = ∫_a^b f(x) dx
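A small sketch of upper and lower sums squeezing together, for f(x) = x^2 on [0,1] (here, since x^2 is increasing on [0,1], the sup and inf on each subinterval sit at its right and left endpoints):

```python
# Sketch: upper and lower Riemann sums for f(x) = x^2 on [0, 1] with a
# uniform partition. For an increasing f the sup/inf on each subinterval
# are the right/left endpoint values, so the sums are easy to compute.
def upper_lower(f, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    upper = sum(f(xs[i + 1]) * (xs[i + 1] - xs[i]) for i in range(n))
    lower = sum(f(xs[i]) * (xs[i + 1] - xs[i]) for i in range(n))
    return upper, lower

u, l = upper_lower(lambda x: x * x, 0.0, 1.0, 1000)
# as the partition refines, U and L squeeze the integral 1/3 between them
assert l <= 1 / 3 <= u and u - l < 1e-2
```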
This seems pretty basic, but integrals can be generalized and studied even further through the Stieltjes integral. There we use a dα.
Stieltjes Integral
Let α be a monotonically increasing function. We denote our Stieltjes integral over [a,b] through the following.
∫fdα.
Like the Riemann integral, we utilize partitions and sums.
Given P = {x_0 = a, x_1, x_2, …, x_{n+1} = b}, define Δα_i = α(x_{i+1}) − α(x_i), U(f,P,α) = ∑_{i=0}^n sup_{x∈[x_i,x_{i+1}]} f(x) Δα_i, and L(f,P,α) = ∑_{i=0}^n inf_{x∈[x_i,x_{i+1}]} f(x) Δα_i.
A function f is Stieltjes integrable with respect to α (denoted by f ∈ R(α)) on [a,b] iff U(f,α) = inf_P U(f,P,α) equals L(f,α) = sup_P L(f,P,α).
Remarks:
Do NOT confuse the Stieltjes integral with the u-substitution change of variables you learned in Math 1B.
One can think of the Riemann integral as a specialization of the Stieltjes integral with α(x) = x.
E.g. Consider α(x) = H(x − 1/2) (where H is the Heaviside step function) and f(x) = x^2. Now we may want to consider the integral over [−5,5] with respect to α, i.e. ∫_{x=−5}^{x=5} f(x) dα. Initially, one may be tempted to answer that this is infinite. However, notice that for any partition of [−5,5], if the interval [x_i, x_{i+1}] created by the partition contains 1/2 as well as some point less than 1/2, then Δα = α(x_{i+1}) − α(x_i) = H(x_{i+1} − 1/2) − H(x_i − 1/2) = 1 by definition of the Heaviside function. For all other intervals Δα = 0. In that case, letting I = [x_m, x_{m+1}] be the interval created by the partition points such that 1/2 ∈ I and I contains a number less than 1/2, we can say that U(f,P,α) = sup_{x∈I} f(x) · 1 and L(f,P,α) = inf_{x∈I} f(x) · 1. Now if we take U(f,α) = inf_P U(f,P,α) and L(f,α) = sup_P L(f,P,α) over all partitions P of [−5,5], we get U(f,α) = L(f,α) = f(1/2). In that case we can conclude that ∫_{x=−5}^{x=5} f(x) dα = f(1/2) = 1/4.
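A numerical sketch of this example: with a fine uniform partition, only the subinterval straddling 1/2 picks up an α-increment of 1, so the Stieltjes sums home in on f(1/2) = 1/4 rather than anything infinite.

```python
# Sketch: Riemann-Stieltjes sums of f(x) = x^2 against the step function
# alpha(x) = H(x - 1/2) on [-5, 5]. Only the subinterval containing the
# jump at 1/2 contributes, so the sums approach f(1/2) = 0.25.
def H(t):
    return 1.0 if t >= 0 else 0.0

def stieltjes_sum(f, alpha, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    # evaluate f at the left endpoint of each subinterval
    return sum(f(xs[i]) * (alpha(xs[i + 1]) - alpha(xs[i])) for i in range(n))

approx = stieltjes_sum(lambda x: x * x, lambda x: H(x - 0.5), -5.0, 5.0, 100000)
assert abs(approx - 0.25) < 1e-3
```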
This is a special case of the Stieltjes integral that is explained in more detail and precision in Theorem
6.15 in Rudin. Further information about this topic can be found in Theorem 6.16 in Rudin.
More Useful Properties
More on Refinements
We define a refinement Q of a partition P as a partition such that P ⊂ Q. A refinement can be thought of as something that makes a finer-grained approximation of the integral of f over [a,b]. This is formally stated in the following theorem:
Given Q is a refinement of P, we know U(f,Q,α)≤U(f,P,α) and L(f,P,α)≤L(f,Q,α).
Given two partitions P, Q of [a,b], we call P ∪ Q the common refinement of P and Q.
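A tiny sketch of the refinement theorem for an increasing f (here f(x) = x^2 on [0,1], so the sup on each subinterval is its right endpoint and upper sums are exact):

```python
# Sketch: refining a partition can only lower the upper sum. For the
# increasing function f(x) = x^2, the sup on each subinterval is the
# right endpoint value, so U(f, P) is computed exactly below.
def upper_sum(f, P):
    return sum(f(P[i + 1]) * (P[i + 1] - P[i]) for i in range(len(P) - 1))

f = lambda x: x * x
P = [0.0, 0.5, 1.0]
Q = [0.0, 0.25, 0.5, 0.75, 1.0]   # a refinement of P (P is a subset of Q)
assert upper_sum(f, Q) <= upper_sum(f, P)
```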
More on Proving that a function is integrable with respect to α
Cauchy Criterion: If we can prove that ∀ε>0, ∃ a partition P of [a,b] such that U(f,P,α) − L(f,P,α) < ε, then we can say that f ∈ R(α) on [a,b].
Continuity: If f is continuous on [a,b], then f∈R(α) (where α is an increasing function) .
Composition of a function: Given that f ∈ R(α) on [a,b] and g is continuous on f([a,b]), we know that g∘f is integrable with respect to α over [a,b].
Addition, Multiplication of integrable functions: Given f, g ∈ R(α), we know that f+g ∈ R(α) and f·g ∈ R(α).
There indeed is a way to “convert” an integral with respect to α into a regular, friendly Riemann integral. Given an increasing α such that α′ is Riemann integrable on [a,b], and a bounded real function f on [a,b], we know that ∫_{x=a}^{x=b} f dα = ∫_{x=a}^{x=b} f(x) α′(x) dx. Note that this requires a differentiable α, so it does not apply to the step-function example from the previous section.
E.g. Just so that we don't confuse this with u-substitution, we can try the following easy example. On [0,1], consider α(x) = sin(x) (which, by the way, is increasing on that interval) and consider f(x) = cos(x). Since α is differentiable, α′ is Riemann integrable, and f is bounded, we know that ∫_{x=0}^{x=1} f(x) dα = ∫_{x=0}^{x=1} cos(x) α′(x) dx = ∫_{x=0}^{x=1} cos(x)cos(x) dx = ∫_{x=0}^{x=1} (cos(2x)+1)/2 dx = [sin(2x)/4 + x/2]_{x=0}^{x=1} = sin(2)/4 + 1/2
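A numerical check (sketch) of this computation: Stieltjes sums of f = cos against α = sin on [0,1] should match the closed form sin(2)/4 + 1/2 obtained by converting to the Riemann integral of cos^2.

```python
import math

# Sketch: Riemann-Stieltjes sums of cos against alpha = sin on [0, 1],
# compared with the closed-form value sin(2)/4 + 1/2 derived above.
def stieltjes_sum(f, alpha, a, b, n):
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    # sample f at midpoints; the alpha-increments are exact
    return sum(f((xs[i] + xs[i + 1]) / 2) * (alpha(xs[i + 1]) - alpha(xs[i]))
               for i in range(n))

approx = stieltjes_sum(math.cos, math.sin, 0.0, 1.0, 10000)
exact = math.sin(2) / 4 + 0.5
assert abs(approx - exact) < 1e-6
```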
We can also discuss change of variables in the context of the Stieltjes integral. Specifically, given a strictly increasing continuous function φ mapping [A,B] onto [a,b], f ∈ R(α) on [a,b], and β(y) = α(φ(y)), g(y) = f(φ(y)), we can say that g ∈ R(β) and ∫_A^B g dβ = ∫_a^b f dα. Here it is important to note that u-substitution takes advantage of a special case of this, where α(x) = x and φ′ ∈ R on [A,B], giving ∫_a^b f dx = ∫_A^B f(φ(y)) φ′(y) dy.
Fundamental Theorem of Calculus
In order to connect derivatives and integrals, mathematicians have established the fundamental theorems of calculus: one gives the almost obvious statement that the area under the curve of f is the difference of an antiderivative evaluated at the two endpoints, and the other shows that integrating f up to a variable endpoint produces a function whose derivative recovers f.
One interesting idea to consider is the relationship between the function f and its antiderivative on [a,b]. The second fundamental theorem of calculus (Theorem 6.20) tells us that given a Riemann integrable f on [a,b], F(x) = ∫_a^x f(t) dt is continuous on [a,b], and if f is continuous at x_0 ∈ [a,b], then F is differentiable at x_0 and f(x_0) = F′(x_0).
E.g. we can consider f(x) = x^2 on [−1,1]. We know that F(x) = ∫_{t=−1}^{t=x} f(t) dt = x^3/3 + 1/3. As the theorem predicts, F is definitely continuous, and since f is continuous, we know F is differentiable and F′(x) = f(x) = x^2.
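A quick sketch of this example: a central difference of F(x) = x^3/3 + 1/3 should recover f(x) = x^2, as Theorem 6.20 promises for continuous f.

```python
# Sketch: numerically differentiate F(x) = x^3/3 + 1/3 (the integral of
# f(t) = t^2 from -1 to x) and check that F' matches f at a few points.
def F(x):
    return x ** 3 / 3 + 1 / 3

h = 1e-6
for x in [-0.5, 0.0, 0.7]:
    fd = (F(x + h) - F(x - h)) / (2 * h)  # central-difference estimate of F'(x)
    assert abs(fd - x ** 2) < 1e-6        # agrees with f(x) = x^2
```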
Precisely stated, the fundamental theorem of calculus can be written as the following: given f ∈ R on [a,b] and a differentiable function F on [a,b] with F′ = f, we know that ∫_a^b f(x) dx = F(b) − F(a).
This is quite a basic result (I learned it back in sophomore year of high school just to do really fun integrals), but deriving it in class has shown us how intricate mathematics can truly be. This is about as high as we go.
All that is left in this course is some detail about the uniform convergence of integrals and derivatives….
Uniform Convergence of integrals and derivatives
We know that generally if f_n → f uniformly and each f_n is integrable (whether Riemann integrable or with respect to some increasing function α) on [a,b], then f is also integrable, and ∫_a^b f_n dα → ∫_a^b f dα.
E.g. Consider the function f_n(x) = x^2/(n^2+1) on [0,1]. We know that f_n is Riemann integrable because it is continuous on [0,1]. Now, consider that f_n → f where f(x) = 0 on [0,1]. We can also prove that f_n → f uniformly because lim_{n→∞} sup_{x∈[0,1]} |f_n(x) − f(x)| = lim_{n→∞} sup_{x∈[0,1]} |x^2/(n^2+1)| = lim_{n→∞} 1/(n^2+1) = 0. Therefore, we can consider f(x) = 0 (for x∈[0,1]) to be integrable. This makes sense, as we can calculate the trivial integral ∫_0^1 f(x) dx = ∫_0^1 0 dx = 0.
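A sketch checking the uniform convergence claim on a grid: the sup of |f_n − 0| over [0,1] should match 1/(n^2+1) (attained at x = 1) and shrink to 0 as n grows.

```python
# Sketch: sup of |f_n(x) - 0| for f_n(x) = x^2 / (n^2 + 1) on [0, 1],
# approximated over a fine grid; the sup is 1/(n^2+1), attained at x = 1.
def sup_dist(n, m=1000):
    return max((i / m) ** 2 / (n ** 2 + 1) for i in range(m + 1))

assert abs(sup_dist(10) - 1 / 101) < 1e-12
assert sup_dist(1000) < 1e-6   # sup -> 0, so the convergence is uniform
```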
It is NOT generally true that if f_n → f uniformly and each f_n is differentiable, then f is also differentiable. HOWEVER, if f_n is a sequence of functions differentiable on [a,b] such that f_n′(x) → g(x) uniformly and there exists x ∈ [a,b] such that f_n(x) converges, then we know that f_n → f uniformly for some f and that f′(x) = g(x).
E.g. consider f_n(x) = e^{−nx}/n on [0,∞), which converges uniformly to 0. However, notice f_n′(x) = −n e^{−nx}/n = −e^{−nx}. Pointwise, lim_{n→∞} f_n′(x) = g(x) = 0 for positive x and lim_{n→∞} f_n′(0) = g(0) = −1 (since f_n′(0) = −1 for every n). Now sup_{x∈[0,∞)} |f_n′(x) − g(x)| = 1 for every n (look at x just above 0). Therefore lim_{n→∞} sup_{x∈[0,∞)} |f_n′(x) − g(x)| = 1 ≠ 0, so f_n′ does not converge uniformly.
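A numerical sketch of this counterexample (assuming the sequence is f_n(x) = e^{−nx}/n on [0,∞), so that f_n′(x) = −e^{−nx}): the sup of |f_n′ − g| stays near 1 for every n, because f_n′ is close to −1 just to the right of 0 while the pointwise limit g vanishes there.

```python
import math

# Sketch: for f_n'(x) = -e^(-n x), the pointwise limit g is 0 on (0, inf),
# so the gap |f_n'(x) - g(x)| = e^(-n x) there; its sup over (0, 1]
# stays near 1 no matter how large n gets.
def sup_deriv_gap(n, m=10000):
    xs = [i / m for i in range(1, m + 1)]   # sample (0, 1]; sup occurs near 0
    return max(math.exp(-n * x) for x in xs)

assert sup_deriv_gap(5) > 0.99
assert sup_deriv_gap(50) > 0.99   # no decay in n: convergence is not uniform
```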
The moral of the story: just because f_n → f uniformly and each f_n is differentiable on the domain of f, we cannot conclude that f_n′ → g uniformly for any function g.
Now this concludes my review notes for Math 104.
Sources:
Elementary Analysis by Ross
Principles of Mathematical Analysis by Walter Rudin
A Problem Book in Real Analysis by Aksoy, Khamsi
What Does Compactness Really Mean? by Evelyn Lamb (Scientific American)
Past exams written by Peng Zhou, Ian Charlesworth, Sebastian Eterovic, Charles Rycroft. (UC Berkeley)
Wikipedia
math104-s21/s/ryotainagaki.txt · Last modified: 2022/01/11 18:31 by 24.253.46.239