It is wrong to conclude that $\limsup (a_{n+1}/a_n) < 1$.
It is wrong to conclude that $a_n$ is decreasing.
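The precise hypothesis is not restated here, but as a hedged illustration of how both conclusions can fail for a positive sequence converging to $0$: take

$$ a_n = \frac{2 + (-1)^n}{n} \to 0, \qquad \frac{a_{2k}}{a_{2k-1}} = \frac{3(2k-1)}{2k} \to 3, $$

so $\limsup (a_{n+1}/a_n) = 3 > 1$, and $a_n$ is not decreasing since $a_2 = \tfrac{3}{2} > 1 = a_1$.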
3
One needs to show that $x_n$ is bounded from below and that $x_n$ is monotone decreasing. These two conditions together show that $x_n$ converges to some limit $x$. Then one needs to prove that $x = a$. Missing any of the three steps would cost points.
There are other ways to prove this, such as introducing an auxiliary sequence $y_1 = x_1$, $y_{n+1} = (y_n + a)/2$, and showing that $x_n \le y_n$ and $y_n \to a$.
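For the auxiliary sequence, the convergence $y_n \to a$ follows from a one-line computation:

$$ y_{n+1} - a = \frac{y_n + a}{2} - a = \frac{y_n - a}{2}, \qquad\text{so}\qquad y_n - a = \frac{y_1 - a}{2^{\,n-1}} \to 0. $$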
4
One should start with a Cauchy sequence $\{f_n\}$ in $C(K)$ and construct a function $f$ as the pointwise limit: for $x \in [0,1]$, define $f(x) = \lim_n f_n(x)$. Thus defined, $f$ is just a function; it may not be continuous, and we do not yet know whether $f_n \to f$ uniformly. Once we show that the convergence is uniform (see the solution), we can use the result that uniform convergence preserves continuity to conclude that $f$ is continuous on $[0,1]$, hence $f \in C(K)$.
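A sketch of the key estimate referred to above: since $\{f_n\}$ is Cauchy in the sup norm, for every $\varepsilon > 0$ there is an $N$ such that $\sup_x |f_n(x) - f_m(x)| < \varepsilon$ for all $n, m \ge N$. Fixing $n \ge N$ and letting $m \to \infty$ pointwise gives

$$ |f_n(x) - f(x)| = \lim_{m\to\infty} |f_n(x) - f_m(x)| \le \varepsilon \quad \text{for all } x \in [0,1], $$

so $f_n \to f$ uniformly.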
5
If a continuous function $f : (0,1) \to \mathbb{R}$ is unbounded, then it must be unbounded near an endpoint: $\limsup_{x \to 0^+} |f(x)| = \infty$ or $\limsup_{x \to 1^-} |f(x)| = \infty$, since $f(p)$ is finite for every $p \in (0,1)$ and $f$ is bounded on every compact subinterval $[\delta, 1-\delta]$. For example, $f(x) = 1/x + 1/(1-x)$ is an unbounded function on $(0,1)$ (and it is not uniformly continuous).
A common mistake is to say: assume $f$ is unbounded; then there exists a $p \in (0,1)$ such that $\lim_{x \to p} f(x) = \infty$. That is not what 'unbounded' means.
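To see concretely where the example above blows up: along $x_n = 1/n \to 0^+$ (for $n \ge 2$),

$$ f(1/n) = n + \frac{1}{1 - 1/n} \ge n \to \infty, $$

while for every fixed $p \in (0,1)$, continuity gives $\lim_{x \to p} f(x) = f(p) < \infty$.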
6
Some answers took the following approach:
Consider the interval $[0,1]$ and take a global maximum of $f$ on $[0,1]$; assume it is attained at $x = b$.
For any $u \in (f(0), f(b))$, which is also $(f(1), f(b))$, there exist $b_1(u) \in (0, b)$ and $b_2(u) \in (b, 1)$ such that $f(b_1(u)) = u$ and $f(b_2(u)) = u$.
So far these two sentences are correct. But the problem is that, as $u \to f(b)$, it is not necessarily true that $b_1(u) \to b$ and $b_2(u) \to b$ (imagine $f$ has a 'plateau' instead of a 'peak' near $x = b$). Nor is it true that, as one moves $u$, the functions $b_1(u)$ and $b_2(u)$ vary continuously.
This is an interesting direction, but it needs a more careful argument to make it work; a concrete plateau example is sketched below.
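A hypothetical plateau example (for illustration only, not the exam function): take

$$ f(x) = \min\big(3x,\ 1,\ 3(1-x)\big), \qquad f \equiv 1 \text{ on } [\tfrac{1}{3}, \tfrac{2}{3}], \qquad f(0) = f(1) = 0. $$

Here, as $u \to 1^-$, one has $b_1(u) = u/3 \to \tfrac{1}{3}$ and $b_2(u) = 1 - u/3 \to \tfrac{2}{3}$, so $b_1(u)$ and $b_2(u)$ cannot both converge to any single maximizer $b$.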
7
It is tempting to write $f(x) = f(0) + \int_0^x f'(t)\,dt$; however, we don't know whether $f'$ is integrable. (-1 point)
Even if we assume $f'$ is integrable, it is wrong to compare indefinite integrals as in $\int f'(x)\,dx \le \int 2\,dx$. One needs to use definite integrals.
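Assuming the intended hypothesis is a derivative bound such as $|f'(t)| \le 2$ (inferred from the $\int 2\,dx$ above), the Mean Value Theorem sidesteps integrability entirely: for any $x < y$ there is a $c \in (x, y)$ with

$$ f(y) - f(x) = f'(c)\,(y - x), \qquad\text{hence}\qquad |f(y) - f(x)| \le 2\,|y - x|. $$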
8
Given compact subsets $A, B$ of $X$, one needs to show that $A \cap B$ is compact.
To prove this using the definition, one needs to start with an arbitrary open cover of $A \cap B$. Note that this may not be an open cover of either $A$ or $B$.
It is a good idea to extend the above open cover to an open cover of $A$, for example by adding the open set $B^c$ (open because $B$ is compact, hence closed in a metric space). Some answers were vague about how to do this extension. If one doesn't specify the extension, I can throw in an open set equal to $X$ itself and pick the finite subcover to consist of that single added set $X$.
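Concretely, the extension argument can be pinned down as follows. If $\{U_\alpha\}$ is an open cover of $A \cap B$, then

$$ A \subseteq (A \cap B) \cup B^c \subseteq \Big( \bigcup_\alpha U_\alpha \Big) \cup B^c, $$

so by compactness of $A$, finitely many $U_{\alpha_1}, \dots, U_{\alpha_n}$ together with (possibly) $B^c$ cover $A$. Since $(A \cap B) \cap B^c = \varnothing$, the sets $U_{\alpha_1}, \dots, U_{\alpha_n}$ alone cover $A \cap B$.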
Some answers also write that compactness is equivalent to being closed and bounded. That's false for a general metric space $X$.
Some answers write, “any subset of a compact set is compact”. False: for example, $(0,1) \subset [0,1]$.
10
Some answers write $\int_1^2 A(x) B(x)\,dx = \int_1^2 A(x)\,dx \cdot \int_1^2 B(x)\,dx$. The integral of a product is not the product of the integrals.
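A quick counterexample: take $A(x) = B(x) = x$ on $[1,2]$; then

$$ \int_1^2 x \cdot x\,dx = \frac{7}{3}, \qquad \Big( \int_1^2 x\,dx \Big)^2 = \Big( \frac{3}{2} \Big)^2 = \frac{9}{4} \ne \frac{7}{3}. $$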