Title: Summation Post by ThudanBlunder on Sep 10th, 2007, 10:10am Let F(n) = 2n + 2/3 - e^n Σ_{k=0}^{n-1} (k-n)^k e^{-k}/k!, for n = 1, 2, 3, 4, ... a) Prove that F(n) → 0 as n → ∞. b) Find F(1000) to 3 significant figures. c) Find the smallest m such that |F(m)| < |F(m+1)|
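A quick numerical look at the definition (mine, not part of the original thread), in Python with mpmath; the function name F and the precision setting are my own choices, and the working precision is cranked up because the sum cancels by hundreds of digits. Code:
from mpmath import mp, mpf, exp, factorial

mp.dps = 300  # generous guard digits; the sum below cancels catastrophically

def F(n):
    # F(n) = 2n + 2/3 - e^n * sum_{k=0}^{n-1} (k-n)^k e^{-k} / k!
    s = sum(mpf(k - n)**k * exp(-k) / factorial(k) for k in range(n))
    return 2*n + mpf(2)/3 - exp(n) * s

for n in (1, 2, 5, 10, 20, 50, 100):
    print(n, mp.nstr(F(n), 8))
The printed values shrink roughly geometrically, which is what part (a) asks us to prove.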
Title: Re: Summation Post by Eigenray on Sep 10th, 2007, 5:43pm The poor convergence makes me think Poisson summation would be in order. Anyway, let f(n,k) = (n-k)^k e^{-k}/k!. We want to approximate Σ (-1)^k f(n,k). For fixed n, f(n,k) is maximized when f(n,k+1) ~ f(n,k), or (n-k-1)^{k+1}/(k+1) ~ e(n-k)^k. Letting k = μn, we want (1 - 1/[n(1-μ)])^{μn} (n-k-1)/(k+1) ~ e. Letting n → ∞, we get e^{-μ/(1-μ)} (1-μ)/μ = e, which is to say, μ/(1-μ) = λ = [link=http://en.wikipedia.org/wiki/Lambert_W_function]W[/link](1/e) ~ 0.2785, or μ ~ 0.2178. Now, the maximum value is about f(n, μn) ~ e^{λn}/√(2πμn). Plotting f(n,k), it looks like a constant times a normal, with mean μn and variance ~ νn, where ν ~ 0.1332? (I don't know what ν is.) That is, f(n,k) ~ C N(μn, νn). Considering the max, e^{λn}/√(2πμn) = C/√(2πνn), so C = e^{λn} √(ν/μ). Or, in other words, g_n(x) = f(n, nx) ~ e^{λn}/n √(ν/μ) N(μ, ν/n), and we are interested in Σ (-1)^k g_n(k/n). But I don't think this is a good enough approximation. It looks pretty lousy, actually. Eh.
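A small check of the saddle-point location (my sketch, not Eigenray's): solve λe^λ = 1/e for λ = W(1/e), set μ = λ/(1+λ), and compare μn with the actual argmax of f(n,k). The helper names are mine. Code:
from mpmath import mp, mpf, exp, factorial, findroot

mp.dps = 50

lam = findroot(lambda x: x*exp(x) - exp(-1), mpf('0.3'))   # W(1/e) ~ 0.27846
mu = lam / (1 + lam)                                       # ~ 0.21782
print("lambda ~", lam, " mu ~", mu)

def f(n, k):
    return mpf(n - k)**k * exp(-k) / factorial(k)

n = 2000
kmax = max(range(n), key=lambda k: f(n, k))
print("argmax over k:", kmax, " mu*n ~", mu*n)
For large n the two should essentially coincide.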
Title: Re: Summation Post by Eigenray on Sep 10th, 2007, 6:38pm I'm guessing it has something to do with ∫_0^1 e^n cos(πnx) (1/x - 1)^{nx} (2πnx)^{-1/2} dx. It seems to converge to 1, but it's a bit tricky to approximate.
Title: Re: Summation Post by Eigenray on Sep 11th, 2007, 2:35pm This might be useful: define f(x) = Σ_{k<x} (-1)^k (x-k)^k e^{x-k}/k!. This looks a lot like some sort of generating function. So look at f'(x) = Σ (-1)^k e^{x-k}/k! {(x-k)^k + k(x-k)^{k-1}} = f(x) + Σ (-1)^k e^{x-k}/(k-1)! (x-k)^{k-1} = f(x) - Σ (-1)^r e^{x-1-r}/r! (x-1-r)^r = f(x) - f(x-1). So f'(x) = f(x) - f(x-1), or f' = Δf, where Δ is the difference operator. That is, the slope of the tangent at each point is the same as the secant going one back. Interesting. So f' = Δf, with f(x) = e^x for x in [0,1], and we want to show f(x) ~ 2x + 2/3. But I've never been very good with differential equations. Integrating f' = Δf from 1 to x gives f(x) - f(1) = ∫_{x-1}^{x} f(t)dt - ∫_0^1 f(t)dt, or f(x) = 1 + ∫_{x-1}^{x} f(t)dt. I don't know if that helps either.
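A numerical sanity check of that integral equation (my addition, not Eigenray's): evaluate f directly from the sum and compare f(x) with 1 + ∫_{x-1}^{x} f(t)dt. The test points and helper names are mine. Code:
from mpmath import mp, mpf, exp, factorial, quad, floor

mp.dps = 40

def f(x):
    x = mpf(x)
    return sum((-1)**k * (x - k)**k * exp(x - k) / factorial(k)
               for k in range(int(floor(x)) + 1) if k < x)

for x in (mpf('2.5'), mpf('4.25'), mpf('6.75'), mpf('9.5')):
    m = floor(x)                 # f picks up a new summand at the integer m
    print(x, f(x), 1 + quad(f, [x - 1, m, x]))
The two columns should agree to working precision, which is the f' = Δf relation in integrated form.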
Title: Re: Summation Post by Eigenray on Sep 11th, 2007, 3:27pm Well, I can show one thing. Intuitively, since f' = Δf, f should straighten out to be close to linear. So suppose f'(x) → c. We show c = 2. For x>0 and t>0, set g_x(t) = [f(x+t) - f(x)]/t = f'(ξ) for some ξ in [x, x+t], by the mean value theorem. By assumption then, g_x(t) = c + h_x(t), where h_x converges to 0, uniformly in t, as x → ∞. Now, f(x+t) = f(x) + tc + t h_x(t), and so f(x) + c + h_x(1) = f(x+1) = 1 + ∫_0^1 f(x+t)dt = 1 + f(x) + c/2 + ∫_0^1 t h_x(t)dt. Letting x → ∞ gives c = 2. [f(x) is not actually differentiable everywhere, but if you read the next post, it suffices to work with G(x) = F(x) - x^2, which is differentiable. So if we can show that g = G' converges to a constant, then that constant must be 2/3, and the result follows.]
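A numerical illustration of the two limits involved (my addition): the secant slope f(x+1) - f(x) heads to 2, and f(x) - 2x heads to 2/3. Code:
from mpmath import mp, mpf, exp, factorial, floor

mp.dps = 60

def f(x):
    x = mpf(x)
    return sum((-1)**k * (x - k)**k * exp(x - k) / factorial(k)
               for k in range(int(floor(x)) + 1) if k < x)

for x in (5, 10, 20, 40):
    print(x, mp.nstr(f(x + 1) - f(x), 10), mp.nstr(f(x) - 2*x, 10))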
Title: Re: Summation Post by Eigenray on Sep 11th, 2007, 4:03pm Aha, but we can repeat the argument! Let g(x) = f(x) - 2x. Then g satisfies the same differential equation: g' = Δg, and the integral equation becomes g(x) = ∫_{x-1}^{x} g(t)dt. Now, let G be an antiderivative of g. Then the above becomes G' = ΔG, and so as before G(x) = G(1) - ∫_0^1 G(t)dt + ∫_{x-1}^{x} G(t)dt = 1/3 + ∫_{x-1}^{x} G(t)dt, since G(x) = e^x - x^2 on [0,1]. The same argument as for f now gives G(x) ~ (2/3)x + c'. Which is to say, g ~ 2/3, and finally f(x) ~ 2x + 2/3. So all we need to prove is: suppose g(x) = ∫_{x-1}^{x} g(t)dt. Then g(x) converges to a finite value, namely 2∫_0^1 [ g(t) - ∫_0^t g(u)du ] dt. Intuitively, this is much more clear than the original problem: g(x) is simply the average of g(t) over [x-1, x]. How could it possibly not converge ;)
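A sanity check of that limit formula (not in the original post): with g(t) = e^t - 2t on [0,1], the bracket simplifies to g(t) - ∫_0^t g(u)du = (1-t)^2, so the formula predicts the limit 2∫_0^1 (1-t)^2 dt = 2/3, matching the 2/3 found above. Numerically: Code:
from mpmath import mp, quad, exp

mp.dps = 30

g = lambda t: exp(t) - 2*t
inner = lambda t: g(t) - quad(g, [0, t])
print(2 * quad(inner, [0, 1]))   # expect 0.6666...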
Title: Re: Summation Post by Sameer on Sep 11th, 2007, 4:26pm The last term in F(n) looks very similar to a Discrete(?) Laplace transform!! F(s) = ∫ f(t) e^{-st} dt
Title: Re: Summation Post by Eigenray on Sep 11th, 2007, 4:57pm Laplace!! Let f_k(t) = (t-k)^k e^{t-k}/k! when t > k, and 0 otherwise. Then the Laplace transform works out to be F_k(s) = ∫_k^∞ f_k(t) e^{-st} dt = e^{-ks}/(s-1)^{k+1}, and the Laplace transform of f(t) = Σ (-1)^k f_k(t) is the sum of the geometric series F(s) = Σ (-1)^k F_k(s) = 1/(s - 1 + e^{-s}) = 2/s^2 + 2/(3s) + 1/18 - s/270 + ... = 2/s^2 + 2/(3s) + G(s). Inverse Laplacing, f(t) = 2t + 2/3 + g(t), and presumably g(t) → 0. G(s) is meromorphic, with infinitely many poles, at each non-zero solution to e^{-s} = 1 - s: s = -log(a+δ) ± (a-ε)i, a = 2πk + π/2, ε ~ (1 + log a)/a, δ ~ [(log a)^2 - 1]/(2a), k ∈ ℤ^+, all in the left half-plane. But I'm not familiar enough with the Laplace transform to know how g(t) behaves.
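The Laurent expansion quoted here is easy to confirm with a CAS (my check, not part of the post; sympy is assumed available, any CAS that does Laurent expansions will do). Code:
import sympy as sp

s = sp.symbols('s')
# expect 2/s**2 + 2/(3*s) + 1/18 - s/270 + ... + O(s**3)
print(sp.series(1/(s - 1 + sp.exp(-s)), s, 0, 3))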
Title: Re: Summation Post by Sameer on Sep 11th, 2007, 5:12pm For G(s) you get s in the numerator???? That will make the system unstable and non-convergent!!! I will take a closer look when I get home!!
Title: Re: Summation Post by Eigenray on Sep 11th, 2007, 6:59pm g(t) = 1/(2π) ∫_{-∞}^{∞} G(iu)e^{iut}du, which is essentially the Fourier transform of G(iu). G(iu) isn't in L^1, so the above isn't actually integrable, and we can't use the Riemann-Lebesgue lemma, but since G(iu) is in L^2 and analytic, I think it follows that g(t) goes to 0, decaying exponentially in fact. Actually, we are only interested in Re g(t) = 1/(2π) ∫ [Re G(iu) cos(ut) - Im G(iu) sin(ut)]du, and Re G(iu) is in L^1, since Re[1/(iu - 1 + e^{-iu})] ~ (cos u - 1)/u^2, and Re[2/(iu)^2 + 2/(3iu)] = -2/u^2. So the only issue is whether ∫ Im G(iu) sin(ut)du goes to 0.
Title: Re: Summation Post by Eigenray on Sep 12th, 2007, 5:29am Whahahaha: Recapitulating, G(z) = 1/(z-1+e^{-z}) - 2/(3z) - 2/z^2, and we want to compute, for t>0, g(t) = 1/(2πi) ∫_{-i∞}^{i∞} G(z)e^{zt}dz. Fix M>0, and let N>0 be large. Consider the rectangular contour going from -iM → iM → -N+iM → -N-iM → -iM. The integral along the right edge is what we want to evaluate. Waving my hands, the integral along the other three sides should be small. This just leaves the residues inside the rectangle. G(z)e^{zt} has a pole at z=c, where e^{-c} = 1-c. The residue there is lim_{z→c} (z-c)G(z)e^{zt} = e^{ct} / lim_{z→c} (z-1+e^{-z})/(z-c) = e^{ct}/(1 - e^{-c}) = e^{ct}/c. Now the poles come in conjugate pairs, and adding up the residues inside the rectangle, and letting M go to infinity, we get: g(t) = Σ_c 2Re[ e^{ct}/c ]   (*) where the sum is over conjugate pairs of non-zero c satisfying e^{-c} = 1-c. That is, c = {-2.09 ± 7.46i, -2.66 ± 13.9i, -3.03 ± 20.22i, -3.29 ± 26.54i, ...}. Let g_n(t) be the sum of the first n terms of the Fourier series (*), and let h_n(t) = f(t) - 2t - 2/3 - g_n(t). Attached are plots of g=h_0, h_1, h_2, and h_3, in black, red, green, and blue, respectively. (Only h_1, h_2 are shown in both.) Hooray Fourier series! Finally, -F(1000) = g(1000) = 2Re[e^{1000c_1}/c_1 + e^{1000c_2}/c_2 + ...] = 1.1469767×10^{-909} - O(10^{-1158}). Note that this makes sense: the functions y(x)=x, y(x)=1, and y(x)=e^{cx}, where c = 1 - e^{-c}, are all solutions to y'(x) = y(x) - y(x-1), and f(x) is a linear combination of these: f(x) = 2x + 2/3 + Σ_c e^{cx}/c ~ 2x + 2/3 + 0.258 e^{-2.089x} cos(7.461x - 1.844) + O(e^{-2.664x}). And it appears n=800 is the first time |F(n)| < |F(n+1)|.
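Here is a sketch (mine, not from the post) that recomputes the roots c, evaluates the residue series at t = 1000, and scans for the first n with |F(n)| < |F(n+1)|, using the asymptotic root locations quoted two posts up as initial guesses; the helper names are my own. Code:
from mpmath import mp, mpc, exp, log, pi, findroot, nstr

mp.dps = 60

def root(k):
    a = 2*pi*k + pi/2
    guess = mpc(-log(a + ((log(a))**2 - 1)/(2*a)), a - (1 + log(a))/a)
    return findroot(lambda c: exp(-c) - 1 + c, guess)

roots = [root(k) for k in range(1, 5)]
print([nstr(c, 6) for c in roots])          # ~ -2.089+7.461i, -2.664+13.879i, ...

def F(t):                                   # F(t) = -g(t) = -sum over pairs of 2*Re[e^{ct}/c]
    return -2*sum((exp(c*t)/c).real for c in roots)

print("F(1000) ~", nstr(F(1000), 8))        # ~ -1.1469767e-909

n = 1                                       # four pairs is far more than enough here
while abs(F(n)) >= abs(F(n + 1)):
    n += 1
print("first n with |F(n)| < |F(n+1)|:", n)
So, to 3 significant figures, part (b) is F(1000) ~ -1.15×10^{-909}, matching the value quoted above, and the scan is the check for the n = 800 claim in part (c).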
Title: Re: Summation Post by Eigenray on Sep 12th, 2007, 7:15pm Another problem that involved a Fourier transform was [link=http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_putnam;action=display;num=1156417763]Alternating Sum[/link]: as z → 1-, S(z) = Σ_{k=0}^{∞} (-1)^k z^{2^k} = 1/2 - (1-z)/3 + g(z) + (2/log 2) Σ_{m>0 odd} Re[ Γ(mπi/log 2) e^{xmπi} ], where z = e^{-2^{-x}}, and g(z) = O((1-z)^2).
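A quick numerical look at that claim (my addition): for z just below 1, the partial sums of S(z) sit near 1/2 - (1-z)/3, with only a tiny leftover oscillation coming from the Γ-factor series. Code:
from mpmath import mp, mpf

mp.dps = 30

def S(z):
    total, k = mpf(0), 0
    while True:
        term = (-1)**k * z**(2**k)
        total += term
        if abs(term) < mpf(10)**-25:       # later terms are negligible
            return total
        k += 1

for z in (mpf('0.99'), mpf('0.999'), mpf('0.9999')):
    print(z, S(z) - (mpf(1)/2 - (1 - z)/3))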
Title: Re: Summation Post by Barukh on Sep 12th, 2007, 9:08pm What's your solution, T&B?
Title: Re: Summation Post by ThudanBlunder on Sep 14th, 2007, 4:33pm on 09/12/07 at 21:08:47, Barukh wrote: "What's your solution, T&B?"
Although I did elementary Complex Variable at uni, it was a long time ago and I have forgotten most of it. (However, this does not stop me knowing a good problem when I see one.) So I won't write out the full solution as though I am a real mathematician like Eigenray. But 'my' solution begins by letting I(n,x) = Σ_{k=0}^{n} (k-n)^k e^{(k-n)x}/k! = Σ_{k=0}^{n} (1/k!) D^k[e^{(k-n)x}]/Dx^k. Thus F(n) = 2n + 2/3 - I(n,-1). As e^{(k-n)z} is an entire function in the complex plane, we are now able to apply Cauchy's Theorem to prove that F(n) = 2n + 2/3 - Σ_{k=0}^{n} (1/2πi) ∮ e^{(k-n)z}/(1+z)^{k+1} dz, where the contour integral must enclose z = -1, and sum over k explicitly. Further details on request, but I believe Eigenray will be able to see the rest of this method for a). (I wish the edit window wasn't so small and that we could save drafts of our work, so as not to lose it somehow before posting.)
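A numerical check of that per-term Cauchy identity (my sketch, not T&B's): integrate e^{(k-n)z}/(1+z)^{k+1} around a small circle centred at z = -1 and compare with (k-n)^k e^{n-k}/k! for k = 0 to n. The contour radius and helper names are my own. Code:
from mpmath import mp, mpf, mpc, exp, factorial, pi, quad

mp.dps = 30
I = mpc(0, 1)   # imaginary unit

def cauchy_term(n, k, r=mpf('0.5')):
    # contour z = -1 + r*e^{I*theta}, 0 <= theta <= 2*pi
    def integrand(theta):
        z = -1 + r*exp(I*theta)
        return exp((k - n)*z) / (1 + z)**(k + 1) * (I*r*exp(I*theta))
    return quad(integrand, [0, 2*pi]) / (2*pi*I)

n = 6
for k in range(n + 1):
    direct = mpf(k - n)**k * exp(n - k) / factorial(k)
    print(k, cauchy_term(n, k), direct)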
Title: Re: Summation Post by ThudanBlunder on Sep 15th, 2007, 9:49am I will type my solution up in MathType as soon as I have time.
Title: Re: Summation Post by Eigenray on Sep 15th, 2007, 5:15pm Summing the (finite) geometric series, we get F(n) = 2n + 2/3 + 1/(2πi) ∮ (1+z-e^z)^{-1}/(1+z)^n dz, since e^{-nz}/(1+z-e^z) is analytic at z=-1. So by Cauchy's theorem, we get 1/(1+z-e^z) = Σ_{n=1}^{∞} [F(n) - 2n - 2/3] (z+1)^{n-1}, and killing the pole at 0, G(z) = 1/(1+z-e^z) - 2/(3z) + 2/z^2 = Σ F(n) (z+1)^{n-1}. It took me a while to realize that this actually proves (a)! Hint: [hide]where are the poles of G[/hide]? Amazing. Complex analysis just seems a little too beautiful sometimes. Now for (b) (and to get the Fourier series) we just play "Whac-A-Pole". on 09/14/07 at 16:33:29, ThudanBlunder wrote: "I wish the edit window wasn't so small and that we could save drafts of our work, so as not to lose it somehow before posting."
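A check of the coefficient identity (my sketch, not Eigenray's): compute the Taylor coefficients of G about z = -1 numerically and compare with F(n) from the definition, then locate the nearest singularity of G, which fixes the geometric decay rate behind part (a). Code:
from mpmath import mp, mpf, mpc, exp, factorial, taylor, findroot, fabs

mp.dps = 60

G = lambda z: 1/(1 + z - exp(z)) + 2/z**2 - 2/(3*z)
coeffs = taylor(G, -1, 7)        # coefficients of (z+1)^0, (z+1)^1, ..., (z+1)^7

def F(n):
    s = sum(mpf(k - n)**k * exp(-k) / factorial(k) for k in range(n))
    return 2*n + mpf(2)/3 - exp(n)*s

for n in range(1, 9):
    print(n, mp.nstr(coeffs[n - 1], 10), mp.nstr(F(n), 10))

# the nearest singularities of G to z = -1 are the non-zero roots of e^z = 1 + z
c = findroot(lambda z: 1 + z - exp(z), mpc(2, 7.5))
print("radius of convergence ~", mp.nstr(fabs(c + 1), 8))   # ~8.07, so F(n) -> 0 geometrically
The printed radius is e^{2.089} ~ 8.07, which is the same decay rate as the Laplace-transform answer above.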
Yes. Unfortunately the textarea width and height are set in css, so something like [link=http://userscripts.org/scripts/show/600]TextareaResize[/link] doesn't work. I couldn't even resize it using DOM Inspector, but there's probably a way. Googling around, I put together the following, which seems to work okay: Code:
I just stuck it inside SMQ's Greasemonkey script. Of course you can modify the parameters I used, and add something similar for height. But I don't know much about this sort of thing; there should be a way to just remove the width and height styles, so that modifying the rows and cols attributes will work. Edit: [link=http://userscripts.org/scripts/show/10140]Textarea_drag_resize[/link] works nicely. But it could be customized to default to some affine function of the window's width and height.
Title: Re: Summation Post by Barukh on Sep 16th, 2007, 9:52am on 09/14/07 at 16:33:29, ThudanBlunder wrote: "Although I did elementary Complex Variable at uni, it was a long time ago and I have forgotten most of it."
... and I haven't done it at all. That's not good. Eigenray, could you please elaborate on your "too beautiful" solution? Also, is there good material to read on the subject?
Title: Re: Summation Post by Eigenray on Sep 16th, 2007, 2:43pm on 09/16/07 at 09:52:51, Barukh wrote: "Eigenray, could you please elaborate on your 'too beautiful' solution?"
Use the fact that [link=http://planetmath.org/encyclopedia/RadiusOfConvergenceOfAComplexFunction.html]the radius of convergence of a power series is the distance to the nearest singularity[/link]. Quote: "Also, is there good material to read on the subject?"
Have a look at [link=http://www.math.upenn.edu/~wilf/DownldGF.html]Generatingfunctionology[/link], particularly section 5.2 (and section 2.4 for background). The basic tool for all this is [link=http://planetmath.org/encyclopedia/CauchyIntegralFormula.html]Cauchy's Integral Formula[/link]. I know Stein & Shakarchi's Complex Analysis has a chapter on asymptotics, but I don't have my copy on me. Googling on some keywords (complex analysis, generating functions, asymptotics) also turns up [link=http://algo.inria.fr/flajolet/Publications/anacombi2.ps.gz]this[/link], apparently part of the book [link=http://algo.inria.fr/flajolet/Publications/books.html]Analytic Combinatorics[/link], which seems pretty comprehensive.
Title: Re: Summation Post by Sameer on Sep 16th, 2007, 3:07pm on 09/16/07 at 09:52:51, Barukh wrote: "Also, is there good material to read on the subject?"
My Engineering Math text book used to have all this. Yet it has been a long time!! Maybe any Higher Engineering Mathematics should have this!! I should look for one too!! hmm...