You've probably heard of Newton's method for solving polynomials. If you apply Newton's method to a cubic polynomial, it may not work: you may get trapped near a local minimum. And if you change the initial guess a little bit, it might still not converge to a root. So Newton's method is not reliable for solving polynomial equations. The problem I worked on was whether or not there was any algorithm like Newton's method, involving iteration of just one rational function, that can reliably solve polynomial equations. I was able to prove the answer is no for degree 4 or more, and actually I found a new algorithm for solving cubics, which is reliable.
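As a minimal sketch of the kind of failure described here (the cubic p(x) = x^3 - 2x + 2 and the starting points below are illustrative assumptions, not taken from the text), Newton's method started at 0 falls into an attracting 2-cycle instead of finding a root:

```python
# A minimal sketch: Newton's method on the cubic p(x) = x^3 - 2x + 2.
# Started at x = 0 the iteration falls into the cycle 0 -> 1 -> 0 -> ...
# and never reaches the real root near -1.7693; other starting points do converge.

def p(x):
    return x**3 - 2*x + 2

def dp(x):
    return 3*x**2 - 2

def newton_orbit(x, steps=10):
    orbit = [x]
    for _ in range(steps):
        x = x - p(x) / dp(x)
        orbit.append(x)
    return orbit

print(newton_orbit(0.0))   # oscillates between 0 and 1 forever
print(newton_orbit(-2.0))  # converges to about -1.7693
```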
Then I went to MSRI and was at MIT for a semester, then Princeton for four years. Peter Doyle and I worked in Princeton on solving fifth degree equations, and we found this beautiful, unexpected algorithm for solving quintic polynomials. But it doesn't contradict my thesis, because it's a tower of iterations; that is, you iterate one rational function, take the thing to which it converges, and plug that into another one.
As you may know, solving the quintic is bound up with the Galois group A5, and the fact that A5 is a simple group. This was used by Galois to prove you can't solve the quintic equation by radicals.
It turns out that to be able to solve an equation using an iterated rational map, what you have to do is find a rational map whose symmetry group is the Galois group of the polynomial. Now there is only a small set of groups that can act as symmetry groups of the Riemann sphere, and the interesting ones come from the Platonic solids. So A5, the symmetry group of the dodecahedron, is the most complicated one you can get. We used this rational map with A5 symmetry to give a new algorithm for reliably solving the quintic equation. And by the same token, since neither S6 nor A6 acts on the Riemann sphere, there is no similar algorithm to solve equations of degree 6 or more. So that was my first area of research: solving polynomials, and dynamics of rational maps.
Now, the next thing I worked on when I was at Princeton was Thurston's theory of hyperbolic 3-manifolds. Thurston has a research program, which has been very successful, to try to find a canonical geometry for three-dimensional objects. For example, imagine you have some manifold that is secretly a 3-sphere; if you could somehow find a round metric on it, then you would suddenly recognize it as the 3-sphere. So if you can find a metric that gives the manifold a good shape, then you can recognize what the manifold is. It turns out that most three-dimensional manifolds admit these metrics, but the metrics are not positively curved like the 3-sphere; they are negatively curved. For example, if you take the outside of a knot in S^3, a knot complement, then it almost always admits one of these so-called hyperbolic metrics of constant negative curvature. Because of that, there are now computer programs where you can just draw a knot at random with a mouse, click, and within one or two seconds it will tell you exactly what knot it is. And if you give it two knots, it will immediately recognize whether or not they are the same knot. This is amazing, because the problem of classifying knots was classically extremely difficult to solve.
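One program of this kind is Jeff Weeks's SnapPea; its Python descendant SnapPy can be driven as in the sketch below. The interview names no program, and the particular knot and calls here are illustrative assumptions, not part of the original text.

```python
# A hedged sketch using SnapPy (https://snappy.computop.org), a descendant of
# Jeff Weeks's SnapPea; install with "pip install snappy".
import snappy

# The complement of the figure-eight knot, the simplest hyperbolic knot.
M = snappy.Manifold("4_1")

print(M.volume())    # its hyperbolic volume, roughly 2.0299
print(M.identify())  # look the manifold up in SnapPy's census tables
```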
While at Princeton I found a new, analytic proof of Thurston's theorem that provides hyperbolic structures on many 3-manifolds, including most knot complements. This new proof has to do with Poincaré series, a classical topic in complex analysis, and it also led to the solution of conjectures of Kra and Bers. Later, at Berkeley, I began to see parallels between renormalization in the dynamics of rational maps and the theory of 3-manifolds that fiber over the circle; this topic is worked out in two books that appeared in the Princeton "Annals of Math. Studies". The Fields medal was, I imagine, in recognition of these projects.
So I worked on the dynamics of rational maps, and I worked on hyperbolic
3-manifolds, and I worked on Riemann surfaces per se, and I've
also worked on topology of surfaces and knots. And the thing I'd like to
emphasize is that for me all of those fields are really the same field.
You very easily start working on a problem in dynamics, and find yourself
a few months later working on a problem in knot theory or topology,
because they are all very interconnected -- knots, complex analysis,
polynomials, Riemann surfaces, hyperbolic 3-manifolds, etc. There is not
really a name for this field, but that's the field I work in.
Princeton and Harvard both treat their graduate students very well. There is a good ratio of students to faculty, students are well funded, and the departments are small enough that students get a lot of individual attention. And I think the students learn a lot from each other in both places. That's a big component of graduate education.
Berkeley is also really wonderful. It's a place that has a huge department, one hundred faculty if you count emeriti. I really loved it, but it takes a lot of energy to find a good place to live, to find a good advisor, and to get into the right niche, mathematically and so on. But as you do that, it pays you back very much. And the weather is beautiful. You can walk from campus into Strawberry Canyon and then into Tilden Park, and be completely out of view of humanity within 40 minutes. (At Harvard, on the other hand, I found I could bicycle for an hour and still be in suburbia...) In Berkeley the swimming pools are outdoors, it's very lively, and it's also very tolerant of all sorts of different lifestyles, different kinds of people. You feel a sense of freedom. You don't feel any qualms about trying out a new idea, and you don't worry so much about whether or not it's going to work. One of the great things about Berkeley is that there are so many graduate students, and so many postdocs in the area, especially with MSRI, that you can have a working group on any mathematical topic you can think of. There's a lot of mathematical interest there.
I really enjoyed being a graduate student at Harvard too. Cambridge and
Berkeley both have advantages over Princeton, in the sense that they're
young communities, there is a lot going on, and they're close to a major city.
You can tell a little bit from my graduate experience that although I
think Harvard is really great, the fact that its faculty is small might
make it hard to find an advisor who is in the area you want to work in.
And I think that the real key to success in graduate school is finding
something that you are interested in enough to keep you going for
four or five years.
When I was an undergraduate I went to Stanford for a year and took a great real analysis course from Benjamin Weiss, who was a visiting professor from Jerusalem. That really got me excited about analysis. Then I went back to Williams and worked closely with Bill Oliver. He was very influential in my mathematical education; it was from him that I first learned this idea of using dictionaries in mathematics, a sort of analogy between different fields or different theoretical developments, to help guide my work. So those were my early influences.
When I came to Harvard I was sort of casting about. I knew how to program -- I'd been working in the summers at IBM-Watson in Yorktown Heights -- and Mandelbrot and Mumford were almost collaborating; Mandelbrot was furnishing access to computers at Yorktown Heights to Mumford, who was drawing these beautiful pictures of limit sets of Kleinian groups. As somebody who was conversant with the computer world at Yorktown, I started working for him as his computer programmer, helping him draw these pictures and so forth. You have to imagine, in those days, we had to make a long-distance modem call and then work at a 30-character-per-second terminal writing programs in FORTRAN. Then we would draw a picture and we would have to wait a week for them to mail it to us from Yorktown to see if it came out right.
Then I got interested in Hausdorff dimension, and since I knew some real analysis, I tried working on that. My first paper ever was on a problem I learned when I first met Professor Hironaka, who was a Harvard professor at the time, although he'd been on leave in Japan. When he first came back from Japan, he told me a question he hadn't been able to solve: compute the fractal dimension of a particular set. This set is obtained by drawing the letter "M" and repeating the same figure, as shown here.
In the end you get a set which is not self-similar, but it is self-affine. Fractals whose dimensions are easy to compute have the property that if you take a small piece and rescale it by the same factor in both dimensions, it looks like a larger piece. This one has the property that a very little gap can be scaled to the big gap, but you have to scale by a power of two in one direction and by a power of three in the other; because of that its dimension is tricky to compute. In my first research paper, I computed its dimension: D = log_2(1 + 2^(log_3 2)). That was a wonderful problem; I worked on it very hard. You can see that I liked to stay close to the ground of math I really understood.
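To make the formula concrete, here is a small numeric sketch, not from the interview; it evaluates the quoted dimension and samples points of a self-affine set of this type. The particular cell pattern below (two boxes in one row and one in the other, on a grid three wide and two tall) is my guess at the "M" figure, which is not reproduced here.

```python
# Evaluate D = log_2(1 + 2^(log_3 2)) and sample points of a self-affine set
# whose small pieces rescale by 1/3 horizontally and 1/2 vertically.
import math
import random

D = math.log2(1 + 2 ** math.log(2, 3))
print(f"D = {D:.4f}")  # about 1.3497, a fractal dimension strictly between 1 and 2

# Cells are (column, row) on a 3-wide by 2-tall grid; this pattern is an
# illustrative guess at the figure described in the text.
CELLS = [(0, 1), (2, 1), (1, 0)]

def random_point(depth=20):
    """Pick a random point of the set via its digit expansion."""
    x = y = 0.0
    for k in range(1, depth + 1):
        c, r = random.choice(CELLS)
        x += c / 3 ** k   # base-3 expansion horizontally
        y += r / 2 ** k   # base-2 expansion vertically
    return x, y

print([random_point() for _ in range(3)])
```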
Then I started getting more interested in complex dynamics, so I went to
one complex variable from one real variable; I always stayed close to
stuff I could really understand. So now, twelve years after my Ph.D.,
I'm finally writing a paper that has to do with Kähler geometry; and
I certainly didn't feel comfortable with Kähler metrics when I was in
graduate school. I had to not only work up to the topics, but also see an
internal motivation for getting to them, rather than having them plopped
down in a "well this is what we're going to learn next"-manner.
Sullivan invented a beautiful dictionary between rational maps and
Kleinian groups. A rational map is a map of the Riemann sphere to itself
given by the quotient of two polynomials; for example x^2 + c,
where the polynomial in the denominator is 1. The interesting
thing to study is iteration of these maps. When you have a compact
hyperbolic 3-manifold, its universal cover turns out to be the solid
(open) 3-ball. The quotient of the 3-ball by the action of the
fundamental group of the original manifold is the manifold again. The
3-ball can be compactified by adding its boundary in R^3,
namely the sphere S^2. The group action on the 3-ball
extends to the boundary S^2 as Möbius
transformations (i.e. maps of the form (az + b)/(cz + d)).
This is called a Kleinian group. Notice that we began by
considering a 3-dimensional manifold and we ended up with a dynamical
system on the sphere. This is how the two subjects are connected. There
are many theorems making this connection explicit. I wrote a survey
article ("The
classification of conformal dynamical systems") for Yau's conference
which laid out not only this dictionary, but a research program for
proving results based on it. Understanding and developing this dictionary
has been a big motivation in my work. For example, one big gap in the
dictionary is reversing the process I described -- if we are given a
dynamical system on the sphere, no one knows how to find a
three-dimensional object associated to it. There is lots left to do in
this exciting field!
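As a minimal sketch of what iterating a map like x^2 + c looks like in practice (the value of c, the starting points, and the escape test below are illustrative assumptions, not something from the text):

```python
# A minimal sketch of iterating the map x -> x^2 + c; the choice of c,
# starting points, and escape radius are illustrative.

def orbit_escapes(x, c, max_iter=100, radius=2.0):
    """Return True if the orbit of x under x -> x^2 + c leaves the disk |x| <= radius."""
    for _ in range(max_iter):
        if abs(x) > radius:
            return True
        x = x * x + c
    return False

c = -1.0  # here the point 0 lies on an attracting 2-cycle {0, -1}
print(orbit_escapes(0.0, c))         # False: the orbit 0 -> -1 -> 0 -> ... stays bounded
print(orbit_escapes(2.5, c))         # True: this orbit runs off to infinity
print(orbit_escapes(0.3 + 0.8j, c))  # a complex starting point works the same way
```

The starting points whose orbits stay bounded form the filled Julia set of the map, and its boundary is where the interesting dynamics happen.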
This brings to mind a saying of Lipman Bers, who was one of my mentors; he said: "Mathematics is something that we do for the begrudging admiration of a few close friends." I think that's a good description of mathematics; you don't expect more than that, because the satisfaction of mathematics is really a personal thing. So I feel very lucky to have been selected for recognition by the Fields medal committee.
One of the wonderful things about math is that the community is fairly
small. When I went to Berlin to receive this prize, many people I knew
well from over the years were present -- a wonderful international
community of friends of mine. It was really a nice thing.