Matematički blogovi

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': The Contraction Mapping Theorem
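For readers who want to see the theorem in action, here is a minimal Python sketch (my own illustration of the standard fixed-point iteration, not material from Theorem of the Day): the contraction mapping theorem guarantees that iterating a contraction on a complete metric space converges to its unique fixed point.

```python
import math

def banach_fixed_point(f, x0, tol=1e-12, max_iter=10000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete space
    this converges to the unique fixed point the theorem guarantees."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1] (its derivative is bounded by sin(1) < 1),
# so the iteration converges to the unique solution of cos(x) = x.
root = banach_fixed_point(math.cos, 0.5)
print(round(root, 6))  # -> 0.739085
```

The limit is the so-called Dottie number, the unique real solution of cos(x) = x.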
Categories: Matematički blogovi

Theorem Update

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem no. 70, on vertex-degree colourability of graphs, has been updated to reflect the striking reduction in the upper bound recently proved by Kalkowski, Karonski and Pfender

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': The Panarboreal Formula

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': The Sophomore's Dream
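The Sophomore's Dream is the identity ∫₀¹ x^(−x) dx = Σ_{n≥1} n^(−n). A quick numerical check in Python (my own sketch; the grid size for the midpoint rule is an arbitrary choice):

```python
import math

# Series side: sum_{n>=1} n^{-n} converges extremely fast
series = sum(n ** -n for n in range(1, 25))

# Integral side: integrate x^{-x} = exp(-x log x) over (0, 1) by the midpoint rule
N = 200_000
total = 0.0
for k in range(N):
    x = (k + 0.5) / N
    total += math.exp(-x * math.log(x))
integral = total / N

print(round(series, 6), round(integral, 6))  # -> 1.291286 1.291286
```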

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': A Theorem of Schur on Real-Rootedness

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': Euclid's Prism

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': Woodall's Hopping Lemma

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': The Short Gaps Theorem

Theorem Update

Theorem of the Day - Tue, 2018-01-09 15:15
A new version of Theorem no. 90, Cardano's Cubic Formula, has been posted. The new version has a much more explicit example of how to use the formula to solve cubic equations.

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': de Moivre's Theorem
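de Moivre's theorem states that (cos t + i sin t)^n = cos nt + i sin nt. A one-line numerical check in Python (an illustration of mine, with arbitrarily chosen t and n):

```python
import math

# de Moivre: (cos t + i sin t)^n = cos(nt) + i sin(nt)
t, n = 0.7, 5
lhs = complex(math.cos(t), math.sin(t)) ** n
rhs = complex(math.cos(n * t), math.sin(n * t))
print(abs(lhs - rhs) < 1e-12)  # -> True
```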

New Theorem

Theorem of the Day - Tue, 2018-01-09 15:15
Theorem of the Day has a 'new acquisition': Euler's Partition Identity
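Euler's partition identity states that the number of partitions of n into odd parts equals the number of partitions of n into distinct parts. A short Python check via standard coin-change-style dynamic programs (this sketch is illustrative, not from the announcement):

```python
def count_partitions(n, parts):
    """Number of partitions of n into parts drawn (with repetition) from `parts`."""
    ways = [1] + [0] * n
    for p in parts:
        for total in range(p, n + 1):
            ways[total] += ways[total - p]
    return ways[n]

def count_distinct_partitions(n):
    """Number of partitions of n into distinct parts (each part used at most once)."""
    ways = [1] + [0] * n
    for p in range(1, n + 1):
        for total in range(n, p - 1, -1):
            ways[total] += ways[total - p]
    return ways[n]

for n in range(1, 13):
    odd = count_partitions(n, range(1, n + 1, 2))  # odd parts only
    distinct = count_distinct_partitions(n)
    assert odd == distinct
print("identity verified for n = 1..12")
```

For example, for n = 6 both counts equal 4: {5+1, 3+3, 3+1+1+1, 1+1+1+1+1+1} into odd parts, and {6, 5+1, 4+2, 3+2+1} into distinct parts.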

Correlations of the von Mangoldt and higher divisor functions II. Divisor correlations in short ranges

Terence Tao - Wed, 2017-12-27 18:55

Kaisa Matomaki, Maksym Radziwill, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions II. Divisor correlations in short ranges“. This is a sequel of sorts to our previous paper on divisor correlations, though the proof techniques in this paper are rather different. As with the previous paper, our interest is in correlations such as

for medium-sized and large , where are natural numbers and is the divisor function (actually our methods can also treat a generalisation in which is non-integer, but for simplicity let us stick with the integer case for this discussion). Our methods also allow for one of the divisor function factors to be replaced with a von Mangoldt function, but (in contrast to the previous paper) we cannot treat the case when both factors are von Mangoldt.
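Although the displayed formulas were lost in extraction, the objects in question are averages of products of divisor functions at shifted arguments. As a purely numerical illustration (this sketch is my own, not from the paper, and uses the classical divisor function d(n), i.e. the simplest case), one can compute such correlation averages directly for small shifts; the classical heuristic predicts a main term of size roughly log²x times an arithmetic factor that is larger for shifts with many divisors, such as h = 6:

```python
def divisor_counts(limit):
    """Sieve the divisor function d(n) for all n <= limit."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for j in range(i, limit + 1, i):
            d[j] += 1
    return d

def correlation_average(x, h, d):
    """Average of d(n) * d(n + h) over n <= x."""
    return sum(d[n] * d[n + h] for n in range(1, x + 1)) / x

x = 20000
d = divisor_counts(x + 10)
for h in (1, 2, 6):
    print(h, round(correlation_average(x, h, d), 2))
```

With x = 20000 the average for h = 6 visibly exceeds the one for h = 1, consistent with the prediction that the arithmetic factor grows with the number of divisors of the shift.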

As discussed in this previous post, one heuristically expects an asymptotic of the form

for any fixed , where is a certain explicit (but rather complicated) polynomial of degree . Such asymptotics are known when , but remain open for . In the previous paper, we were able to obtain a weaker bound of the form

for of the shifts , whenever the shift range lies between and . But the methods become increasingly hard to use as gets smaller. In this paper, we use a rather different method to obtain the even weaker bound

for of the shifts , where can now be as short as . The constant can be improved, but there are serious obstacles to using our method to go below (as the exceptionally large values of then begin to dominate). This can be viewed as an analogue to our previous paper on correlations of bounded multiplicative functions on average, in which the functions are now unbounded, and indeed our proof strategy is based in large part on that paper (but with many significant new technical complications).

We now discuss some of the ingredients of the proof. Unsurprisingly, the first step is the circle method, expressing (1) in terms of exponential sums such as

Actually, it is convenient to first prune slightly by zeroing out this function on “atypical” numbers that have an unusually small or large number of factors in a certain sense, but let us ignore this technicality for this discussion. The contribution of for “major arc” can be treated by standard techniques (and is the source of the main term ); the main difficulty comes from treating the contribution of “minor arc” .

In our previous paper on bounded multiplicative functions, we used Plancherel’s theorem to estimate the global norm , and then also used the Katai-Bourgain-Sarnak-Ziegler orthogonality criterion to control local norms , where was a minor arc interval of length about , and these two estimates together were sufficient to get a good bound on correlations by an application of Hölder’s inequality. For , it is more convenient to use Dirichlet series methods (and Ramaré-type factorisations of such Dirichlet series) to control local norms on minor arcs, in the spirit of the proof of the Matomaki-Radziwill theorem; a key point is to develop “log-free” mean value theorems for Dirichlet series associated to functions such as , so as not to wipe out the (rather small) savings one will get over the trivial bound from this method. On the other hand, the global bound will definitely be unusable, because the sum has too many unwanted factors of . Fortunately, we can substitute this global bound with a “large values” bound that controls expressions such as

for a moderate number of disjoint intervals , with a bound that is slightly better (for a medium-sized power of ) than what one would have obtained by bounding each integral separately. (One needs to save more than for the argument to work; we end up saving a factor of about .) This large values estimate is probably the most novel contribution of the paper. After taking the Fourier transform, matters basically reduce to getting a good estimate for

where is the midpoint of ; thus we need some upper bound on the large local Fourier coefficients of . These coefficients are difficult to calculate directly, but, in the spirit of a paper of Ben Green and myself, we can try to replace by a more tractable and “pseudorandom” majorant for which the local Fourier coefficients are computable (on average). After a standard duality argument, one ends up having to control expressions such as

after various averaging in the parameters. These local Fourier coefficients of turn out to be small on average unless is “major arc”. One then is left with a mostly combinatorial problem of trying to bound how often this major arc scenario occurs. This is very close to a computation in the previously mentioned paper of Ben and myself; there is a technical wrinkle in that the are not as well separated as they were in my paper with Ben, but it turns out that one can modify the arguments in that paper to still obtain a satisfactory estimate in this case (after first grouping nearby frequencies together, and modifying the duality argument accordingly).

Filed under: math.NT, paper Tagged: correlation, divisor function, Kaisa Matomaki, Maksym Radziwill

Metrics of linear growth – the solution

Terence Tao - Fri, 2017-12-22 01:23

In the tradition of “Polymath projects”, the problem posed in the previous two blog posts has now been solved, thanks to the cumulative effect of many small contributions by many participants (including, but not limited to, Sean Eberhard, Tobias Fritz, Siddhartha Gadgil, Tobias Hartnick, Chris Jerdonek, Apoorva Khare, Antonio Machiavelo, Pace Nielsen, Andy Putman, Will Sawin, Alexander Shamov, Lior Silberman, and David Speyer). In this post I’ll write down a streamlined resolution, eliding a number of important but ultimately removable partial steps and insights made by the above contributors en route to the solution.

Theorem 1 Let be a group. Suppose one has a “seminorm” function which obeys the triangle inequality

for all , with equality whenever . Then the seminorm factors through the abelianisation map .
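The inline formulas in the statement above were lost in extraction. Assuming this is the standard statement of the result (an assumption on my part, but one consistent with the proof below, which uses the doubling property at its first step), the theorem presumably reads:

```latex
% Conjectural reconstruction of the stripped notation:
\textbf{Theorem 1.} \emph{Let $G$ be a group, and suppose one has a ``seminorm''
$\ell \colon G \to [0,+\infty)$ which obeys the triangle inequality}
\[
  \ell(xy) \le \ell(x) + \ell(y) \qquad \text{for all } x, y \in G,
\]
\emph{with equality whenever $x = y$ (so that $\ell(x^2) = 2\ell(x)$ for all $x$).
Then $\ell$ factors through the abelianisation map $G \to G/[G,G]$.}
```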

Proof: By the triangle inequality, it suffices to show that for all , where is the commutator.

We first establish some basic facts. Firstly, by hypothesis we have , and hence whenever is a power of two. On the other hand, by the triangle inequality we have for all positive , and hence by the triangle inequality again we also have the matching lower bound, thus

for all . The claim is also true for (apply the preceding bound with and ). By replacing with if necessary we may now also assume without loss of generality that , thus


for all integers .

Next, for any , and any natural number , we have

so on taking limits as we have . Replacing by gives the matching lower bound, thus we have the conjugation invariance


Next, we observe that if are such that is conjugate to both and , then one has the inequality


Indeed, if we write for some , then for any natural number one has

where the and terms each appear times. From (2) we see that conjugation by does not affect the norm. Using this and the triangle inequality several times, we conclude that

and the claim (3) follows by sending .

The following special case of (3) will be of particular interest. Let , and for any integers , define the quantity

Observe that is conjugate to both and to , hence by (3) one has

which by (2) leads to the recursive inequality

We can write this in probabilistic notation as

where is a random vector that takes the values and with probability each. Iterating this, we conclude in particular that for any large natural number , one has

where and are iid copies of . We can write where are iid signs.  By the triangle inequality, we thus have

noting that is an even integer.  On the other hand, has mean zero and variance , hence by Cauchy-Schwarz

But by (1), the left-hand side is equal to . Dividing by and then sending , we obtain the claim.

The above theorem reduces such seminorms to abelian groups. It is easy to see from (1) that any torsion element of such groups has zero seminorm, so we can in fact restrict to torsion-free groups, which we now write using additive notation , thus for instance for . We think of as a -module. One can then extend the seminorm to the associated -vector space by the formula , and then to the associated -vector space by continuity, at which point it becomes a genuine seminorm (provided we have ensured the symmetry condition ). Conversely, any seminorm on induces a seminorm on . (These arguments also appear in this paper of Khare and Rajaratnam.)


Filed under: math.GR, math.MG, polymath Tagged: norms

Bi-invariant metrics of linear growth on the free group, II

Terence Tao - Wed, 2017-12-20 07:16

This post is a continuation of the previous post, which has attracted a large number of comments. I’m recording here some calculations that arose from those comments (particularly those of Pace Nielsen, Lior Silberman, Tobias Fritz, and Apoorva Khare). Please feel free to either continue these calculations or to discuss other approaches to the problem, such as those mentioned in the remaining comments to the previous post.

Let be the free group on two generators , and let be a quantity obeying the triangle inequality

and the linear growth property

for all and integers ; this implies the conjugation invariance

or equivalently

We consider inequalities of the form


for various real numbers . For instance, since

we have (1) for . We also have the following further relations:

Proposition 1

  • (i) If (1) holds for , then it holds for .
  • (ii) If (1) holds for , then (2) holds for .
  • (iii) If (2) holds for , then (1) holds for .
  • (iv) If (1) holds for and (2) holds for , then (1) holds for .

Proof: For (i) we simply observe that

For (ii), we calculate

giving the claim.

For (iii), we calculate

giving the claim.

For (iv), we calculate

giving the claim.

Here is a typical application of the above estimates. If (1) holds for , then by part (i) it holds for , then by (ii) (2) holds for , then by (iv) (1) holds for . The map has fixed point , thus

For instance, if , then .

Filed under: math.GR, math.MG, question Tagged: free group, geometric group theory

Bi-invariant metrics of linear growth on the free group

Terence Tao - Sun, 2017-12-17 05:41

Here is a curious question posed to me by Apoorva Khare that I do not know the answer to. Let be the free group on two generators . Does there exist a metric on this group which is

  • bi-invariant, thus for all ; and
  • linear growth in the sense that for all and all natural numbers ?

By defining the “norm” of an element to be , an equivalent formulation of the problem asks if there exists a non-negative norm function that obeys the conjugation invariance

for all , the triangle inequality

for all , and the linear growth

for all and , and such that for all non-identity . Indeed, if such a norm exists then one can just take to give the desired metric.

One can normalise the norm of the generators to be at most , thus

This can then be used to upper bound the norm of other words in . For instance, from (1), (3) one has

A bit less trivially, from (3), (2), (1) one can bound commutators as

In a similar spirit one has

What is not clear to me is if one can keep arguing like this to continually improve the upper bounds on the norm of a given non-trivial group element to the point where this norm must in fact vanish, which would demonstrate that no metric with the above properties on would exist (and in fact would impose strong constraints on similar metrics existing on other groups as well). It is also tempting to use some ideas from geometric group theory (e.g. asymptotic cones) to try to understand these metrics further, though I wasn’t able to get very far with this approach. Anyway, this feels like a problem that might be somewhat receptive to a more crowdsourced attack, so I am posing it here in case any readers wish to try to make progress on it.

Filed under: math.GR, math.MG, question Tagged: free group, geometric group theory

Embedding the Boussinesq equations in the incompressible Euler equations on a manifold

Terence Tao - Fri, 2017-12-15 02:04

The Boussinesq equations for inviscid, incompressible two-dimensional fluid flow in the presence of gravity are given by

where is the velocity field, is the pressure field, and is the density field (or, in some physical interpretations, the temperature field). In this post we shall restrict ourselves to formal manipulations, assuming implicitly that all fields are regular enough (or sufficiently decaying at spatial infinity) that the manipulations are justified. Using the material derivative , one can abbreviate these equations as

One can eliminate the role of the pressure by working with the vorticity . A standard calculation then leads us to the equivalent “vorticity-stream” formulation

of the Boussinesq equations. The latter two equations can be used to recover the velocity field from the vorticity by the Biot-Savart law

It has long been observed (see e.g. Section 5.4.1 of Bertozzi-Majda) that the Boussinesq equations are very similar, though not quite identical, to the three-dimensional inviscid incompressible Euler equations under the hypothesis of axial symmetry (with swirl). The Euler equations are

where now the velocity field and pressure field are over the three-dimensional domain . If one expresses in polar coordinates then one can write the velocity vector field in these coordinates as

If we make the axial symmetry assumption that these components, as well as , do not depend on the variable, thus

then after some calculation (which we give below the fold) one can eventually reduce the Euler equations to the system

where is the modified material derivative, and is the field . This is almost identical with the Boussinesq equations except for some additional powers of ; thus, the intuition is that the Boussinesq equations are a simplified model for axially symmetric Euler flows when one stays away from the axis and also does not wander off to .

However, this heuristic is not rigorous; the above calculations do not actually give an embedding of the Boussinesq equations into Euler. (The equations do match on the cylinder , but this is a measure zero subset of the domain, and so is not enough to give an embedding on any non-trivial region of space.) Recently, while playing around with trying to embed other equations into the Euler equations, I discovered that it is possible to make such an embedding into a four-dimensional Euler equation, albeit on a slightly curved manifold rather than in Euclidean space. More precisely, we use the Ebin-Marsden generalisation

of the Euler equations to an arbitrary Riemannian manifold (ignoring any issues of boundary conditions for this discussion), where is a time-dependent vector field, is a time-dependent scalar field, and is the covariant derivative along using the Levi-Civita connection . In Penrose abstract index notation (using the Levi-Civita connection , and raising and lowering indices using the metric ), the equations of motion become


in coordinates, this becomes

where the Christoffel symbols are given by the formula

where is the inverse to the metric tensor . If the coordinates are chosen so that the volume form is the Euclidean volume form , thus , then on differentiating we have , and hence , and so the divergence-free equation (10) simplifies in this case to . The Ebin-Marsden Euler equations are the natural generalisation of the Euler equations to arbitrary manifolds; for instance, they (formally) conserve the kinetic energy

and can be viewed as the formal geodesic flow equation on the infinite-dimensional manifold of volume-preserving diffeomorphisms on (see this previous post for a discussion of this in the flat space case).
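The Christoffel-symbol formula quoted above can be sanity-checked numerically. Below is a small self-contained Python sketch (my own illustration, not code from the post) that evaluates Gamma^i_{jk} = (1/2) g^{il} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk}) by central finite differences; for simplicity it assumes a diagonal metric, and it is checked against polar coordinates on the plane, where the nonzero symbols are known in closed form:

```python
def christoffel(metric, point, h=1e-6):
    """Gamma^i_{jk} = (1/2) g^{il} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk}),
    with partial derivatives taken by central finite differences.
    `metric` maps a point (list of coordinates) to the matrix g_{ab}."""
    n = len(point)

    def d_metric(j):
        """Matrix of partial derivatives d_j g_{ab} at `point`."""
        xp, xm = list(point), list(point)
        xp[j] += h
        xm[j] -= h
        gp, gm = metric(xp), metric(xm)
        return [[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(n)] for a in range(n)]

    g = metric(point)
    # Inverse metric; this sketch assumes g is diagonal, which is enough
    # for the illustration below.
    ginv = [[1.0 / g[a][a] if a == b else 0.0 for b in range(n)] for a in range(n)]
    dg = [d_metric(j) for j in range(n)]  # dg[j][a][b] = d_j g_{ab}

    gamma = [[[0.0] * n for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                gamma[i][j][k] = 0.5 * sum(
                    ginv[i][l] * (dg[j][l][k] + dg[k][l][j] - dg[l][j][k])
                    for l in range(n)
                )
    return gamma

# Sanity check with polar coordinates (r, theta) on the plane, g = diag(1, r^2):
# the nonzero symbols are Gamma^r_{theta theta} = -r and Gamma^theta_{r theta} = 1/r.
gamma = christoffel(lambda p: [[1.0, 0.0], [0.0, p[0] ** 2]], [2.0, 0.3])
print(round(gamma[0][1][1], 6), round(gamma[1][0][1], 6))  # -> -2.0 0.5
```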

The specific four-dimensional manifold in question is the space with metric

and solutions to the Boussinesq equation on can be transformed into solutions to the Euler equations on this manifold. This is part of a more general family of embeddings into the Euler equations in which passive scalar fields (such as the field appearing in the Boussinesq equations) can be incorporated into the dynamics via fluctuations in the Riemannian metric. I am writing the details below the fold (partly for my own benefit).

Firstly, it is convenient to transform the Euler equations on an arbitrary Riemannian manifold to a covelocity formulation, by introducing the covelocity -form , as this formulation allows one to largely avoid having to work with covariant derivatives or Christoffel symbols. Lowering indices in the Euler equation (9) then gives the system

Noting that , and introducing the modified pressure , we arrive at the system

As the Levi-Civita connection is torsion-free (or equivalently, one has the symmetry ), we have , thus we arrive at the system

which is equivalent to (and thus embeddable in) the Euler equations. The advantage of this formulation is that the metric now plays no role whatsoever in the main equation (11), and only appears in (12) and (13). One can also interpret the expression as the Lie derivative of the covelocity along the velocity .

If one works in a coordinate system in which the volume form is Euclidean (that is to say, ), then the Riemannian divergence is the same as the ordinary divergence ; this can be seen either by viewing the divergence as the adjoint of the gradient operator with respect to the volume form, or else by differentiating the condition to conclude that , which implies that and hence . But actually, as already observed in my previous paper, one can replace with “for free”, even if one does not have the Euclidean volume form condition , if one is prepared to add an additional “dummy” dimension to the manifold . More precisely, if solves the system

on some -dimensional Riemannian manifold , then one can introduce modified fields on a -dimensional manifold by defining

then these fields obey the same system, and hence (since ) solve (11), (12), (13). Thus the above system is embeddable into the Euler equations in one higher dimension. To embed the Boussinesq equations into the four-dimensional Euler equations mentioned previously, it thus suffices to embed these equations into the system (14)-(16) for the three-dimensional manifold with metric


Let us more generally consider the system (14)-(16) under the assumption that splits as a product of two manifolds , with all data independent of the coordinates (but, for added flexibility, we do not assume that the metric on splits as the direct sum of metrics on and , allowing for twists and shears). Thus, if we use Roman indices to denote the coordinates and Greek indices to denote the coordinates (with summation only being restricted to these coordinates), and denote the inverse metric by the tensor with components , then we have

and then the system (14)-(16) in these split coordinates becomes

We can view this as a system of PDE on the smaller manifold , which is then embedded into the Euler equations. Introducing the material derivative , this simplifies slightly to

We substitute the third and fourth equations into the first, then drop the fourth (as it can be viewed as a definition of the field , which no longer plays any further role), to obtain

We can reverse the pressure modification by writing

to move some derivatives off of the covelocity fields and onto the metric, so that the system now becomes


At this point one can specialise to various special cases to obtain some possibly simpler dynamics. For instance, one could set to be flat (so that is constant), and set and to both vanish, then we obtain the simple-looking (but somewhat overdetermined) system

This is basically the system I worked with in my previous paper. For instance, one could set one of the components of , say to be identically , and to be an arbitrary divergence-free vector field for that component, then , and all the other components of are transported by this static velocity field, leading for instance to exponential growth of vorticity if has a hyperbolic fixed point and the initial data of the components of other than are generic. (Alas, I was not able to modify this example to obtain something more dramatic than exponential growth, such as finite time blowup.)

Alternatively, one can set to vanish, leaving one with

If consists of a single coordinate , then on setting , this simplifies to

If we take to be with the Euclidean metric , and set (so that has the metric (17)), then one obtains the Boussinesq system (1)-(3), giving the claimed embedding.

Now we perform a similar analysis for the axially symmetric Euler equations. The cylindrical coordinate system is slightly inconvenient to work with because the volume form is not Euclidean. We therefore introduce Turkington coordinates

to rewrite the metric as

so that the volume form is now Euclidean, and the Euler equations become (14)-(16). Splitting as before, with being the two-dimensional manifold parameterised by , and the one-dimensional manifold parameterised by , the symmetry reduction (18)-(21) gives us (26)-(29) as before. Explicitly, one has

Setting to eliminate the pressure , we obtain

Since , , , , and , we obtain the system (5)-(8).

Returning to the general form of (22)-(25), one can obtain an interesting transformation of this system by writing for the inverse of (caution: in general, this is not the restriction of the original metric on to ), and defining the modified covelocity

then by the Leibniz rule

Replacing the covelocity with the modified covelocity, this becomes

We thus have the system


and so if one writes

we obtain

For each , we can specify as an arbitrary smooth function of space (it has to be positive definite to keep the manifold Riemannian, but one can add an arbitrary constant to delete this constraint), and as an arbitrary time-independent exact -form. Thus we obtain an incompressible Euler system with two new forcing terms, one term coming from passive scalars , and another term that sets up some rotation between the components , with the rotation speed determined by a passive scalar .

Remark 1 As a sanity check, one can observe that one still has conservation of the kinetic energy, which is equal to

and can be expressed in terms of and as

One can check this is conserved by the above system (mainly due to the antisymmetry of ).

As one special case of this system, one can work with a one-dimensional fibre manifold , and set and for the single coordinate of this manifold. This leads to the system

where is some smooth time-independent exact -form that one is free to specify. This resembles an Euler equation in the presence of a “magnetic field” that rotates the velocity of the fluid. I am currently experimenting with trying to use this to force some sort of blowup, though I have not succeeded so far (one would obviously have to use the pressure term at some point, for if the pressure vanished then one could keep things bounded using the method of characteristics).

Filed under: expository, math.AP Tagged: Boussinesq equations, incompressible Euler equations, Riemannian geometry

Review: Problem Solving and Reasoning With Discrete Mathematics

Math~Blog - Tue, 2017-11-21 23:45





US K-12 mathematics has long been dominated by the notion that any legitimate pathway through high school and into college requires passing over Calculus Mountain. It isn’t a real mountain, but it might as well be for those who wish to pursue certain professions, not all of which actually make use of calculus in graduate or professional school, or indeed in practice. In particular, medicine, dentistry, veterinary medicine, and other prestigious branches of “the healing arts” make calculus an absolute hurdle for those who wish to qualify for professional training. And the results can be harmful not only to individuals who are thus shut out because they can’t get over Calculus Mountain but to the professions themselves and the public they serve. Few doctors outside of those doing research ever use one lick of calculus outside of their undergraduate education, but only those who can “cut it” for four semesters in college are going to get into the requisite professional schools. And not all those who make it are necessarily the best-suited to be healers; neither are all those who are kept out necessarily otherwise ill-qualified.

The above issues aside, there seems to be a factor of inertia and traditionalism that keeps calculus as the “One True Grail” for school mathematics. And to ask why this need be the only path for our students is to risk being branded a heretic. One person who has challenged the received wisdom is Dr. Joseph G. Rosenstein, an emeritus professor of mathematics at Rutgers University. He’s done so not by attacking analysis, but by offering a major alternative for students in K-12: discrete mathematics. In the introduction to his textbook, PROBLEM SOLVING AND REASONING with DISCRETE MATHEMATICS, he writes that the text “addresses five types of issues related to problem solving and reasoning: (a) strategies for solving problems, (b) thinking and acting systematically, (c) mathematical reasoning, (d) mathematical modeling, and (e) mathematical practice. Although many of the mathematical topics in this book are not addressed in the current version of the Common Core State Standards, these five types of issues play a major role in what the standards refer to as ‘Standards for Mathematical Practice.’”

Dr. Rosenstein’s textbook offers a coherent treatment of a major strand of mathematics that has many of the most important qualities of both abstract and applied math, including rigorous thinking and proving, utility, and beauty, all of which can be accessible to K-12 students who may not necessarily have been drawn to, or been successful with, every aspect of the traditional curriculum. In these senses, Rosenstein provides a pathway that exemplifies the heart and soul of the Standards for Mathematical Practice, the piece of the Common Core Mathematics Standards that to my thinking exists above and beyond any particular list or sequencing of topics in the Content Standards.

Further, his book is eminently accessible to a broad range of students and teachers. He has selected from the most appealing areas of discrete mathematics, starting with coloring, graphs, counting, and paths and circuits, all of which provide students with the opportunity to dive into serious math and be successful with it. The problems are chosen to be thought-provoking and engaging, with a mix of the concrete, the abstract, the applied, and the esthetic. Teachers who are not themselves deeply familiar with one or more of the topics can readily explore discrete mathematics with students in ways that would benefit everyone. Those who have a background in discrete mathematics will find plenty of familiar and new problems to challenge both their students and themselves.

As someone who loves discrete mathematics, I am truly excited to see this book appear just at a time when the United States needs to find alternatives for students who have not been well-served by the traditional K-12 math curriculum. While no textbook can guarantee that it will instantly make students who have heretofore feared and loathed mathematics love the subject, I believe Rosenstein’s book comes close to offering a vast range of students a tempting invitation to reconsider math as something enjoyable, beautiful, and, perhaps most importantly, understandable. Nothing breeds success like success, and with this book, students will be able to succeed in authentic mathematical problem-solving, regardless of previous experiences.

The post Review: Problem Solving and Reasoning With Discrete Mathematics appeared first on Math ∞ Blog.


Joe Rosenstein: The Art of Being Discrete

Math~Blog - Tue, 2017-11-21 23:02

Joseph G. Rosenstein


It’s our pleasure to welcome Professor Joseph G. Rosenstein to the Math Blog. He is Distinguished Emeritus Professor of mathematics at Rutgers University. His biography is extensive and you may read about his accomplishments on his web page.


Michael Paul Goldenberg: Welcome, Joe. It’s a pleasure to be speaking with you again. I know you from attending a summer program for high school teachers in applied graph theory at DIMACS in the summer of 2001. You were running a parallel program for middle and elementary school teachers. You were quite a legend among the high school teachers who were returning from previous years. I was told to be on the lookout for someone with a big beard, some sort of splashy tie, and shorts. Do you still dress like that during your summer work? Any reasons beyond a personal taste for the “Joe Rosenstein Look”?


Joe Rosenstein: Yes, I still look much the same. I still have a beard – my beard is now almost 52 years old – but, curiously, it is now much whiter than in the past. I always wear a tie, year-round, and whenever they’re out of style (which is often), they can be described as “splashy”; I started wearing ties over 50 years ago, and have quite a collection. I also wear shorts in the late spring and summer every year; indeed, with global warming, this year I wore shorts through the end of October. There is no mathematical, pedagogical, spiritual, or political reason for the way I look. It’s just who I am.


MPG:  How did you first become interested in mathematics as a serious pursuit?  


JR: I can’t point to a specific moment or incident when I decided that mathematics would be the direction in which I would go. As a high school student, I would go to the main public library in Rochester, New York and devour whatever math books they had; I scored 100 on all the math Regents exams given in New York State (including Solid Geometry). It was natural for me to continue to focus on mathematics when I went to college. Initially, I was very discouraged, because most of the 75 students in the honors calculus course in which I was enrolled at Columbia had gone to high schools like Bronx Science and were much better prepared than me, but I stuck it out, and I was one of only six students who survived that two-year honors course. So clearly math and I were meant for each other. Therefore, it was clear that I would go to graduate school in mathematics.

I went to Cornell because while at Columbia I became interested in mathematical logic and Cornell was then (and still is) a center for mathematical logic, which indeed became my focus in graduate school. My mentors and thesis advisors at Cornell were Anil Nerode and Gerald Sacks. One day I was told that I would be going to the University of Minnesota as an Assistant Professor, and that is what I did. That was September 1969.

I continued doing mathematical research for about 15 years and then decided to write a book called “Linear Orderings” which included much of the research that I had done, but was also intended to organize and put in one place all that was known about linear orderings; that book was published in 1983 in Academic Press’ Series on Pure and Applied Mathematics. By 1987, however, my focus had shifted from mathematical research to mathematics education.

When I am asked on occasion why I chose to go into mathematics, and to become an academic mathematician, I respond that I was good at math and never asked the question of what I really wanted to spend my life doing … so I just continued what I was doing successfully. Sixty years ago, young people often did not ask questions about their future, but rather took the path that was somehow laid out for them. As time passed, I learned that I had other interests and talents, and have pursued them as well. But I have no regrets about spending my life with mathematics.


MPG: What led you to a focus on discrete mathematics? 


JR: Before 1989 I had not focused on discrete mathematics, although in retrospect it had become clear to me by then that much of my research would now be called either theoretical computer science or infinite combinatorics. However, in that year, a proposal was submitted to the National Science Foundation to create a “Science and Technology Center” based at Rutgers whose focus was on discrete mathematics. As part of the proposal, applicants had to describe how their center would reach out to precollege teachers and students. By then I was already quite involved in running programs for high school teachers; I had started a well-attended annual “precalculus conference” (the 32nd conference will take place in March 2018!) and an annual week-long summer institute for new math teachers (and a parallel institute for new science teachers). So I think that we could make a convincing case that we could do the outreach to schools that NSF required. Indeed, our proposal was funded by NSF and we were one of the initial half dozen Science and Technology Centers; our center is called DIMACS, the Center for Discrete Mathematics and Theoretical Computer Science, and almost 30 years on, it is a flourishing center with an impressive track record and international reputation.

Soon after NSF funded DIMACS, I submitted proposals to NSF to fund programs for high school teachers and high school students. These proposals were also successful; indeed, I received four consecutive grants from NSF for the Rutgers Young Scholars Program (RYSP) in Discrete Mathematics (extending over 8 years) and three consecutive grants from NSF for the Leadership Program (LP) in Discrete Mathematics for teachers (extending over 11 years). The LP continued for another 8 years with funding from other sources and the RYSP celebrated its 26th iteration this past summer.

MPG: What did you learn from the Leadership Program in Discrete Mathematics? 

JR: In its initial years, the LP provided programs just for high school teachers. We learned from those programs that

  • students were motivated by, and indeed excited by, the focus on applications;
  • all students were able to learn the topics in discrete mathematics, since there were few mathematical prerequisites for these topics;
  • while being accessible to all students, the topics could provide challenges to mathematically talented students;
  • discrete math provided an opportunity for a “new start” in mathematics to students who had previously been unsuccessful;
  • discrete math also provided an opportunity for a “new start” for teachers, in that techniques (such as focusing on problem solving and understanding, using group work, and using questioning techniques) that they had avoided when teaching traditional topics could be introduced successfully when teaching discrete math, and the teachers were often able to transfer their new strategies back into teaching traditional topics;
  • although we had initially assumed that discrete math would work only for high school students, the teachers in the LP told us that they were able to introduce discrete math topics successfully to middle school students as well.

As a result, when we applied for a second grant from the NSF, the reach was expanded to include middle school teachers as well. And the third, five-year, grant from the NSF was for K-8 teachers, since many of the topics in discrete math were accessible to elementary students as well, although the LP participants of course had to modify the curriculum and instruction to be suitable for their grade levels. 

MPG: You’ve spent a large part of your career promoting discrete mathematics as part of K-12 curricula on a state and national level. When I first came to the University of Michigan to do graduate work in mathematics education in 1992, I discovered that the State of Michigan already had a discrete mathematics strand embedded throughout its curriculum framework for math. While I was field supervisor of student teachers in secondary mathematics for U-M, I met a local high school teacher who was an enthusiastic booster of discrete math and did presentations for teachers at various conferences. I subsequently arranged to have him speak to my student teachers every semester and eventually was able to get a couple of them placed in his classroom. In my conversations with him, I realized that, outside of his classroom and those of a handful of other teachers around the state, the discrete mathematics strand was being honored far more in the breach than in the observance. The issue, he told me, was that there were absolutely no discrete mathematics items on the annual state assessments. How does this compare to your experiences in New Jersey? 


JR: In the early 1990s it became clear to the leadership of the Mathematical Sciences Education Board, part of the National Academy of Sciences, that improvement in mathematics education should be state-based. Influenced and encouraged by the MSEB, I created the New Jersey Mathematics Coalition in 1991, bringing together leaders from colleges, schools, industry, government, and the public sector in order to improve the mathematics education of New Jersey students. (Similar coalitions were created in subsequent years in many other states, as well as a national organization of such coalitions.) The coalitions advocated for the adoption of state standards.

As the idea of state standards gained momentum, the New Jersey Mathematics Coalition, in partnership with the NJ State Department of Education, was able to get a grant from the US Department of Education to create mathematics standards and a mathematics framework in New Jersey, an effort that I directed. It involved hundreds of New Jersey educators, together with people from the other sectors, and produced both the standards that were adopted by the State Board of Education in 1996 and the New Jersey Mathematics Curriculum Framework, which assists teachers and schools in implementing the standards.

These standards included a standard in discrete mathematics. Thus, when the state assessments were developed, they included items on discrete mathematics. Since the effort to develop the 1996 standards was not envisioned as providing specifications for statewide assessments, the standards were revised in 2002 in order to better align the revised state standards with revised state assessments.

As a result, New Jersey’s state assessments have, since soon after 1996 and until about 2008, included questions on discrete mathematics. I cannot avoid mentioning that, during this period, the scores of New Jersey students on the National Assessment of Educational Progress (NAEP) were among the highest in the nation.


MPG: What about on the national level? What happened to discrete mathematics with NCTM and with the Common Core Content Standards for mathematics? 


JR: With the advent of the Common Core standards, discrete mathematics has been essentially absent from the national math standards and, of course, from the New Jersey math standards. The one exception is that combinatorics has been incorporated into the standard relating to probability at the 8th grade level, although systematic listing and counting should be introduced at the elementary level.


MPG: Why do you think there has been so much resistance and inertia when it comes to discrete math in American K-12 mathematics education? 


JR: There has been a traditional and systemic resistance on the part of mathematicians against discrete mathematics, including a reluctance to consider it mathematics. Even Euler, the founder of graph theory, considered his solution to the question of whether a given graph has an Euler circuit as reasoning, not as mathematics. The attitude seems to be that if summations, integrals, partial derivatives, or complex numbers are not present, then it’s not real mathematics. This perspective filters down to the K-12 curriculum in the view that all of mathematics should be preparation for calculus, a view that is echoed by teachers who never learned discrete mathematics in preparing for their teaching careers.

Indeed, although the national standards were always intended to include the mathematics that prepares students for college, careers, and citizenship, the Common Core has hijacked the math standards so that they became preparation for calculus. That is very unfortunate, for not all students need to prepare themselves for calculus.

Unfortunately, most teachers don’t know discrete math or its value for their students, and most college mathematicians who teach prospective teachers do not know how valuable it would be for their students, our future K-12 teachers.

MPG: Why do you think the Common Core focused on preparing students for calculus?


JR: Three important reasons that I think led to the Common Core’s focus on preparing students for calculus were (a) a concern about the STEM pipeline, (b) a concern about US students’ performance on international assessments, and (c) a concern about the number of students who come to college with inadequate preparation for college math courses. From a superficial perspective, each of these supports a calculus-based curriculum.

With respect to (a), our research suggests that it is not that the STEM pipeline is too small (if shortages indeed exist), but rather that it is too leaky; a substantial number of students who already appear to be in the STEM pipeline are provided little encouragement to pursue STEM-based careers and drop out of math after taking AP calculus. 

With respect to (b), if the gap between US students and international students is indeed a problem (of which I am not convinced), then it’s not clear that narrowing our curriculum is an effective way to solve the problem. (In theory, if we spend 100% of our class time on the topics covered in the international assessments, then our students will do better than if we spend 90% of our class time on those topics and 10% of our time on other topics.) Our students’ scores may rise, but their overall understanding and experience with mathematics will suffer.

With respect to (c), very few of the students who come to college unprepared for college mathematics are going to end up taking, let alone succeeding, in calculus. They would be better prepared for college mathematics if they had a stronger experience with problem solving and reasoning.

Looking at the three issues more closely, we see that what all of these students need is not a narrower curriculum, but a broader curriculum, one that focuses more on problem solving and reasoning, as is the case when one incorporates discrete mathematics in the curriculum.

These ideas are discussed in more detail in my article “The Absence of Discrete Mathematics from Primary and Secondary Education in the United States … and Why that is Counterproductive” that will soon appear in the ICMI-13 Monograph published by Springer and entitled “Teaching and Learning Discrete Mathematics Worldwide: Curriculum and Research.”


MPG: I taught math in an alternative high school from 1998 to 2000. Virtually all of my students were testing at a 4th to 5th grade level in literacy and mathematics, and few of them had earned any high school credits in mathematics. During my second year there, after stumbling around trying to find something that would be accessible and interesting to students who feared and loathed mathematics and were so weak in basic arithmetic that trying to teach them algebra was essentially futile, I stumbled via the Core-Plus curriculum into a unit on graph theory. While many of them floundered with Euler circuits and paths, something I thought would work for them, I was thrilled to finally find a topic that more than a few of them liked and with which some of the weakest and most resistant students were extremely successful: graph coloring. What are your thoughts on that as a way to engage students who have not been doing well previously?


JR: That is exactly right. When I teach courses for prospective K-8 teachers, I start with map coloring. More specifically, I provide each group of 4-5 students with a map of the continental United States in which all the states are colored white, and an envelope full of paper chips of various colors, and ask them to color the states so that bordering states have different colors. I don’t say anything about the number of colors. Of course, each group colors the whole map in one or two minutes, and I then ask them whether they can eliminate one color, then whether they can eliminate another color, and so on; every group is always able to reduce the number of colors to four.

This activity, which we also used to start off the Leadership Program, has the advantage that it doesn’t appear to be mathematics, and therefore does not arouse all of the students’ and teachers’ residual fears about mathematics, the negative experiences they have had with math in the past, and their lack of confidence in their mathematical abilities. Through this activity, they find out that they can be successful in mathematics, and this initial successful encounter – and those that follow – enables them to continue to succeed. This activity, as well as the other activities in the LP, was developed by me and Valerie DeBellis, who served as LP Associate Director.
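A minimal sketch of the border-coloring idea behind the classroom activity, in Python. This is not from the interview: the handful of state borders is a hand-picked illustrative subset of the US map, and the greedy strategy (give each region the smallest color not used by an already-colored neighbor) is just one simple way to mimic what students do by hand.

```python
# Hypothetical sketch of the map-coloring activity: greedy coloring of a
# small, hand-picked subset of US state borders (illustrative only).

borders = {
    "CA": {"NV", "AZ"},
    "NV": {"CA", "AZ", "UT"},
    "AZ": {"CA", "NV", "UT", "NM"},
    "UT": {"NV", "AZ", "CO"},
    "CO": {"UT", "NM"},
    "NM": {"AZ", "CO"},
}

def greedy_coloring(adjacency):
    """Give each region the smallest color index not used by an
    already-colored neighbor (dict insertion order fixes the region order)."""
    colors = {}
    for region in adjacency:
        used = {colors[nbr] for nbr in adjacency[region] if nbr in colors}
        color = 0
        while color in used:
            color += 1
        colors[region] = color
    return colors

coloring = greedy_coloring(borders)
# Sanity check: bordering states received different colors.
assert all(coloring[a] != coloring[b] for a in borders for b in borders[a])
print(coloring, "colors used:", len(set(coloring.values())))
```

Greedy coloring does not always reach the optimum, but on small planar border graphs like this one it typically lands at or near the four colors the students discover by hand.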

My book, “Problem Solving and Reasoning with Discrete Mathematics,” [note: reviewed this month] also begins with map coloring, although the activity described above does not work the same way with readers of a textbook as it does in a classroom setting. From map coloring, we go to vertex-edge graphs and graph coloring, then to applications of graph coloring, and then to systematic construction of graphs. This book is designed both for a course for high school students and for a course for prospective K-8 teachers. But it is also appropriate for those who are mathematically curious.


MPG: Do you see any promising approaches to getting discrete mathematics accepted as an option for students in K-12 who may not be ready for or interested in calculus? 


JR: As it becomes clearer that the Common Core standards are the wrong standards, that they are inappropriate for a substantial number of students, the question of which standards are appropriate will be answered on a state-by-state basis, following the conception formulated by the MSEB almost 30 years ago. Perhaps having national standards is a good idea but, given the disastrous standards that were produced in this round, the national standards route will not be taken for many decades. That will make it possible for those math teachers who know about discrete mathematics to attempt to convince their states of its value.

In order for that to happen, those teachers need to work now to institute courses that can serve as models for their colleagues. For the past ten years, their hands have been tied, as schools and districts insist on spending class time exclusively on what’s in the standards. But in the coming decade, I anticipate that those restrictions will be lifted and teachers will have more freedom to explore other mathematical topics, including discrete math.


MPG: What are you working on now? 


JR: High school math teachers will have to convince their supervisors and principals that discrete mathematics is valuable for their students. So I am working on producing a video (actually a series of videos) designed for that purpose – emphasizing the importance of problem-solving and reasoning and how discrete math promotes them, emphasizing the importance of seeing how math can be used to solve real-world problems (which in a calculus-based curriculum doesn’t happen very much until calculus), and emphasizing that all students can benefit from learning about these topics. I hope that these videos will convince teachers of the value of discrete mathematics and that teachers will encourage their supervisors and principals to watch the videos and also come to see the value of discrete mathematics. I hope that many will, as a result, introduce discrete mathematics into their schools on an experimental basis and, when successful, will expand its availability to all of their students.

In order for this plan to work, and in order for teachers to realize that my book, “Problem Solving and Reasoning with Discrete Mathematics,” and other books are available, I will also have to develop strategies for reaching teachers, including through social networks – an effort for which I hope to recruit many other educators who will promote discrete mathematics. I hope that some of those who read this interview will join me in this enterprise.

As you may know, we just elected a new governor in New Jersey. I hope that he will be amenable to developing new math standards for New Jersey, and I am prepared to be actively involved in that process. Such an effort may lead to bringing discrete math back into the New Jersey curriculum.


MPG: Anything else you’d like to share with us? 


 JR: The website for my book on discrete mathematics is I also have another website for other books I have written on Jewish themes, and particularly Jewish prayerbooks – that is

I recently retired after 48 years at Rutgers, and am getting used to adding “emeritus” to my title of “Distinguished Professor of Mathematics.” My Rutgers website is

Finally, my wife and I have been married for 48 years, and are very proud of our five daughters, five sons-in-law, and eleven grandchildren.


MPG: Thanks so much for taking the time to speak with us, Joe, and for sharing your passion for mathematics.


The post Joe Rosenstein: The Art of Being Discrete appeared first on Math ∞ Blog.

Categories: Matematički blogovi

An inverse theorem for an inequality of Kneser

Terence Tao - Wed, 2017-11-15 22:46

I have just uploaded to the arXiv the paper “An inverse theorem for an inequality of Kneser”, submitted to a special issue of the Proceedings of the Steklov Institute of Mathematics in honour of Sergei Konyagin. It concerns an inequality of Kneser discussed previously in this blog, namely that

$$\mu(A+B) \geq \min(\mu(A)+\mu(B), 1) \qquad (1)$$

whenever $A, B$ are compact non-empty subsets of a compact connected additive group $G$ with probability Haar measure $\mu$.  (A later result of Kemperman extended this inequality to the nonabelian case.) This inequality is non-trivial in the regime

$$\mu(A) + \mu(B) \leq 1. \qquad (2)$$

The connectedness of $G$ is essential, otherwise one could form counterexamples involving proper subgroups of $G$ of positive measure. In the blog post, I indicated how this inequality (together with a more “robust” strengthening of it) could be deduced from submodularity inequalities such as

$$\mu(A + (B_1 \cup B_2)) + \mu(A + (B_1 \cap B_2)) \leq \mu(A+B_1) + \mu(A+B_2), \qquad (3)$$

which in turn easily follows from the identity $A + (B_1 \cup B_2) = (A+B_1) \cup (A+B_2)$ and the inclusion $A + (B_1 \cap B_2) \subseteq (A+B_1) \cap (A+B_2)$, combined with the inclusion-exclusion formula.
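The set-theoretic argument behind the submodularity inequality uses no connectedness (unlike Kneser's inequality itself), so it can be sanity-checked with normalized counting measure in a finite cyclic group. A small stdlib-Python sketch (the group $\mathbb{Z}/24\mathbb{Z}$ and the random trials are arbitrary choices for illustration):

```python
# Numerical sanity check of the submodularity inequality
#   mu(A + (B1 ∪ B2)) + mu(A + (B1 ∩ B2)) <= mu(A + B1) + mu(A + B2)
# in Z/nZ with counting measure; it follows from
#   A + (B1 ∪ B2) = (A+B1) ∪ (A+B2),   A + (B1 ∩ B2) ⊆ (A+B1) ∩ (A+B2),
# and inclusion-exclusion, so no connectedness is needed for this check.
import itertools
import random

def sumset(A, B, n):
    """The sumset A + B inside the cyclic group Z/nZ."""
    return {(a + b) % n for a, b in itertools.product(A, B)}

random.seed(0)
n = 24
for _ in range(500):
    A = {random.randrange(n) for _ in range(random.randrange(1, n))}
    B1 = {random.randrange(n) for _ in range(random.randrange(1, n))}
    B2 = {random.randrange(n) for _ in range(random.randrange(1, n))}
    lhs = len(sumset(A, B1 | B2, n)) + len(sumset(A, B1 & B2, n))
    rhs = len(sumset(A, B1, n)) + len(sumset(A, B2, n))
    assert lhs <= rhs, (A, B1, B2)
print("submodularity held in all trials")
```

Note that when $B_1 \cap B_2$ is empty the corresponding sumset is empty and contributes measure zero, so no special-casing is needed.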

In the non-trivial regime (2), equality can be attained in (1), for instance by taking $G$ to be the unit circle $\mathbb{R}/\mathbb{Z}$ and $A, B$ to be arcs in that circle (obeying (2)). A bit more generally, if $G$ is an arbitrary connected compact abelian group and $\xi: G \to \mathbb{R}/\mathbb{Z}$ is a non-trivial character (i.e., a continuous homomorphism), then $\xi$ must be surjective (as $\mathbb{R}/\mathbb{Z}$ has no non-trivial connected proper subgroups), and one can take $A = \xi^{-1}(I)$ and $B = \xi^{-1}(J)$ for some arcs $I, J$ in that circle (again choosing the measures of these arcs to obey (2)). The main result of this paper is an inverse theorem that asserts that this is the only way in which equality can occur in (1) (assuming (2)); furthermore, if (1) is close to being satisfied with equality and (2) holds, then $A, B$ must be close (in measure) to an example of the above form $A = \xi^{-1}(I)$, $B = \xi^{-1}(J)$. Actually, for technical reasons (and for the applications we have in mind), it is important to establish an inverse theorem not just for (1), but for the more robust version mentioned earlier (in which the sumset $A+B$ is replaced by the partial sumset consisting of “popular” sums).

Roughly speaking, the idea is as follows. Let us informally call $(A,B)$ a critical pair if (2) holds and the inequality (1) (or more precisely, a robust version of this inequality) is almost obeyed with equality. The notion of a critical pair obeys some useful closure properties. Firstly, it is symmetric in $A$ and $B$, and invariant with respect to translation of either $A$ or $B$. Furthermore, from the submodularity inequality (3), one can show that if $(A,B_1)$ and $(A,B_2)$ are critical pairs (with $\mu(B_1 \cap B_2)$ and $1 - \mu(A) - \mu(B_1 \cup B_2)$ positive), then $(A, B_1 \cap B_2)$ and $(A, B_1 \cup B_2)$ are also critical pairs. (Note that this is consistent with the claim that critical pairs only occur when $A, B$ come from arcs of a circle.) Similarly, from associativity $(A+B)+C = A+(B+C)$, one can show that if $(A,B)$ and $(A+B,C)$ are critical pairs, then so are $(B,C)$ and $(A,B+C)$.

One can combine these closure properties to obtain further ones. For instance, suppose is such that . Then (cheating a little bit), one can show that is also a critical pair, basically because is the union of the , , the are all critical pairs, and the all intersect each other. This argument doesn’t quite work as stated because one has to apply the closure property under union an uncountable number of times, but it turns out that if one works with the robust version of sumsets and uses a random sampling argument to approximate by the union of finitely many of the , then the argument can be made to work.

Using all of these closure properties, it turns out that one can start with an arbitrary critical pair $(A,B)$ and end up with a small set $C$ such that $(A,C)$ and $(kC, C)$ are also critical pairs for all small $k$ (say), where $kC$ is the $k$-fold sumset of $C$. (Intuitively, if $A, B$ are thought of as secretly coming from the pullback of arcs $I, J$ by some character $\xi$, then $C$ should be the pullback of a much shorter arc by the same character.) In particular, $C$ exhibits linear growth, in that $\mu(kC) = k\mu(C)$ for all small $k$. One can now use standard technology from inverse sumset theory to show first that $C$ has a very large Fourier coefficient (and thus is biased with respect to some character $\xi$), and secondly that $C$ is in fact almost of the form $C = \xi^{-1}(K)$ for some arc $K$, from which it is not difficult to conclude similar statements for $A$ and $B$ and thus finish the proof of the inverse theorem.

In order to make the above argument rigorous, one has to be more precise about what the modifier “almost” means in the definition of a critical pair. I chose to do this in the language of “cheap” nonstandard analysis (aka asymptotic analysis), as discussed in this previous blog post; one could also have used the full-strength version of nonstandard analysis, but this does not seem to convey any substantial advantages. (One can also work in a more traditional “non-asymptotic” framework, but this requires one to keep much more careful account of various small error terms and leads to a messier argument.)


[Update, Nov 15: Corrected the attribution of the inequality (1) to Kneser instead of Kemperman.  Thanks to John Griesmer for pointing out the error.]

Filed under: math.CO, paper Tagged: inverse theorems, Kneser's theorem, sum set estimates

Continuous approximations to arithmetic functions

Terence Tao - Thu, 2017-11-09 20:49

Basic objects of study in multiplicative number theory are the arithmetic functions: functions $f: \mathbb{N} \to \mathbb{C}$ from the natural numbers to the complex numbers. Some fundamental examples of such functions include the constant function $1$, the Kronecker delta function $\delta$ (equal to $1$ at $n=1$ and $0$ elsewhere), the logarithm function $L(n) := \log n$, the divisor function $d$, the von Mangoldt function $\Lambda$, and the Möbius function $\mu$.
Given an arithmetic function $f$, we are often interested in statistics such as the summatory function

$$\sum_{n \leq x} f(n), \qquad (1)$$

the logarithmically (or harmonically) weighted summatory function

$$\sum_{n \leq x} \frac{f(n)}{n}, \qquad (2)$$

or the Dirichlet series

$$\mathcal{D}f(s) := \sum_{n=1}^\infty \frac{f(n)}{n^s}.$$
In the latter case, one typically has to first restrict to those complex numbers $s$ whose real part is large enough in order to ensure that the series on the right converges; but in many important cases, one can then extend the Dirichlet series to almost all of the complex plane by analytic continuation. One is also interested in correlations involving additive shifts, such as $\sum_{n \leq x} f(n) g(n+h)$, but these are significantly more difficult to study and cannot be easily estimated by the methods of classical multiplicative number theory.

A key operation on arithmetic functions is that of Dirichlet convolution, which when given two arithmetic functions $f, g$, forms a new arithmetic function $f*g$, defined by the formula

$$f*g(n) := \sum_{d | n} f(d) g\left(\frac{n}{d}\right).$$

Thus for instance $1*1 = d$, $1*\Lambda = L$, $1*\mu = \delta$, and $f*\delta = f$ for any arithmetic function $f$. Dirichlet convolution and Dirichlet series are related by the fundamental formula

$$\mathcal{D}(f*g)(s) = \mathcal{D}f(s)\, \mathcal{D}g(s), \qquad (3)$$

at least when the real part of $s$ is large enough that all sums involved become absolutely convergent (but in practice one can use analytic continuation to extend this identity to most of the complex plane). There is also the identity

$$\mathcal{D}(Lf)(s) = -\frac{d}{ds} \mathcal{D}f(s),$$

at least when the real part of $s$ is large enough to justify interchange of differentiation and summation. As a consequence, many Dirichlet series can be expressed in terms of the Riemann zeta function $\zeta(s) = \mathcal{D}1(s)$, thus for instance

$$\mathcal{D}d(s) = \zeta(s)^2, \quad \mathcal{D}\Lambda(s) = -\frac{\zeta'(s)}{\zeta(s)}, \quad \mathcal{D}\mu(s) = \frac{1}{\zeta(s)}.$$
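The convolution identities mentioned above are easy to check numerically. A small stdlib-Python sketch (standard definitions, not code from the post; the Möbius function is computed by naive trial factorisation):

```python
# Dirichlet convolution f*g(n) = sum over d | n of f(d) g(n/d), together
# with checks of the identities 1*1 = d, 1*mu = delta, and mu*d = 1.

def dirichlet(f, g):
    """Return the Dirichlet convolution of two arithmetic functions."""
    def h(n):
        return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)
    return h

def one(n):
    return 1

def delta(n):                 # multiplicative identity: 1 at n = 1, else 0
    return 1 if n == 1 else 0

def mobius(n):                # Moebius function via trial factorisation
    k = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0      # n is divisible by a square
            k += 1
        d += 1
    if n > 1:
        k += 1
    return (-1) ** k

num_divisors = dirichlet(one, one)                   # 1*1 = d
assert num_divisors(12) == 6                         # divisors 1,2,3,4,6,12
assert all(dirichlet(one, mobius)(n) == delta(n) for n in range(1, 200))
assert all(dirichlet(mobius, num_divisors)(n) == 1 for n in range(1, 200))
print("convolution identities verified")
```

The last assertion reflects $\mu * d = \mu * 1 * 1 = \delta * 1 = 1$, a typical use of associativity of Dirichlet convolution.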
Much of the difficulty of multiplicative number theory can be traced back to the discrete nature of the natural numbers $\mathbb{N} = \{1, 2, 3, \dots\}$, which form a rather complicated abelian semigroup with respect to multiplication (in particular the set of generators is the set of prime numbers). One can obtain a simpler analogue of the subject by working instead with the half-infinite interval $[1, +\infty)$, which is a much simpler abelian semigroup under multiplication (being a one-dimensional Lie semigroup). (I will think of this as a sort of “completion” of $\mathbb{N}$ at the infinite place $\infty$, hence the terminology.) Accordingly, let us define a continuous arithmetic function to be a locally integrable function $f: [1,+\infty) \to \mathbb{C}$. The analogue of the summatory function (1) is then an integral

$$\int_1^x f(t)\ dt,$$

and similarly the analogue of (2) is

$$\int_1^x \frac{f(t)}{t}\ dt.$$

The analogue of the Dirichlet series is the Mellin-type transform

$$\mathcal{D}f(s) := \int_1^\infty \frac{f(t)}{t^s}\ dt,$$

which will be well-defined at least if the real part of $s$ is large enough and if the continuous arithmetic function $f$ does not grow too quickly, and hopefully will also be defined elsewhere in the complex plane by analytic continuation.

For instance, the continuous analogue of the discrete constant function $1$ would be the constant function which maps any $t \in [1,+\infty)$ to $1$, and which we will denote by $\widetilde{1}$ in order to keep it distinct from $1$. The two functions $1$ and $\widetilde{1}$ have approximately similar statistics; for instance one has

$$\sum_{n \leq x} 1 = \lfloor x \rfloor \approx x - 1 = \int_1^x \widetilde{1}(t)\ dt$$

and

$$\sum_{n \leq x} \frac{1}{n} = H_{\lfloor x \rfloor} \approx \log x = \int_1^x \frac{\widetilde{1}(t)}{t}\ dt,$$

where $H_n$ is the $n^{\mathrm{th}}$ harmonic number, and we are deliberately vague as to what the symbol $\approx$ means. Continuing this analogy, we would expect

$$\zeta(s) = \mathcal{D}1(s) \approx \mathcal{D}\widetilde{1}(s) = \frac{1}{s-1},$$

which reflects the fact that $\zeta$ has a simple pole at $s=1$ with residue $1$, and no other poles. Note that the identity $\mathcal{D}\widetilde{1}(s) = \frac{1}{s-1}$ is initially only valid in the region $\mathrm{Re}(s) > 1$, but clearly the right-hand side can be continued analytically to the entire complex plane except for the pole at $s=1$, and so one can define $\mathcal{D}\widetilde{1}$ in this region also.
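The “approximately similar statistics” of $1$ and its continuous analogue can be seen numerically: the harmonic sum and $\log x$ differ (for large $x$) by roughly the Euler-Mascheroni constant $\gamma \approx 0.5772$, a stdlib-Python check:

```python
# Compare the discrete statistics of the constant function 1 with their
# continuous analogues: floor(x) vs. the integral x - 1, and the harmonic
# sum H_x vs. log x (the gap tends to the Euler-Mascheroni constant).
import math

x = 10_000
summatory = sum(1 for n in range(1, x + 1))          # = floor(x) = x here
harmonic = sum(1 / n for n in range(1, x + 1))       # = H_x
print(summatory, "vs", x - 1)
print(harmonic, "vs", math.log(x))
# H_x - log x -> gamma = 0.5772156649..., with error of size about 1/(2x).
assert abs((harmonic - math.log(x)) - 0.5772156649) < 1e-4
```

This also illustrates why the symbol $\approx$ has to stay vague: the two statistics agree only up to bounded (or slowly growing) corrections.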

In a similar vein, the logarithm function $L(n) = \log n$ is approximately similar to the continuous logarithm function $\widetilde{L}(t) := \log t$, giving for instance the crude form

$$\sum_{n \leq x} \log n \approx \int_1^x \log t\ dt = x \log x - x + 1$$

of Stirling’s formula, or the Dirichlet series approximation

$$\mathcal{D}L(s) = -\zeta'(s) \approx \mathcal{D}\widetilde{L}(s) = \frac{1}{(s-1)^2}.$$
The continuous analogue of Dirichlet convolution is multiplicative convolution using the multiplicative Haar measure $\frac{dt}{t}$: given two continuous arithmetic functions $f, g$, one can define their convolution $f*g$ by the formula

$$f*g(x) := \int_1^x f(t)\, g\left(\frac{x}{t}\right)\ \frac{dt}{t}.$$

Thus for instance $\widetilde{1} * \widetilde{1} = \widetilde{L}$. A short computation using Fubini’s theorem shows the analogue

$$\mathcal{D}(f*g)(s) = \mathcal{D}f(s)\, \mathcal{D}g(s) \qquad (4)$$

of (3) whenever the real part of $s$ is large enough that Fubini’s theorem can be justified; similarly, differentiation under the integral sign shows that

$$\mathcal{D}(\widetilde{L}f)(s) = -\frac{d}{ds} \mathcal{D}f(s), \qquad (5)$$

again assuming that the real part of $s$ is large enough that differentiation under the integral sign (or some other tool like this, such as the Cauchy integral formula for derivatives) can be justified.

Direct calculation shows that for any complex number $\alpha$, one has

$$\mathcal{D}(t^{\alpha-1})(s) = \frac{1}{s-\alpha}$$

(at least for the real part of $s$ large enough), and hence by several applications of (5)

$$\mathcal{D}\left(t^{\alpha-1} \frac{(\log t)^{k-1}}{(k-1)!}\right)(s) = \frac{1}{(s-\alpha)^k}$$

for any natural number $k$. This can lead to the following heuristic: if a Dirichlet series $\mathcal{D}f(s)$ behaves like a linear combination of poles, in that

$$\mathcal{D}f(s) \approx \sum_\rho \frac{c_\rho}{(s-\rho)^{k_\rho}}$$

for some set of poles $\rho$ and some coefficients $c_\rho$ and natural numbers $k_\rho$ (where we again are vague as to what $\approx$ means, and how to interpret the sum if the set of poles is infinite), then one should expect the arithmetic function $f$ to behave like the continuous arithmetic function

$$\widetilde{f}(t) := \sum_\rho c_\rho\, t^{\rho-1} \frac{(\log t)^{k_\rho - 1}}{(k_\rho - 1)!}.$$

In particular, if we only have simple poles,

$$\mathcal{D}f(s) \approx \sum_\rho \frac{c_\rho}{s-\rho},$$

then we expect $f$ to behave like the continuous arithmetic function

$$\widetilde{f}(t) = \sum_\rho c_\rho\, t^{\rho-1}.$$
Integrating this from $1$ to $x$, this heuristically suggests an approximation

$$\sum_{n \leq x} f(n) \approx \sum_\rho c_\rho \frac{x^\rho - 1}{\rho}$$

for the summatory function, and similarly

$$\sum_{n \leq x} \frac{f(n)}{n} \approx \sum_\rho c_\rho \frac{x^{\rho-1} - 1}{\rho - 1},$$

with the convention that $\frac{x^\rho - 1}{\rho}$ is $\log x$ when $\rho = 0$, and similarly $\frac{x^{\rho-1} - 1}{\rho - 1}$ is $\log x$ when $\rho = 1$. One can make these sorts of approximations more rigorous by means of Perron’s formula (or one of its variants) combined with the residue theorem, provided that one has good enough control on the relevant Dirichlet series, but we will not pursue these rigorous calculations here. (But see for instance this previous blog post for some examples.)

For instance, using the more refined approximation

$$\zeta(s) \approx \frac{1}{s-1} + \gamma$$

to the zeta function near $s = 1$, we have

$$\mathcal{D}d(s) = \zeta(s)^2 \approx \frac{1}{(s-1)^2} + \frac{2\gamma}{s-1};$$

we would expect that

$$d(n) \approx \log n + 2\gamma,$$

and thus for instance

$$\sum_{n \leq x} d(n) \approx x \log x + (2\gamma - 1) x,$$

which matches what one actually gets from the Dirichlet hyperbola method (see e.g. equation (44) of this previous post).
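This divisor heuristic is easy to test numerically. A stdlib-Python sketch, using the elementary identity $\sum_{n \leq x} d(n) = \sum_{d \leq x} \lfloor x/d \rfloor$ to compute the exact summatory function (the tolerance $10\sqrt{x}$ is a generous illustrative bound; the true error is known to be $O(\sqrt{x})$ and in fact smaller):

```python
# Compare sum_{n<=x} d(n) with the heuristic x log x + (2*gamma - 1) x
# coming from the double pole of zeta(s)^2 at s = 1.
import math

gamma = 0.5772156649015329          # Euler-Mascheroni constant

def divisor_summatory(x):
    """sum_{n <= x} d(n), computed as sum_{d <= x} floor(x/d)."""
    return sum(x // d for d in range(1, x + 1))

x = 100_000
exact = divisor_summatory(x)
heuristic = x * math.log(x) + (2 * gamma - 1) * x
print(exact, heuristic, "error in units of sqrt(x):",
      abs(exact - heuristic) / math.sqrt(x))
assert abs(exact - heuristic) < 10 * math.sqrt(x)
```

The observed error is far smaller than the main terms, which is exactly what the pole heuristic predicts: the poles of $\zeta(s)^2$ at $s=1$ capture the growth, and everything else is lower order.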

Or, noting that $\zeta$ has a simple pole at $s=1$ and assuming simple zeroes elsewhere, the log derivative $-\frac{\zeta'(s)}{\zeta(s)}$ will have simple poles of residue $+1$ at $s=1$ and $-1$ at all the zeroes, leading to the heuristic

$$\mathcal{D}\Lambda(s) = -\frac{\zeta'(s)}{\zeta(s)} \approx \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho},$$

suggesting that $\Lambda$ should behave like the continuous arithmetic function

$$\widetilde{\Lambda}(t) := 1 - \sum_\rho t^{\rho-1},$$

leading for instance to the summatory approximation

$$\sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^\rho - 1}{\rho},$$

which is a heuristic form of the Riemann-von Mangoldt explicit formula (see Exercise 45 of these notes for a rigorous version of this formula).

Exercise 1 Go through some of the other explicit formulae listed at this Wikipedia page and give heuristic justifications for them (up to some lower order terms) by similar calculations to those given above.

Given the “adelic” perspective on number theory, I wonder if there are also $p$-adic analogues of arithmetic functions to which a similar set of heuristics can be applied, perhaps to study sums such as . A key problem here is that there does not seem to be any good interpretation of the expression $n^s$ when $s$ is complex and $n$ is a $p$-adic number, so it is not clear that one can analyse a Dirichlet series $p$-adically. For similar reasons, we don’t have a canonical way to define $\chi(n)$ for $p$-adic $n$ and a Dirichlet character $\chi$ (unless its conductor happens to be a power of $p$), so there doesn’t seem to be much to say in the $p$-aspect either.

Filed under: expository, math.NT Tagged: Dirichlet series, prime number theorem