# Terence Tao

*Updated:* 2 weeks 3 days ago

### Elias M. Stein Prize for New Perspectives in Analysis

The Elias M. Stein Prize for New Perspectives in Analysis is awarded for the development of groundbreaking methods in analysis which demonstrate promise to revitalize established areas or create new opportunities for mathematical discovery. The current prize amount is US$5,000 and the prize is awarded every three years for work published in the preceding six years.

This prize was endowed in 2022 by students, colleagues, and friends of Elias M. Stein (my former advisor) to honor his remarkable legacy in the area of mathematical analysis. Stein, who passed away in 2018, is remembered for identifying many deep principles and methods which transcend their original context, and for opening entirely new areas of research which captivated the attention and imagination of generations of analysts. This prize seeks to recognize mathematicians at any career stage who, like Stein, have found exciting new avenues for mathematical exploration in subjects old or new or made deep insights which demonstrate promise to reshape thinking across areas.

This will be the inaugural year for the prize, and I have agreed to serve on the prize committee. We welcome nominations for the prize, which will be accepted until June 30, 2023, and are seeking a strong and diverse pool of nominees. Nominations (submitted at this link) should include a letter of nomination and a brief citation to be used in the event that the nomination is successful. Alternatively, if you are aware of a strong potential candidate but are not able to provide the nomination yourself, we welcome your suggestion (by private email) along with — if possible — your suggestions of possible nominators.

For questions about this award, please contact the AMS Secretary at secretary@ams.org.

### A Host–Kra F^omega_2-system of order 5 that is not Abramov of order 5, and non-measurability of the inverse theorem for the U^6(F^n_2) norm; The structure of totally disconnected Host–Kra–Ziegler factors, and the inverse theorem for the U^k Gowers uniformity norms on finite abelian groups of bounded torsion

Asgar Jamneshan, Or Shalom, and myself have just uploaded to the arXiv our preprints “A Host–Kra $\mathbb{F}^\omega_2$-system of order 5 that is not Abramov of order 5, and non-measurability of the inverse theorem for the $U^6(\mathbb{F}^n_2)$ norm” and “The structure of totally disconnected Host–Kra–Ziegler factors, and the inverse theorem for the $U^k$ Gowers uniformity norms on finite abelian groups of bounded torsion“. These two papers are both concerned with advancing the inverse theory for the Gowers norms and Gowers-Host-Kra seminorms; the first paper provides a counterexample in this theory (in particular disproving a conjecture of Bergelson, Ziegler and myself), and the second paper gives new positive results in the case when the underlying group is bounded torsion, or the ergodic system is totally disconnected. I discuss the two papers more below the fold.

** — 1. An $\mathbb{F}^\omega_2$-system of order 5 which is not Abramov of order 5 — **

I gave a talk on this paper recently at the IAS; the slides for that talk are available here.

This project can be motivated by the inverse conjecture for the Gowers norm in finite fields, which is now a theorem:

**Theorem 1 (Inverse conjecture for the Gowers norm in finite fields)** Let $p$ be a prime and $k \geq 1$. Suppose that $f: \mathbb{F}^n_p \to \mathbb{C}$ is a one-bounded function with a lower bound $\|f\|_{U^{k+1}(\mathbb{F}^n_p)} \geq \delta > 0$ on the Gowers uniformity norm. Then there exists a (non-classical) polynomial $P: \mathbb{F}^n_p \to \mathbb{R}/\mathbb{Z}$ of degree at most $k$ such that $|\mathbb{E}_{x \in \mathbb{F}^n_p} f(x) e(-P(x))| \gg_{p,k,\delta} 1$.

This is now known for all $p$ and $k$ (see this paper of Ziegler and myself for the first proof of the general case, and this paper of Milicevic for the most recent developments concerning quantitative bounds), although initial results focused on either small values of $k$, or the “high characteristic” case when $p$ is large compared to $k$. One approach to this theorem proceeds via ergodic theory. Indeed it was observed in this previous paper of Ziegler and myself that for a given choice of $p$ and $k$, the above theorem follows from the following ergodic analogue:
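As a toy numerical illustration of the shape of Theorem 1 (not taken from any of the papers discussed here; the quadratic $Q$ is an arbitrary sample choice), one can verify by brute force in Python that a quadratic polynomial phase over $\mathbb{F}_2^4$ has full $U^3$ norm, while its $U^2$ norm is small:

```python
from itertools import product

n = 4                      # work over F_2^4, so N = 16 points
N = 1 << n

def Q(x):
    # a sample classical quadratic polynomial Q(x) = x_1 x_2 + x_3 x_4 over F_2
    b = [(x >> i) & 1 for i in range(n)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

f = [1.0 - 2 * Q(x) for x in range(N)]   # the polynomial phase (-1)^{Q(x)}

def gowers_norm(f, k):
    # ||f||_{U^k}^{2^k} = E_{x, h_1..h_k} prod_{omega in {0,1}^k} f(x + omega.h);
    # addition in F_2^n is XOR, and f is real-valued so conjugates are ignored
    total = 0.0
    for x in range(N):
        for hs in product(range(N), repeat=k):
            term = 1.0
            for omega in product((0, 1), repeat=k):
                y = x
                for eps, h in zip(omega, hs):
                    if eps:
                        y ^= h
                term *= f[y]
            total += term
    return (total / N ** (k + 1)) ** (1.0 / (1 << k))

print(gowers_norm(f, 2))   # ~0.5: the quadratic phase has small U^2 norm
print(gowers_norm(f, 3))   # ~1.0: but full U^3 norm
```

The $U^3$ value is $1$ because the third additive derivative of a quadratic vanishes identically, which is exactly the extreme case that the inverse theorem is designed to detect.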

**Conjecture 2 (Inverse conjecture for the Gowers-Host-Kra seminorm in finite fields)** Let $p$ be a prime and $k \geq 1$. Suppose that $f \in L^\infty(X)$ with $X$ an ergodic $\mathbb{F}^\omega_p$-system with positive Gowers-Host-Kra seminorm $\|f\|_{U^{k+1}(X)} > 0$ (see for instance this previous post for a definition). Then there exists a measurable polynomial $P: X \to \mathbb{R}/\mathbb{Z}$ of degree at most $k$ such that $e(P)$ has a non-zero inner product with $f$. (In the language of ergodic theory: every ergodic $\mathbb{F}^\omega_p$-system of order $k$ is an Abramov system of order $k$.)

The implication proceeds by a correspondence principle analogous to the Furstenberg correspondence principle developed in that paper (see also this paper of Towsner for a closely related principle, and this paper of Jamneshan and myself for a refinement). In a paper with Bergelson and Ziegler, we were able to establish Conjecture 2 in the “high characteristic” case, thus also proving Theorem 1 in this regime, and conjectured that Conjecture 2 was in fact true for all $p$ and $k$. This was recently verified in a slightly larger range by Candela, Gonzalez-Sanchez, and Szegedy.

Even though Theorem 1 is now known in full generality by other methods, there are still combinatorial reasons for investigating Conjecture 2. One of these is that the derivation of Theorem 1 from Conjecture 2 in fact gives additional control on the polynomial $P$ produced by Theorem 1, namely that it is in some sense “measurable in the sigma-algebra generated by $f$” (basically because the ergodic theory polynomial produced by Conjecture 2 is also measurable in $X$, as opposed to merely being measurable in an extension of $X$). What this means in the finitary setting of $\mathbb{F}^n_p$ is a bit tricky to write down precisely (since the naive sigma-algebra generated by the translates of $f$ will most likely be the discrete sigma-algebra), but roughly speaking it means that $P$ can be approximated to arbitrary accuracy by functions of boundedly many (random) translates of $f$. This can be interpreted in a complexity theory sense by stating that Theorem 1 can be made “algorithmic” in a “probabilistic bounded time oracle” or “local list decoding” sense which we will not make precise here.
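To give a toy flavour of how shifts of a function can expose its polynomial structure, here is a minimal Python check (the quadratic $Q$ is an assumed sample, not from the paper): for a quadratic phase, a single multiplicative derivative $f(x+h)\overline{f(x)}$ is already an affine-linear phase, so boundedly many translates of $f$ carry lower-degree information about it.

```python
n = 4
N = 1 << n

def Q(x):
    # a sample classical quadratic polynomial over F_2^4 (illustrative choice)
    b = [(x >> i) & 1 for i in range(n)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

f = [1 - 2 * Q(x) for x in range(N)]     # f = (-1)^Q

# every multiplicative derivative g_h(x) = f(x+h) f(x) should be an
# affine-linear phase, i.e. satisfy g(x) g(y) g(x+y) g(0) = 1 for all x, y
ok = True
for h in range(N):
    g = [f[x ^ h] * f[x] for x in range(N)]   # multiplicative derivative of f
    for x in range(N):
        for y in range(N):
            if g[x] * g[y] * g[x ^ y] * g[0] != 1:
                ok = False
print(ok)   # True: each derivative of the quadratic phase is affine-linear
```

This degree-lowering under differencing is the basic mechanism behind both the Gowers norms and the measurability discussion above.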

The main result of this paper is

**Theorem 3** Conjecture 2 fails for $p = 2$ and $k = 5$. In fact the “measurable inverse theorem” alluded to above also fails in this case.

Informally, this means that for large $n$, we can find one-bounded “pseudo-quintic” functions $f: \mathbb{F}^n_2 \to \mathbb{C}$ with large $U^6(\mathbb{F}^n_2)$ norm, which then must necessarily correlate with at least one quintic by Theorem 1, but such that none of these quintics can be approximated to high accuracy by functions of (random) shifts of $f$. Roughly speaking, this means that the inverse theorem cannot be made locally algorithmic (though a Goldreich-Levin type result of polynomial time algorithmic inverse theory may still be possible, as is already known for lower order norms; see this recent paper of Kim, Li and Tidor for further discussion).

The way we arrived at this theorem was by (morally) reducing matters to understanding a certain “finite nilspace cohomology problem”. In the end it boiled down to locating a certain function from a -element set to a two-element set which was a “strongly -homogeneous cocycle” but not a “coboundary” (these terms are defined precisely in the paper). This strongly -homogeneous cocycle can be expressed in terms of a simpler function that takes values on a -element space . The task of locating this function turned out to be one that was within the range of our (somewhat rudimentary) SAGE computation abilities (mostly involving computing the Smith normal form of some reasonably large integer matrices), but the counterexample functions this produced were initially somewhat opaque to us. After cleaning up these functions by hand (by subtracting off various “coboundaries”), we eventually found versions of these functions which were nice enough that we could verify all the claims needed in a purely human-readable fashion, without any further computer assistance. As a consequence, we can now describe the pseudo-quintic explicitly, though it is safe to say we would not have been able to come up with this example without the initial computer search, and we don’t currently have a broader conceptual understanding of what could potentially generate such counterexamples. The function takes the form

where $Q$ is a randomly chosen (classical) quadratic polynomial, $R$ is a randomly chosen (non-classical) cubic polynomial, and $P$ is a randomly chosen (non-classical) quintic polynomial. This function correlates with $e(P)$ and has a large $U^6$ norm, but this quintic is “non-measurable” in the sense that it cannot be recovered from $f$ and its shifts. The quadratic polynomial turns out to be measurable, as is the double of the cubic, but in order to recover the cubic one needs to apply a “square root” to the quadratic to obtain a candidate for the cubic, which can then be used to reconstruct the quintic.

** — 2. Structure of totally disconnected systems — **

Despite the above negative result, in our other paper we are able to get a weak version of Conjecture 2, that also extends to actions of bounded-torsion abelian groups:

**Theorem 4 (Weak inverse conjecture for the Gowers-Host-Kra seminorm in bounded torsion groups)** Let $\Gamma$ be an $m$-torsion abelian group for some $m \geq 1$, and let $k \geq 1$. Suppose that $f \in L^\infty(X)$ with $X$ an ergodic $\Gamma$-system with positive Gowers-Host-Kra seminorm $\|f\|_{U^{k+1}(X)} > 0$. Then, after lifting the action to a torsion-free abelian group, there exists a measurable polynomial of degree at most $O_{k,m}(1)$ defined on an *extension* of $X$ which has a non-zero inner product with $f$.

Combining this with the correspondence principle and some additional tools, we obtain a weak version of Theorem 1 that also extends to bounded-torsion groups:

**Theorem 5 (Inverse conjecture for the Gowers norm in bounded torsion groups)** Let $G$ be a finite abelian $m$-torsion group for some $m \geq 1$, and let $k \geq 1$ and $\delta > 0$. Suppose that $f: G \to \mathbb{C}$ is a one-bounded function with $\|f\|_{U^{k+1}(G)} \geq \delta$. Then there exists a (non-classical) polynomial $P: G \to \mathbb{R}/\mathbb{Z}$ of degree at most $O_{k,m}(1)$ such that $|\mathbb{E}_{x \in G} f(x) e(-P(x))| \gg_{k,m,\delta} 1$.

The degree bound produced by our arguments is polynomial in $k$, but we conjecture that it should just be $k$.

The way Theorem 4 (and hence Theorem 5) is proven is as follows. The now-standard machinery of Host and Kra (as discussed for instance in their book) allows us to reduce to a system of order $k$, which is a certain tower of extensions of compact abelian structure groups by various cocycles. In the $m$-torsion case, standard theory allows us to show that these structure groups are also $m$-torsion, hence totally disconnected. So it would now suffice to understand the action of torsion-free groups on totally disconnected systems $X$. For the purposes of proving Theorem 4 we have the freedom to extend $X$ as we please, and we take advantage of this freedom by “extending by radicals”, in the sense that whenever we locate a polynomial $P$ in the system, we adjoin to it roots of that polynomial (i.e., solutions $Q$ to $rQ = P$ for various $r \geq 1$) that are polynomials of the *same* degree as $P$; this is usually not possible to do in the original system $X$, but can always be done in a suitable extension, analogously to how roots do not always exist in a given field, but can always be located in some extension of that field. After applying this process countably many times it turns out that we can arrive at a system which is divisible in the sense that polynomials of any degree have roots of any order that are of the same degree. In other words, the group of polynomials of any fixed degree is a divisible abelian group, and thus injective in the category of such groups. This makes many of the short exact sequences that show up in the theory split automatically, and greatly simplifies the cohomological issues one encounters, to the point where all the cocycles mentioned previously can now be “straightened” into polynomials of the expected degree (or, in the language of ergodic theory, this extension is a Weyl system of order $k$, and hence also Abramov of order $k$). This is sufficient to establish Theorem 4.

To get Theorem 5, we ran into a technical obstacle arising from the fact that the remainder map $x \mapsto x \bmod p$ is not a polynomial mod $m$ if $m$ is not itself a prime power. To resolve this, we established ergodic theory analogues of the Sylow decomposition of abelian $m$-torsion groups into $p$-groups, as well as of the Schur-Zassenhaus theorem. Roughly speaking, the upshot of these theorems is that any ergodic $\Gamma$-system, with $\Gamma$ an $m$-torsion abelian group, can be split as the “direct sum” of ergodic $\Gamma_p$-systems for the primes $p$ dividing $m$, where $\Gamma_p$ is the subgroup of $\Gamma$ consisting of those elements whose order is a power of $p$. This allows us to reduce to the case when $m$ is a prime power without too much difficulty.
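The obstruction mentioned above can be seen concretely for classical polynomials with a finite check in Python (an illustration, not the paper's setting): since $x(x-1)(x-2)$ vanishes identically mod $6$, every polynomial function on $\mathbb{Z}/6$ induced by an integer polynomial is already induced by one of degree less than $6$ with coefficients mod $6$, so a brute-force enumeration is exhaustive.

```python
from itertools import product

m, p = 6, 2
target = [x % p for x in range(m)]   # the remainder map x -> x mod 2 on Z/6

# enumerate all polynomial functions on Z/6: coefficients mod 6, degree < 6
# (this is exhaustive since x(x-1)(x-2) is identically 0 mod 6)
found = any(
    [sum(c * x ** i for i, c in enumerate(coeffs)) % m for x in range(m)] == target
    for coeffs in product(range(m), repeat=m)
)
print(found)   # False: x mod 2 is not a (classical) polynomial mod 6
```

The underlying reason is visible via the Chinese remainder theorem: any polynomial $P$ satisfies $P(0) \equiv P(3) \pmod 3$, but the target map takes the values $0$ and $1$ at those points.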

In fact, the above analysis gives stronger structural classifications of totally disconnected systems (in which the acting group is torsion-free). Weyl systems can also be interpreted as translational systems $G/\Lambda$, where $G$ is a nilpotent Polish group and $\Lambda$ is a closed cocompact subgroup, with the action being given by left-translation by various elements of $G$. Perhaps the most famous examples of such translational systems are nilmanifolds, but in this setting where the acting group is not finitely generated, it turns out to be necessary to consider more general translational systems, in which $G$ need not be a Lie group (or even locally compact), and $\Lambda$ need not be discrete. Our previous results then describe totally disconnected systems as *factors* of such translational systems. One natural candidate for such factors are the *double coset systems* $K \backslash G / \Lambda$ formed by quotienting out by the action of another closed group $K$ that is normalized by the action of the acting group. We were able to show that all totally disconnected systems with torsion-free acting group had this double coset structure. This turned out to be surprisingly subtle at a technical level, for at least two reasons. Firstly, after locating the closed group $K$ (which in general is Polish, but not compact or even locally compact), it was not immediately obvious that $K \backslash G / \Lambda$ was itself a Polish space (this amounts to the $K$-orbits of a closed set still being closed), and it was also not obvious that this double coset space had a good nilspace structure (in particular, that the factor map from $G/\Lambda$ to $K \backslash G / \Lambda$ is a nilspace fibration). This latter issue we were able to resolve with a tool kindly shared with us from a forthcoming work by Candela, Gonzalez-Sanchez, and Szegedy, who observed that the nilspace fibration property was available if the quotient groups obeyed an algebraic “groupable” axiom which we were able to verify in this case (they also have counterexamples showing that the nilspace structure can break down without this axiom).

There was however one further rather annoying complication. In order to fully obtain the identification of our system with a double coset system, we needed the equivalence

### Mathematics for Humanity initiative – application deadline extended to June 1

The International Center for Mathematical Sciences in Edinburgh recently launched its “Mathematics for Humanity” initiative with a call for research activity proposals (ranging from small collaborations to courses, workshops and conferences) aimed at using mathematics to contribute to the betterment of humanity. (I have agreed to serve on the scientific committee to evaluate these proposals.) We launched this initiative in January and initially set the deadline for April 15, but several people who had expressed interest felt that this was insufficient time to prepare a quality proposal, so we have now extended the deadline to June 1, and welcome further applications.

See also this Mathstodon post from fellow committee member John Baez last year where he solicited some preliminary suggestions for proposals, and my previous Mathstodon announcement of this programme.

### Would it be possible to create a tool to automatically diagram papers?

This is a somewhat experimental and speculative post. This week I was at the IPAM workshop on machine assisted proof that I was one of the organizers of. We had an interesting and diverse range of talks, both from computer scientists presenting the latest available tools to formally verify proofs or to automate various aspects of proof writing or proof discovery, and from mathematicians who described their experiences using these tools to solve their research problems. One can find the videos of these talks on the IPAM youtube channel; I also posted about the talks during the event on my Mathstodon account. I am of course not the most objective person to judge, but from the feedback I received it seems that the conference was able to successfully achieve its aim of bringing together the different communities interested in this topic.

As a result of the conference I started thinking about what possible computer tools might now be developed that could be of broad use to mathematicians, particularly those who do not have prior expertise with the finer aspects of writing code or installing software. One idea that came to mind was a potential tool that could take, say, an arXiv preprint as input, and return some sort of diagram detailing the logical flow of the main theorems and lemmas in the paper. This is currently done by hand by authors in some, but not all, papers (and can often also be automatically generated from formally verified proofs, as seen for instance in the graphic accompanying the IPAM workshop, or this diagram generated from Massot’s blueprint software from a manually inputted set of theorems and dependencies as a precursor to formalization of a proof [thanks to Thomas Bloom for this example]). For instance, here is a diagram that my co-author Rachel Greenfeld and I drew for a recent paper:

This particular diagram incorporated a number of subjective design choices regarding layout, which results were deemed important enough to require a dedicated box (as opposed to being viewed as a mere tool to get from one box to another), how to describe each of these results, and how to colour-code them. This is still a very human-intensive task (and my co-author and I went through several iterations of this particular diagram with much back-and-forth discussion until we were both satisfied). But I could see the possibility of creating an automatic tool that could provide an initial “first approximation” to such a diagram, which a human user could then modify as they see fit (perhaps using some convenient GUI interface, for instance some variant of the Quiver online tool for drawing commutative diagrams in LaTeX).

As a crude first attempt at automatically generating such a diagram, one could perhaps develop a tool to scrape a LaTeX file to locate all the instances of the theorem environment in the text (i.e., all the formally identified lemmas, corollaries, and so forth), and for each such theorem, locate a proof environment instance that looks like it is associated to that theorem (doing this with reasonable accuracy may require a small amount of machine learning, though perhaps one could just hope that proximity of the proof environment instance to the theorem environment instance suffices in many cases). Then identify all the references within that proof environment to other theorems to start building the tree of implications, which one could then depict in a diagram such as the above. Such an approach would likely miss many of the implications; for instance, many lemmas might not be proven using a formal proof environment, but instead by some more free-flowing text discussion, or perhaps a one line justification such as “By combining Lemma 3.4 and Proposition 3.6, we conclude”. Also, some references to other results in the paper might not proceed by direct citation, but by more indirect justifications such as “invoking the previous lemma, we obtain” or “by repeating the arguments in Section 3, we have”. Still, even such a crude diagram might still be helpful, both as a starting point for authors to make an improved diagram, or for a student trying to understand a lengthy paper to get some initial idea of the logical structure.
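The scraping step described here can be sketched in a few lines of Python. Everything below (the environment names, the nearest-preceding-statement heuristic, the sample snippet) is a hypothetical illustration rather than a working tool:

```python
import re
from collections import defaultdict

# a toy LaTeX fragment to run the sketch on
SAMPLE = r"""
\begin{lemma}\label{lem:a} First lemma. \end{lemma}
\begin{proof} Direct. \end{proof}
\begin{theorem}\label{thm:main} Main result. \end{theorem}
\begin{proof} Combine Lemma~\ref{lem:a} with a computation. \end{proof}
"""

THEOREM_ENVS = ("theorem", "lemma", "proposition", "corollary")

def dependency_edges(tex):
    # locate each theorem-like environment with a \label, attribute each proof
    # to the nearest preceding statement, and record every \ref inside that
    # proof as an implication edge
    env_pat = re.compile(
        r"\\begin\{(%s)\}\s*\\label\{([^}]*)\}" % "|".join(THEOREM_ENVS))
    proof_pat = re.compile(r"\\begin\{proof\}(.*?)\\end\{proof\}", re.S)
    statements = [(m.end(), m.group(2)) for m in env_pat.finditer(tex)]
    edges = defaultdict(set)
    for m in proof_pat.finditer(tex):
        owners = [label for pos, label in statements if pos < m.start()]
        if not owners:
            continue
        owner = owners[-1]          # proximity heuristic
        for ref in re.findall(r"\\ref\{([^}]*)\}", m.group(1)):
            edges[owner].add(ref)
    return dict(edges)

print(dependency_edges(SAMPLE))   # {'thm:main': {'lem:a'}}
```

The resulting edge list could then be fed to any graph-layout tool to produce the first-approximation diagram; the free-text citations mentioned above would of course slip through this regex-only approach.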

More advanced features might be to try to use more of the text of the paper to assign some measure of importance to individual results (and then weight the diagram correspondingly to highlight the more important results), to try to give each result a natural language description, and to somehow capture key statements that are not neatly encapsulated in a theorem environment instance, but I would imagine that such tasks should be deferred until some cruder proof-of-concept prototype can be demonstrated.

Anyway, I would be interested to hear opinions about whether this idea (or some modification thereof) is (a) actually feasible with current technology (or better yet, already exists in some form), and (b) of interest to research mathematicians.

### Infinite partial sumsets in the primes

Tamar Ziegler and I have just uploaded to the arXiv our paper “Infinite partial sumsets in the primes“. This is a short paper inspired by a recent result of Kra, Moreira, Richter, and Robertson (discussed for instance in this Quanta article from last December) showing that for any set $A$ of natural numbers of positive upper density, there exists a sequence $b_1 < b_2 < b_3 < \cdots$ of natural numbers and a shift $t$ such that $b_i + b_j + t \in A$ for all $i < j$ (this answers a question of Erdős). In view of the “transference principle“, it is then plausible to ask whether the same result holds if $A$ is replaced by the primes. We can show the following results:

**Theorem 1**

- (i) If the Hardy-Littlewood prime tuples conjecture (or the weaker conjecture of Dickson) is true, then there exists an increasing sequence $p_1 < p_2 < p_3 < \cdots$ of primes such that $p_i + p_j - 1$ is prime for all $i < j$.
- (ii) Unconditionally, there exist increasing sequences $a_1 < a_2 < a_3 < \cdots$ and $b_1 < b_2 < b_3 < \cdots$ of natural numbers such that $a_i + b_j$ is prime for all $i < j$.
- (iii) These conclusions fail if “prime” is replaced by “positive (relative) density subset of the primes” (even if the density is equal to 1).

We remark that it was shown by Balog that there (unconditionally) exist arbitrarily long but *finite* sequences $p_1 < \cdots < p_k$ of primes such that $\frac{p_i + p_j}{2}$ is prime for all $i < j$. (This result can also be recovered from the later results of Ben Green, myself, and Tamar Ziegler.) Also, it had previously been shown by Granville that on the Hardy-Littlewood prime tuples conjecture, there existed increasing sequences $a_1 < a_2 < a_3 < \cdots$ and $b_1 < b_2 < b_3 < \cdots$ of natural numbers such that $a_i + b_j$ is prime for all $i, j$.
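Short Balog-type configurations are easy to find by brute force. Here is a hedged Python sketch (assuming the configuration in question is a tuple of odd primes all of whose pairwise averages are prime) that locates a length-four example among the primes below 100:

```python
from itertools import combinations

def is_prime(n):
    # trial division; fine for the small numbers used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [q for q in range(3, 100) if is_prime(q)]

def all_averages_prime(tup):
    # odd + odd is even, so the averages below are integers
    return all(is_prime((a + b) // 2) for a, b in combinations(tup, 2))

example = next(t for t in combinations(primes, 4) if all_averages_prime(t))
print(example)
```

For instance $\{3, 11, 23, 71\}$ is such a tuple (averages $7, 13, 37, 17, 41, 47$); the search above returns the first one in lexicographic order.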

The conclusion of (i) is stronger than that of (ii) (which is of course consistent with the former being conditional and the latter unconditional). The conclusion of (ii) also implies the well-known theorem of Maynard that for any given $k$, there exist infinitely many $k$-tuples of primes of bounded diameter, and indeed our proof of (ii) uses the same “Maynard sieve” that powers the proof of that theorem (though we use a formulation of that sieve closer to the one in this blog post of mine). Indeed, the failure of (iii) basically arises from the failure of Maynard’s theorem for dense subsets of primes, simply by removing those clusters of primes that are unusually closely spaced.

Our proof of (i) was initially inspired by the topological dynamics methods used by Kra, Moreira, Richter, and Robertson, but we managed to condense it to a purely elementary argument (taking up only half a page) that makes no reference to topological dynamics and builds up the sequence recursively by repeated application of the prime tuples conjecture.
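A finite truncation of such a construction can be carried out by brute force. The Python sketch below (assuming the pattern in conclusion (i) is that $p_i + p_j - 1$ be prime, which keeps the relevant sums odd) finds a length-four example; extending it indefinitely is exactly what requires the prime tuples conjecture:

```python
from itertools import combinations

def is_prime(n):
    # trial division; adequate for this small search
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [q for q in range(2, 200) if is_prime(q)]

# find four primes p_1 < p_2 < p_3 < p_4 with p_i + p_j - 1 prime for all i < j
example = next(
    t for t in combinations(primes, 4)
    if all(is_prime(a + b - 1) for a, b in combinations(t, 2))
)
print(example)
```

Note that simple congruence obstructions already shape such tuples (e.g. no tuple of length three can contain $3$, since the constraints force a sum divisible by $3$), which is the kind of local information the tuples conjecture says is the only obstruction.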

The proof of (ii) takes up the majority of the paper. It is easiest to phrase the argument in terms of “prime-producing tuples” – tuples $(h_1,\ldots,h_k)$ for which there are infinitely many $n$ with $n+h_1,\ldots,n+h_k$ all prime. Maynard’s theorem is equivalent to the existence of arbitrarily long prime-producing tuples; our theorem is equivalent to the stronger assertion that there exists an infinite sequence $h_1, h_2, h_3, \ldots$ such that every initial segment $(h_1,\ldots,h_k)$ is prime-producing. The main new tool for achieving this is the following cute measure-theoretic lemma of Bergelson:

**Lemma 2 (Bergelson intersectivity lemma)** Let $E_1, E_2, E_3, \ldots$ be subsets of a probability space $(X, \mu)$ of measure uniformly bounded away from zero, thus $\inf_n \mu(E_n) > 0$. Then there exists a subsequence $E_{n_1}, E_{n_2}, E_{n_3}, \ldots$ such that $\mu(E_{n_1} \cap \cdots \cap E_{n_k}) > 0$ for all $k$.

This lemma has a short proof, though not an entirely obvious one. Firstly, by deleting a null set from $X$, one can assume that all finite intersections $E_{n_1} \cap \cdots \cap E_{n_k}$ are either positive measure or empty. Secondly, a routine application of Fatou’s lemma shows that the maximal function $\limsup_{N \to \infty} \frac{1}{N} \sum_{n=1}^N 1_{E_n}$ has a positive integral, hence must be positive at some point $x_0$. Thus there is a subsequence $E_{n_1}, E_{n_2}, E_{n_3}, \ldots$ whose finite intersections all contain $x_0$, thus have positive measure as desired by the previous reduction.
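The averaging step has a transparent finite analogue, which the following Python toy model illustrates (random events on a finite probability space; this is a caricature, not the lemma's actual infinite setting):

```python
import random

random.seed(0)
N, m, size = 1000, 200, 300        # 200 events of measure delta = 0.3 each
delta = size / N

events = [set(random.sample(range(N), size)) for _ in range(m)]

# averaging: the mean of (1/m) sum_i 1_{E_i} over the space is exactly delta,
# so some point x0 lies in at least delta*m of the events -- the finite
# analogue of the maximal function being positive somewhere
counts = [sum(x in E for E in events) for x in range(N)]
x0 = max(range(N), key=lambda x: counts[x])
assert counts[x0] >= delta * m

# the subsequence of events containing x0 has every finite intersection
# nonempty (each one contains x0), mirroring the conclusion of the lemma
subseq = [E for E in events if x0 in E]
print(len(subseq) >= delta * m, all(x0 in E for E in subseq))
```

In the infinite-measure-theoretic setting the pigeonhole step is replaced by Fatou's lemma, but the mechanism is the same: a point seen by a positive proportion of the events anchors the desired subsequence.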

It turns out that one cannot quite combine the standard Maynard sieve with the intersectivity lemma because the events that show up (which roughly correspond to the event that $n + h_i$ is prime for some random number $n$ (with a well-chosen probability distribution) and some shift $h_i$) have their probability going to zero, rather than being uniformly bounded from below. To get around this, we borrow an idea from a paper of Banks, Freiberg, and Maynard, and group the shifts $h_i$ into various clusters $h_{i_1}, \ldots, h_{i_k}$, chosen in such a way that the probability that *at least one* of $n + h_{i_1}, \ldots, n + h_{i_k}$ is prime is bounded uniformly from below. One then applies the Bergelson intersectivity lemma to those events and uses many applications of the pigeonhole principle to conclude.