Terence Tao

Syndicated content: What's new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao
Updated: 5 weeks 4 days ago

Maryam Mirzakhani New Frontiers Prize

Tue, 2019-12-10 21:17

Just a short post to announce that nominations are now open for the Maryam Mirzakhani New Frontiers Prize, a newly announced annual $50,000 award from the Breakthrough Prize Foundation, presented to early-career women mathematicians who have completed their PhDs within the past two years, in recognition of outstanding research achievement.  (I will be serving on the prize committee.)  Nominations for this (and the other Breakthrough Prizes) can be made at this page.


Eigenvectors from Eigenvalues: a survey of a basic identity in linear algebra

Wed, 2019-12-04 04:20

Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv a completely rewritten version of our previous paper, now titled “Eigenvectors from Eigenvalues: a survey of a basic identity in linear algebra“. This paper is now a survey of the various literature surrounding the following basic identity in linear algebra, which we propose to call the eigenvector-eigenvalue identity:

Theorem 1 (Eigenvector-eigenvalue identity) Let $A$ be an $n \times n$ Hermitian matrix, with eigenvalues $\lambda_1(A),\dots,\lambda_n(A)$. Let $v_i$ be a unit eigenvector corresponding to the eigenvalue $\lambda_i(A)$, and let $v_{i,j}$ be the $j^{th}$ component of $v_i$. Then

$$|v_{i,j}|^2 \prod_{k=1; k \neq i}^{n} (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j))$$

where $M_j$ is the $(n-1) \times (n-1)$ Hermitian matrix formed by deleting the $j^{th}$ row and column from $A$.
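
Since every quantity in the identity is readily computable, it is easy to test numerically. The following is a minimal sketch (illustrative only, not from the paper; the matrix size and the choice of indices are arbitrary) that verifies the identity for a random Hermitian matrix using numpy:

```python
import numpy as np

# Sketch: numerically verify the eigenvector-eigenvalue identity for a random
# Hermitian matrix.  The size n and the indices (i, j) are arbitrary choices.
n = 6
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                      # Hermitian matrix

lam, V = np.linalg.eigh(A)                    # V[:, i] is a unit eigenvector v_i

i, j = 2, 4
M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)  # delete j-th row and column
mu = np.linalg.eigvalsh(M_j)                  # eigenvalues of the minor M_j

lhs = abs(V[j, i]) ** 2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
rhs = np.prod(lam[i] - mu)
print(lhs, rhs)                               # agree up to rounding error
```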

When we posted the first version of this paper, we were unaware of previous appearances of this identity in the literature; a related identity had been used by Erdős-Schlein-Yau and by myself and Van Vu for applications to random matrix theory, but to our knowledge this specific identity appeared to be new. Even two months after our preprint first appeared on the arXiv in August, we had only learned of one other place in the literature where the identity showed up (by Forrester and Zhang, who also cite an earlier paper of Baryshnikov).

The situation changed rather dramatically with the publication of a popular science article in Quanta on this identity in November, which gave this result significantly more exposure. Within a few weeks we were informed (through private communication, online discussion, and exploration of the citation tree around the references we were alerted to) of over three dozen places where the identity, or some other closely related identity, had previously appeared in the literature, in such areas as numerical linear algebra, various aspects of graph theory (graph reconstruction, chemical graph theory, and walks on graphs), inverse eigenvalue problems, random matrix theory, and neutrino physics. As a consequence, we have decided to completely rewrite our article in order to collate this crowdsourced information, and survey the history of this identity, all the known proofs (we collect seven distinct ways to prove the identity (or generalisations thereof)), and all the applications of it that we are currently aware of. The citation graph of the literature that this ad hoc crowdsourcing effort produced is only very weakly connected, which we found surprising.

The earliest explicit appearance of the eigenvector-eigenvalue identity we are now aware of is in a 1966 paper of Thompson, although this paper is only cited (directly or indirectly) by a fraction of the known literature, and also there is a precursor identity of Löwner from 1934 that can be shown to imply the identity as a limiting case. At the end of the paper we speculate on some possible reasons why this identity only achieved a modest amount of recognition and dissemination prior to the November 2019 Quanta article.


An uncountable Moore-Schmidt theorem

Thu, 2019-11-28 16:47

Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let $X = (X, {\mathcal X}, \mu)$ be a probability space, and let $\mathrm{Aut}(X, {\mathcal X}, \mu)$ be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps $T: X \to X$ that preserve the measure $\mu$: $T_* \mu = \mu$. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as concrete measurable maps, and measurable spaces as concrete measurable spaces. (One could also call $X = (X, {\mathcal X}, \mu)$ a concrete probability space, but we will not need to do so here as we will not be working explicitly with abstract probability spaces.)

Let $\Gamma$ be a discrete group. A (concrete) measure-preserving action of $\Gamma$ on $X$ is a group homomorphism $\gamma \mapsto T^\gamma$ from $\Gamma$ to $\mathrm{Aut}(X, {\mathcal X}, \mu)$, thus $T^1$ is the identity map and $T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}$ for all $\gamma_1, \gamma_2 \in \Gamma$. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when $\Gamma$ is the integers (with the additive group law).

Let $K = (K,+)$ be a compact Hausdorff abelian group, which we can endow with the Borel $\sigma$-algebra ${\mathcal B}(K)$. A (concrete measurable) $K$-cocycle is a collection $\sigma = (\sigma_\gamma)_{\gamma \in \Gamma}$ of concrete measurable maps $\sigma_\gamma: X \to K$ obeying the cocycle equation

$$\sigma_{\gamma_1 \gamma_2}(x) = \sigma_{\gamma_1}(T^{\gamma_2} x) + \sigma_{\gamma_2}(x)$$

for $\mu$-almost every $x \in X$. (Here we are glossing over a measure-theoretic subtlety that we will return to later in this post – see if you can spot it before then!) Cocycles arise naturally in the theory of group extensions of dynamical systems; in particular (and ignoring the aforementioned subtlety), each cocycle induces a measure-preserving action on $X \times K$ (which we endow with the product of $\mu$ with the Haar probability measure on $K$), defined by

$$T^\gamma(x, k) := (T^\gamma x, k + \sigma_\gamma(x)).$$

This connection with group extensions was the original motivation for our study of measurable cohomology, but is not the focus of the current paper.

A special case of a $K$-valued cocycle is a (concrete measurable) $K$-valued coboundary, in which $\sigma_\gamma$ for each $\gamma \in \Gamma$ takes the special form

$$\sigma_\gamma(x) := F(T^\gamma x) - F(x)$$

for $\mu$-almost every $x \in X$, where $F: X \to K$ is some concrete measurable function; note that (ignoring the aforementioned subtlety), every function of this form is automatically a concrete measurable $K$-valued cocycle. One of the first basic questions in measurable cohomology is to try to characterize which $K$-valued cocycles are in fact $K$-valued coboundaries. This is a difficult question in general. However, there is a general result of Moore and Schmidt that at least allows one to reduce to the model case when $K$ is the unit circle ${\mathbb T} = {\mathbb R}/{\mathbb Z}$, by taking advantage of the Pontryagin dual group $\hat K$ of characters $\lambda: K \to {\mathbb T}$, that is to say the collection of continuous homomorphisms to the unit circle. More precisely, we have
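
To see the cocycle and coboundary definitions in action in the simplest situation, here is a minimal numerical sketch (purely illustrative; the rotation number and the function F below are hypothetical choices, not from the paper) for the action of $\Gamma = {\mathbb Z}$ generated by an irrational circle rotation, with $K = {\mathbb T} = {\mathbb R}/{\mathbb Z}$:

```python
import math

# Minimal sketch: for the Z-action generated by the irrational rotation
# T x = x + alpha (mod 1), check that a coboundary sigma_n = F o T^n - F
# satisfies the cocycle equation sigma_{n+m}(x) = sigma_n(T^m x) + sigma_m(x)
# in the circle K = R/Z.  All names here are illustrative.
alpha = math.sqrt(2) % 1.0

def T(x, n=1):                  # the rotation, iterated n times
    return (x + n * alpha) % 1.0

def F(x):                       # an arbitrary measurable map X -> K
    return (3 * x * x) % 1.0

def sigma(n, x):                # the coboundary F o T^n - F, valued in R/Z
    return (F(T(x, n)) - F(x)) % 1.0

x, n, m = 0.1234, 5, 7
lhs = sigma(n + m, x)
rhs = (sigma(n, T(x, m)) + sigma(m, x)) % 1.0
# equal modulo 1, up to floating point rounding
assert abs(lhs - rhs) < 1e-9 or abs(lhs - rhs) > 1 - 1e-9
```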

Theorem 1 (Countable Moore-Schmidt theorem) Let $\Gamma$ be a discrete group acting in a concrete measure-preserving fashion on a probability space $X$. Let $K$ be a compact Hausdorff abelian group. Assume the following additional hypotheses:

  • (i) $\Gamma$ is at most countable.
  • (ii) $X$ is a standard Borel space.
  • (iii) $K$ is metrisable.

Then a $K$-valued concrete measurable cocycle $\sigma = (\sigma_\gamma)_{\gamma \in \Gamma}$ is a concrete coboundary if and only if for each character $\lambda \in \hat K$, the ${\mathbb T}$-valued cocycles $\lambda \circ \sigma = (\lambda \circ \sigma_\gamma)_{\gamma \in \Gamma}$ are concrete coboundaries.

The hypotheses (i), (ii), (iii) all assert in some sense that the data $\Gamma$, $X$, $K$ are not too “large”; in all three cases, the data are only “countably complicated”. For instance, (iii) is equivalent to $K$ being second countable, and (ii) is equivalent to $X$ being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.

Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each $\lambda \circ \sigma$ is a coboundary, then so is $\sigma$. By hypothesis, we then have an equation of the form

$$\lambda \circ \sigma_\gamma = F_\lambda \circ T^\gamma - F_\lambda \ \ \ \ \ (2)$$

for all $\gamma \in \Gamma$, $\lambda \in \hat K$ and some functions $F_\lambda: X \to {\mathbb T}$, and our task is then to produce a function $F: X \to K$ for which

$$\sigma_\gamma = F \circ T^\gamma - F$$

for all $\gamma \in \Gamma$.

Comparing the two equations, the task would be easy if we could find an $F: X \to K$ for which

$$\lambda \circ F = F_\lambda \ \ \ \ \ (3)$$

for all $\lambda \in \hat K$. However there is an obstruction to this: the left-hand side of (3) is additive in $\lambda$, so the right-hand side would have to be also in order to obtain such a representation. In other words, for this strategy to work, one would have to first establish the identity

$$F_{\lambda_1 + \lambda_2} - F_{\lambda_1} - F_{\lambda_2} = 0 \ \ \ \ \ (4)$$

for all $\lambda_1, \lambda_2 \in \hat K$. On the other hand, the good news is that if we somehow manage to obtain the equation (4), then we can obtain a function $F$ obeying (3), thanks to Pontryagin duality, which gives a one-to-one correspondence between $K$ and the homomorphisms of the (discrete) group $\hat K$ to ${\mathbb T}$.

Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in $\lambda$, so the right-hand side must be also. Manipulating this fact, we eventually arrive at

$$(F_{\lambda_1+\lambda_2} - F_{\lambda_1} - F_{\lambda_2}) \circ T^\gamma = F_{\lambda_1+\lambda_2} - F_{\lambda_1} - F_{\lambda_2}$$

for all $\gamma \in \Gamma$. In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is $\Gamma$-invariant. Now let us assume for sake of argument that the action of $\Gamma$ is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only $\Gamma$-invariant functions are constant. So now we get a weaker version of (4), namely

$$F_{\lambda_1+\lambda_2} - F_{\lambda_1} - F_{\lambda_2} = c_{\lambda_1, \lambda_2} \ \ \ \ \ (5)$$

for some constants $c_{\lambda_1,\lambda_2} \in {\mathbb T}$.

Now we need to eliminate the constants. This can be done by the following group-theoretic projection. Let $M(X \to {\mathbb T})$ denote the space of concrete measurable maps from $X$ to ${\mathbb T}$, up to almost everywhere equivalence; this is an abelian group where the various terms in (5) naturally live. Inside this group we have the subgroup ${\mathbb T}$ of constant functions (up to almost everywhere equivalence); this is where the right-hand side of (5) lives. Because ${\mathbb T}$ is a divisible group, there is an application of Zorn’s lemma (a good exercise for those who are not acquainted with these things) to show that there exists a retraction $w: M(X \to {\mathbb T}) \to {\mathbb T}$, that is to say a group homomorphism that is the identity on the subgroup ${\mathbb T}$. We can use this retraction, or more precisely the complementary projection $F \mapsto F - w(F)$, to eliminate the constant in (5). Indeed, if we set

$$\tilde F_\lambda := F_\lambda - w(F_\lambda)$$

then from (5) we see that

$$\tilde F_{\lambda_1 + \lambda_2} = \tilde F_{\lambda_1} + \tilde F_{\lambda_2}$$

while from (2) one has

$$\lambda \circ \sigma_\gamma = \tilde F_\lambda \circ T^\gamma - \tilde F_\lambda$$

and now the previous strategy works with $F_\lambda$ replaced by $\tilde F_\lambda$. This concludes the sketch of proof of Theorem 1.

In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and the hypothesis (i) and the hypothesis (iii) (the latter being equivalent to $\hat K$ being at most countable) are needed to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).

My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions of uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.) This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.

If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation $+: K \times K \to K$, while still continuous, can fail to be measurable as a map from $(K \times K, {\mathcal B}(K) \otimes {\mathcal B}(K))$ to $(K, {\mathcal B}(K))$! Thus for instance the sum of two measurable functions $f, g: X \to K$ need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the Nedoma pathology. A standard example arises when $K$ is the uncountable torus ${\mathbb T}^{\mathbb R}$, endowed with the product topology. Crucially, the Borel $\sigma$-algebra ${\mathcal B}(K)$ generated by this uncountable product is not the product ${\mathcal B}({\mathbb T})^{\otimes {\mathbb R}}$ of the factor Borel $\sigma$-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but $\sigma$-algebras do not); relating to this, the product $\sigma$-algebra ${\mathcal B}(K) \otimes {\mathcal B}(K)$ is not the same as the Borel $\sigma$-algebra ${\mathcal B}(K \times K)$, but is instead a strict sub-algebra. If the group operations on $K$ were measurable, then the diagonal set

$$K^\Delta := \{ (k,k) \in K \times K: k \in K \}$$

would be measurable in ${\mathcal B}(K) \otimes {\mathcal B}(K)$. But it is an easy exercise in manipulation of $\sigma$-algebras to show that if $(X, {\mathcal X})$ and $(Y, {\mathcal Y})$ are any two measurable spaces and $E$ is measurable in ${\mathcal X} \otimes {\mathcal Y}$, then the fibres $E_x := \{ y \in Y: (x,y) \in E \}$ of $E$ are contained in some countably generated subalgebra of ${\mathcal Y}$. Thus if $K^\Delta$ were ${\mathcal B}(K) \otimes {\mathcal B}(K)$-measurable, then all the points of $K$ would lie in a single countably generated $\sigma$-algebra. But the cardinality of such an algebra is at most $2^{\aleph_0}$ while the cardinality of $K$ is $2^{2^{\aleph_0}}$, and Cantor’s theorem then gives a contradiction.

To resolve this problem, we give $K$ a coarser $\sigma$-algebra than the Borel $\sigma$-algebra, which we call the reduced $\sigma$-algebra, thus coarsening the measurable space structure on $K$ to a new measurable space. In the case of compact Hausdorff abelian groups, the reduced $\sigma$-algebra can be defined as the $\sigma$-algebra generated by the characters $\lambda \in \hat K$; for more general compact abelian groups, one can define it as the $\sigma$-algebra generated by all continuous maps into metric spaces. This $\sigma$-algebra is equal to the Borel $\sigma$-algebra when $K$ is metrisable but can be smaller for other $K$. With this measurable structure, $K$ becomes a measurable group; it seems that once one leaves the metrisable world, the reduced $\sigma$-algebra is a superior (or at least equally good) structure to work with than the Borel $\sigma$-algebra for analysis, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if $\mu_K$ is the Haar probability measure on $K$, then every Borel-measurable set is equivalent modulo $\mu_K$-null sets to a reduced-measurable set, so there is no damage to Plancherel caused by passing to the reduced $\sigma$-algebra.)

Passing to the reduced $\sigma$-algebra fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of abstract measure theory, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of abstract measurable spaces, which we formally define as the opposite category of the category of $\sigma$-algebras (with Boolean algebra homomorphisms). Thus, we define an abstract measurable space to be an object of the form ${\mathcal X}^{\mathrm{op}}$, where ${\mathcal X}$ is an (abstract) $\sigma$-algebra and $\mathrm{op}$ is a formal placeholder symbol that signifies use of the opposite category, and an abstract measurable map $T: {\mathcal X}^{\mathrm{op}} \to {\mathcal Y}^{\mathrm{op}}$ is an object of the form $(T^*)^{\mathrm{op}}$, where $T^*: {\mathcal Y} \to {\mathcal X}$ is a Boolean algebra homomorphism and $\mathrm{op}$ is again used as a formal placeholder; we call $T^*$ the pullback map associated to $T$.  [UPDATE: It turns out that this definition of a measurable map led to technical issues.  In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be $\sigma$-complete (i.e., it respects countable joins).] The composition $S \circ T$ of two abstract measurable maps $T: {\mathcal X}^{\mathrm{op}} \to {\mathcal Y}^{\mathrm{op}}$, $S: {\mathcal Y}^{\mathrm{op}} \to {\mathcal Z}^{\mathrm{op}}$ is defined by the formula $S \circ T := (T^* \circ S^*)^{\mathrm{op}}$, or equivalently $(S \circ T)^* = T^* \circ S^*$.

Every concrete measurable space can be identified with an abstract counterpart , and similarly every concrete measurable map can be identified with an abstract counterpart , where is the pullback map . Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to more spaces that could not be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the opposite measure algebra of , defined as where is the ideal of null sets in . Informally, is the space with all the null sets removed; there is a canonical abstract embedding map , which allows one to convert any concrete measurable map into an abstract one . One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing with the opposite measure algebra ; see the paper for details. Our main theorem is then

Theorem 2 (Uncountable Moore-Schmidt theorem) Let $\Gamma$ be a discrete group acting abstractly on a $\sigma$-finite measure space $X$. Let $K$ be a compact Hausdorff abelian group. Then a $K$-valued abstract measurable cocycle $\sigma = (\sigma_\gamma)_{\gamma \in \Gamma}$ is an abstract coboundary if and only if for each character $\lambda \in \hat K$, the ${\mathbb T}$-valued cocycles $\lambda \circ \sigma = (\lambda \circ \sigma_\gamma)_{\gamma \in \Gamma}$ are abstract coboundaries.

With the abstract formalism, the proof of the uncountable Moore-Schmidt theorem is almost identical to the countable one (in fact we were able to make some simplifications, such as avoiding the use of the ergodic decomposition). A key tool is what we call a “conditional Pontryagin duality” theorem, which asserts that if one has an abstract measurable map $f_\lambda: X \to {\mathbb T}$ for each $\lambda \in \hat K$ obeying the identity $f_{\lambda_1 + \lambda_2} = f_{\lambda_1} + f_{\lambda_2}$ for all $\lambda_1, \lambda_2 \in \hat K$, then there is an abstract measurable map $f: X \to K$ such that $\lambda \circ f = f_\lambda$ for all $\lambda \in \hat K$. This is derived from the usual Pontryagin duality and some other tools, most notably the completeness of the $\sigma$-algebra of the measure algebra of $X$, and the Sikorski extension theorem.

We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:

  • (i) Suppose one has an abstract measurable map into a concrete measurable space. Does there exist a representation of by a concrete measurable map ? Is it unique up to almost everywhere equivalence?
  • (ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?

For (i) the answer is somewhat interesting (as I learned after posing this MathOverflow question):

  • If does not separate points, or is not compact metrisable or Polish, there can be counterexamples to uniqueness. If is not compact or Polish, there can be counterexamples to existence.
  • If is a compact metric space or a Polish space, then one always has existence and uniqueness.
  • If is a compact Hausdorff abelian group, one always has existence.
  • If is a complete measure space, then one always has existence (from a theorem of Maharam).
  • If is the unit interval with the Borel -algebra and Lebesgue measure, then one has existence for all compact Hausdorff assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).
  • For more general , existence for all compact Hausdorff is equivalent to the existence of a lifting from the -algebra to (or, in the language of abstract measurable spaces, the existence of an abstract retraction from to ).
  • It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever is compact Hausdorff.

Our understanding of (ii) is much less complete:

  • If is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).
  • If is at most countable and is a complete measure space, then the answer is again “always”.

In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.


254A, Notes 9 – second moment and entropy methods

Wed, 2019-11-13 03:47

In these notes we presume familiarity with the basic concepts of probability theory, such as random variables (which could take values in the reals, vectors, or other measurable spaces), probability, and expectation. Much of this theory is in turn based on measure theory, which we will also presume familiarity with. See for instance this previous set of lecture notes for a brief review.

The basic objects of study in analytic number theory are deterministic; there is nothing inherently random about the set of prime numbers, for instance. Despite this, one can still interpret many of the averages encountered in analytic number theory in probabilistic terms, by introducing random variables into the subject. Consider for instance the form

$$\sum_{n \leq x} \lambda(n) = o(x) \ \ \ \ \ (1)$$

of the prime number theorem, where $\lambda$ denotes the Liouville function and we take the limit $x \to \infty$. One can interpret this estimate probabilistically as

$${\mathbb E} \lambda(\mathbf{n}) = o(1) \ \ \ \ \ (2)$$

where $\mathbf{n}$ is a random variable drawn uniformly from the natural numbers up to $x$, and ${\mathbb E}$ denotes the expectation. (In this set of notes we will use boldface symbols to denote random variables, and non-boldface symbols for deterministic objects.) By itself, such an interpretation is little more than a change of notation. However, the power of this interpretation becomes more apparent when one then imports concepts from probability theory (together with all their attendant intuitions and tools), such as independence, conditioning, stationarity, total variation distance, and entropy. For instance, suppose we want to use the prime number theorem (1) to make a prediction for the sum

$$\sum_{n \leq x} \lambda(n) \lambda(n+1).$$

After dividing by $x$, this is essentially

$${\mathbb E} \lambda(\mathbf{n}) \lambda(\mathbf{n}+1).$$

With probabilistic intuition, one may expect the random variables $\lambda(\mathbf{n}), \lambda(\mathbf{n}+1)$ to be approximately independent (there is no obvious relationship between the number of prime factors of $\mathbf{n}$, and of $\mathbf{n}+1$), and so the above average would be expected to be approximately equal to

$$({\mathbb E} \lambda(\mathbf{n}))\, ({\mathbb E} \lambda(\mathbf{n}+1))$$

which by (2) is equal to $o(1)$. Thus we are led to the prediction

$$\sum_{n \leq x} \lambda(n) \lambda(n+1) = o(x). \ \ \ \ \ (3)$$

The asymptotic (3) is widely believed (it is a special case of the Chowla conjecture, which we will discuss in later notes); while there has been recent progress towards establishing it rigorously, it remains open for now.
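
These predictions can be probed empirically. The following sketch (illustrative only, not from the notes; the cutoff x is arbitrary and convergence in the Chowla case is known to be very slow, so the output is merely suggestive) estimates the averages in (2) and (3) using sympy to factor integers:

```python
from sympy import factorint

# Rough empirical probe of E lambda(n) = o(1) and the Chowla-type prediction
# E lambda(n) lambda(n+1) = o(1), with n drawn uniformly from [1, x].
def liouville(n):
    return (-1) ** sum(factorint(n).values())   # (-1)^(number of prime factors)

x = 10**5
lam = [liouville(n) for n in range(1, x + 2)]
mean_single = sum(lam[:x]) / x
mean_pair = sum(lam[n] * lam[n + 1] for n in range(x)) / x
print(mean_single, mean_pair)    # both should be small for large x
```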

How would one try to make these probabilistic intuitions more rigorous? The first thing one needs to do is find a more quantitative measurement of what it means for two random variables to be “approximately” independent. There are several candidates for such measurements, but we will focus in these notes on two particularly convenient measures of approximate independence: the “$L^2$” measure of independence known as covariance, and the “$L \log L$” measure of independence known as mutual information (actually we will usually need the more general notion of conditional mutual information that measures conditional independence). The use of $L^2$ type methods in analytic number theory is well established, though it is usually not described in probabilistic terms, being referred to instead by such names as the “second moment method”, the “large sieve” or the “method of bilinear sums”. The use of $L \log L$ methods (or “entropy methods”) is much more recent, and has been able to control certain types of averages in analytic number theory that were out of reach of previous methods such as $L^2$ methods. For instance, in later notes we will use entropy methods to establish the logarithmically averaged version

$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x) \ \ \ \ \ (4)$$

of (3), which is implied by (3) but strictly weaker (much as the prime number theorem (1) implies the bound $\sum_{n \leq x} \frac{\lambda(n)}{n} = o(\log x)$, but the latter bound is much easier to establish than the former).

As with many other situations in analytic number theory, we can exploit the fact that certain assertions (such as approximate independence) can become significantly easier to prove if one only seeks to establish them on average, rather than uniformly. For instance, given two random variables $\mathbf{X}$ and $\mathbf{Y}$ of number-theoretic origin (such as the random variables $\lambda(\mathbf{n})$ and $\lambda(\mathbf{n}+1)$ mentioned previously), it can often be extremely difficult to determine the extent to which $\mathbf{X}, \mathbf{Y}$ behave “independently” (or “conditionally independently”). However, thanks to second moment tools or entropy based tools, it is often possible to assert results of the following flavour: if $\mathbf{Y}_1,\dots,\mathbf{Y}_k$ are a large collection of “independent” random variables, and $\mathbf{X}$ is a further random variable that is “not too large” in some sense, then $\mathbf{X}$ must necessarily be nearly independent (or conditionally independent) of many of the $\mathbf{Y}_i$, even if one cannot pinpoint precisely which of the $\mathbf{Y}_i$ the variable $\mathbf{X}$ is independent with. In the case of the second moment method, this allows us to compute correlations such as ${\mathbb E} \mathbf{X} \mathbf{Y}_i$ for “most” $i$. The entropy method gives bounds that are significantly weaker quantitatively than the second moment method (and in particular, in its current incarnation at least it is only able to say non-trivial assertions involving interactions with residue classes at small primes), but can control significantly more general quantities for “most” $i$ thanks to tools such as the Pinsker inequality.

— 1. Second moment methods —

In this section we discuss probabilistic techniques of an “$L^2$” nature. We fix a probability space $(\Omega, {\mathcal F}, {\mathbb P})$ to model all of our random variables; thus for instance we shall model a complex random variable $\mathbf{X}$ in these notes by a measurable function $\mathbf{X}: \Omega \to {\mathbb C}$. (Strictly speaking, there is a subtle distinction one can maintain between a random variable and its various measure-theoretic models, which becomes relevant if one later decides to modify the probability space $\Omega$, but this distinction will not be so important in these notes and so we shall ignore it. See this previous set of notes for more discussion.)

We will focus here on the space $L^2 = L^2(\Omega, {\mathcal F}, {\mathbb P})$ of complex random variables $\mathbf{X}$ (that is to say, measurable maps $\mathbf{X}: \Omega \to {\mathbb C}$) whose second moment

$${\mathbb E} |\mathbf{X}|^2$$

of $\mathbf{X}$ is finite. In many number-theoretic applications the finiteness of the second moment will be automatic because $\mathbf{X}$ will only take finitely many values. As is well known, the space $L^2$ has the structure of a complex Hilbert space, with inner product

$$\langle \mathbf{X}, \mathbf{Y} \rangle := {\mathbb E} \mathbf{X} \overline{\mathbf{Y}}$$

and norm

$$\| \mathbf{X} \| := ({\mathbb E} |\mathbf{X}|^2)^{1/2}$$

for $\mathbf{X}, \mathbf{Y} \in L^2$. By slight abuse of notation, the complex numbers ${\mathbb C}$ can be viewed as a subset of $L^2$, by viewing any given complex number $z$ as a constant (deterministic) random variable. Then ${\mathbb C}$ is a one-dimensional subspace of $L^2$, spanned by the unit vector $1$. Given a random variable $\mathbf{X} \in L^2$, the projection of $\mathbf{X}$ to ${\mathbb C}$ is then the mean

$${\mathbb E} \mathbf{X}$$

and we obtain an orthogonal splitting $\mathbf{X} = {\mathbb E}\mathbf{X} + (\mathbf{X} - {\mathbb E}\mathbf{X})$ of any $\mathbf{X} \in L^2$ into its mean ${\mathbb E}\mathbf{X}$ and its mean zero part $\mathbf{X} - {\mathbb E}\mathbf{X}$. By Pythagoras’ theorem, we then have

$${\mathbb E} |\mathbf{X}|^2 = {\mathbb E} |\mathbf{X} - {\mathbb E}\mathbf{X}|^2 + |{\mathbb E} \mathbf{X}|^2. \ \ \ \ \ (5)$$

The first quantity on the right-hand side is the square of the distance from $\mathbf{X}$ to ${\mathbb C}$, and this non-negative quantity is known as the variance

$$\mathbf{Var}(\mathbf{X}) := {\mathbb E} |\mathbf{X} - {\mathbb E}\mathbf{X}|^2.$$

The square root $\mathbf{Var}(\mathbf{X})^{1/2}$ of the variance is known as the standard deviation. The variance controls the distribution of the random variable through Chebyshev’s inequality

$${\mathbb P}(|\mathbf{X} - {\mathbb E}\mathbf{X}| \geq \lambda) \leq \frac{\mathbf{Var}(\mathbf{X})}{\lambda^2} \ \ \ \ \ (6)$$

for any $\lambda > 0$, which is immediate from observing the pointwise inequality $1_{|\mathbf{X} - {\mathbb E}\mathbf{X}| \geq \lambda} \leq \frac{|\mathbf{X} - {\mathbb E}\mathbf{X}|^2}{\lambda^2}$ and then taking expectations of both sides. Roughly speaking, this inequality asserts that $\mathbf{X}$ typically deviates from its mean by no more than a bounded multiple of the standard deviation.

A slight generalisation of Chebyshev’s inequality that can be convenient to use is

$${\mathbb P}(|\mathbf{X} - c| \geq \lambda) \leq \frac{\mathbf{Var}(\mathbf{X}) + |{\mathbb E}\mathbf{X} - c|^2}{\lambda^2} \ \ \ \ \ (7)$$

for any $\lambda > 0$ and any complex number $c$ (which typically will be a simplified approximation to the mean ${\mathbb E}\mathbf{X}$), which is proven similarly to (6) but noting (from (5)) that ${\mathbb E}|\mathbf{X} - c|^2 = \mathbf{Var}(\mathbf{X}) + |{\mathbb E}\mathbf{X} - c|^2$.

Informally, (6) is an assertion that a square-integrable random variable $\mathbf{X}$ will concentrate around its mean if its variance is not too large. See these previous notes for more discussion of the concentration of measure phenomenon. One can often obtain stronger concentration of measure than what is provided by Chebyshev’s inequality if one is able to calculate higher moments than the second moment, such as the fourth moment ${\mathbb E}|\mathbf{X}|^4$ or exponential moments, but we will not pursue this direction in this set of notes.
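
As a quick illustration of (6), the following Monte Carlo sketch (illustrative; the distribution and threshold are arbitrary choices) compares the empirical tail probability of a sum of independent random signs against the Chebyshev bound:

```python
import random

# Monte Carlo illustration of Chebyshev's inequality (6): the probability
# that X deviates from its mean by at least lambda is at most Var(X)/lambda^2.
random.seed(0)
samples = [sum(random.choice([-1, 1]) for _ in range(100)) for _ in range(10**5)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
lam = 25.0
empirical = sum(1 for s in samples if abs(s - mean) >= lam) / len(samples)
print(empirical, var / lam**2)   # empirical tail probability vs Chebyshev bound
```

Here the bound holds comfortably but is far from tight, which is typical of second moment estimates.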

Clearly the variance is homogeneous of order two, thus

$$\mathbf{Var}(c \mathbf{X}) = |c|^2 \mathbf{Var}(\mathbf{X})$$

for any $\mathbf{X} \in L^2$ and $c \in {\mathbb C}$. In particular, the variance is not always additive: the claim $\mathbf{Var}(\mathbf{X} + \mathbf{Y}) = \mathbf{Var}(\mathbf{X}) + \mathbf{Var}(\mathbf{Y})$ fails in particular when $\mathbf{X} = \mathbf{Y}$ is not almost surely constant. However, there is an important substitute for this formula. Given two random variables $\mathbf{X}, \mathbf{Y} \in L^2$, the inner product of the corresponding mean zero parts is a complex number known as the covariance:

$$\mathbf{Cov}(\mathbf{X}, \mathbf{Y}) := \langle \mathbf{X} - {\mathbb E}\mathbf{X}, \mathbf{Y} - {\mathbb E}\mathbf{Y} \rangle = {\mathbb E} (\mathbf{X} - {\mathbb E}\mathbf{X}) \overline{(\mathbf{Y} - {\mathbb E}\mathbf{Y})}.$$

As the constants ${\mathbb E}\mathbf{X}, {\mathbb E}\mathbf{Y}$ are orthogonal to the mean zero parts, it is not difficult to obtain the alternate formula

$$\mathbf{Cov}(\mathbf{X}, \mathbf{Y}) = {\mathbb E} \mathbf{X} \overline{\mathbf{Y}} - ({\mathbb E}\mathbf{X}) (\overline{{\mathbb E}\mathbf{Y}}) \ \ \ \ \ (8)$$

for the covariance.

The covariance is then a positive semi-definite inner product on $L^2$ (it basically arises from the Hilbert space structure of the space of mean zero variables), and $\mathbf{Cov}(\mathbf{X}, \mathbf{X}) = \mathbf{Var}(\mathbf{X})$. From the Cauchy-Schwarz inequality we have

$$|\mathbf{Cov}(\mathbf{X}, \mathbf{Y})| \leq \mathbf{Var}(\mathbf{X})^{1/2}\, \mathbf{Var}(\mathbf{Y})^{1/2}.$$

If $\mathbf{X}, \mathbf{Y}$ have non-zero variance (that is, they are not almost surely constant), then the ratio

$$\mathbf{corr}(\mathbf{X}, \mathbf{Y}) := \frac{\mathbf{Cov}(\mathbf{X}, \mathbf{Y})}{\mathbf{Var}(\mathbf{X})^{1/2}\, \mathbf{Var}(\mathbf{Y})^{1/2}}$$

is then known as the correlation between $\mathbf{X}$ and $\mathbf{Y}$, and is a complex number of magnitude at most $1$; for real-valued $\mathbf{X}, \mathbf{Y}$ that are not almost surely constant, the correlation is instead a real number between $-1$ and $1$. At one extreme, a correlation of magnitude $1$ occurs if and only if $\mathbf{X} - {\mathbb E}\mathbf{X}$ is a scalar multiple of $\mathbf{Y} - {\mathbb E}\mathbf{Y}$. At the other extreme, a correlation of zero is an indication (though not a guarantee) of independence. Recall that two random variables $\mathbf{X}, \mathbf{Y}$ are independent if one has

$${\mathbb P}(\mathbf{X} \in E, \mathbf{Y} \in F) = {\mathbb P}(\mathbf{X} \in E)\, {\mathbb P}(\mathbf{Y} \in F)$$

for all (Borel) measurable $E, F$. In particular, setting $f = 1_E$, $g = 1_F$ for measurable $E, F$ and integrating using Fubini’s theorem, we conclude that

$${\mathbb E} f(\mathbf{X}) g(\mathbf{Y}) = ({\mathbb E} f(\mathbf{X}))\, ({\mathbb E} g(\mathbf{Y}));$$

similarly with $f(\mathbf{X})$ replaced by $\mathbf{X}$, and similarly for $g(\mathbf{Y})$. In particular we have

$${\mathbb E} \mathbf{X} \overline{\mathbf{Y}} = ({\mathbb E}\mathbf{X})\, (\overline{{\mathbb E}\mathbf{Y}})$$

and thus from (8) we see that independent random variables have zero covariance (and zero correlation, when they are not almost surely constant). On the other hand, the converse fails:

Exercise 1 Provide an example of two random variables which are not independent, but which have zero correlation or covariance with each other. (There are many ways to produce some examples. One comes from exploiting various systems of orthogonal functions, such as sines and cosines. Another comes from working with random variables taking only a small number of values, such as $\{-1,0,1\}$.)
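
For instance, the second suggestion in the hint can be realised as follows (a minimal sketch; this particular distribution is one standard choice): take $\mathbf{X}$ uniform on $\{-1,0,1\}$ and $\mathbf{Y} = \mathbf{X}^2$, which are uncorrelated yet certainly not independent.

```python
import random

# Uncorrelated but dependent: X uniform on {-1, 0, 1} and Y = X^2.
# Cov(X, Y) = E[X^3] - E[X] E[X^2] = 0, yet Y is a function of X.
random.seed(1)
pairs = [(x, x * x) for x in (random.choice([-1, 0, 1]) for _ in range(10**5))]
ex = sum(x for x, _ in pairs) / len(pairs)
ey = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - ex) * (y - ey) for x, y in pairs) / len(pairs)
print(cov)                       # approximately 0
```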

From the cosine rule we have

$$\mathbf{Var}(\mathbf{X}_1 + \mathbf{X}_2) = \mathbf{Var}(\mathbf{X}_1) + \mathbf{Var}(\mathbf{X}_2) + 2 \mathrm{Re}\, \mathbf{Cov}(\mathbf{X}_1, \mathbf{X}_2) \ \ \ \ \ (9)$$

and more generally

$$\mathbf{Var}(\mathbf{X}_1 + \dots + \mathbf{X}_k) = \sum_{i=1}^k \mathbf{Var}(\mathbf{X}_i) + 2 \sum_{1 \leq i < j \leq k} \mathrm{Re}\, \mathbf{Cov}(\mathbf{X}_i, \mathbf{X}_j) \ \ \ \ \ (10)$$

for any finite collection of random variables $\mathbf{X}_1, \dots, \mathbf{X}_k \in L^2$. These identities combine well with Chebyshev-type inequalities such as (6), (7), and this leads to a very common instance of the second moment method in action. For instance, we can use it to understand the distribution of the number of prime factors of a random number that fall within a given set $P$. Given any set $P$ of natural numbers, define the logarithmic size $L(P)$ to be the quantity

$$L(P) := \sum_{p \in P} \frac{1}{p}.$$

Thus for instance Euler’s theorem asserts that the primes have infinite logarithmic size.

Lemma 2 (Turan-Kubilius inequality, special case) Let be an interval of length at least , and let be an integer drawn uniformly at random from this interval, thus

for all . Let be a finite collection of primes, all of which have size at most . Then the random variable has mean

and variance

In particular,

and from (7) we have

for any .

Proof: For any natural number , we have

and hence

We now write . From (11) we see that each indicator random variable , has mean and variance ; similarly, for any two distinct , we see from (11), (8) the indicators , have covariance

and the claim now follows from (10).

The exponents of in the error terms here are not optimal; but in practice, we apply this inequality when is much larger than any given power of , so factors such as will be negligible. Informally speaking, the above lemma asserts that a typical number in a large interval will have roughly prime factors in a given finite set of primes, as long as the logarithmic size is large.

If we apply the above lemma to for some large , and equal to the primes up to (say) , we have , and hence

Since , we recover the main result

of Section 5 of Notes 1 (indeed this is essentially the same argument as in that section, dressed up in probabilistic language). In particular, we recover the Hardy-Ramanujan law that all but a proportion $o(1)$ of the natural numbers in $[1,x]$ have $(1+o(1)) \log\log x$ prime factors.
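
The Hardy-Ramanujan law is easy to observe experimentally. A rough sketch (illustrative only; the cutoff and sample size are arbitrary choices) using sympy:

```python
from sympy import primefactors
import math, random

# Empirical illustration of the Hardy-Ramanujan law: a typical n <= x has
# about log log x distinct prime factors.
random.seed(0)
x = 10**7
sample = [random.randint(2, x) for _ in range(2000)]
omegas = [len(primefactors(n)) for n in sample]
mean = sum(omegas) / len(omegas)
print(mean, math.log(math.log(x)))   # empirical mean vs log log x
```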

Exercise 3 (Turan-Kubilius inequality, general case) Let $f$ be an additive function (which means that $f(nm) = f(n) + f(m)$ whenever $n, m$ are coprime). Show that

where

(Hint: one may first want to work with the special case when vanishes whenever so that the second moment method can be profitably applied, and then figure out how to address the contributions of prime powers larger than .)

Exercise 4 (Turan-Kubilius inequality, logarithmic version) Let with , and let be a collection of primes of size less than with . Show that

Exercise 5 (Paley-Zygmund inequality) Let $\mathbf{X}$ be non-negative with positive mean. Show that

$${\mathbb P}\big( \mathbf{X} \geq \theta\, {\mathbb E}\mathbf{X} \big) \geq (1-\theta)^2 \frac{({\mathbb E}\mathbf{X})^2}{{\mathbb E}\mathbf{X}^2}$$

for any $0 < \theta < 1$.

This inequality can sometimes give slightly sharper results than the Chebyshev inequality when using the second moment method.

Now we give a useful lemma that quantifies a heuristic mentioned in the introduction, namely that if several random variables do not correlate with each other, then it is not possible for any further random variable to correlate with many of them simultaneously. We first state an abstract Hilbert space version.

Lemma 6 (Bessel type inequality, Hilbert space version) If are elements of a Hilbert space , and are positive reals, then

Proof: We use the duality method. Namely, we can write the left-hand side of (13) as

for some complex numbers with (just take to be normalised by the left-hand side of (14), or zero if that left-hand side vanishes). By Cauchy-Schwarz, it then suffices to establish the dual inequality

The left-hand side can be written as

Using the arithmetic mean-geometric mean inequality and symmetry, this may be bounded by

Since , the claim follows.

Corollary 7 (Bessel type inequality, probabilistic version) If , and are positive reals, then

Proof: By subtracting the mean from each of we may assume that these random variables have mean zero. The claim now follows from Lemma 6.

To get a feel for this inequality, suppose for sake of discussion that and all have unit variance and , but that the are pairwise uncorrelated. Then the right-hand side is equal to , and the left-hand side is the sum of squares of the correlations between and each of the . Any individual correlation is then still permitted to be as large as , but it is not possible for multiple correlations to be this large simultaneously. This is geometrically intuitive if one views the random variables as vectors in a Hilbert space (and correlation as a rough proxy for the angle between such vectors). This lemma also shares many commonalities with the large sieve inequality, discussed in this set of notes.

One basic number-theoretic application of this inequality is the following sampling inequality of Elliott, that lets one approximate a sum of an arithmetic function by its values on multiples of primes :

Exercise 8 (Elliott’s inequality) Let be an interval of length at least . Show that for any function , one has

(Hint: Apply Corollary 7 with , , and , where is the uniform variable from Lemma 2.) Conclude in particular that for every , one has

for all primes outside of a set of exceptional primes of logarithmic size .

Informally, the point of this inequality is that an arbitrary arithmetic function may exhibit correlation with the indicator function of the multiples of for some primes , but cannot exhibit significant correlation with all of these indicators simultaneously, because these indicators are not very correlated to each other. We note however that this inequality only gains a tiny bit over trivial bounds, because the set of primes up to only has logarithmic size by Mertens’ theorems; thus, any asymptotics that are obtained using this inequality will typically have error terms that only improve upon the trivial bound by factors such as .
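
The moral of Elliott's inequality can also be seen numerically. The sketch below (illustrative only; the function f is a hypothetical example rigged to correlate with the multiples of a single prime) compares the plain average of f with its rescaled averages along multiples of each small prime; only the rigged prime shows a large discrepancy:

```python
from sympy import primerange

# A bounded function can correlate strongly with the multiples of one prime
# (here 7), but its rescaled averages along multiples of the other primes
# then stay close to its plain average.
x = 10**6
def f(n):
    return 6 if n % 7 == 0 else -1   # mean roughly zero, biased toward 7 | n

full_avg = sum(f(n) for n in range(1, x + 1)) / x
for p in primerange(2, 30):
    mult_avg = (p / x) * sum(f(n) for n in range(p, x + 1, p))
    print(p, round(full_avg, 4), round(mult_avg, 4))
# only p = 7 exhibits a large discrepancy from the plain average
```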

Exercise 9 (Elliott’s inequality, logarithmic form) Let with . Show that for any function , one has

and thus for every , one has

for all primes outside of an exceptional set of primes of logarithmic size .

Exercise 10 Use Exercise 9 and a duality argument to provide an alternate proof of Exercise 4. (Hint: express the left-hand side of (12) as a correlation between and some suitably -normalised arithmetic function .)

As a quick application of Elliott’s inequality, let us establish a weak version of the prime number theorem:

Proposition 11 (Weak prime number theorem) For any we have

whenever are sufficiently large depending on .

This estimate is weaker than what one can obtain by existing methods, such as Exercise 56 of Notes 1. However in the next section we will refine this argument to recover the full prime number theorem.

Proof: Fix , and suppose that are sufficiently large. From Exercise 9 one has

for all primes outside of an exceptional set of logarithmic size . If we restrict attention to primes then one sees from the integral test that one can replace the sum by and only incur an additional error of . If we furthermore restrict to primes larger than , then the contribution of those that are divisible by is also . For not divisible by , one has . Putting all this together, we conclude that

for all primes outside of an exceptional set of logarithmic size . In particular, for large enough this statement is true for at least one such . The claim then follows.

As another application of Elliott’s inequality, we present a criterion for orthogonality between multiplicative functions and other sequences, first discovered by Katai (with related results also introduced earlier by Daboussi and Delange), and rediscovered by Bourgain, Sarnak, and Ziegler:

Proposition 12 (Daboussi-Delange-Katai-Bourgain-Sarnak-Ziegler criterion) Let be a multiplicative function with for all , and let be another bounded function. Suppose that one has

as for any two distinct primes . Then one has

as .

Proof: Suppose the claim fails, then there exists (which we can assume to be small) and arbitrarily large such that

By Exercise 8, this implies that

for all primes outside of an exceptional set of logarithmic size . Call such primes “good primes”. In particular, by the pigeonhole principle, and assuming large enough, there exists a dyadic range with which contains good primes.

Fix a good prime in . From (15) we have

We can replace the range by with negligible error. We also have except when is a multiple of , but this latter case only contributes which is also negligible compared to the right-hand side. We conclude that

for every good prime. On the other hand, from Lemma 6 we have

where range over the good primes in . The left-hand side is then , and by hypothesis the right-hand side is for large enough. As and is small, this gives the desired contradiction.

Exercise 13 (Daboussi-Delange theorem) Let $\alpha$ be irrational, and let $g$ be a multiplicative function with $|g(n)| \leq 1$ for all $n$. Show that

$$\sum_{n \leq x} g(n) e(\alpha n) = o(x) \ \ \ \ \ (16)$$

as $x \to \infty$, where $e(\theta) := e^{2\pi i \theta}$. If instead $\alpha$ is rational, show that there exists a multiplicative function $g$ with $|g(n)| \leq 1$ for all $n$ for which the statement (16) fails. (Hint: use Dirichlet characters and Plancherel’s theorem for finite abelian groups.)

— 2. An elementary proof of the prime number theorem —

Define the Mertens function

$$M(x) := \sum_{n \leq x} \mu(n).$$

As shown in Theorem 58 of Notes 1, the prime number theorem is equivalent to the bound

$$M(x) = o(x) \ \ \ \ \ (17)$$

as $x \to \infty$. We now give a recent proof of this theorem, due to Redmond McNamara (personal communication), that relies primarily on Elliott’s inequality and the Selberg symmetry formula; it is a relative of the standard elementary proof of this theorem due to Erdős and Selberg. In order to keep the exposition simple, we will not arrange the argument in a fashion that optimises the decay rate (in any event, there are other proofs of the prime number theorem that give significantly stronger bounds).
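
For concreteness, here is a short brute-force sketch (illustrative only, and far too slow for serious ranges) that computes the ratio M(x)/x for a few values of x, exhibiting the slow decay that (17) asserts:

```python
from sympy import factorint

# Brute-force sketch of the Mertens function M(x) = sum_{n <= x} mu(n).
def mobius(n):
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0                      # n is not squarefree
    return -1 if len(f) % 2 else 1    # parity of the number of prime factors

partial = 0
checkpoints = {10**3, 10**4, 10**5}
for n in range(1, 10**5 + 1):
    partial += mobius(n)
    if n in checkpoints:
        print(n, partial, partial / n)   # M(x) and the ratio M(x)/x
```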

Firstly we see that Elliott’s inequality gives the following weaker version of (17):

Lemma 14 (Oscillation for Mertens’ function) If and , then we have

for all primes outside of an exceptional set of primes of logarithmic size .

Proof: We may assume as the claim is trivial otherwise. From Exercise 8 applied to and , we have

for all outside of an exceptional set of primes of logarithmic size . Since for not divisible by , the right-hand side can be written as

Since outside of an exceptional set of logarithmic size , the claim follows.

Informally, this lemma asserts that for most primes , which morally implies that for most primes . If we can then locate suitable primes with , thus should then lead to , which should then yield the prime number theorem . The manipulations below are intended to make this argument rigorous.

It will be convenient to work with a logarithmically averaged version of this claim.

Corollary 15 (Logarithmically averaged oscillation) If and is sufficiently large depending on , then

for all outside of an exceptional set of logarithmic size .

Proof: For each , we have from the previous lemma that

for all outside of an exceptional set of logarithmic size . We then have

so it suffices by Markov’s inequality to show that

But by Fubini’s theorem, the left-hand side may be bounded by

and the claim follows.

Let be sufficiently small, and let be sufficiently large depending on . Call a prime good if the bound (18) holds and bad otherwise, thus all primes outside of an exceptional set of bad primes of logarithmic size are good. Now we observe that we can make small as long as we can make two good primes multiply to be close to a third:

Proposition 16 Suppose there are good primes with . Then .

Proof: By definition of good prime, we have the bounds

We rescale (20) by to conclude that

We can replace the integration range here from to with an error of if is large enough. Also, since , we have . Thus we have

Combining this with (19), (21) and the triangle inequality (writing as a linear combination of , , and ) we conclude that

This is an averaged version of the claim we need. To remove the averaging, we use the identity (see equation (63) of Notes 1) to conclude that

From the triangle inequality one has

and hence by Mertens’ theorem

From the Brun-Titchmarsh inequality (Corollary 61 of Notes 1) we have

and so from the previous estimate and Fubini’s theorem one has

and hence by (22) (using trivial bounds to handle the region outside of )

Since

we conclude (for large enough) that

and the claim follows.

To finish the proof of the prime number theorem, it thus suffices to locate, for sufficiently large, three good primes with . If we already had the prime number theorem, or even the weaker form that every interval of the form contained primes for large enough, then this would be quite easy: pick a large natural number (depending on , but independent of ), so that the primes up to have logarithmic size (so that only of them are bad, as measured by logarithmic size), and let be random numbers drawn uniformly from (say) . From the prime number theorem, for each , the interval contains primes. In particular, contains primes, but the expected number of bad primes in this interval is . Thus by Markov’s inequality there would be at least a chance (say) of having at least one good prime in ; similarly there is a chance of having a good prime in , and a chance of having a good prime in . Thus (as an application of the probabilistic method), there exist (deterministic) good primes with the required properties.

Of course, using the prime number theorem here to prove the prime number theorem would be circular. However, we can still locate a good triple of primes using the Selberg symmetry formula

as , where is the second von Mangoldt function

see Proposition 60 of Notes 1. We can strip away the contribution of the primes:

Exercise 17 Show that

as .

In particular, on evaluating this at and subtracting, we have

whenever is sufficiently large depending on . In particular, for any such , one either has

or

(or both). Informally, the Selberg symmetry formula shows that the interval contains either a lot of primes, or a lot of semiprimes. The factor of is slightly annoying, so we now remove it. Consider the contribution of those primes to (25) with . This is bounded by

which we can bound crudely using the Chebyshev bound by

which by Mertens’ theorem is . Thus the contribution of this case can be safely removed from (25). Similarly for those cases when . For the remaining cases we bound . We conclude that for any sufficiently large , either (24) or

holds (or both).

In order to find primes with close to , it would be very convenient if we could find a for which (24) and (26) both hold. We can’t quite do this directly, but thanks to the “connected” nature of the set of scales, we can do the next best thing:

Proposition 18 Suppose is sufficiently large depending on . Then there exists with such that

and

Proof: We know that every in obeys at least one of (27), (28). Our task is to produce an adjacent pair of , one of which obeys (27) and the other obeys (28). Suppose for contradiction that no such pair exists; then whenever fails to obey (27), any adjacent must also fail to do so, and similarly for (28). Thus either (27) will fail to hold for all , or (28) will fail to hold for all such . If (27) fails for all , then on summing we have

which contradicts Mertens’ theorem if is large enough because the left-hand side is . Similarly, if (28) fails for all , then

and again Mertens’ theorem can be used to lower bound the left-hand side by (in fact one can even gain an additional factor of if one works things through carefully) and obtain a contradiction.

The above proposition does indeed provide a triple of primes with . If is sufficiently large depending on and less than (say) , so that , this would give us what we need as long as one of the triples consisted only of good primes. The only way this can fail is if either

for some , or if

for some . In the first case, we can sum to conclude that

and in the second case we have

and hence by Chebyshev bounds

Since the total set of bad primes up to has logarithmic size , we conclude from the pigeonhole principle (and the divergence of the harmonic series ) that for any depending only on , and any large enough, there exists such that neither of (29) and (30) hold. Indeed the set of obeying (29) has logarithmic size , and similarly for (30). Choosing a that avoids both of these scenarios, we then find a good and good with , so that , and then by Proposition 16 we conclude that for all sufficiently large . Sending to zero, we obtain the prime number theorem.

— 3. Entropy methods —

In the previous section we explored the consequences of the second moment method, which applies to square-integrable random variables taking values in the real or complex numbers. Now we explore entropy methods, which instead apply to random variables taking a finite number of values (equipped with the discrete sigma-algebra), but whose range need not be numerical in nature. (One could extend entropy methods to slightly larger classes of random variables, such as ones that attain a countable number of values, but for our applications finitely-valued random variables will suffice.)

The fundamental notion here is that of the Shannon entropy of a random variable. If $\mathbf{X}$ takes values in a finite set $R$, its Shannon entropy $\mathbf{H}(\mathbf{X})$ (or entropy for short) is defined by the formula

$$\mathbf{H}(\mathbf{X}) := \sum_{x \in R} {\mathbb P}(\mathbf{X} = x) \log \frac{1}{{\mathbb P}(\mathbf{X} = x)}$$

where $x$ ranges over all the possible values of $\mathbf{X}$, and we adopt the convention $0 \log \frac{1}{0} = 0$, so that values that are almost surely not attained by $\mathbf{X}$ do not influence the entropy. We choose here to use the natural logarithm to normalise our entropy (in which case a unit of entropy is known as a “nat“); in the information theory literature it is also common to use the base two logarithm to measure entropy (in which case a unit of entropy is known as a “bit“, which is equal to $\log 2 = 0.693\dots$ nats). However, the precise choice of normalisation will not be important in our discussion.
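
In code, the definition reads as follows (a minimal sketch; the probabilities passed in are assumed to sum to 1):

```python
import math

# Shannon entropy (in nats) of a finitely-valued random variable, given the
# list of probabilities of its values; terms with zero probability are
# omitted, implementing the convention 0 log(1/0) = 0.
def shannon_entropy(probs):
    return sum(p * math.log(1 / p) for p in probs if p > 0)

# e.g. the entropy of a fair die is log 6, the maximum for 6 outcomes:
print(shannon_entropy([1/6] * 6), math.log(6))
```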

It is clear that if two random variables have the same probability distribution, then they have the same entropy. Also, the precise choice of range set is not terribly important: if $\mathbf{X}$ takes values in $R$, and $f: R \to R'$ is an injection, then it is clear that $\mathbf{X}$ and $f(\mathbf{X})$ have the same entropy:

$$\mathbf{H}(f(\mathbf{X})) = \mathbf{H}(\mathbf{X}).$$

This is in sharp contrast to moment-based statistics such as the mean or variance, which can be radically changed by applying some injective transformation to the range values.

Informally, the entropy $\mathbf{H}(\mathbf{X})$ measures how “spread out” or “disordered” the distribution of $\mathbf{X}$ is, behaving like a logarithm of the size of the “essential support” of such a variable; from an information-theoretic viewpoint, it measures the amount of “information” one learns when one is told the value of $\mathbf{X}$. Here are some basic properties of Shannon entropy that help support this intuition:

Exercise 19 (Basic properties of Shannon entropy) Let $\mathbf{X}$ be a random variable taking values in a finite set $R$.

  • (i) Show that $\mathbf{H}(\mathbf{X}) \geq 0$, with equality if and only if $\mathbf{X}$ is almost surely deterministic (that is to say, it is almost surely equal to a constant $x_0 \in R$).
  • (ii) Show that

$$\mathbf{H}(\mathbf{X}) \leq \log |R| \ \ \ \ \ (32)$$

    with equality if and only if $\mathbf{X}$ is uniformly distributed on $R$. (Hint: use Jensen’s inequality and the convexity of the map $t \mapsto t \log t$ on $[0,+\infty)$.)

  • (iii) (Shannon-McMillan-Breiman theorem) Let $n$ be a natural number, and let $\mathbf{X}_1,\dots,\mathbf{X}_n$ be independent copies of $\mathbf{X}$. As $n \to \infty$, show that there is a subset $A_n$ of $R^n$ of cardinality $e^{(\mathbf{H}(\mathbf{X}) + o(1))n}$ with the properties that

$${\mathbb P}\big( (\mathbf{X}_1,\dots,\mathbf{X}_n) \in A_n \big) = 1 - o(1)$$

    and

$${\mathbb P}\big( (\mathbf{X}_1,\dots,\mathbf{X}_n) = (x_1,\dots,x_n) \big) = e^{-(\mathbf{H}(\mathbf{X}) + o(1))n}$$

    uniformly for all $(x_1,\dots,x_n) \in A_n$. (The proof of this theorem will require Stirling’s formula, which you may assume here as a black box; see also this previous blog post.) Informally, we thus see that a large tuple of independent samples of $\mathbf{X}$ approximately behaves like a uniform distribution on $e^{(\mathbf{H}(\mathbf{X})+o(1))n}$ values.

One can view Shannon entropy as a generalisation of the notion of cardinality of a finite set (or equivalently, cardinality of finite sets can be viewed as a special case of Shannon entropy); see this previous blog post for an elaboration of this point.

The concept of Shannon entropy becomes significantly more powerful when combined with that of conditioning. Recall that a random variable $\mathbf{X}$ taking values in a range set $R$ can be modeled by a measurable map $\mathbf{X}: \Omega \to R$ from a probability space $(\Omega, {\mathcal F}, {\mathbb P})$ to the range $R$. If $E$ is an event in ${\mathcal F}$ of positive probability, we can then condition $\mathbf{X}$ to the event $E$ to form a new random variable $(\mathbf{X}|E)$ on the conditioned probability space $(E, {\mathcal F}|_E, {\mathbb P}|_E)$, where

$${\mathcal F}|_E := \{ F \cap E: F \in {\mathcal F} \}$$

is the restriction of the $\sigma$-algebra ${\mathcal F}$ to $E$,

$${\mathbb P}|_E(F) := \frac{{\mathbb P}(F)}{{\mathbb P}(E)}$$

is the conditional probability measure on $E$, and $(\mathbf{X}|E): E \to R$ is the restriction of $\mathbf{X}$ to $E$. This random variable lives on a different probability space than $\mathbf{X}$ itself, so it does not make sense to directly combine these variables (thus for instance one cannot form the sum $\mathbf{X} + (\mathbf{X}|E)$ even when both random variables are real or complex valued); however, one can still form the Shannon entropy $\mathbf{H}(\mathbf{X}|E)$ of the conditioned random variable $(\mathbf{X}|E)$, which is given by the same formula

$$\mathbf{H}(\mathbf{X}|E) := \sum_{x \in R} {\mathbb P}(\mathbf{X} = x | E) \log \frac{1}{{\mathbb P}(\mathbf{X} = x | E)}.$$

Given another random variable $\mathbf{Y}$ taking values in another finite set $S$, we can then define the conditional Shannon entropy $\mathbf{H}(\mathbf{X}|\mathbf{Y})$ to be the expected entropy of the level sets $(\mathbf{X}|\mathbf{Y}=y)$, thus

$$\mathbf{H}(\mathbf{X}|\mathbf{Y}) := \sum_{y \in S} {\mathbb P}(\mathbf{Y}=y)\, \mathbf{H}(\mathbf{X}|\mathbf{Y}=y)$$

with the convention that the summand here vanishes when ${\mathbb P}(\mathbf{Y}=y) = 0$. From the law of total probability we have

$${\mathbb P}(\mathbf{X}=x) = \sum_{y \in S} {\mathbb P}(\mathbf{Y}=y)\, {\mathbb P}(\mathbf{X}=x|\mathbf{Y}=y)$$

for any $x \in R$, and hence by Jensen’s inequality (applied to the concave function $t \mapsto t \log \frac{1}{t}$)

$${\mathbb P}(\mathbf{X}=x) \log \frac{1}{{\mathbb P}(\mathbf{X}=x)} \geq \sum_{y \in S} {\mathbb P}(\mathbf{Y}=y)\, {\mathbb P}(\mathbf{X}=x|\mathbf{Y}=y) \log \frac{1}{{\mathbb P}(\mathbf{X}=x|\mathbf{Y}=y)}$$

for any $x \in R$; summing we obtain the Shannon entropy inequality

$$\mathbf{H}(\mathbf{X}|\mathbf{Y}) \leq \mathbf{H}(\mathbf{X}). \ \ \ \ \ (33)$$

Informally, this inequality asserts that the new information content of can be decreased, but not increased, if one is first told some additional information .

This inequality (33) can be rewritten in several ways:

Exercise 20 Let $\mathbf{X}$, $\mathbf{Y}$ be random variables taking values in finite sets $R$, $S$ respectively.

  • (i) Establish the chain rule

$$\mathbf{H}(\mathbf{X}, \mathbf{Y}) = \mathbf{H}(\mathbf{Y}) + \mathbf{H}(\mathbf{X}|\mathbf{Y}) \ \ \ \ \ (34)$$

    where $(\mathbf{X},\mathbf{Y})$ is the joint random variable. In particular, (33) can be expressed as a subadditivity formula

$$\mathbf{H}(\mathbf{X}, \mathbf{Y}) \leq \mathbf{H}(\mathbf{X}) + \mathbf{H}(\mathbf{Y}). \ \ \ \ \ (35)$$

    Show that equality occurs if and only if $\mathbf{X}, \mathbf{Y}$ are independent.

  • (ii) If $\mathbf{Y}$ is a function of $\mathbf{X}$, in the sense that $\mathbf{Y} = f(\mathbf{X})$ for some (deterministic) function $f: R \to S$, show that $\mathbf{H}(\mathbf{X}, \mathbf{Y}) = \mathbf{H}(\mathbf{X})$.
  • (iii) Define the mutual information $\mathbf{I}(\mathbf{X} : \mathbf{Y})$ by the formula

$$\mathbf{I}(\mathbf{X} : \mathbf{Y}) := \mathbf{H}(\mathbf{X}) + \mathbf{H}(\mathbf{Y}) - \mathbf{H}(\mathbf{X}, \mathbf{Y}).$$

    Establish the inequalities

$$0 \leq \mathbf{I}(\mathbf{X} : \mathbf{Y}) \leq \min( \mathbf{H}(\mathbf{X}), \mathbf{H}(\mathbf{Y}) )$$

    with the first inequality holding with equality if and only if $\mathbf{X}, \mathbf{Y}$ are independent, and the latter inequalities holding if and only if $\mathbf{X}$ is a function of $\mathbf{Y}$ (or vice versa).

From the above exercise we see that the mutual information is a measure of dependence between $\mathbf{X}$ and $\mathbf{Y}$, much as correlation or covariance was in the previous sections. There is however one key difference: whereas a zero correlation or covariance is a consequence but not a guarantee of independence, zero mutual information is logically equivalent to independence, and is thus a stronger property. To put it another way, zero correlation or covariance allows one to calculate the average ${\mathbb E} \mathbf{X} \mathbf{Y}$ in terms of individual averages of $\mathbf{X}, \mathbf{Y}$, but zero mutual information is stronger because it allows one to calculate the more general averages ${\mathbb E} f(\mathbf{X}) g(\mathbf{Y})$ in terms of individual averages of $f(\mathbf{X}), g(\mathbf{Y})$, for arbitrary functions $f, g$ taking values in the complex numbers. This increased power of the mutual information statistic will allow us to estimate various averages of interest in analytic number theory in ways that do not seem amenable to second moment methods.
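
The following sketch (illustrative; the two joint distributions are toy examples) computes mutual information directly from the formula of Exercise 20(iii) and exhibits the two extremes:

```python
import math

# Mutual information I(X:Y) = H(X) + H(Y) - H(X,Y), computed from a joint
# distribution given as a dict {(x, y): probability}.  Vanishing mutual
# information is equivalent to independence.
def H(probs):
    return sum(p * math.log(1 / p) for p in probs if p > 0)

def mutual_information(joint):
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p       # marginal of X
        py[y] = py.get(y, 0) + p       # marginal of Y
    return H(px.values()) + H(py.values()) - H(joint.values())

indep = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
equal = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(indep))   # 0: independent
print(mutual_information(equal))   # log 2: Y determines X
```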

The subadditivity formula (35) can be conditioned to any event $E$ occurring with positive probability (replacing the random variables by their conditioned counterparts), yielding the inequality

Applying this inequality to the level events of some auxiliary random variable taking values in another finite set , multiplying by , and summing, we conclude the inequality

In other words, the conditional mutual information

$$\mathbf{I}(\mathbf{X} : \mathbf{Y} | \mathbf{Z}) := \sum_{z} {\mathbb P}(\mathbf{Z} = z)\, \mathbf{I}\big( (\mathbf{X}|\mathbf{Z}=z) : (\mathbf{Y}|\mathbf{Z}=z) \big)$$

between $\mathbf{X}$ and $\mathbf{Y}$ conditioning on $\mathbf{Z}$ is always non-negative:

$$\mathbf{I}(\mathbf{X} : \mathbf{Y} | \mathbf{Z}) \geq 0. \ \ \ \ \ (36)$$

One has conditional analogues of the above exercise:

Exercise 21 Let $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$ be random variables taking values in finite sets $R$, $S$, $T$ respectively.

  • (i) Establish the conditional chain rule

$$\mathbf{H}(\mathbf{X}, \mathbf{Y} | \mathbf{Z}) = \mathbf{H}(\mathbf{Y} | \mathbf{Z}) + \mathbf{H}(\mathbf{X} | \mathbf{Y}, \mathbf{Z}) \ \ \ \ \ (37)$$

    and show that

$$\mathbf{I}(\mathbf{X} : \mathbf{Y} | \mathbf{Z}) = \mathbf{H}(\mathbf{X} | \mathbf{Z}) - \mathbf{H}(\mathbf{X} | \mathbf{Y}, \mathbf{Z}). \ \ \ \ \ (38)$$

    In particular, (36) is equivalent to the inequality

$$\mathbf{H}(\mathbf{X} | \mathbf{Y}, \mathbf{Z}) \leq \mathbf{H}(\mathbf{X} | \mathbf{Z}). \ \ \ \ \ (39)$$

  • (ii) Show that equality holds in (36) if and only if $\mathbf{X}, \mathbf{Y}$ are conditionally independent relative to $\mathbf{Z}$, which means that

$${\mathbb P}(\mathbf{X} = x, \mathbf{Y} = y | \mathbf{Z} = z) = {\mathbb P}(\mathbf{X} = x | \mathbf{Z} = z)\, {\mathbb P}(\mathbf{Y} = y | \mathbf{Z} = z)$$

    for any $x \in R$, $y \in S$, and $z \in T$ with ${\mathbb P}(\mathbf{Z} = z) > 0$.

  • (iii) Show that , with equality if and only if is almost surely a deterministic function of .
  • (iv) Show the data processing inequality

$$\mathbf{I}(f(\mathbf{X}) : g(\mathbf{Y})) \leq \mathbf{I}(\mathbf{X} : \mathbf{Y})$$

    for any functions $f: R \to R'$, $g: S \to S'$, and more generally that

$$\mathbf{I}(f(\mathbf{X}) : g(\mathbf{Y}) | \mathbf{Z}) \leq \mathbf{I}(\mathbf{X} : \mathbf{Y} | \mathbf{Z}).$$

  • (v) If $f: T \to T'$ is an injective function, show that

$$\mathbf{I}(\mathbf{X} : \mathbf{Y} | f(\mathbf{Z})) = \mathbf{I}(\mathbf{X} : \mathbf{Y} | \mathbf{Z}). \ \ \ \ \ (40)$$

    However, if $f$ is not assumed to be injective, show by means of examples that there is no order relation between the left and right-hand side of (40) (in other words, show that either side may be greater than the other). Thus, increasing or decreasing the amount of information that is known may influence the mutual information between two remaining random variables in either direction.

  • (vi) If is a function of , and also a function of (thus for some and ), and a further random variable is a function jointly of (thus for some ), establish the submodularity inequality

We now give a key motivating application of the Shannon entropy inequalities. Suppose one has a sequence $\mathbf{X}_1, \mathbf{X}_2, \mathbf{X}_3, \dots$ of random variables, all taking values in a finite set $R$, which are stationary in the sense that the tuples $(\mathbf{X}_1,\dots,\mathbf{X}_n)$ and $(\mathbf{X}_2,\dots,\mathbf{X}_{n+1})$ have the same distribution for every $n$. In particular we will have

$$\mathbf{H}(\mathbf{X}_{n+1} | \mathbf{X}_2,\dots,\mathbf{X}_n) = \mathbf{H}(\mathbf{X}_n | \mathbf{X}_1,\dots,\mathbf{X}_{n-1})$$

and hence by (39)

$$\mathbf{H}(\mathbf{X}_{n+1} | \mathbf{X}_1,\dots,\mathbf{X}_n) \leq \mathbf{H}(\mathbf{X}_n | \mathbf{X}_1,\dots,\mathbf{X}_{n-1}).$$

If we write $h_n := \mathbf{H}(\mathbf{X}_1,\dots,\mathbf{X}_n)$, we conclude from (34) that we have the concavity property

$$h_{n+1} - h_n \leq h_n - h_{n-1}.$$

In particular we have $h_{n+1} - h_n \leq h_i - h_{i-1}$ for any $1 \leq i \leq n$, which on summing over $1 \leq i \leq n$ and telescoping series (noting that $h_0 = 0$) gives

$$n (h_{n+1} - h_n) \leq h_n$$

and hence we have the entropy monotonicity

$$\frac{h_{n+1}}{n+1} \leq \frac{h_n}{n}. \ \ \ \ \ (41)$$

In particular, the limit $\lim_{n \to \infty} \frac{\mathbf{H}(\mathbf{X}_1,\dots,\mathbf{X}_n)}{n}$ exists. This quantity is known as the Kolmogorov-Sinai entropy of the stationary process $\mathbf{X}_1, \mathbf{X}_2, \dots$; it is an important statistic in the theory of dynamical systems, and roughly speaking measures the amount of entropy produced by this process as a function of a discrete time variable $n$. We will not directly need the Kolmogorov-Sinai entropy in our notes, but a variant of the entropy monotonicity formula (41) will be important shortly.

In our application we will be dealing with processes that are only asymptotically stationary rather than stationary. To control this we recall the notion of the total variation distance $d_{TV}(\mathbf{X}, \mathbf{Y})$ between two random variables $\mathbf{X}, \mathbf{Y}$ taking values in the same finite space $R$, defined by

$$d_{TV}(\mathbf{X}, \mathbf{Y}) := \sup_{E \subseteq R} |{\mathbb P}(\mathbf{X} \in E) - {\mathbb P}(\mathbf{Y} \in E)|.$$
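
For finitely-valued random variables the total variation distance is a one-liner to compute; the following sketch uses the sup-over-events normalisation above, which on a finite space works out to half the $\ell^1$ distance between the two distributions (Exercise 22 below relates the two normalisations):

```python
# Total variation distance between two distributions on the same finite set,
# given as dicts {value: probability}.  The sup over events E is attained at
# E = {r : p(r) > q(r)}, which equals half the l^1 distance.
def total_variation(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(r, 0) - q.get(r, 0)) for r in support)

uniform = {r: 0.25 for r in "abcd"}
skewed = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
print(total_variation(uniform, skewed))   # 0.2
```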

There is an essentially equivalent notion of this distance which is also often in use:

Exercise 22 If two random variables $\mathbf{X}, \mathbf{Y}$ take values in the same finite space $R$, establish the inequalities

$$d_{TV}(\mathbf{X}, \mathbf{Y}) \leq \sum_{r \in R} |{\mathbb P}(\mathbf{X} = r) - {\mathbb P}(\mathbf{Y} = r)| \leq 2 d_{TV}(\mathbf{X}, \mathbf{Y})$$

and for any function $f: R \to {\mathbb C}$ bounded in magnitude by $1$, establish the inequality

$$|{\mathbb E} f(\mathbf{X}) - {\mathbb E} f(\mathbf{Y})| \leq 2 d_{TV}(\mathbf{X}, \mathbf{Y}).$$

Shannon entropy is continuous in total variation distance as long as we keep the range finite. More quantitatively, we have

Lemma 23 If two random variables take values in the same finite space , then

with the convention that the error term vanishes when .

Proof: Set . The claim is trivial when (since then have the same distribution) and when (from (32)), so let us assume , and our task is to show that

If we write , , and , then

By dividing into the cases and we see that

since , it thus suffices to show that

But from Jensen’s inequality (32) one has

since , the claim follows.

In the converse direction, if a random variable has entropy close to the maximum , then one can control the total variation:

Lemma 24 (Special case of Pinsker inequality) If $\mathbf{X}$ takes values in a finite set $R$, and $\mathbf{U}$ is a uniformly distributed random variable on $R$, then

$$d_{TV}(\mathbf{X}, \mathbf{U}) \ll (\log |R| - \mathbf{H}(\mathbf{X}))^{1/2}.$$

Of course, we have $\mathbf{H}(\mathbf{U}) = \log |R|$, so we may also write the above inequality as

$$d_{TV}(\mathbf{X}, \mathbf{U}) \ll (\mathbf{H}(\mathbf{U}) - \mathbf{H}(\mathbf{X}))^{1/2}.$$

The optimal value of the implied constant here is known to equal $\frac{1}{\sqrt{2}}$, but we will not use this sharp version of the inequality here.

Proof: If we write and , and , then we can rewrite the claimed inequality as

Observe that the function is concave, and in fact for all . From this and Taylor expansion with remainder we may write

for some between and . Since is independent of , and , we thus have on summing in

By Cauchy-Schwarz we then have

Since and , the claim follows.
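Here is a quick numerical check of this special case of the Pinsker inequality, in the sharp form log|A| - H(X) >= 2 d(X,U)^2, where d is total variation in the sup-over-events normalisation. (The lemma above asserts only some absolute constant; the constant 2 is the sharp one for this particular normalisation, on the understanding that it changes if one uses the l^1 convention instead.)

```python
# Check: log|A| - H(X) >= 2 * d(X, U)^2 with U uniform on A and d the
# sup-over-events total variation distance (sharp Pinsker constant).
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    k = int(rng.integers(2, 10))
    p = rng.random(k) + 1e-9; p /= p.sum()   # random distribution on A
    deficit = np.log(k) + float((p * np.log(p)).sum())   # log|A| - H(X)
    d = float(np.maximum(p - 1.0 / k, 0).sum())          # d(X, U)
    assert deficit >= 2 * d * d - 1e-12
```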

The above lemma does not hold when the comparison variable is not assumed to be uniform; in particular, two non-uniform random variables can have precisely the same entropy and yet have different distributions (for instance, permuting the probabilities of a non-uniform distribution changes the distribution without changing the entropy), so that their total variation distance is positive. There is a more general variant, known as the Pinsker inequality, which we will not use in these notes:

Exercise 25 (Pinsker inequality) If take values in a finite set , define the Kullback-Leibler divergence of relative to by the formula

(with the convention that the summand vanishes when vanishes).

  • (i) Establish the Gibbs inequality .
  • (ii) Establish the Pinsker inequality

    In particular, vanishes if and only if have identical distribution. Show that this implies Lemma 24 as a special case.

  • (iii) Give an example to show that the Kullback-Leibler divergence need not be symmetric, thus there exist such that . (A small numerical example is sketched after this exercise.)
  • (iv) If are random variables taking values in finite sets , and are independent random variables taking values in respectively with each having the same distribution as , show that

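As promised in item (iii), here is a small numerical example of the asymmetry (the specific distributions are arbitrary choices of mine):

```python
# The Kullback-Leibler divergence is not symmetric.
import math

def kl(p, q):
    """D(p||q) in nats; assumes p, q strictly positive here."""
    return sum(a * math.log(a / b) for a, b in zip(p, q))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl(p, q), kl(q, p))   # about 0.511 and 0.368 -- not equal
```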
In our applications we will need a relative version of Lemma 24:

Corollary 26 (Relative Pinsker inequality) If takes values in a finite set , takes values in a finite set , and is a uniformly distributed random variable on that is independent of , then

Proof: From direct calculation we have the identity

As is independent of , is uniformly distributed on . From Lemma 24 we conclude

Inserting this bound and using the Cauchy-Schwarz inequality, we obtain the claim.

Now we are ready to apply the above machinery to give a key inequality that is analogous to Elliott’s inequality. Inequalities of this type first appeared in one of my papers, introducing what I called the “entropy decrement argument”; the following arrangement of the inequality and proof is due to Redmond McNamara (personal communication).

Theorem 27 (Entropy decrement inequality) Let be a random variable taking values in a finite set of integers, which obeys the approximate stationarity

for some . Let be a collection of distinct primes less than some threshold , and let be natural numbers that are also bounded by . Let be a function taking values in a finite set . For , let denote the -valued random variable

and let denote the -valued random variable

Also, let be a random variable drawn uniformly from , independently of . Then

The factor (arising from an invocation of the Chinese remainder theorem in the proof) unfortunately restricts the usefulness of this theorem to the regime in which all the primes involved are of “sub-logarithmic size”, but once one is in that regime, the second term on the right-hand side of (45) tends to be negligible in practice. Informally, this theorem asserts that for most small primes , the random variables and behave as if they are independent of each other.

Proof: We can assume , as the claim is trivial for (the all have zero entropy). For , we introduce the -valued random variable

The idea is to exploit some monotonicity properties of the quantity , in analogy with (41). By telescoping series we have

where we extend (44) to the case. From (38) we have

and thus

Now we lower bound the summand on the right-hand side. From multiple applications of the conditional chain rule (37) we have

and

where

We now use the approximate stationarity of to derive an approximate monotonicity property for . If , then from (39) we have

Write and

Note that is a deterministic function of and vice versa. Thus we can replace by in the above formula, and conclude that

The tuple takes values in a set of cardinality thanks to the Chebyshev bounds. Hence by two applications of Lemma 23, (43) we have

The first term on the right-hand side is . Worsening the error term slightly, we conclude that

and hence

for any . In particular

which by (47), (48) rearranges to

From (46) we conclude that

Meanwhile, from Corollary 26, (39), (38) we have

The probability distribution of is a function on , which by the Chinese remainder theorem we can identify with a cyclic group where . From (43) we see that the value of this distribution at adjacent values of this cyclic group varies by , hence the total variation distance between this random variable and the uniform distribution on is by Chebyshev bounds. By Lemma 23 we then have

and thus

The claim follows.

We now compare this result to Elliott's inequality. If one tries to address precisely the same question that Elliott's inequality does – namely, to compare a sum with sampled subsums – then the results are quantitatively much weaker:

Corollary 28 (Weak Elliott inequality) Let be an interval of length at least . Let be a function with for all , and let . Then one has

for all primes outside of an exceptional set of primes of logarithmic size .

Comparing this with Exercise 8 we see that we cover a much smaller range of primes ; also the size of the exceptional set is slightly worse. This version of Elliott's inequality is however still strong enough to recover a proof of the prime number theorem as in the previous section.

Proof: We can assume that is small, as the claim is trivial for comparable to . We can also assume that

since the claim is also trivial otherwise (just make all primes up to exceptional, then use Mertens’ theorem). As a consequence of this, any quantity involving in the denominator will end up being completely negligible in practice. We can also restrict attention to primes less than (say) , since the remaining primes between and have logarithmic size .
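As an aside, the Mertens computation implicitly used here is easy to reproduce numerically. Recall that earlier in these notes the logarithmic size of a set of primes P was defined as the sum of 1/p over p in P; by Mertens' theorem, the primes up to x have logarithmic size log log x + M + o(1), where M = 0.2615... is Mertens' constant. A brute-force sketch:

```python
# Logarithmic size of the primes up to x, versus Mertens' theorem.
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    s = bytearray([1]) * (n + 1)
    s[0] = s[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(s[p * p :: p]))
    return [p for p in range(2, n + 1) if s[p]]

x = 10**6
log_size = sum(1.0 / p for p in primes_up_to(x))
print(log_size, math.log(math.log(x)) + 0.2615)   # close agreement
```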

By rounding the real and imaginary parts of to the nearest multiple of , we may assume that takes values in some finite set of complex numbers of size with cardinality . Let be drawn uniformly at random from . Then (43) holds with , and from Theorem 27 with and (which makes the second term of the right-hand side of (45) negligible) we have

where are the primes up to , arranged in increasing order. By Markov’s inequality, we thus have

for outside of a set of primes of logarithmic size .

Let be as above. Now let be the function

that is to say picks out the unique component of the tuple in which is divisible by . This function is bounded by , and then by (42) we have

The left-hand side is equal to

which on switching the summations and using the large nature of can be rewritten as

Meanwhile, the right-hand side is equal to

which again by switching the summations becomes

The claim follows.

In the above argument we applied (42) with a very specific choice of function . The power of Theorem 27 lies in the ability to select many other such functions , leading to estimates that do not seem to be obtainable purely from the second moment method. In particular we have the following generalisation of the previous estimate:

Proposition 29 (Weak Elliott inequality for multiple correlations) Let be an interval of length at least . Let be a function with for all , and let . Let be integers. Then one has

for all primes outside of an exceptional set of primes of logarithmic size .

Proof: We allow all implied constants to depend on . As before we can assume that is sufficiently small (depending on ), that takes values in a set of bounded complex numbers of cardinality , and that is large in the sense of (49), and restrict attention to primes up to . By shifting the and using the large nature of we can assume that the are all non-negative, taking values in for some . We now apply Theorem 27 with and conclude as before that

for outside of a set of primes of logarithmic size .

Let be as above. Let be the function

This function is still bounded by , so by (42) as before we have

The left-hand side is equal to

which on switching the summations and using the large nature of can be rewritten as

Meanwhile, the right-hand side is equal to

which again by switching the summations becomes

The claim follows.

There is a logarithmically averaged version of the above proposition:

Exercise 30 (Weak Elliott inequality for logarithmically averaged multiple correlations) Let with , let be a function bounded in magnitude by , let , and let be integers. Show that

for all primes outside of an exceptional set of primes of logarithmic size .

When one specialises to multiplicative functions, this lets us dilate shifts in multiple correlations by primes:

Exercise 31 Let with , let be a multiplicative function bounded in magnitude by , let , and let be nonnegative integers. Show that

for all primes outside of an exceptional set of primes of logarithmic size .

For instance, setting to be the Möbius function, , , and (say), we see that

for all primes outside of an exceptional set of primes of logarithmic size . In particular, for large enough, one can obtain bounds of the form

for various moderately large sets of primes . It turns out that these double sums on the right-hand side can be estimated by methods which we will cover in later notes. Among other things, this allows us to establish estimates such as

as , which to date have only been established using these entropy methods (in conjunction with the methods discussed in later notes). This is progress towards an open problem in analytic number theory known as Chowla’s conjecture, which we will also discuss in later notes.
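To get a concrete feel for these estimates, one can compute logarithmically averaged correlations of the Liouville function numerically; the canonical two-point example is the bound that the sum of lambda(n) lambda(n+1) / n over n up to x is o(log x) as x tends to infinity. The following brute-force sketch (with no attempt at efficiency) illustrates the cancellation; of course it only exhibits smallness at a fixed height, and proves nothing:

```python
# Normalised logarithmically averaged two-point correlation of Liouville.
import math

def liouville(n):
    """Values lambda(1), ..., lambda(n) via a smallest-prime-factor sieve."""
    spf = list(range(n + 1))          # smallest prime factor of each k
    for p in range(2, int(n**0.5) + 1):
        if spf[p] == p:               # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (n + 1)
    lam[1] = 1
    for k in range(2, n + 1):
        lam[k] = -lam[k // spf[k]]    # one more prime factor flips the sign
    return lam

x = 10**6
lam = liouville(x)
s = sum(lam[n] * lam[n + 1] / n for n in range(1, x))
print(s / math.log(x))   # small compared to 1 if the correlations cancel
```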

Kategorije: Matematički blogovi

Almost all Collatz orbits attain almost bounded values

Uto, 2019-09-10 16:32

I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the $3x+1$ conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let $\mathbb{N}+1 = \{1,2,3,\dots\}$ denote the positive integers (with $\mathbb{N} = \{0,1,2,\dots\}$ the natural numbers), and let $\mathrm{Col}: \mathbb{N}+1 \rightarrow \mathbb{N}+1$ be the map defined by setting $\mathrm{Col}(N)$ equal to $3N+1$ when $N$ is odd and $N/2$ when $N$ is even. Let $\mathrm{Col}_{\min}(N) := \inf_{n \in \mathbb{N}} \mathrm{Col}^n(N)$ be the minimal element of the Collatz orbit $N, \mathrm{Col}(N), \mathrm{Col}^2(N), \dots$. Then we have

Conjecture 1 (Collatz conjecture) One has $\mathrm{Col}_{\min}(N) = 1$ for all $N \in \mathbb{N}+1$.
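For the reader who wants to experiment, here is a minimal sketch of the Collatz map and the minimal orbit value (the function names are mine); note that the brute-force loop below only terminates because the conjecture is known to hold far beyond this range:

```python
# The Collatz map and the minimal element of the Collatz orbit.
def col(n):
    return 3 * n + 1 if n % 2 else n // 2

def col_min(n):
    """Minimal element of the orbit n, Col(n), Col^2(n), ...; we stop once
    the orbit reaches 1 (entering the cycle 1 -> 4 -> 2 -> 1)."""
    m = n
    while n != 1:
        n = col(n)
        m = min(m, n)
    return m

assert all(col_min(n) == 1 for n in range(1, 10**5))
```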

Establishing the conjecture for all $N$ remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” $N$ in some sense. For instance, it is a result of Krasikov and Lagarias that

$\#\{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}$

for all sufficiently large $x$. In another direction, it was shown by Terras that for almost all $N$ (in the sense of natural density), one has $\mathrm{Col}_{\min}(N) < N$. This was then improved by Allouche to $\mathrm{Col}_{\min}(N) < N^\theta$ for any fixed $\theta > 0.869$, and extended later by Korec to cover all $\theta > \frac{\ln 3}{\ln 4} \approx 0.7924$. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):

Theorem 2 Let $f: \mathbb{N}+1 \rightarrow \mathbb{R}$ be any function with $\lim_{N \rightarrow \infty} f(N) = +\infty$. Then we have $\mathrm{Col}_{\min}(N) < f(N)$ for almost all $N \in \mathbb{N}+1$ (in the sense of logarithmic density).

Thus for instance one has $\mathrm{Col}_{\min}(N) < \log\log\log\log N$ for almost all $N$ (in the sense of logarithmic density).

The difficulty here is that one usually only expects to establish “local-in-time” results that control the evolution for times $n$ that only get as large as a small multiple of $\log N$; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get all the way down to one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state .

However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if one picks at random an integer from a large interval , then in most cases, the orbit of will eventually move into the interval . Similarly, if one picks an integer at random from , then in most cases, the orbit of will eventually move into . It is then tempting to concatenate the two statements and conclude that for most in , the orbit will eventually move into . Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn reaches , the distribution of the final value is unlikely to be close to being uniformly distributed on , and in particular could potentially concentrate almost entirely in the exceptional set of that do not make it into . The point here is that the uniform measure on is not transported by Collatz dynamics to anything resembling the uniform measure on .

So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map $\mathrm{Syr}: 2\mathbb{N}+1 \rightarrow 2\mathbb{N}+1$, defined on the odd numbers by setting $\mathrm{Syr}(N) := (3N+1)/2^{\nu_2(3N+1)}$, where $2^{\nu_2(3N+1)}$ is the largest power of $2$ that divides $3N+1$. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication by $3$ at each iteration step, which makes the map better behaved when performing “$3$-adic” analysis.)
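In code, the Syracuse map is simply one 3N+1 step followed by stripping all factors of 2; a minimal sketch:

```python
# The Syracuse map Syr(N) = (3N+1) / 2^{nu_2(3N+1)} on odd integers.
def syr(n):
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:    # divide out the largest power of 2
        m //= 2
    return m

assert syr(3) == 5 and syr(5) == 1   # 3 -> 10 -> 5, and 5 -> 16 -> 1
```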

When viewed $3$-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, $\mathrm{Syr}(N)$ is never divisible by $3$. A little less obviously, $\mathrm{Syr}(N)$ is twice as likely to equal $2$ mod $3$ as it is to equal $1$ mod $3$. This is because for a randomly chosen odd $N$, the number $\nu_2(3N+1)$ of times that $2$ divides $3N+1$ can be seen to have a geometric distribution of mean $2$ – it equals any given value $k \geq 1$ with probability $2^{-k}$. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of $3$. For instance, one can compute that for large random odd $N$, $\mathrm{Syr}^2(N)$ will take the residue classes $1, 2, 4, 5, 7, 8 \bmod 9$ with probabilities

respectively. More generally, for any $n$, $\mathrm{Syr}^n(N) \bmod 3^n$ will be distributed according to the law of a random variable $\mathbf{Syrac}(\mathbb{Z}/3^n\mathbb{Z})$ on $\mathbb{Z}/3^n\mathbb{Z}$ that we call a Syracuse random variable, and can be described explicitly as

$\mathbf{Syrac}(\mathbb{Z}/3^n\mathbb{Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \bmod 3^n, \qquad (1)

where $\mathbf{a}_1, \dots, \mathbf{a}_n$ are iid copies of a geometric random variable of mean $2$.
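One can observe this 3-adic irregularity directly by simulation: sampling large random odd N and tabulating Syr(N) mod 3 should produce frequencies close to the 1/3–2/3 split described above, and never the residue 0. A quick Monte Carlo sketch:

```python
# Distribution of Syr(N) mod 3 for random large odd N.
import random
from collections import Counter

def syr(n):                  # Syracuse map, as in the sketch above
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

random.seed(0)
trials = 10**5
counts = Counter(syr(2 * random.randrange(1, 10**12) + 1) % 3
                 for _ in range(trials))
print({r: c / trials for r, c in counts.items()})   # ~ {2: 2/3, 1: 1/3}
```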

In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this $3$-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables $\mathbf{Syrac}(\mathbb{Z}/3^n\mathbb{Z})$ to construct such a measure, but only if these random variables stabilise in the limit $n \rightarrow \infty$ in a certain total variation sense. More precisely, in the paper we establish the estimate

for any and any . This type of stabilisation is plausible from entropy heuristics – the tuple $(\mathbf{a}_1,\dots,\mathbf{a}_n)$ of geometric random variables that generates $\mathbf{Syrac}(\mathbb{Z}/3^n\mathbb{Z})$ has Shannon entropy $n \log 4$, which is significantly larger than the total entropy $n \log 3$ of the uniform distribution on $\mathbb{Z}/3^n\mathbb{Z}$, so we expect a lot of “mixing” and “collision” to occur when converting the tuple to $\mathbf{Syrac}(\mathbb{Z}/3^n\mathbb{Z})$; these heuristics can be supported by numerics (which I was able to work out up to about $n = 10$ before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.

A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers

are all distinct as vary over tuples in . Unfortunately, the process of reducing mod creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions

are mostly distinct for “typical” (as drawn using the geometric distribution) as long as is a bit smaller than (basically because the rational number appearing in (3) then typically takes a form like with an integer between and ). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of of density less than for some large absolute constant ). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of , and more precisely in showing that

for any and any that is not divisible by .

If the random variable (1) were the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic function does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming is even for sake of discussion) as

where . The point here is that after conditioning on the to be fixed, the random variables remain independent (though the distribution of each depends on the value that we conditioned to), and so the above expression is a conditional sum of independent random variables. This lets one express the characteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression

is not close to an integer for a moderately large number (, to be precise) of indices . (Actually, for technical reasons we have to also restrict to those for which , but let us ignore this detail here.) To put it another way, if we let denote the set of pairs for which

we have to show that (with overwhelming probability) the random walk

(which we view as a two-dimensional renewal process) contains at least a few points lying outside of .

A little bit of elementary number theory and combinatorics allows one to describe the set as the union of “triangles” with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of . The most difficult case is when the renewal process passes through a particularly large triangle in . However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of that one can finish the proof of (4), and thus Theorem 2.

Kategorije: Matematički blogovi

254A announcement: Analytic prime number theory

Sri, 2019-09-04 22:46

In the fall quarter (starting Sep 27) I will be teaching a graduate course on analytic prime number theory.  This will be similar to a graduate course I taught in 2015, and in particular will reuse several of the lecture notes from that course, though it will also incorporate some new material (and omit some material covered in the previous course, to compensate).  I anticipate covering the following topics:

  1. Elementary multiplicative number theory
  2. Complex-analytic multiplicative number theory
  3. The entropy decrement argument
  4. Bounds for exponential sums
  5. Zero density theorems
  6. Halasz’s theorem and the Matomaki-Radziwill theorem
  7. The circle method
  8. (If time permits) Chowla’s conjecture and the Erdos discrepancy problem

Lecture notes for topics 3, 6, and 8 will be forthcoming.


Kategorije: Matematički blogovi