1 Introduction
One of the goals in dynamical systems is to classify families of continuous maps according to their dynamical properties. One way to do this is through a topological invariant, defined in an ordered set, which somehow measures the dynamical complexity of the systems. The notion of topological entropy achieves this. It measures the exponential growth rate at which orbits of a system are separated. It was introduced by Adler, Konheim, and McAndrew in [Reference Adler, Konheim and McAndrew1], and later on, Dinaburg [Reference Dinaburg14] and Bowen [Reference Bowen6] gave new equivalent definitions.
The objective of this work is to provide a framework wherein we can generalize the classical notion of entropy, allowing study beyond the scope of exponential growth. We will show that this construction is particularly useful for studying families of dynamical systems with vanishing entropy. Moreover, we will see that the space of orders of growth in which orbits are separated is wilder than expected. This is achieved by studying different types of examples.
We shall begin this article with the construction of what we call the complete set of orders of growth.
We start by considering the space of non-decreasing sequences in $[0,\infty )$:
$$\mathcal {O}=\{a:\mathbb {N}\to [0,\infty )\ \text{such that}\ a(n)\leq a(n+1)\ \text{for all}\ n\in \mathbb {N}\}.$$
In this space, we define an equivalence relation as follows: given $a_1,a_2\in \mathcal {O}$ , we say that $a_1\approx a_2$ if there exist $c,C\in (0,+\infty )$ such that $c\, a_1(n)\leq a_2(n)\leq C\, a_1(n)$ for all $n\in \mathbb {N}$ . This is commonly written as $a_1\in \Theta (a_2)$ , and two sequences are related precisely when they have the same order of growth. Because of this, we call the quotient space $\mathbb {O}$ the space of orders of growth. If $a$ belongs to $\mathcal {O}$ , we denote by $[a(n)]$ the class associated to $a$, which is an element of $\mathbb {O}$ . If a sequence is given by a formula (e.g., $n^2$), we will represent its order of growth by the formula between brackets: $[n^2]\in \mathbb {O}$ .
Since $\mathbb {O}$ is the space of orders of growth, there is a clear notion of some orders of growth being faster than others. This concept defines a partial order on $\mathbb {O}$ , which we formalize as follows: given $[a_1(n)],[a_2(n)] \in \mathbb {O}$ , we say that $[a_1(n)] \leq [a_2(n)]$ if there exists $C>0$ such that $a_1(n) \leq C\, a_2(n)$ for all $n\in \mathbb {N}$ . This partial order is well defined because it does not depend on the choice of representatives $a_1$ and $a_2$ .
We now have a partially ordered set $(\mathbb {O},\leq )$ . We recall that the properties defining a partial order are reflexivity ( $o \leq o, \ \forall o\in \mathbb {O}$ ), antisymmetry (if $o_1\leq o_2$ and $o_2\leq o_1$ , then $o_1 = o_2$ ), and transitivity (if $o_1 \leq o_2$ and $o_2\leq o_3$ , then $o_1 \leq o_3$ ). For our purposes, we would like to be able to take ‘limits’ in this space, and we therefore need to complete it. We say that a set $L$ with a partial order is a complete lattice if every subset $A\subset L$ has both an infimum and a supremum. We consider now $\overline {\mathbb {O}}$ , the Dedekind–MacNeille completion of $\mathbb {O}$ . This is the smallest complete lattice which contains $\mathbb {O}$ . In particular, it is uniquely defined, and from now on, we will consider that $\mathbb {O}\subset \overline {\mathbb {O}}$ . We will also call $\overline {\mathbb {O}}$ the complete set of orders of growth. Another way to define $\overline {\mathbb {O}}$ is to consider on $\mathbb {O}$ the order topology and then take the compactification of $\mathbb {O}$ that respects the partial order.
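For readers who want to experiment, the following short Python sketch (ours, purely illustrative; the finite horizon, the particular test sequences, and the heuristic itself are our assumptions and not part of the construction) tests the relation $[a_1(n)]\leq [a_2(n)]$ by checking whether the ratios $a_1(n)/a_2(n)$ appear bounded on a finite window. This gives only suggestive evidence; the actual relation is asymptotic.

```python
# Heuristic, finite-horizon test of the relation [a1(n)] <= [a2(n)]:
# the relation holds when a1(n)/a2(n) stays bounded, so we check whether
# the largest ratio on a finite window is attained early on.
import math

def seems_dominated(a1, a2, n_max=500):
    ratios = [a1(n) / a2(n) for n in range(1, n_max + 1)]
    return ratios.index(max(ratios)) < n_max // 2

if __name__ == "__main__":
    n2    = lambda n: n ** 2
    n_log = lambda n: n * math.log(n + 1)
    exp_t = lambda n: math.exp(0.1 * n)
    print(seems_dominated(n_log, n2))   # True:  [n log n] <= [n^2]
    print(seems_dominated(n2, n_log))   # False: [n^2] is strictly faster
    print(seems_dominated(n2, exp_t))   # True:  polynomial <= exponential
```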
Since $\overline {\mathbb {O}}$ is not a total order, just a partial order, we are not going to represent its elements on a line; we are going to represent them in the plane. Given $o,u\in \overline {\mathbb {O}}$ , if we draw $o$ to the right of $u$, then $o$ and $u$ may or may not be comparable – but if they are, $u<o$ . However, if we draw them on the same horizontal line and $o$ is to the right of $u$, then $u<o$ holds.
We want now to define the entropy of dynamical systems in the complete space of orders of growth. We assume that the reader is familiar with the notion of topological entropy (see, e.g., [Reference Manning25], [Reference Viana and Oliveira38], [Reference Walters40] for more details). Let us briefly recall the concepts involved.
Given M a compact metric space and $f:M\to M$ a continuous map, we define the dynamical ball $B(x,n,\epsilon )=\{y\in M: d_n(x,y)\leq \epsilon \}$ , where $d_n(x,y)=\sup \left \{d\left (f^i(x),f^i(y)\right ):0\leq i \leq n\right \}$ . A set $E\subset M$ is an $(n,\epsilon )$ -generator if $M = \bigcup _{x\in E} B(x,n,\epsilon )$ . By the compactness of M, there always exists a finite $(n,\epsilon )$ -generator set. We then define $g(f,\epsilon ,n)$ as the smallest possible cardinality of a finite $(n,\epsilon )$ -generator. If we fix $\epsilon>0$ , we observe that $g(f,\epsilon ,n)$ is a non-decreasing sequence of natural numbers. And for a fixed n, if $\epsilon _1>\epsilon _2$ , then $g(f,\epsilon _1,n)\leq g(f,\epsilon _2,n)$ .
We will set our notation as follows: the sequence $g_{f,\epsilon }\in \mathcal {O}$ is defined by $g_{f,\epsilon }(n) =g(f,\epsilon ,n)$ . By the foregoing, we deduce that $\left [g_{f,\epsilon _1}(n)\right ]\geq \left [g_{f,\epsilon _2}(n)\right ]$ if $\epsilon _1<\epsilon _2$ . If we consider $G_f=\left \{\left [g_{f,\epsilon }(n)\right ]\in \mathbb {O}:\epsilon> 0\right \}$ , then we define the generalized topological entropy of f as
$$o(f)=\sup \left (G_f\right )\in \overline {\mathbb {O}}.$$
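To make the quantity $g(f,\epsilon ,n)$ concrete, here is a short Python sketch (ours, purely illustrative; the doubling map $f(x)=2x \bmod 1$, the grid discretization of the circle, and the greedy covering are our choices and are not part of the text). The greedy count only upper-bounds the minimal cover of the sampled grid, so the output indicates the growth rate rather than the exact values.

```python
# Sketch: approximate g(f, eps, n) for the circle map f(x) = 2x mod 1 by
# greedily covering a finite grid of sample points with dynamical balls
# B(x, n, eps).  The greedy count upper-bounds the minimal cover of the
# grid, so it only indicates the growth rate (here roughly exp(log(2) n)).

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_n(f, x, y, n):
    """Bowen distance: largest distance along the first n + 1 iterates."""
    dist = 0.0
    for _ in range(n + 1):
        dist = max(dist, circle_dist(x, y))
        x, y = f(x), f(y)
    return dist

def greedy_generator_size(f, eps, n, grid_size=1000):
    grid = [i / grid_size for i in range(grid_size)]
    uncovered = set(range(grid_size))
    count = 0
    while uncovered:
        center = grid[next(iter(uncovered))]      # any uncovered point
        uncovered = {j for j in uncovered
                     if d_n(f, center, grid[j], n) > eps}
        count += 1
    return count

if __name__ == "__main__":
    f = lambda x: (2.0 * x) % 1.0
    counts = [greedy_generator_size(f, 0.1, n) for n in range(1, 7)]
    print(counts)
    print([round(b / a, 2) for a, b in zip(counts, counts[1:])])  # ratios ~ 2
```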
The first thing we want to state about generalized entropy is that it is a topological invariant.
Theorem 1. Let M and N be two compact metric spaces and $f:M\to M$ , $g:N\to N$ two continuous maps. Suppose there exists $h:M\to N$ a homeomorphism such that $h\circ f = g \circ h$ . Then $o(f)=o(g)$ .
Recall that the topological entropy of a map is defined as
$$h(f)=\lim _{\epsilon \to 0}\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{n}.$$
The natural question now is how generalized topological entropy is related to topological entropy. The answer to this question is very simple: the classical notion of topological entropy is the projection of generalized entropy into the family of exponential orders of growth.
The exponential orders of growth are the classes of the sequences $\{\exp (tn)\}_{n\in \mathbb {N}}$ , where $t\in (0,\infty )$. The family of exponential orders of growth is then the set
$$\mathbb {E}=\left \{[\exp (tn)]\in \mathbb {O}: t\in (0,\infty )\right \}.$$
Although it is not necessary for now, we take the opportunity to remark that the elements $\inf (\mathbb {E})$ and $\sup (\mathbb {E})$ belong to $\overline {\mathbb {O}}$ and are both abstract orders of growth which are not realizable by any sequence.
Once we have established the family of exponential orders of growth $\mathbb {E}$ , let us explain how to compare an element $o\in \overline {\mathbb {O}}$ with $\mathbb {E}$ . Given $o\in \overline {\mathbb {O}}$ , we consider the interval $I_{\mathbb {E}}(o)= \{t\in (0,\infty ):o \leq [\exp (tn)]\}\subset \mathbb {R}$ . We observe that the order of growth $o$ might not be comparable to any element of $\mathbb {E}$ , and therefore the set $I_{\mathbb {E}}(o)$ might be empty. In any case, we define the projection $\pi _{\mathbb {E}}:\overline {\mathbb {O}} \to [0,\infty ]$ by the following rule:
• If $I_{\mathbb {E}}(o)\neq \emptyset $ , then $\pi _{\mathbb {E}}(o)=\inf (I_{\mathbb {E}}(o))$ .
• If $I_{\mathbb {E}}(o)=\emptyset $ , then $\pi _{\mathbb {E}}(o)=\infty $ .
Now that we have defined how to project an order of growth into the family of exponential orders of growth, let us state our second theorem.
Theorem 2. Let M be a compact metric space and $f:M\to M$ a continuous map. Then $\pi _{\mathbb {E}}(o(f))=h(f)$ , and $o(f)\leq \sup (\mathbb {E})$ .
We would like to point out that we are projecting into the closure of the set of indexes that define $\mathbb {E}$ , not into $\mathbb {E}$ itself. The reason for this is that $\overline {\mathbb {O}}$ is so big that $\mathbb {E}$ is not a closed set, and it is in fact discrete.
Let us show some examples:
Example 1. If $\Sigma _k = \{1,\dotsc ,k\}^{\mathbb {N}}$ and $\sigma :\Sigma _k \to \Sigma _k$ is the shift, then we know that
$$g_{\sigma ,\epsilon }(n) = C(\epsilon )\, k^{n},$$
where $C(\epsilon )$ is a constant which depends only on $\epsilon $ . When we consider the order of growth associated to such a sequence, we can ignore $C(\epsilon )$ , and then $\left [g_{\sigma ,\epsilon }(n)\right ] = [\exp (\log (k)n)]$ for all $\epsilon $ . This implies that $o(\sigma )= [\exp (\log (k)n)]$ .
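As a small sanity check of this computation, the following sketch (ours; it assumes the standard metric $d(x,y)=2^{-\min \{i\,:\,x_i\neq y_i\}}$ on $\Sigma _2$, which the text does not specify) verifies on tiny parameters that one representative per cylinder of depth $n+m$ gives a maximal $(n,\epsilon )$-separated set for $\epsilon =2^{-m}$, so that $s_{\sigma ,\epsilon }(n)=2^{m}\,2^{n}$, in line with $C(\epsilon )\,k^{n}$.

```python
# Sketch: (n, eps)-separated sets for the full 2-shift, assuming the metric
# d(x, y) = 2^{-min{i : x_i != y_i}}.  With eps = 2^{-m}, two sequences are
# (n, eps)-separated exactly when they disagree in the first n + m symbols,
# so the 2^{n+m} cylinders of depth n + m give a maximal separated set.
from itertools import product

def d(x, y):
    for i, (a, b) in enumerate(zip(x, y)):
        if a != b:
            return 2.0 ** (-i)
    return 0.0

def d_n(x, y, n):
    return max(d(x[j:], y[j:]) for j in range(n + 1))

def separated_cylinder_count(n, m, pad=4):
    eps = 2.0 ** (-m)
    # one representative per cylinder of depth n + m, padded with zeros
    words = [w + (0,) * pad for w in product((0, 1), repeat=n + m)]
    assert all(d_n(x, y, n) > eps for x in words for y in words if x != y)
    return len(words)

if __name__ == "__main__":
    n, m = 3, 2
    print(separated_cylinder_count(n, m), 2 ** m * 2 ** n)   # both print 32
```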
The next example shows a dynamical system such that its generalized entropy is an abstract order of growth $\left (\text {an element of }\overline {\mathbb {O}} \setminus \mathbb {O}\right )$ .
Example 2. Consider $\Sigma = [0,1]^{\mathbb {N}}$ and $\sigma :\Sigma \to \Sigma $ the shift. In this case, it is not hard to see that
$$g_{\sigma ,\epsilon }(n) = C(\epsilon )\left (\frac {2}{\epsilon }\right )^{n},$$
where $C(\epsilon )$ depends only on $\epsilon $ .
From this, we deduce that $\left [g_{\sigma ,\epsilon }(n)\right ] =[\exp (\log (2/\epsilon )n)]$ , and since $\{\log (2/\epsilon ): 0 < \epsilon < 1\} = (\log (2),\infty )$ , we conclude that $o(\sigma ) = \sup (\mathbb {E})$ .
We would like to recall that it is also possible to construct examples with $o(f)=\sup (\mathbb {E})$ in the context of manifolds, with $C^0$ maps.
We would like to know whether there are other examples whose generalized topological entropy is an abstract order of growth. We know that expansivity, in the compact case, is an obstruction to this phenomenon (this is proved in Appendix B).
Since the space of maps such that $0<h(f)<\infty $ is relatively well understood, we ask what can be said in our context about maps such that $h(f)=\infty $ or $h(f)=0$ . The first case is answered by Theorem 2: the inequality $o(f) \leq \sup (\mathbb {E})$ implies that $h(f)=\infty $ if and only if $o(f)=\sup (\mathbb {E})$ . In particular, from the standpoint of separation of orbits, maps with infinite entropy cannot be told apart.
On the other hand, much can be said when $h(f)=0$ . For now, we are going to restrict ourselves to understanding those systems that have the simplest dynamics. Let us introduce an important element of $\overline {\mathbb {O}}$ . Since $\overline {\mathbb {O}}$ is a complete lattice, it has a minimum. In fact, the minimum of $\overline {\mathbb {O}}$ already belongs to $\mathbb {O}$ : it is the equivalence class of the constant sequences. To simplify the notation, we are going to denote this element by $0$ .
We are interested in knowing which maps satisfy $o(f)=0$ . The following theorem answers this question and gives a simple condition that yields at least linear growth. Recall that $\alpha (x)$ (the $\alpha $ -limit) is the set of accumulation points of the backward orbit of x, and $\omega (x)$ (the $\omega $ -limit) is the set of accumulation points of the forward orbit of x.
Theorem 3. Let M be a compact metric space and $f:M\to M$ a continuous map. Then $o(f)=0$ if and only if f is Lyapunov stable. In addition, if f is a homeomorphism and there exists $x\in M$ such that $x\notin \alpha (x)$ , then $o(f) \geq [n]$ .
The first part of Theorem 3 has already been proved by Blanchard, Host, and Mass in [Reference Blanchard, Host and Mass5], where the property $o(f)=0$ is called ‘bounded complexity’ and Lyapunov-stable maps are called ‘equicontinuous’. However, in this article we are also going to offer an alternate proof.
Recall that $Rec(f)$ is the set of recurrent points of f and $\Omega (f)$ is the set of nonwandering points of f. From the second part of the previous theorem, we conclude the following corollary:
Corollary 1. Let $f:M\to M$ be a continuous map on a compact metric space. If $o(f)<[n]$ , then every point is recurrent, and therefore $Rec(f)=\Omega (f)= M$ . In particular, when M is connected, either there exists $k>0$ such that $f^k = Id$ or f has a point x whose $\omega $ -limit is not a periodic orbit.
Our next objective is to discuss how to classify dynamical systems through generalized topological entropy. At first glance, one would be tempted to say that f is more chaotic than g if $o(f)>o(g)$ . This notion has two problems. The first is that, since no information is lost when considering the generalized topological entropy, $o(f)$ can detect separation of orbits in places where topological entropy cannot. For example, a simple consequence of the variational principle is that $h\left (f_{\lvert \Omega (f)}\right ) = h(f)$ . However, in the context of generalized topological entropy, this is false. We naturally have that $o\left (f_{\lvert \Omega (f)}\right ) \leq o(f)$ , yet there are examples where the inequality is strict. This means that $o(f)$ can detect separation of orbits in places like the wandering set, and we consider that this should be taken into account. The strict inequality also holds between other important dynamical sets.
Example 3. There exists a map $f:\mathbb {D}^2\to \mathbb {D}^2$ such that $o\left (f_{\lvert \overline {\mathrm{Rec}(f)}}\right ) = 0, o\left (f_{\lvert \Omega (f)}\right ) = [n]$ , and $o(f)\geq \left [n^2\right ]$ .
This example is constructed and explained in §3.3, and therefore we move on with our discussion. The second problem is that in the context of topological entropy, the word ‘chaotic’ is reserved for maps with positive entropy. However, in our context, we work mostly with maps with vanishing entropy, and therefore we would prefer another word for maps with positive generalized entropy. Since positive generalized entropy reflects separation of orbits, we choose the word ‘dispersion’. Because of this, we propose the following criterion. We say that f is more dispersive than g if
• $ o\left (f_{\lvert \Omega (f)}\right )> o\left (g_{\lvert \Omega (g)}\right )$ or
• $ o\left (f_{\lvert \Omega (f)}\right )= o\left (g_{\lvert \Omega (g)}\right )$ and $o(f)>o(g)$ .
We would like to stress that focusing on the nonwandering set and the whole space is a matter of preference. We could very well add into the discussion the limit set, the closure of the recurrent set, the chain recurrent set, or the closure of the union of the supports of all the invariant measures. The choice of which sets to consider should depend on the family of maps one is working with.
We will call the tuple $\left (o\left (f_{\lvert \Omega (f)}\right ),o(f)\right )$ the entropy numbers of f. With this criterion, we can prove the following:
Theorem 4. In the space of homeomorphisms of the circle, there are three categories:
• f has entropy numbers $(0,0)$ and is Lyapunov stable;
• f has entropy numbers $(0,[n])$ , is not Lyapunov stable, and has periodic points; or
• f has entropy numbers $([n],[n])$ and is a Denjoy map.
In particular, in the space of homeomorphisms of the circle, Denjoy maps are more dispersive than Morse–Smale maps, which are more dispersive than rotations.
We would like to recall that every homeomorphism of the circle has zero topological entropy. Therefore, with generalized topological entropy we can distinguish maps which are indistinguishable by topological entropy.
With this perspective, we cannot say that irrational rotations are dynamically more complex than rational rotations, since both of them have entropy numbers $(0,0)$ . On the other hand, Morse–Smale maps have bigger entropy numbers than irrational rotations. Now, the extra complexity of irrational rotations comes from the structure of the orbits, not from the separation of the orbits itself. This implies that in the context of vanishing entropy, orbit structure and dispersion of orbits are not intrinsically related as they are in the context of positive entropy. In particular, we presume that in the context of homeomorphisms of the circle, both the rotation number and generalized entropy are the keys to classifying them.
We continue our study of maps with vanishing entropy by reviewing previous works. In all of them, the polynomial entropy of dynamical systems is studied. From our point of view, polynomial entropy is not a sufficient tool to measure dispersion of orbits for maps with vanishing entropy (this will be shown in Theorem 5). For now, let us recall what polynomial entropy is. This concept was introduced by Marco in the context of integrable Hamiltonian maps [Reference Marco27]; the definition is
$$h_{\mathrm{pol}}(f)=\lim _{\epsilon \to 0}\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{\log (n)}.$$
If we define the family of polynomial orders of growth by $\mathbb {P}=\left \{\left [n^t\right ]\in \mathbb {O}: t\in (0,\infty )\right \}$ , then by the arguments of Theorem 2 we infer that
$$\pi _{\mathbb {P}}(o(f))= h_{\mathrm{pol}}(f).$$
Figure 2 is a representation of the set $\left \{o(f)\in \overline {\mathbb {O}}:f\text { is a continuous map}\right \}$ that we add to give some perspective.
Polynomial entropy was first studied by Labrousse [Reference Labrousse23], for flows on the torus and for circle homeomorphisms. In particular, for circle homeomorphisms she showed that the polynomial entropy is always $0$ or $1$ , and that $0$ is attained only by homeomorphisms conjugate to a rotation. Our Theorem 4 is more general for two reasons. First, we take into account the nonwandering set. Second, saying $o(f)=[n]$ is stronger than saying $h_{\mathrm{pol}}(f)=1$ , because, for example, $\pi _{\mathbb {P}}([\log (n)n])=1$ .
A second work on polynomial entropy is [Reference Bernard and Labrousse4], in which Bernard and Labrousse study the polynomial entropy of geodesic flows of Riemannian metrics on the $2$-torus. They prove that the geodesic flow has polynomial entropy $1$ if and only if the torus is isometric to a flat torus.
After this came work by Artigue, Carrasco-Olivera, and Monteverde [Reference Artigue, Carrasco-Olivera and Monteverde3] showing two examples:
(1) a continuous map $f:M\to M$ , where M is a compact metric space, such that $h_{\mathrm{pol}}(f)=0$ yet f is not Lyapunov stable;
(2) for each $c>1$ , a continuous map $f:M\to M$ , where M is a compact metric space, such that $\frac {1}{c+1} \leq h_{\mathrm{pol}}(f)\leq \frac {1}{c}$ .
In our context, more can be said using their technique. In fact, the first example satisfies $o(f)= [\log (n)]$ and the second one satisfies $\left [n^{\frac {1}{c+1}}\right ] \leq o(f) \leq \left [n^{\frac {1}{c}}\right ]$ .
Finally, in [Reference Hauseux and Le Roux19] Hauseux and Le Roux study the polynomial entropy of Brouwer homeomorphisms. We would like to point out that since all the points in a Brouwer homeomorphism are wandering, there is no recurrence involved in the entropy of such maps. In that work, Hauseux and Le Roux define the wandering polynomial entropy of a map, and prove that a Brouwer homeomorphism has wandering polynomial entropy $1$ if and only if it is conjugate to a translation. No Brouwer homeomorphism has wandering polynomial entropy in the open interval $(1,2)$ . And for every $\alpha \in [2,\infty ]$ there exists a Brouwer homeomorphism $f_\alpha $ with wandering polynomial entropy $\alpha $ . Those results were translated to our context by de Paula in her doctoral thesis [Reference de Paula13].
Having discussed previous works, we now question whether studying only polynomial orders of growth is sufficient to understand maps with vanishing entropy. Since we have a complete picture of homeomorphisms of the circle, we move to studying generalized entropy on surfaces.
In our next example, we are going to construct a family of transitive maps, all of them with $0$ topological entropy and such that the generalized entropies form an interesting set in $\overline {\mathbb {O}}$ . We also would like to argue that studying generalized entropy is necessary, and that polynomial entropy is not enough.
Our next theorem concerns the generalized entropy of cylindrical cascades. For us, a cylindrical cascade is a map $f:S^1\times \mathbb {R} \to S^1\times \mathbb {R} $ of the form $f(x,y)=(x+\alpha , y +\varphi (x))$ , where $\varphi :S^1\to \mathbb {R}$ is a $C^1$ map. We will call $\mathcal {C}$ the family of cylindrical cascades. In the study of cylindrical cascades, higher dimension and higher regularity are commonly considered; however, for our purposes this setup will be sufficient. The study of this type of dynamics is related to Fathi and Herman’s work in [Reference Fathi and Herman16] and the constructions via fast approximations developed by Anosov and Katok [Reference Anosov and Katok2]. A relevant fact about cylindrical cascades that we would like to stress is that all of them are isotopic to the identity.
Dynamical properties of these maps have been studied by many researchers. Recurrence in higher dimension has been studied by Yoccoz [Reference Yoccoz41], [Reference Yoccoz42] and by Chevallier and Conze [Reference Chevallier and Conze9]. Transitivity has been studied by Gottschalk and Hedlund [Reference Gottschalk and Hedlund18], and examples given by Sidorov [Reference Shub and Williams37]. Ergodic properties have been studied by Krygin [Reference Krygin22] and Conze [Reference Conze11] for the case $S^1\times \mathbb {R}$ . For higher dimension, Conze [Reference Conze10] worked in the case of fibers in the Heisenberg group, and most notably, Cirilo and Fayad communicated to us privately the genericity of ergodic maps in the general case $\mathbb {T}^d\times \mathbb {R}^r$ .
Although $S^1\times \mathbb {R}$ is not a compact space, this is not a problem. By the definition of cylindrical cascades, we could very well project them to $\mathbb {T}^2$ and work there. Or we could define generalized topological entropy in noncompact spaces in the same way as Bowen [Reference Bowen6]. Since the projection from $S^1\times \mathbb {R}$ to $\mathbb {T}^2$ is a local isometry, both solutions are equivalent – that is, a cylindrical cascade and its projection have the same generalized entropy. We would like to clarify that Theorem 1 also holds in the noncompact case, but only for uniformly continuous conjugations. For more details on the noncompact case, see §2.1.
In the following theorem, we construct cylindrical cascades with arbitrarily slow generalized entropy.
Theorem 5. For every $o\in \overline {\mathbb {O}}$ there exists a cylindrical cascade $f\in \mathcal {C}$ such that f is transitive and $0<o(f)\leq o$ . Moreover, the maps in $\mathcal {C}$ which verify this are dense in $\mathcal {C}$ .
This theorem implies that for the family of cylindrical cascades, polynomial entropy is not sufficient. If we consider $o= \inf (\mathbb {P})$ , then we obtain a dense set of maps in $\mathcal {C}$ with $0$ polynomial entropy.
We would like to compare our approach with another possible perspective on measuring the separation of orbits in a family of dynamical systems. Given an order of growth $[b(n)]$ , we can construct the one-parameter family of orders of growth $\mathbb {B}=\left \{\left [b(n)^t\right ]:0<t<\infty \right \}$ . The set $\mathbb {B}$ is a natural generalization of the sets $\mathbb {E}$ and $\mathbb {P}$ . In fact, if $b(n) = e^n$ , then $\mathbb {B} = \mathbb {E}$ , and if $b(n)= n$ , then $\mathbb {B} = \mathbb {P}$ . If we define
$$h_{\mathbb {B}}(f)=\lim _{\epsilon \to 0}\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{\log (b(n))},$$
then by the arguments of Theorem 2 we deduce that $\pi _{\mathbb {B}}(o(f))= h_{\mathbb {B}}(f)$ .
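The following minimal sketch (ours; the test sequences and the finite horizon are illustrative assumptions) shows how such a projection is estimated in practice: a growth sequence $a(n)$ stands in for $g_{f,\epsilon }(n)$, and the exponent $\limsup _n \log (a(n))/\log (b(n))$ is approximated on a finite window.

```python
# Sketch: finite-horizon estimate of exponents of the form
#   limsup_n  log(a(n)) / log(b(n)),
# where a(n) stands in for g_{f,eps}(n) and b(n) is the base of the family
# B = {[b(n)^t]}.  With b(n) = e^n one recovers the classical (exponential)
# exponent, and with b(n) = n the polynomial one.
import math

def projection_estimate(a, b, n_max=300, window=50):
    vals = [math.log(a(n)) / math.log(b(n)) for n in range(2, n_max)]
    return max(vals[-window:])     # crude finite-horizon stand-in for limsup

if __name__ == "__main__":
    a = lambda n: n ** 2                                    # polynomial, degree 2
    print(round(projection_estimate(a, lambda n: n), 3))             # ~ 2.0
    print(round(projection_estimate(a, lambda n: math.exp(n)), 3))   # ~ 0.0
    a2 = lambda n: math.exp(0.5 * n)                        # exponential, rate 1/2
    print(round(projection_estimate(a2, lambda n: math.exp(n)), 3))  # ~ 0.5
```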
This gives a natural approach: given a family of dynamical systems $\mathcal {H}$ , instead of working with $o(f)$ , find an order of growth $[b(n)]$ such that $0 < h_{\mathbb {B}}(f) < \infty $ for every f in $\mathcal {H}$ . This perspective is tempting because dealing with $\lim _{\epsilon \to 0}\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{\log (b(n))}$ seems technically easier than dealing with $o(f)$ . We have two objections to this. First, from our experience, computing $o(f)$ is not much more difficult than computing $h_{\mathbb {B}}(f)$ for maps with $0$ topological entropy. Second, by Theorem 5 this approach is not enough for the family of cylindrical cascades. Given an order of growth $[b(n)]$ , we know that $0 < [\log (b(n))]<\inf (\mathbb {B})$ . By Theorem 5, there exists a dense set of maps in $\mathcal {C}$ such that $0 < o(f) \leq [\log (b(n))]$ . This implies that for any $\mathbb {B}$ , there exists a dense set in $\mathcal {C}$ with $h_{\mathbb {B}}(f) =0$ . Because of this, we conclude that in order to understand how cylindrical cascades separate orbits, we need to study their generalized topological entropy. Figure 3 represents the previous argument.
We would now like to show how the concept of generalized entropy allows us to formulate new questions and enrich our perspective. Let us recall Shub’s entropy conjecture and what is known so far. Given $M^m$ a manifold of dimension m and $f:M\to M$ a diffeomorphism, for each k in $\{0,\dotsc , m\}$ , consider $f_{*,k}: H_{k}(M,\mathbb {R})\to H_{k}(M,\mathbb {R})$ , the action induced by f on the real homology groups of M. If $sp\left (f_{*,k}\right )$ is the spectral radius of $f_{*,k}$ and $sp(f_*)=\max \left \{sp\left (f_{*,k}\right ): 0\leq k \leq \dim (M)\right \}$ , then Shub conjectured [Reference Sidorov35] that
$$\log (sp(f_*))\leq h(f).$$
Manning [Reference Mañé26] proved that the weaker inequality $\log \left (sp\left (f_{*,1}\right )\right )\leq h(f)$ always holds for homeomorphisms in any dimension. In particular, this implies that the conjecture is always true for homeomorphisms when $m\leq 3$ . This result was then improved by Bowen [Reference Bowen7], who studied the action on the fundamental group instead of the first homology group.
From the work of Palis, Pugh, Shub, and Sullivan [Reference Palis, Pugh, Shub and Sullivan32] and Kirby and Siebenmann [Reference Kirby and Siebenmann21], we can conclude that the conjecture holds for an open and dense subset of the space of homeomorphisms when $m\neq 4$ .
Marzantowicz, Misiurewicz, and Przytycki [Reference Marzantowicz and Przytycki28], [Reference Misiurewicz and Przytycki29] proved that the conjecture also holds for homeomorphisms of any infra-nilmanifold. Some weaker versions of the conjecture were proved by Ivanov [Reference Ivanov20], Misiurewicz and Przytycki [Reference Misiurewicz and Przytycki30], and Oliveira and Viana [Reference Oliveira and Viana31].
Major progress in the conjecture was made by Yomdin [Reference Yomdin43], who proved it for every $C^\infty $ diffeomorphism. When restricted to classes of dynamical systems with some kind of hyperbolicity, the conjecture was proved by Shub and Williams [Reference Shub36], Ruelle and Sullivan [Reference Ruelle and Sullivan33], and Saghin and Xia [Reference Saghin and Xia34]. So far, the strongest statement of this kind is the one by Liao, Viana, and Yang [Reference Liao, Viana and Yang24], who proved the conjecture for every diffeomorphism away from tangencies.
What is lacking in this context is a description of maps such that $sp(f_*)=1$ . We would like to observe that the framework of generalized topological entropy provides us with a language to study such problems. The following theorem is a contribution to this topic.
Theorem 6. Let M be a manifold of finite dimension and $f:M\to M$ a homeomorphism. If $sp\left (f_{*,1}\right )=1$ , then there exists k, which depends only on $f_{*,1}$ , such that $\left [n^k\right ] \leq o(f)$ . Moreover, k is computed as follows: consider J, the Jordan normal form associated to a matrix that represents $f_{*,1}$ . Let $k_{\mathbb {R}}$ be the maximum dimension among the Jordan blocks associated to either $1$ or $-1$ . Let $k_{\mathbb {C}}$ be the maximum dimension among Jordan blocks associated to complex eigenvalues. Then $k = \max \{k_{\mathbb {R}}, k_{\mathbb {C}} /2\} - 1$ .
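To illustrate the recipe of Theorem 6, here is a small sketch (ours) that reads $k$ off the Jordan normal form computed by sympy; the Dehn twist and identity matrices below are our own choices, matching the cases of Corollary 2, and the block-reading loop assumes the standard Jordan layout returned by sympy.

```python
# Sketch: compute the exponent k of Theorem 6 from an integer matrix
# representing f_{*,1}, via its Jordan normal form:
#   k = max(k_R, k_C / 2) - 1,
# where k_R is the largest Jordan block for the eigenvalues 1 or -1 and
# k_C the largest Jordan block for a non-real eigenvalue.  The formula is
# meant for the case sp(f_{*,1}) = 1 treated in the theorem.
import sympy

def theorem6_exponent(matrix):
    _, J = sympy.Matrix(matrix).jordan_form()
    k_real, k_complex = 0, 0
    i = 0
    while i < J.rows:                        # walk the Jordan blocks of J
        lam, size = J[i, i], 1
        while i + size < J.rows and J[i + size - 1, i + size] == 1:
            size += 1
        if lam in (1, -1):
            k_real = max(k_real, size)
        elif not lam.is_real:
            k_complex = max(k_complex, size)
        i += size
    return max(k_real, sympy.Rational(k_complex, 2)) - 1

if __name__ == "__main__":
    dehn_twist = [[1, 1], [0, 1]]            # one Jordan block of size 2 for 1
    print(theorem6_exponent(dehn_twist))     # 1: so [n] <= o(f), as in Corollary 2
    print(theorem6_exponent([[1, 0], [0, 1]]))   # 0: identity gives no lower bound
```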
We would like to recall that the examples built in Theorem 5 can be projected into $\mathbb {T}^2$ , and all of them are isotopic to the identity. In particular, $k=0$ for the identity, and no lower bound can be obtained in this category. We compile this information in the following corollary.
Corollary 2. In the space $Hom\left (\mathbb {T}^2\right )$ there are three categories:
• $f_{*,1}$ is hyperbolic and $\log \left (sp\left (f_{*,1}\right )\right )\leq h(f)$ ,
• $f_{*,1}$ is a Dehn twist and $[n]\leq o(f)$ , or
• $f_{*,1}$ is a matrix of the form
$$ A = \begin{pmatrix} \pm 1 & 0 \\ 0 & \pm 1 \end{pmatrix}, $$
in which case there are elements in the isotopy class with arbitrarily slow generalized entropy.
We are going to conclude this introduction with a few observations that were left over, two examples, and some questions we consider interesting.
Let us compute the generalized entropy of an example related to the previous theorem. We will consider skew products in the annulus $S^1\times [0,1]$ , where the map is the identity on the base $[0,1]$ and a rotation of a different angle on each fiber $S^1$ . Since the identity map of the interval and the rotations of the circle are all Lyapunov stable, on each piece the map has $0$ generalized entropy. Therefore, one might expect the skew product to also have $0$ generalized entropy. However, this is not the case.
Example 4. Consider the annulus $\mathbb {A}=S^1\times [0,1]$ , $\alpha :[0,1]\to [0,1]$ a continuous increasing map, and $R_{\alpha (t)}:S^1\to S^1$ the rotation in the circle of angle $\alpha (t)$ . If $f:\mathbb {A}\to \mathbb {A}$ is the homeomorphism defined by $f(s,t)=\left (R_{\alpha (t)}(s),t\right )$ , then f has entropy numbers $([n],[n])$ .
Observe that this example and Denjoy maps both have the same generalized entropy. Yet their dispersion of orbits comes from very different structures. The separation of orbits in Denjoy maps comes from expansive dynamics on a Cantor set, whereas the dispersion in the skew product comes from invariant circles moving at different speeds. This shows that generalized entropy is a sensitive tool, and that understanding the phenomena which cause positive generalized entropy could be a delicate problem.
Returning to the topic of generalized entropy in the noncompact case, we would like to study another dynamical system: the Boole map. This map is defined by
$$f(x)=x-\frac {1}{x},$$
and it is a classical example in infinite ergodic theory. Although the discontinuity of f at $0$ presents an obstruction to the definition of $o(f)$ , we circumvent this and prove the following:
Example 5. The Boole map verifies $o(f)=[\exp (\log (2)n)]$ .
Topological entropy can also be defined using separated sets, and thus arises a natural question: If we define generalized topological entropy with separated sets, do both definitions coincide? The answer is yes, and we prove this in §2.1. The same question can be asked for open coverings, and the answer is also yes. We give the proof of this in Appendix A, because throughout this paper we do not use open coverings.
It has come to our attention that Egashira [Reference Egashira15] made a similar construction to ours in the context of foliations, which was later translated by Walczak [Reference Walczak39] to the context of group actions. He does indeed define orders of growth classes and then ‘completes’ his space. However, his orders of growth classes are different from ours, since he allows comparison between some subsequences. On the other hand, he completes his space of orders of growth by considering the abstract limit of sequences of ordered classes. This way of completing the space, if translated to our construction, might result in a smaller set.
An important topic we have not discussed yet is metric entropy. A first difficulty here is the choice of a definition for generalized metric entropy. The classical approach through partitions is inconvenient, mainly because the Kolmogorov–Sinai theorem cannot be translated. This implies that in order to compute the generalized metric entropy of a map, one has to understand the metric entropy for every partition. Another interesting fact is that if a variational principle happens to be true, then it will hold only in the closure of the union of the supports of the invariant measures, not in the whole space. This observation can be seen in Example 3.
There are many interesting families of dynamical systems with vanishing entropy to study in the context of generalized entropy. We wonder what can be said about smooth reparametrizations of irrational flows, Cherry flows, unimodal maps, or the quadratic family, to mention a few. A question we propose in this topic is the realization of orders of growth. That is, given $\mathcal {H}$ a family of dynamical systems such that $o_i = \inf \{o(f):f\in \mathcal {H}\}$ and $o_s = \sup \{o(f):f\in \mathcal {H}\}$ , does there exist, for every $o_i \leq o \leq o_s$ , $f\in \mathcal {H}$ such that $o(f)=o$ ?
We have another question related to the topic of realization. Among the maps with vanishing entropy, we know there exist maps with generalized entropy in the family of polynomial orders of growth. By Theorem 5, there are also maps with arbitrarily slow entropy. We would like to find a dynamical system such that $\sup (\mathbb {P}) < o(f)< \inf (\mathbb {E})$ – for example, a map with $o(f)= \left [\exp \left (\sqrt n\right )\right ]$ .
It is also intriguing for us to know how dynamical properties interact with $o(f)$ . Theorem 3 and Proposition B.4 are results in this vein. We expect that topological mixing or weak mixing has some impact on $o(f)$ . Reciprocally, we would like to know if there is a setting such that positive generalized entropy implies certain growth in the number of periodic orbits. Cylindrical cascades with an irrational rotation have no periodic points, yet we believe that none of them has generalized entropy beyond $[n]$ . The examples from [Reference Hauseux and Le Roux19] have only one fixed point, and generalized entropy between $\left [n^2\right ]$ and $\sup (\mathbb {P})$ . Therefore, some type of recurrence like transitivity is probably required.
Another important topic in entropy is continuity, and here we have little hope. One of the problems is that, since $\overline {\mathbb {O}}$ is very big, the order topology on $\overline {\mathbb {O}}$ is poorly behaved. Also, Theorem 5 shows how wildly the function $f\mapsto o(f)$ can behave. In this family, we expect at most some type of upper semicontinuity in the $C^\infty $ topology.
A final question we think important is the understanding of generalized entropy at the border of chaos. Parametric families of dynamical systems where the classical entropy jumps from $0$ to positive values have been studied extensively, for instance for the Hénon maps. This study has also been extended by the second author, in a joint work with Crovisier and Tresser [Reference Crovisier, Pujals and Tresser12], to mildly dissipative diffeomorphisms of the disk. We wonder what can be said, in terms of the generalized entropy, about the maps with $0$ entropy in this context.
In this second-to-last paragraph of the introduction, we would like to specially thank the referee. They made many insightful comments and questions, some of which we share here. First, a definition for flows can be made in an analogous way, and it would be very interesting to understand the relationship, if there is any, with the generalized entropy of a Poincaré section. We think that the entropy of the flow is probably bigger than that of the Poincaré section, mainly because if the return time is unbounded, orbits could drift apart. One example to consider is the construction done by Fayad [Reference Fayad17], who reparametrized Liouville irrational flows to get weak-mixing volume-preserving nonsingular flows: the return map is always an irrational rotation and therefore has constant order of growth, but it seems to follow from his construction that the reparametrized flow would have a larger order of growth. Second, there is a phenomenon in polynomial entropy where there is a gap in the possible values attained by the maps of certain families of dynamical systems. The first author and de Paula have a work in progress where they seem to have an explanation in the context of wandering dynamics (compactification of Brouwer homeomorphisms). Third, the referee observed that the results of Example 4 could be improved and extended to higher dimension using Marco’s technique [Reference Marco27]. Fourth, the following question was raised by the referee: Are there any natural families of dynamical systems for which the generalized topological entropy is totally ordered or has continuity properties? Finally, we had missed the recent work by Cantat and Paris-Romaskevich [Reference Cantat and Paris-Romaskevich8], in which they compute an upper bound for the polynomial entropy of automorphisms of compact Kähler manifolds when the classical entropy vanishes.
This paper is structured as follows: in §2 we prove Theorems 1 and 2 and compute Example 5. In §3 we prove Theorems 3 and 4 and construct and explain Examples 3 and 4. In §4 we prove Theorem 5, and in §5 we prove Theorem 6. Then in Appendix A we study generalized topological entropy from the point of view of open coverings. Finally, in Appendix B we review some of the classical properties of topological entropy in the context of generalized topological entropy.
2 Generalized topological entropy
In this section, we will study the generalized topological entropy of continuous maps. First, we develop generalized topological entropy from the point of view of $(n,\epsilon )$ -separated sets; we also study the noncompact case. With this, we can prove Theorems 1 and 2. We end this section by computing the generalized entropy of the Boole map.
2.1 The noncompact case and separated sets
We start by observing that we can also define the generalized topological entropy of a system when M is not a compact set. We do this in a way analogous to the definition of entropy. Given M a metric space, $f:M\to M$ a continuous map, and $K\subset M$ a compact set, we say that $E\subset K$ is an $(n,\epsilon )$ -generator of K if $K \subset \bigcup _{x\in E} B(x,n,\epsilon )$ . Then we define $g_{f,K,\epsilon }(n)$ to be the minimum cardinality of an $(n,\epsilon )$ -generator of K, $G_{f,K} =\left \{\left [g_{f,K,\epsilon }(n)\right ]\in \mathbb {O}:\epsilon> 0\right \}$ , and $o(f,K)= \sup \left (G_{f,K}\right )\in \overline {\mathbb {O}}$ . Finally, we define $o(f) = \sup \left \{o(f,K)\in \overline {\mathbb {O}}:K\subset M \text { is compact}\right \}$ .
Another important observation is that the notion of entropy can be defined through $(n,\epsilon )$ -separated sets. We will define another generalized entropy through this perspective and see that both notions coincide. Given M a metric space, $f:M\to M$ a continuous map, and $K\subset M$ a compact set, we say that $E\subset K$ is $(n,\epsilon )$ -separated if $B(x,n,\epsilon )\cap E = \{x\}$ for all $x\in E$ . We define $s_{f,K,\epsilon }(n)$ as the maximal cardinality of an $(n,\epsilon )$ -separated set. Analogously as with $g_{f,K,\epsilon }$ , we know that $s_{f,K,\epsilon }$ is a nondecreasing sequence of natural numbers. Then we define $S_{f,K} = \left \{\left [s_{f,K,\epsilon }(n)\right ]\in \mathbb {O}:\epsilon> 0\right \}$ and $u(f,K)=\sup \left (S_{f,K}\right )\in \overline {\mathbb {O}}$ . Finally, we define $u(f) = \sup \left \{u(f,K)\in \overline {\mathbb {O}}:K\subset M \text { is compact}\right \}$ .
Proposition 2.1. Let us consider M a metric space and $f:M\to M$ a continuous map. If $K\subset M$ is a compact set, then $o(f,K) = u(f,K)$ . In particular, $o(f) = u(f)$ .
The proof of this proposition is a consequence of the following lemma:
Lemma 2.2. We have $g_{f,K,\epsilon }(n) \leq s_{f,K,\epsilon }(n)\leq g_{f,K,\epsilon /2}(n)$ for all $n\geq 1$ , for all $\epsilon>0$ , and for all compact $K\subset M$ .
A proof of this lemma can be found in [Reference Bowen6].
Proof of Proposition 2.1. By Lemma 2.2, we deduce that $\left [g_{f,K,\epsilon }(n)\right ] \leq \left [s_{f,K,\epsilon }(n)\right ]\leq \left [g_{f,K,\epsilon /2}(n)\right ]$ . The first inequality implies that $ o(f,K)\leq u(f,K)$ , and the second one implies that $ u(f,K)\leq o(f,K)$ . From this we conclude that $o(f,K)=u(f,K)$ .□
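The sandwich of Lemma 2.2 can also be observed numerically. The sketch below (ours; the doubling map and a finite grid standing in for the compact set $K$ are illustrative assumptions) computes a greedy cover and a greedy maximal-by-inclusion separated set, which only bound the true quantities, so the printed chain of inequalities is indicative rather than exact.

```python
# Sketch: numerical illustration of Lemma 2.2,
#   g(f, K, eps, n) <= s(f, K, eps, n) <= g(f, K, eps/2, n),
# for the doubling map, with a finite grid of the circle standing in for K.
# The greedy cover upper-bounds g and the greedy packing lower-bounds s.

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_n(f, x, y, n):
    dist = 0.0
    for _ in range(n + 1):
        dist = max(dist, circle_dist(x, y))
        x, y = f(x), f(y)
    return dist

def greedy_cover(f, pts, eps, n):
    uncovered, count = set(range(len(pts))), 0
    while uncovered:
        center = pts[next(iter(uncovered))]
        uncovered = {j for j in uncovered if d_n(f, center, pts[j], n) > eps}
        count += 1
    return count

def greedy_separated(f, pts, eps, n):
    chosen = []
    for p in pts:
        if all(d_n(f, p, q, n) > eps for q in chosen):
            chosen.append(p)
    return len(chosen)

if __name__ == "__main__":
    f = lambda x: (2.0 * x) % 1.0
    pts = [i / 600 for i in range(600)]
    for n in range(1, 5):
        print(n, greedy_cover(f, pts, 0.1, n),
                 greedy_separated(f, pts, 0.1, n),
                 greedy_cover(f, pts, 0.05, n))   # roughly g <= s <= g(eps/2)
```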
2.2 $o(f)$ is a topological invariant (proof of Theorem 1)
Proof of Theorem 1. Consider $f:M\to M$ and $g:N\to N$ , two continuous maps such that there exists $h:M\to N$ , a homeomorphism which satisfies $h\circ f = g \circ h$ . Given $\epsilon>0$ , consider $\delta>0$ given by the uniform continuity of h: if $d(x,y)\leq \delta $ , then $d(h(x),h(y))\leq \epsilon $ . Let E be an $(n,\epsilon )$ -separated set of g such that $s_{g,\epsilon }(n) = \#E$ . We claim that $h^{-1}(E)$ is an $(n,\delta )$ -separated set of f. If this were not true, then there would exist distinct $x_1,x_2\in h^{-1}(E)$ such that $d\left (f^k(x_1),f^k(x_2)\right )\leq \delta $ for all $0\leq k\leq n$ . By the choice of $\delta $ , we would then have $d\left (h\left (f^k(x_1)\right ),h\left (f^k(x_2)\right )\right )\leq \epsilon $ for all $0\leq k\leq n$ . Using the fact that h conjugates f and g, we see that $d\left (g^k(h(x_1)),g^k(h(x_2))\right )\leq \epsilon $ for all $0\leq k\leq n$ , which contradicts the fact that E is an $(n,\epsilon )$ -separated set of g.
If $h^{-1}(E)$ is an $(n,\delta )$ -separated set of f, we infer that $s_{f,\delta }(n) \geq \# h^{-1}(E) = \# E= s_{g,\epsilon }(n)$ . In particular, $\left [s_{f,\delta }(n)\right ]\geq \left [s_{g,\epsilon }(n)\right ]$ , and from this we deduce that $o(f)\geq o(g)$ . Since h is a homeomorphism, we analogously prove that $o(f)\leq o(g)$ , and then we conclude that $o(f)= o(g)$ .□
We would like to point out that this theorem also holds for the noncompact case where the conjugacy is uniformly continuous.
2.3 Relationship between $o(f)$ and $h(f)$ (proof of Theorem 2)
In order to prove that $\pi _{\mathbb {E}}(o(f))=h(f)$ , we would like to do two things: first, recall the definition of $\pi _{\mathbb {E}}:\overline {\mathbb {O}}\to [0,\infty ]$ . Once we consider the interval $I_{\mathbb {E}}(o)= \{t\in (0,\infty ):o \leq [\exp (tn)]\}\subset \mathbb {R}$ , we define $\pi _{\mathbb {E}}(o)=\inf (I_{\mathbb {E}} (o))$ if $I_{\mathbb {E}}(o)\neq \emptyset $ and $\pi _{\mathbb {E}}(o)=\infty $ otherwise. Second, we point out the following lemma, which we are not going to prove.
Lemma 2.3. The following four statements are equivalent:
(1) $[a_1(n)]\leq [a_2(n)]$ (there exists a constant $C$ such that $a_1(n) \leq C a_2(n)$ for all n).
(2) $\liminf _n \frac {a_2(n)}{a_1(n)}>0$ .
(3) $\limsup _n \frac {a_1(n)}{a_2(n)} <\infty $ .
(4) There exist a constant c and $n_0$ such that $a_1(n) \leq c a_2(n)$ for all $n\geq n_0$ .
Proof of $\pi _{\mathbb {E}}(o(f))=h(f)$
Let us suppose that $h(f)<\infty $ . If so, then for every $\epsilon>0$ and every $\delta>0$ ,
$$\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{n} \leq h(f) < h(f)+\delta ,$$
which implies
$$\limsup _n \frac {g_{f,\epsilon }(n)}{\exp ((h(f)+\delta )n)} < \infty .$$
This means that $\left [g_{f,\epsilon }(n)\right ] \leq [\exp ((h(f)+\delta )n)]$ , and therefore $ o(f) \leq [\exp ((h(f)+\delta )n)]$ . In particular, $\pi _{\mathbb {E}}(o(f))\leq (h(f)+\delta )$ for any $\delta $ , and then $\pi _{\mathbb {E}}(o(f))\leq h(f)$ . Moreover, if $h(f)<\infty $ , then $\pi _{\mathbb {E}}(o(f))<\infty $ .
Let us suppose that $\pi _{\mathbb {E}}(o(f))<\infty $ . Recalling that this means $I_{\mathbb {E}} (o(f)) \neq \emptyset $ , we take $t_0$ such that $ o(f) \leq [\exp (t_0 n)]$ . Then $ \left [g_{f,\epsilon }(n)\right ]\leq [\exp (t_0 n)]$ for all $\epsilon>0$ . This implies that
$$\limsup _n \frac {g_{f,\epsilon }(n)}{\exp (t_0 n)} < \infty ,$$
which is equivalent to the existence of a constant $c$ and $n_0$ such that $g_{f,\epsilon }(n)\leq c \exp (t_0 n)$ for all $n\geq n_0$ , and therefore
$$\limsup _n \frac {\log \left (g_{f,\epsilon }(n)\right )}{n} \leq t_0 .$$
Since this holds for all $\epsilon>0$ , we infer that $h(f) \leq t_0$ , and then $h(f) \leq \inf (I_{\mathbb {E}} (o(f)))=\pi _{\mathbb {E}}(o(f))$ . Moreover, if $\pi _{\mathbb {E}}(o(f))<\infty $ , then $h(f)<\infty $ .
From the previous two arguments we deduce two things. First, $h(f)<\infty $ if and only if $\pi _{\mathbb {E}}(o(f))<\infty $ . Second, if one of those is the case, then $h(f)= \pi _{\mathbb {E}}(o(f))$ .□
We would like to observe that Theorem 2 also holds when M is not compact.
To finish the proof of Theorem 2, it remains to show that $o(f)\leq \sup (\mathbb {E})$ . Since we are going to use the main argument of this proof again later, we state it separately. This argument will allow us to compute upper bounds for $s_{f,\epsilon }(n)$ .
Lemma 2.4. Let M be a compact metric space and $f:M\to M$ a continuous map. Let us fix $\epsilon>0$ and suppose that M can be covered by $B_1, \dotsc , B_k$ balls of radius $\epsilon /2$ . Let E be an $(n,\epsilon )$ -separated set and $\varphi :E\to \{1,\dotsc ,k\}^n$ a map which associates to each point an itinerary. This means that if $\varphi (x)= (i_0,\dotsc ,i_{n-1})$ , then $f^j(x)\in B_{i_j}$ . Then $\varphi $ is injective.
Proof. If not, we would have two points $x,y$ in E such that $d\left (f^i(x),f^i(y)\right )<\epsilon $ for all $0\leq i\leq n-1$ . This contradicts the fact that E is an $(n,\epsilon )$ -separated set.
We will call maps like $\varphi $ itinerary maps.
Proof of $o(f)\leq \sup (\mathbb {E})$
Let us fix $\epsilon>0$ and consider $B_1, \dotsc , B_k$ balls of radius $\epsilon /2$ which cover M. Take E an $(n,\epsilon )$ -separated set with $\#E = s_{f,\epsilon }(n)$ and $\varphi :E\to \{1,\dotsc ,k\}^n$ an itinerary map as in Lemma 2.4. We know by this lemma that $\varphi $ is injective, and therefore $s_{f,\epsilon }(n)\leq k^n$ . Since k depends on $\epsilon $ and $k(\epsilon )\to \infty $ as $\epsilon \to 0$ , we conclude that
$$o(f) = \sup _{\epsilon>0}\left [s_{f,\epsilon }(n)\right ] \leq \sup \left \{[\exp (\log (k(\epsilon ))n)]:\epsilon>0\right \} \leq \sup (\mathbb {E}).$$
□
2.4 Generalized topological entropy of the Boole map
We would like to finish this section with an example. The Boole map, defined by
$$f(x)=x-\frac {1}{x},$$
is a classical system in infinite ergodic theory. Before we compute its generalized entropy, we need to define it. The lack of compactness of the space is a problem we have already solved in a previous subsection. However, the Boole map has an extra obstruction: the existence of a discontinuity point. It is easy to observe that $f(1)=0$ and that there exists a sequence of points $x_k\nearrow _k 0$ such that $d(f(x_k), f(x_{k+1}))> 1$ . This implies the existence, in a compact interval, of infinitely many $\epsilon $ -distinguishable orbits at the very first step, which prevents measuring any type of growth, since we begin with infinitely many points. To circumvent this, we define the generalized entropy of f as the generalized entropy of f restricted to the maximal invariant set among the continuity points of f. If we denote this set by $\Lambda _f$ , for the Boole map it is the real line $\mathbb {R}$ minus the preorbit of $0$ .
We claim that $\Lambda _f$ can be written as the union of Cantor sets $\Lambda _k$ , where $o\left (f_{\lvert \Lambda _k}\right ) = [\exp (\log (2) n)]$ and therefore $o(f) = o\left (f_{\lvert \Lambda _f}\right )=\sup \left \{o\left (f_{\lvert \Lambda _k}\right ):k\in \mathbb {N}\right \}=[\exp (\log (2) n)]$ . The existence of such $\Lambda _k$ comes from the fact that when we consider the compactification of $\mathbb {R}$ , f becomes a continuous map of the circle $S^1$ conjugate to the map $g:S^1\to S^1$ defined by $g(x) = 2x\bmod 1$ .
If $\varphi :\overline {\mathbb {R}} \to S^1$ is such a conjugacy, then it verifies $\varphi (0) = 1/2\bmod 1$ and $\varphi (\infty )=0$ . Once this conjugacy is defined, we consider the open intervals $I_k=(1/k, 1/2-1/k)$ and $J_k= (1/2+1/k, 1-1/k)$ in the circle. We observe that g restricted to $I_k\cup J_k$ is Markovian. Therefore, the maximal invariant set for g in $I_k\cup J_k$ is a Cantor set $C_k$ , and $o\left (g_{\lvert C_k}\right )= [\exp (\log (2) n)]$ . In particular, if $\Lambda _k = \varphi ^{-1}(C_k)$ , then $o\left (f_{\lvert \Lambda _k}\right ) = o\left (g_{\lvert C_k}\right )$ . Consider $\Lambda _g$ the maximal invariant set of g in $S^1\setminus \left \{0, 1/2\right \}$ . Since
$$\bigcup _{k} \left (I_k\cup J_k\right ) = S^1\setminus \{0,1/2\},$$
we have $\Lambda _g = \cup _k C_k$ , and therefore $\Lambda _f=\cup _k \Lambda _k$ . From this we conclude our claim.
3 Maps with vanishing entropy
In this section, we first prove Theorem 3. Then we study the generalized entropy of homeomorphisms of the circle, which means proving Theorem 4. Finally, we construct and explain Examples 3 and 4.
3.1 Lyapunov-stable maps (proof of Theorem 3)
Proof of Theorem 3. We first prove that $o(f)=0$ if and only if f is Lyapunov stable.
$\Longrightarrow $ : By the definition of Lyapunov stability, given $\epsilon>0$ , there exists $\delta $ such that if $d(x,y)<\delta $ , then $d(f^n(x),f^n(y))<\epsilon $ for all $n\in \mathbb {N}$ . In particular, $B(x,\delta )\subset B(x,n,\epsilon )$ . Since M is compact, there exist $x_1,\dotsc ,x_k$ points in M such that $\{B(x_i,\delta ):i=1,\dotsc ,k\}$ is a covering of M. By the previous, we know that $\{B(x_i,n,\epsilon ):i=1,\dots ,k\}$ is a covering of M, and therefore $g_{f,\epsilon }(n)\leq k$ . This implies that $\left [g_{f,\epsilon }(n)\right ] = 0$ and therefore $o(f)=0$ .
$\Longleftarrow $ : Suppose that $o(f)=0$ , and observe that given $\epsilon>0$ , we conclude that $\left [s_{f,\epsilon }(n)\right ] \leq o(f)=0$ , and therefore $\left [s_{f,\epsilon }(n)\right ] =0$ . This implies that $s_{f,\epsilon }(n)$ is a bounded sequence.
Since $s_{f,\epsilon }(n)$ is a bounded nondecreasing sequence, it is eventually constant. Let us say $s_{f,\epsilon }(n) = s_{f,\epsilon }(n_0)$ for all $n\geq n_0$ . If a set E is $(n,\epsilon )$ -separated, then it is $(m,\epsilon )$ -separated for all $m>n$ . From this, we see that we can take E an $(n,\epsilon )$ -separated set such that $s_{f,\epsilon }(n)=\#E$ for all $n\geq n_0$ .
Recall that if an $(n,\epsilon )$ -separated set has cardinality $s_{f,\epsilon }(n)$ , then it is also an $(n,\epsilon )$ -generator. In particular, we know that for every $n\in \mathbb {N}$ and $x\in M$ there exists $y_n\in E$ such that $d_n(x,y_n)<\epsilon $ . Since E is finite and $d_m(x,y)\geq d_n(x,y)$ if $m>n$ , we can deduce that for every $x\in M$ there exists $y\in E$ such that $d_n(x,y)< \epsilon $ for all $n\in \mathbb {N}$ .
To simplify the notation, we will call $d_\infty (x,y)= \sup \{d_n(x,y):n\in \mathbb {N}\}$ .
Let us prove the result. Suppose by contradiction that there exists $\eta>0$ such that for every m there exist $x_m,y_m$ verifying $d(x_m,y_m)<1/m$ and $d_\infty (x_m,y_m)>\eta $ . Taking a subsequence if necessary, we can assume that there exist $z\in M$ and $x,y\in E$ such that $x_m\to z$ , $y_m\to z$ , $d_\infty (x_m,x)\leq \epsilon $ , and $d_\infty (y_m,y) \leq \epsilon $ . Since each $d_n$ is continuous, we conclude that $d_\infty (z,x)\leq \epsilon $ and $d_\infty (z,y)\leq \epsilon $ . Then we infer that
$$\eta < d_\infty (x_m,y_m) \leq d_\infty (x_m,x) + d_\infty (x,z) + d_\infty (z,y) + d_\infty (y,y_m) \leq 4\epsilon ,$$
and if we take $\epsilon < \eta /4$ we have a contradiction.
Let us show now that if there exists $x\in M$ such that $x\notin \alpha (x)$ , then $o(f) \geq [n]$ . If $x\notin \alpha (x)$ , then there exists $\epsilon>0$ such that $d(x,f^{-n}(x))\geq \epsilon $ for all $n\geq 1$ . Given $n\in \mathbb {N}$ , we claim that $\left \{f^{-i}(x):0\leq i <n\right \}$ is an $(n,\epsilon )$ -separated set. From this, $s_{f,\epsilon }(n)\geq n$ , and then $o(f)\geq \left [s_{f,\epsilon }(n)\right ]\geq [n]$ , which implies the result. To prove the claim, observe that given $0\leq i<j\leq n-1$ , we have $d_n\left (f^{-i}(x),f^{-j}(x)\right ){\,\geq\,} d\!\left (f^i\!\left (f^{-i}(x)\right ),f^i\!\left (f^{-j}(x)\right )\!\right ){\,=\,}d\!\left (x,f^{i-j}(x)\right )\!>\epsilon $ .□
Having proved Theorem 3, we conclude Corollary 1.
Proof of Corollary 1. The first part is immediate from the second part of Theorem 3. For the second part, if every point is periodic, the map which associates to each $x\in M$ its period is upper semicontinuous. Then it must have a maximum k, and therefore $f^k = Id$ .□
3.2 Homeomorphisms of the circle (proof of Theorem 4)
We split the proof of Theorem 4 into two lemmas.
Lemma 3.1. Let $f:S^1\to S^1$ be a homeomorphism. If f is not Lyapunov stable, then $o(f)=[n]$ .
Proof. Let us start by proving that $o(f)\leq [n]$ . The argument is the same as the one used to prove that if f is a homeomorphism of the circle, then $h(f)=0$ . Let us fix $\epsilon>0$ and consider a finite covering of $S^1$ by intervals of length $\epsilon $ . Suppose that $I_1, \dotsc , I_k$ are such intervals and let E be an $(n,\epsilon )$ -separated set with $\#E = s_{f,\epsilon }(n)$ . Again, we consider an itinerary map $\varphi :E\to \{1,\dotsc ,k\}^n$ . We know by Lemma 2.4 that $\varphi $ is injective. The difference here with respect to the second part of Theorem 2 is that we can prove $\# \varphi (E)\leq 4kn$ . Let us consider an admissible itinerary $(i_1,\dotsc ,i_n)$ . In particular, $\bigcap _{j=0}^{n-1}f^{-j}\left (I_{i_j}\right )\neq \emptyset $ , and therefore it is an interval with two endpoints. Observe also that each endpoint is an endpoint of some $f^{-j}\left (I_{i_j}\right )$ . Since we have $kn$ intervals $f^{-j}(I_{i})$ with $0\leq j\leq n-1$ and $1\leq i\leq k$ , we know that $\# \varphi (E)\leq 4kn$ . Therefore $ s_{f,\epsilon }(n)\leq 4kn$ , which implies $\left [s_{f,\epsilon }(n)\right ]\leq [n]$ , and then $o(f) \leq [n]$ .
We need now to prove $o(f)\geq [n]$ , and for this we are going to use the second part of Theorem 3. Let us consider first the case when f reverses orientation. In this case, f has two fixed points. Now, we have two possibilities for the remaining points: they are all periodic of period $2$ or there are wandering points. For the first case, f is Lyapunov stable, and for the second, by Theorem 3 we deduce that $o(f)\geq [n]$ .
Let us study the case when f preserves orientation. In this case, we have the well-defined rotation number $\rho (f)$ . If $\rho (f) =p/q\in \mathbb {Q}$ , we know that f has periodic points, they all have period q, and the nonwandering set of f consists only of these periodic points. Now, we have two possibilities. If $\Omega (f)=S^1$ , then f is Lyapunov stable. If $\Omega (f)\neq S^1$ , we have wandering points, and again by the second part of Theorem 3 we conclude.
If $\rho (f) \notin \mathbb {Q}$ , we know that f is semiconjugate to an irrational rotation. Since the rotation is Lyapunov stable, if f is in fact conjugate, then f is also Lyapunov stable. If not, f has wandering points, and analogously by the second part of Theorem 3 we have finished.
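To illustrate the lower bound numerically, the following sketch (ours; the particular Morse–Smale circle map, with an attracting fixed point at $0$ and a repelling one at $1/2$, is an assumption chosen for illustration) implements the backward-orbit argument of Theorem 3 and exhibits an explicit $(n,\epsilon )$-separated set with $n$ points, so that $s_{f,\epsilon }(n)\geq n$.

```python
# Sketch: the lower bound s_{f,eps}(n) >= n for a Morse-Smale circle map,
# following the backward-orbit argument of Theorem 3.  The map fixes 0
# (attracting) and 1/2 (repelling); its inverse is computed by bisection
# on the (strictly increasing) lift.
import math

def lift(x):
    return x - 0.15 * math.sin(2.0 * math.pi * x)

def f(x):
    return lift(x) % 1.0

def f_inverse(y):
    lo, hi = 0.0, 1.0               # bisection, since the lift is increasing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lift(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def d_n(x, y, n):
    dist = 0.0
    for _ in range(n + 1):
        dist = max(dist, circle_dist(x, y))
        x, y = f(x), f(y)
    return dist

if __name__ == "__main__":
    eps, n = 0.1, 12
    backward = [0.25]               # x = 1/4 is a wandering point of f
    for _ in range(n - 1):
        backward.append(f_inverse(backward[-1]))
    ok = all(d_n(p, q, n) > eps
             for i, p in enumerate(backward) for q in backward[i + 1:])
    print(len(backward), ok)        # n points, pairwise (n, eps)-separated
```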
The previous lemma does not allow us to distinguish a Denjoy map (in which f is only semiconjugate to the irrational rotation) from a Morse–Smale map (in which the nonwandering set consists only of a finite number of hyperbolic periodic points). To solve this, we have the following result:
Lemma 3.2. Let $f:S^1\to S^1$ be a homeomorphism. If f is a Denjoy map, then $o(f,\Omega (f))=[n]$ .
Proof. Since $o(f,\Omega (f))\leq o(f)\leq [n]$ , it only remains to prove that $o(f,\Omega (f)) \geq [n]$ .
We will call wandering intervals the connected components of $S^1\setminus \Omega (f)$ . Let us consider $\epsilon>0$ such that there exists some wandering interval of length greater than $\epsilon $ . We define now $A_1=\{I_1,\dotsc , I_k\}$ , the collection of all the wandering intervals of length greater than $\epsilon $ . We know that this is a finite set because $S^1$ has finite length. We proceed to define by induction the sets $A_{n+1}= f^{-1}(A_n)\cup A_1$ . Observe that for n big enough, $\#A_n\geq n$ . This is true because the intervals are wandering, and therefore at each step we have to add at least one new interval. To prove the result, let us observe that the two points $x,y$ in the boundary of an interval of $A_n$ belong to $\Omega (f)$ and $d_n(x,y)=\sup \left \{d\left (f^i(x),f^i(y)\right ): 0\leq i< n\right \}>\epsilon $ . Now, if we take one boundary point of each connected component of $S^1\setminus \cup _{I\in A_n} I $ , then by the previous argument we obtain an $(n,\epsilon )$ -separated set. This set has $\#A_n$ points and therefore $s_{f,\Omega (f),\epsilon }(n)\geq \#A_n$ , which implies $[n]\leq \left [s_{f,\Omega (f),\epsilon }(n)\right ]\leq o(f,\Omega (f))$ .
Let us now prove Theorem 4.
Proof of Theorem 4. For each homeomorphism of the circle f, we associate the tuple $(o(f,\Omega (f)), o(f))$ . For Denjoy maps we obtain $([n],[n])$ , and for Lyapunov-stable maps we obtain $(0,0)$ . For the rest, since $\Omega (f)=Per(f)$ we obtain $(0,[n])$ . In particular, by the criteria defined in the Introduction we infer that Denjoy maps are more dispersive than Morse–Smale maps, which are more dispersive than rotations.□
3.3 Example 3: A map with different entropy numbers
This example is inspired by [Reference Hauseux and Le Roux19] and Bowen’s eye map.
Consider $f:\mathbb {D}^2\to \mathbb {D}^2$ , the time $1$ map of a flow as in Figure 4.
Observe that in this flow there are two invariant regions, the inner disk and the outer ring. We will call D the inner disk and C its border. The purpose of the outer ring is to make the inner disk part of the nonwandering set. In particular, in C there are two singularities, which induce two parabolic fixed points $p_1$ and $p_2$ . It is not hard to see that $\overline {\mathrm{Rec}(f)}=\{p_1,p_2\} \cup \partial \mathbb {D}^2$ and $\Omega (f) =C \cup \partial \mathbb {D}^2$ . We choose the map on $\partial \mathbb {D}^2$ to be a rotation. From this, we conclude that $o\left (f, \overline {\mathrm{Rec}(f)}\right )= 0$ and $o(f,\Omega (f))=[n]$ .
It remains then to prove that $o(f)\geq \left [n^2\right ]$ . In order to do so, we use the technique developed in [Reference Hauseux and Le Roux19]. We are not applying that theory directly, because the setting is different, but the main argument still holds.
Let us consider two open sets $U_0$ and $U_1$ inside D such that the following are true:
• $U_0$ and $U_1$ are wandering sets.
• $U_0$ is in the lower half of the disk D.
• $U_1$ is in the upper half of the disk D.
• $\partial U_0 \cap C \neq \emptyset $ and $\partial U_1 \cap C \neq \emptyset $ .
Given $\epsilon>0$ , we consider $V_0=\{x\in U_0:d(x, D \setminus U_0)>\epsilon \}$ and $V_1=\{x\in U_1:d(x,D\setminus U_1)>\epsilon \}$ . If $\epsilon $ is small enough, then $V_0$ and $V_1$ are not empty, and moreover, $\partial V_i\cap C \neq \emptyset $ for $i=0,1$ .
We would like to code the orbits in $int(D)$ . For this, we define $R= D\setminus (V_0\cup V_1)$ . Now we fix n and consider the itinerary map $\varphi _n:int(D)\to \{V_0,V_1,R\}^{n+1}$ , which records, for each $0\leq i\leq n$ , which of the three sets contains $f^i(x)$ .
We claim to have the following property: There exists $k_0>0$ such that for all n, for all $k_0 \leq l\leq n$ , and for all $0\leq i\leq n-l$ , there exists $x\in D$ with $\varphi _n(x)=(w_0,\dotsc , w_n)$ verifying $w_i = V_0$ , $w_{i+l}= V_1$ , and $w_j= R$ for all $j\neq i, i+l$ .
This claim holds because the speed of the flow near the singularity becomes arbitrarily close to $0$ and we have points in $V_0$ and $V_1$ as near to C as we want.
For each pair $(i,l)$ we consider the point $x_{i,l}$ as in the claim. If we define $S_n=\left \{x_{i,l}:k_0\leq l \leq n, 0\leq i\leq n- l \right \}$ , then $S_n$ is an $(n,\epsilon )$-separated set. To see this, let us consider $x_{i,l},x_{i^{\prime },l^{\prime }}\in S_n$ with $\varphi _n\left (x_{i,l}\right )=(w_0,\dotsc , w_n)$ and $\varphi _n\left (x_{i^{\prime },l^{\prime }}\right )=\left (w^{\prime }_0,\dotsc , w^{\prime }_n\right )$ . Suppose that $i\neq i^{\prime }$ (the other case is analogous), and observe that $w_i = V_0$ and $w^{\prime }_{i^{\prime }}= V_0$ . Since $U_0$ is a wandering set and $f^{i^{\prime }}\left (x_{i^{\prime },l^{\prime }}\right )\in V_0\subset U_0$ , we infer that $f^i\left (x_{i^{\prime },l^{\prime }}\right )\notin U_0$ , while $f^i\left (x_{i,l}\right )\in V_0$ ; by the definition of $V_0$ , these two iterates are at distance greater than $\epsilon $ , and therefore $x_{i,l}$ and $x_{i^{\prime },l^{\prime }}$ are $(n,\epsilon )$-separated.
By a simple computation we see that $[\#S_n] = \left [n^2\right ]$ , and therefore $\left [n^2\right ]\leq \left [s_{f,\epsilon }(n)\right ]\leq o(f)$ .
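The count behind this computation is $\#S_n=\sum_{l=k_0}^{n}(n-l+1)$, a quadratic polynomial in n. The following minimal Python sketch (the cutoff $k_0$ is a hypothetical value chosen only for illustration) confirms numerically that $\#S_n/n^2$ stabilizes near $1/2$.

```python
# Minimal sketch: count the index pairs (i, l) labelling the separated set S_n
# above, for an illustrative cutoff k0, and compare the count with n^2.
def count_pairs(n, k0=3):
    # pairs (i, l) with k0 <= l <= n and 0 <= i <= n - l
    return sum(n - l + 1 for l in range(k0, n + 1))

for n in (10, 100, 1000):
    print(n, count_pairs(n), count_pairs(n) / n**2)  # the ratio tends to 1/2
```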
3.4 Example 4: Generalized entropy of some twist maps
We will study the generalized topological entropy of some of the homeomorphisms of the annulus $\mathbb {A}=S^1\times [0,1]$ , in particular those twist maps which leave the circles $S^1\times \{t\}$ invariant for all $t\in [0,1]$ .
To simplify the computation, we will use in $\mathbb {A}$ the metric
The first thing we do is lift f. Let us consider $\pi :\mathbb {R}\times [0,1]\to \mathbb {A}$ the natural projection, $\alpha :[0,1]\to [0,1]$ a continuous increasing map, and $f:\mathbb {A}\to \mathbb {A}$ defined by $f(s,t)=\left (R_{\alpha (t)}(s),t\right )$ . The map $F:\mathbb {R}\times [0,1]\to \mathbb {R}\times [0,1]$ such that $F(s,t)=(s+\alpha (t),t)$ is a lift of f, which satisfies $f\circ \pi = \pi \circ F$ . Observe that if $D=[0,1]\times [0,1]\subset \mathbb {R}\times [0,1]$ , then $o(F,D)=o(f)$ . This is true because the entropy is computed locally, $\pi $ is a local isometry, and $\pi $ conjugates F and f locally. Moreover, since F commutes with the action of the fundamental group of $\mathbb {A}$ , we deduce that $o(F)=o(f)$ .
The reason we are going through this is the following: given two points $x,y\in \mathbb {A}$ , in order to decide whether they are $(n,\epsilon )$-separated we need to know all the values $d(x,y), d(f(x),f(y)), \dotsc , d(f^n(x),f^n(y))$ . Now, if f is a twist map, any curve which is transverse to the horizontal direction at every point is stretched by every iterate of f. This implies that given any two close points $x=(s_1,t_1), y=(s_2,t_2)\in \mathbb {R}\times [0,1]$ with $t_1\neq t_2$ , if $d\left (F^k(x),F^k(y)\right )>\epsilon $ , then $d(F^n(x),F^n(y))>\epsilon $ for all $n>k$ . Although this might not happen for f, the $(n,\epsilon )$-balls are mapped isometrically by $\pi $ , and therefore this does not contradict the claim that $o(F,D)=o(f)$ . In particular, the previous discussion implies that we only need to consider the value of $d(F^n(x),F^n(y))$ to decide whether two close points that do not belong to the same horizontal line are $(n,\epsilon )$-separated.
We say that $\beta :[a,b]\to \mathbb {R}\times [0,1]$ is a vertical curve if $[a,b]\subset [0,1]$ and $\beta (t)=(s_0,t)$ for a fixed $s_0\in \mathbb {R}$ .
Lemma 3.3. Suppose that $F(s,t)=(s+\alpha (t),t)$ with $\alpha :[0,1]\to \mathbb {R}$ an increasing map. Given $\epsilon>0$ , consider $0\leq s_1<\dotsb <s_l\leq 1$ such that $s_{i+1}-s_i \leq \epsilon /2$ and also $\beta _1,\dotsc ,\beta _l:[a,b]\to \mathbb {R}\times [0,1]$ the vertical curves associated to $\{s_1,\dotsc ,s_l\}$ . Then there exists $G_\epsilon (n)$ an $(n,\epsilon )$ -generator of $[0,1]\times [a,b]$ such that
Proof. Observe that $d\left (F^n(s,t),F^n\left (\hat s, \hat t\right )\right )= \max \left \{\left \lvert n\left (\alpha \left (\hat t\right )-\alpha (t)\right ) + \hat s - s\right \rvert ,\left \lvert \hat t - t\right \rvert \right \}$ . If we consider $ t_1=a<t_2<\dotsb <t_q=b$ such that
then $G_\epsilon (n)= \left \{\beta _i\left (t_j\right ):1\leq i \leq l, 1\leq j \leq q\right \}$ is an $(n,\epsilon )$ -generator of $[0,1]\times [a,b]$ .
Since $\#G_\epsilon (n)=l\cdot q$ , we just need to compute q. For this, we sum equation (1) over all the indices and infer that $n(\alpha (b)-\alpha (a))\leq q \epsilon /2$ . Since $\alpha $ is continuous, we can take such $t_i$ verifying $q=\left \lceil 2 n(\alpha (b)-\alpha (a))/\epsilon \right \rceil $ , and with this the proof is finished.
We also have the following:
Lemma 3.4. Suppose that $F(s,t)=(s+\alpha (t),t)$ with $\alpha :[0,1]\to \mathbb {R}$ an increasing map. Given $\epsilon>0$ , consider $0\leq s_1<\dotsb <s_l\leq 1$ such that $s_{i+1}-s_i> \epsilon $ and also $\beta _1,\dotsc ,\beta _l:[a,b]\to \mathbb {R}\times [0,1]$ the vertical curves associated to $\{s_1,\dotsc ,s_l\}$ . Then there exists $S_\epsilon (n)$ an $(n,\epsilon )$ -separated set of $[0,1]\times [a,b]$ such that
The proof of this lemma is analogous to the proof of Lemma 3.3, and therefore we omit it.
Since $\Omega (f)=\mathbb {A}$ , we just need to prove that $o(f)=[n]$ . Now, by our previous arguments, we just need to see $o(F,D)=[n]$ , where $D=[0,1]\times [0,1]$ .
Given $\epsilon>0$ , consider $G_\epsilon (n)$ as in Lemma 3.3. We know that $g_{F,D,\epsilon }(n) \leq \#G_\epsilon (n)$ and therefore $\left [g_{F,D,\epsilon }(n)\right ]\leq [n]$ . This implies $o(F,D)\leq [n]$ . Analogously, we consider $S_\epsilon (n)$ as in Lemma 3.4. Since $s_{F,D,\epsilon }(n) \geq \#S_\epsilon (n)$ , we infer that $\left [s_{F,D,\epsilon }(n)\right ]\geq [n]$ , and then $o(F,D)\geq [n]$ .
4 Cylindrical cascades
As mentioned in the Introduction, a cylindrical cascade for us is a map $f:S^1\times \mathbb {R} \to S^1\times \mathbb {R} $ of the form $f(x,y)=(x+\alpha , y +\varphi (x))$ , where $\varphi :S^1\to \mathbb {R}$ is a $C^1$ map. We will work with cylindrical cascades using the classical approach – that is, studying $\varphi $ as the limit of trigonometric polynomials. To prove Theorem 5, we will construct an example with $o(f)\leq [a(n)]$ for some fixed sequence $a(n)$ and then explain why f is transitive and why we can build this type of example in a dense set of $\mathcal {C}$ .
Let us start by considering an irrational number $\alpha $ and $\{p_k/q_k\}_{k\in \mathbb {N}}$ the sequence of Diophantine approximations. Consider also a sequence $b_k$ which decreases to $0$ , and define $ \varphi _k:S^1\to [-1,1]$ by $\varphi _k(x) = b_k \cos (2\pi q_k x)$ ; then
The sequences $\{p_k/q_k\}_{k\in \mathbb {N}}$ and $\{b_k\}_{k\in \mathbb {N}}$ are not going to be arbitrary. In fact, their speed of convergence is the parameter we will adjust in order to obtain the result. We will discuss throughout the proof what conditions $b_k$ and $q_k$ need to verify. At the end of the construction, we will explain how to choose these numbers so that all the conditions are verified. To begin with, we need
for f to be a $C^1$ map.
Let us represent the Weyl sum of $\varphi _k$ under the rotation $R_\alpha $ by $S_n(\varphi _k) = \sum _{j=0}^{n-1} \varphi _k\circ R^j_\alpha $ . With this, when f is iterated we see that
4.1 Upper bound for $o(f)$
Our goal now is to construct f such that $o(f)\leq [a(n)]$ . Our strategy is set by the following lemma.
Lemma 4.1. Suppose $a(n)$ is a nondecreasing sequence such that $\sum _{k} \left \lvert S_n\left (\varphi ^{\prime }_k\right )\right \rvert \leq a(n)$ . Then $o(f)\leq [a(n)]$ .
Proof. The map f does not separate orbits in the vertical direction, so we need to estimate the separation of orbits coming from the horizontal direction. Let us fix $\epsilon $ and consider two points $x_1,x_2 \in S^1$ such that $d(x_1,x_2)\leq \epsilon /a(n)$ . By a simple computation, we deduce that
Therefore, $(x_1,y)$ and $(x_2,y)$ are not $(n,\epsilon )$-separated. This implies that $s_{f,\epsilon }(n)\leq a(n)/\epsilon ^2$ , and then $\left [s_{f,\epsilon }(n)\right ]\leq [a(n)]$ for every $\epsilon $ . From this, we conclude that $o(f)\leq [a(n)]$ .
This lemma gives us a way to bound $o(f)$ , which is to compute $\left \lvert S_n\left (\varphi ^{\prime }_k\right )\right \rvert $ .
4.2 Known facts about Diophantine approximations
Let us briefly recall some classical properties of Diophantine approximations. If
then $q_{k+1} = r_{k+1} q_k + q_{k-1}$ . Since $\frac {q_{k-1}}{q_k} \leq 1$ , we infer the estimate
We also know that
which implies
where $\left \lVert q_k \alpha \right \rVert $ is the distance in the circle between the projection of $0$ and $q_k \alpha $ .
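As a quick illustration of these classical facts, the following sketch (not part of the construction; $\alpha=\sqrt{2}-1$ is just a sample irrational) computes the convergents $p_k/q_k$ by the recurrence $q_{k+1}=r_{k+1}q_k+q_{k-1}$ and checks numerically that $q_{k+1}\left\lVert q_k\alpha\right\rVert$ stays between $1/2$ and $1$.

```python
import numpy as np

# Illustration only: continued fraction convergents p_k/q_k of a sample
# irrational alpha, together with q_{k+1} * ||q_k alpha||, which the classical
# estimates place between 1/2 and 1.
def convergents(alpha, count):
    p_prev, q_prev = 1, 0                    # (p_{-1}, q_{-1})
    p_cur, q_cur = int(np.floor(alpha)), 1   # (p_0, q_0)
    x, out = alpha, []
    for _ in range(count):
        out.append((p_cur, q_cur))
        x = 1.0 / (x - np.floor(x))          # next complete quotient
        a = int(np.floor(x))                 # next partial quotient r_{k+1}
        p_prev, q_prev, p_cur, q_cur = p_cur, q_cur, a * p_cur + p_prev, a * q_cur + q_prev
    return out

alpha = np.sqrt(2) - 1                       # sample irrational, [0; 2, 2, 2, ...]
convs = convergents(alpha, 8)
for k in range(len(convs) - 1):
    p_k, q_k = convs[k]
    q_k1 = convs[k + 1][1]
    dist = abs(q_k * alpha - round(q_k * alpha))   # ||q_k alpha||
    print(q_k, q_k1, q_k1 * dist)                  # stays in (1/2, 1)
```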
4.3 Upper bounds for the Weyl sum of the derivatives
In this subsection, we show two things: first, we obtain a constant upper bound for $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert $ valid for every n, and second, we obtain a linear upper bound valid up to a certain integer.
The upper bound we get for $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert $ comes from the fact that $\varphi _k^{\prime }$ has a solution for the cohomological equation, and therefore the orbit of a point moves along the graph of said solution.
Lemma 4.2. For every $n\in \mathbb {N}$ , $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert \leq 4\pi b_k q_k q_{k+1}$ .
Proof. To prove this, we start by observing that we can write $\varphi _k$ as
If we define the map
then we deduce that $\varphi _k(x) = u_k(x+\alpha )-u_k(x)$ . Therefore, $S_n\left (\varphi _k^{\prime }\right )(x)=u_k^{\prime }(x+n\alpha )-u_k^{\prime }(x)$ , and since
we infer that
Now $\lvert \exp (2\pi q_k i \alpha )-1\rvert $ is comparable to $\left \lVert q_k \alpha \right \rVert $ , and by formulas (3) and (4) we see that
Since we are studying orders of growth, the constant $4\pi $ can be ignored. Therefore, from now on we assume
We proceed to show that $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert $ has a linear upper bound up to a certain integer.
Lemma 4.3. If $n\leq \frac {\sqrt {q_{k+1}}}{\pi \sqrt {2 b_k q_k}}$ , then $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert \leq 2\pi b_k q_k n +1$ .
Proof. To prove this, given $x\in S^1$ we compare $S_n\left (\varphi _k^{\prime }\right )(x)$ with $- 2 \pi b_k q_k \sin (2\pi q_k x)n$ . Recall that $S_n\left (\varphi _k^{\prime }\right )(x) = \sum _{j=0}^{n-1} \varphi ^{\prime }_k\circ R^j_\alpha (x) = \sum _{j=0}^{n-1} - 2\pi b_k q_k \sin (2\pi q_k ( x + j\alpha ))$ , and therefore
Now, by the mean value theorem we deduce that
and if we combine this with formulas (3) and (4), we conclude that
By the previous, as long as n is such that $\frac {2\pi ^2 b_k q_k n^2}{q_{k+1}} \leq 1$ , we know that
That is, $\left \lvert S_n\left (\varphi _k^{\prime }\right )\right \rvert \leq 2 \pi b_k q_k n + 1$ up to $n\approx \frac {\sqrt {q_{k+1}}}{\pi \sqrt {2 b_k q_k}}$ .
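A small numerical sanity check of the two bounds follows; it is purely illustrative, since the value of $\alpha$, the index k, and the coefficient $b_k$ are sample choices and not the ones selected in the inductive construction below.

```python
import numpy as np

# Sample data, for illustration only: alpha = sqrt(2) - 1 has convergent
# denominators 1, 2, 5, 12, 29, 70, 169, 408 (Pell numbers); we take q_k = 29,
# q_{k+1} = 70 and an arbitrary small coefficient b_k.
alpha = np.sqrt(2) - 1
q_k, q_k1 = 29, 70
b_k = 1e-3

def S_n_phi_k_prime(n, x=0.123):
    # Weyl sum of phi_k'(x) = -2*pi*b_k*q_k*sin(2*pi*q_k*x) along the rotation
    js = np.arange(n)
    return np.sum(-2 * np.pi * b_k * q_k * np.sin(2 * np.pi * q_k * (x + js * alpha)))

n_star = int(np.sqrt(q_k1) / (np.pi * np.sqrt(2 * b_k * q_k)))  # threshold of Lemma 4.3
for n in (n_star // 2, n_star, 20 * n_star):
    s = abs(S_n_phi_k_prime(n))
    print(n, s,
          s <= 2 * np.pi * b_k * q_k * n + 1,   # Lemma 4.3 bound (meaningful for n <= n_star)
          s <= 4 * np.pi * b_k * q_k * q_k1)    # Lemma 4.2 bound (all n)
```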
Again, we ignore the constants that do not depend on k and n, so we are going to work with the equations
up to
Since we are going to want $\lim _k n_k = \infty $ , we need
This is given by the fact that $\lim _k q_k= \infty $ and $\lim _k b_k q_k =0$ (formula (2)).
We recapitulate the information obtained in the previous two lemmas in Figure 5.
4.4 Sum of the upper bounds
We proceed now to sum all of these upper bounds over k. Although it seems natural to add up these bounds on each interval $[n_{k-1}, n_k]$ , since $n_k$ depends on $b_k$ , working on such intervals would be troublesome for the inductive construction. Because of this, let us take a sequence $m_k$ such that $m_k\leq n_k$ and define the intervals $I_1 = [0,m_1]$ and $I_k= [m_{k-1}, m_k]$ . We cut the linear bound at $m_k$ and therefore, on each $I_k$ , the upper bounds add up to a function $C_k n + D_k$ , where
and
Figure 6 illustrates this piecewise linear sequence.
Observe that $C_k$ is the tail of the convergent series $\sum _k b_k q_k$ , and therefore the slopes in this piecewise linear sequence tend to $0$ .
Let us call $e(n) = C_k n + D_k$ if $m_{k-1}\leq n < m_k$ . Our goal now is to choose $b_k$ and $q_k$ such that $[e(n)]\leq [a(n)]$ . The construction will be by induction on k. However, since $n_k$ depends on $q_{k+1}$ , we have to choose $q_{k+1}$ in step k.
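To visualize the shape of e(n), here is a small sketch. It assumes, as in the discussion above with the constants dropped, that the contribution of the k-th term is the linear bound $b_kq_kn$ cut at $m_k$, so that $e(n)=\sum_k b_kq_k\min(n,m_k)$; the sample sequences $b_k$, $q_k$, $m_k$ are arbitrary choices with $\sum_k b_kq_k<\infty$, not the ones produced by the inductive construction.

```python
import numpy as np

# Illustration under the stated assumptions: e(n) = sum_k b_k q_k min(n, m_k).
# The slopes of e over consecutive blocks [m_{k-1}, m_k] decrease to 0, since
# they are the tails of the convergent series sum_k b_k q_k.
K = 8
q = np.array([2.0 ** (k + 1) for k in range(K)])        # sample q_k
b = np.array([4.0 ** (-(k + 1)) for k in range(K)])     # sample b_k, so b_k q_k = 2^-(k+1)
m = np.array([10.0 * 3 ** k for k in range(K)])         # sample cut points m_k

def e(n):
    return float(np.sum(b * q * np.minimum(n, m)))

for k in range(1, K):
    slope = (e(m[k]) - e(m[k - 1])) / (m[k] - m[k - 1])
    print(int(m[k]), slope, np.sum(b[k:] * q[k:]))      # slope equals the tail sum
```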
4.5 Inductive construction for the upper bound
For each k, and for each $j \leq k$ , define $C_j^k = \sum _{i= j}^{k} b_i q_i$ ; then for every $n\leq m_k$ we define $e^k(n) = C_j^k n + D_j$ if $m_{j-1} \leq n < m_j$ . We need to do this because $C_j$ depends on future $b_k$ and $q_k$ . We also define $e^k(m_k)= D_k$ . Once this is set, our inductive hypothesis is
Since $e(n) = \lim _k e^k(n)$ , if this holds, then we infer that $e(n) \leq a(n)$ and therefore that $[e(n)] \leq [a(n)]$ .
Suppose that $b_k$ , $q_{k+1}$ , and $m_k$ have been chosen such that $e^k(n) < a(n) - \frac {1}{2^k}\ \forall n \leq m_k$ . Fix some big $m_{k+1}$ ; then if $q_{k+2}$ is such that $\frac {\sqrt {q_{k+2}}}{\sqrt {q_{k+1}}}> m_{k+1}$ , since $b_{k+1}$ is going to be smaller than $1$ we have
We will first see which restrictions we need for $b_{k+1}$ such that the inductive hypothesis in step $k+1$ holds up to $m_k$ . We know that $C_j^k n + D_j < a(n) - \frac {1}{2^k}$ if $n \leq m_k$ , and we want
Now $C_j^{k+1}= C_j^k + b_{k+1}q_{k+1}$ , and thus we maintain our inductive property up to $m_k$ if we ask for $b_{k+1} q_{k+1} m_k <1/2^{k+1}$ .
Now, for n between $m_k$ and $m_{k+1}$ , we want
and
We observe that the inequality in equation (10) depends on $b_{k+1}$ but does not depend on $q_{k+2}$ . On the other hand, equation (11) depends on both $b_{k+1}$ and $q_{k+2}$ . Because of this, we choose $b_{k+1}$ first and then $q_{k+2}$ for both equations to hold.
Figure 7 illustrates the previous argument.
With this, we finish our study of the conditions needed for $o(f) \leq [a(n)]$ .
4.6 f is not Lyapunov stable
In this subsection, we investigate the conditions we need to impose for f not to be Lyapunov stable. For this to happen, we need to show that $s_{f,\epsilon }(n)$ is not a bounded sequence for some $\epsilon $ , or equivalently that $\lim _n s_{f,\epsilon }(n) = \infty $ . Again, to estimate $s_{f,\epsilon }(n)$ we are going to study $\left \lvert S_n\left (\varphi _k^{\prime }\right )(x)\right \rvert $ . However, in this situation we will control the Weyl sums not for every x but on large subsets of $S^1$ .
The idea to prove that $o(f)>0$ follows from the arguments of Lemma 4.3. We would like to observe that from the proof of that lemma we can conclude that
When $n= n_k$ , we infer that
Now, if we consider the set $\Lambda _k = \left \{x\in S^1:\lvert \sin (2\pi q_k x)\rvert> \frac {1}{\sqrt {2}} \right \}$ and $x\in \Lambda _k$ , then
We assert that when $x\in \Lambda _k$ , then $\sum _j S_{n_k}\left (\varphi _j^{\prime }\right )(x)$ is comparable to $\left \lvert S_{n_k}\left (\varphi _k^{\prime }\right )(x)\right \rvert $ . This happens for two reasons. For $j>k$ , $\left \lvert \sum _{j>k} S_{n_k}\left (\varphi _j^{\prime }\right )(x)\right \rvert $ is going to be small. If we define $f_k(x,y)= \left (R_\alpha (x), y +\sum ^k_{j=1} \varphi _j(R_\alpha (x))\right )$ , then we can choose $b_j$ for $j> k$ small enough such that
For $j<k$ , $\left \lvert \sum _{j=1}^{k-1} S_{n_k}\left (\varphi _j^{\prime }\right )(x)\right \rvert \leq D_k$ , and we can choose $b_k$ and $q_{k+1}$ such that
For this choice, if $x\in \Lambda _k$ , then we deduce that
Now, $\Lambda _k$ is the union of $q_k$ intervals of length $\frac {A}{q_k}$ , where A is a constant independent of k. If I is one of these intervals, there are in I at least $ \frac {\left (\sqrt {b_k q_k q_{k+1}} - 1\right )A}{2\epsilon q_k}$ points which are $(n,\epsilon )$ -separated. If $q_{k+1}$ is big enough such that
then $s_{f,\epsilon }(n_k) \geq q_k$ . Therefore, $\lim _k s_{f,\epsilon }(n_k)= \infty $ , which implies that $o(f)>0$ .
4.7 Coherence in the inductive construction and final remarks
It remains to verify that there is no conflict in the choices of $b_k$ and $q_k$ . Let us state the construction process here. Suppose that we have chosen $m_k$ , $b_k$ , and $q_{k+1}$ . We first pick $m_{k+1}$ big enough such that $D_{k+1} < \sqrt {a(m_{k+1}) -D_{k+1}}/2$ . This restriction is for formula (13). Then, we obtain $\hat q_{k+2}$ , a lower bound for $q_{k+2}$ , such that $n_{k+1}$ is going to be bigger than $m_{k+1}$ (formula (8)). We continue by choosing $b_{k+1}$ such that formulas (2), (9), (10), and (12) hold. Observe that none of these formulas depends on $q_{k+2}$ ; they depend only on $b_{k+1}$ , and equation (10) depends on $m_{k+1}$ , which has already been fixed. We also choose $b_{k+1}$ small enough such that we can pick $q_{k+2}> \hat q_{k+2}$ and
The first inequality in this equation implies formula (13). The second one implies equation (11). Once we choose $q_{k+2}$ according to the previous and such that formula (14) is verified, we have finished.
With this, we conclude how to build an example such that $0<o(f)<[a(n)]$ . It is known since [Reference Gottschalk and Hedlund18] that a cylindrical cascade is transitive if and only if the cohomological equation has no continuous solution. If this example had a continuous solution, then the orbits would move along the translated graph of said solution, and then f would be Lyapunov stable. This would imply $o(f)=0$ , which is a contradiction, and therefore our example is transitive.
In order to construct a dense family of examples like these, we approximate any cylindrical cascade $f(x,y)=(R_\alpha (x), y + \varphi (x))$ by a map of the form $\hat f(x,y) =\left (R_\alpha (x), y + \hat \varphi (x)\right )$ , where $\hat \varphi $ is a trigonometric polynomial. We then consider $g(x,y) = \left (x+\alpha ^{\prime }, y + \hat \varphi (x) + \sum _{k\geq k_0}\varphi _k(x)\right )$ , where $\alpha ^{\prime }$ is close to $\alpha $ and the $\varphi _k$ are as the ones we have already built. The map g will verify $0<o(g)<[a(n)]$ . To see this, observe that $\hat \varphi $ has a solution to the cohomological equation, and therefore what it adds to the separation of orbits is bounded. In particular, we can ignore it. Now, by the same argument we conclude that it is the tail of the series $\sum _{k}\varphi _k(x)$ that creates the positive and bounded generalized entropy, and therefore g satisfies the desired property.
As a final observation on this topic, we would like to point out that we could have made this construction by taking subsequences of the $q_k$ instead of constructing them one by one. This approach would certainly lighten the restrictions on $\alpha $ , but it would overload the notation.
5 Relationship between $o(f)$ and $f_{*,1}$ (proof of Theorem 6)
In this section, we prove Theorem 6. Let M be a manifold of finite dimension and $f:M\to M$ be a homeomorphism. By the arguments of Manning [Reference Mañé26], we have the following lemma.
Lemma 5.1. If A is the matrix that represents the action of $f_{*,1}$ , then
Proof of Theorem 6. By the previous lemma, we know that $\left [\left \lVert A^{n-1}\right \rVert \right ] \leq o(f)$ , and therefore we must study $\left [\left \lVert A^{n-1}\right \rVert \right ]$ when $sp(A)=1$ .
If J is the Jordan normal form of A, then there exists an invertible matrix Q such that $A= Q^{-1} J Q$ . Since $A^n= Q^{-1} J^n Q$ , we see that $\left [\left \lVert A^{n-1}\right \rVert \right ] = \left [\left \lVert J^{n-1}\right \rVert \right ]$ . If $J_l$ are the Jordan blocks associated to J, then $\left [\left \lVert J^{n-1}\right \rVert \right ]= \sup \left \{\left [\left \lVert J^{n-1}_l\right \rVert \right ]\right \}$ .
If $sp(A)=1$ and $J_l$ is associated to a real eigenvalue, then the eigenvalue must be either $1$ or $-1$ . In either case, $J_l^n$ is an upper triangular matrix whose $(i,j)$ entry has order of growth $\left [n^{j - i}\right ]$ for $j\geq i$ . Since the maximum value of $j-i$ is $\dim (J_l) -1$ , we infer that $\left [\left \lVert J^n_l\right \rVert \right ] = \left [n^{\dim \left (J_l\right ) - 1}\right ]$ . When $J_l$ is associated to a complex eigenvalue, the argument is analogous, and with this we conclude the proof of Theorem 6.□
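A quick numerical illustration of this polynomial growth (the matrix size and the choice of norm are arbitrary; any matrix norm gives the same order of growth):

```python
import numpy as np

# Illustration: for a unipotent Jordan block J of size d, the entries of J^n
# are binomial coefficients C(n, j - i), so ||J^n|| grows like n^(d-1).
d = 4
J = np.eye(d) + np.diag(np.ones(d - 1), k=1)        # Jordan block with eigenvalue 1

for n in (10, 100, 1000):
    norm = np.linalg.norm(np.linalg.matrix_power(J, n), ord=np.inf)
    print(n, norm, norm / n ** (d - 1))             # the ratio stabilizes near 1/(d-1)!
```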
A Topological entropy through open coverings
Topological entropy is usually defined using open coverings. Here we show how to define generalized topological entropy in this way. Let us quickly recall this approach. Given $\alpha $ an open covering of a compact space M, we define $H(\alpha )=\log (N(\alpha ))$ , where
If f is a continuous map, then we consider
and $h(f,\alpha )=\lim _n \frac {1}{n} H(\alpha ^n)$ . Finally, we define
We translate this to our setting with the following definitions. Given $f:M\to M$ a continuous map and $\alpha $ an open covering of M, we define
and then $\hat o(f)= \sup \left \{\left [a_{f,\alpha }(n)\right ]:\alpha \text { is an open covering of }M\right \}$ . Before we prove $\hat o (f) = o(f)$ , let us observe that
By the arguments of Theorem 2, we conclude that $\pi _{\mathbb {E}}\left (\hat o(f)\right )=h(f)$ .
This proves that $\hat o(f)$ is also a generalization of topological entropy, but it does not prove that it coincides with our previous definition of generalized entropy. For this, we follow the standard arguments showing that all the definitions of topological entropy coincide.
Lemma A.1. Let $f:M\to M$ be a continuous map of a compact metric space. Given $\epsilon>0$ , if $\alpha $ is an open covering of M such that $diam(\alpha )<\epsilon $ , then $s_{f,\epsilon }(n)\leq a_{f,\alpha }(n)$ .
Lemma A.2. Let $f:M\to M$ be a continuous map of a compact metric space. Given $\alpha $ an open covering of M, if $\epsilon $ is a Lebesgue number of $\alpha $ , then $a_{f,\alpha }(n)\leq g_{f,\epsilon }(n)$ .
A proof of both lemmas can be found in [Reference Viana and Oliveira38]. Lemma A.1 implies that $o(f)\leq \hat o(f)$ , and Lemma A.2 implies that $ \hat o(f) \leq o(f)$ . Therefore we have the following:
Proposition A.3. Let $f:M\to M$ be a continuous map of a compact metric space. Then $ \hat o(f)= o(f)$ .
Once we know that $\hat o(f) = o(f)$ , we apply Lemma A.1 again to obtain the following:
Proposition A.4. Let $f:M\to M$ be a continuous map of a compact metric space and $\alpha _k$ a sequence of finite open coverings such that $\lim _k diam(\alpha _k) = 0$ . Then $o(f) =`\lim _k\text {'} \left [a_{f,\alpha _k}(n)\right ]= \sup \left \{\left [a_{f,\alpha _k}(n)\right ]\right \}$ .
B Classical properties of topological entropy revisited
In this appendix, we prove some properties verified by generalized topological entropy. None of these is used for the main results of the article, so we leave them here for curious readers.
The topological entropy of a map f is related to the topological entropy of $f^k$ when $k\geq 1$ by the formula $h\left (f^k\right )=k h(f)$ . Since in $\overline {\mathbb {O}}$ there is no additive structure, this property is lost. However, at least we have the following:
Proposition B.1. Let M be a compact metric space and $f:M\to M$ a continuous map. The following inequalities hold:
Proof. To prove this, observe that $g_{f^k,\epsilon }(n) = g_{f,\epsilon }(n k)$ , and since $g_{f,\epsilon }$ is nondecreasing, we infer that $g_{f^k,\epsilon }(n) \geq g_{f^{k-1},\epsilon }(n)$ for all $k\geq 2$ and for all $n\geq 1$ . This implies that $\left [g_{f^k,\epsilon }(n)\right ]\geq \left [g_{f^{k-1},\epsilon }(n)\right ]$ and therefore $o\left (f^k\right )\geq o\left (f^{k-1}\right )$ .
When f is a homeomorphism, we know that $h(f)=h\left (f^{-1}\right )$ , and this property is also true for $o(f)$ .
Proposition B.2. Let M be a compact metric space and $f:M\to M$ a homeomorphism. Then $o(f)=o\left (f^{-1}\right )$ .
Proof. Observe that if E is an $(n,\epsilon )$ -separated set for f, then $f^{n-1}(E)$ is an $(n,\epsilon )$ -separated set for $f^{-1}$ . From this we deduce that $s_{f,\epsilon }(n)=s_{f^{-1},\epsilon }(n)$ , and then $o(f)=o\left (f^{-1}\right )$ .
Another interesting property of entropy is the following: given $K_1,\dotsc ,K_l$ a finite number of compact sets, we know that $h\left (f,\cup _{i=1}^l K_i\right ) = \max \{h(f,K_i):1\leq i \leq l\}$ . This translates into the following:
Proposition B.3. Let M be a metric space and $f:M\to M$ a continuous map. If $K_1,\dotsc ,K_l$ are a finite number of compact sets, then $o\left (f,\cup _{i=1}^l K_i\right ) = \sup \{o(f,K_i):1\leq i \leq l\}$ .
Proof. Let us consider $K = \cup _{i=1}^l K_i$ . Given a sequence $b(n)$ such that $o(f,K_i)\leq [b(n)]$ for all $i=1,\dotsc , l$ , there exist $C_1,\dotsc , C_l$ positive constants such that
From this we conclude that $o(f,K)\leq \sup \{o(f,K_i):1\leq i \leq l\}$ . On the other hand, since $K_i\subset K$ , we infer that $o(f,K_i)\leq o(f,K)$ for all i and therefore $\sup \{o(f,K_i):1\leq i \leq l\}\leq o(f,K)$ .
We also know that for expansive homeomorphisms, $h(f)$ can be computed from $g_{f,\epsilon }$ for $\epsilon $ smaller than the expansivity constant. For generalized topological entropy this result is also true.
Proposition B.4. Let M be a compact metric space and $f:M\to M$ a homeomorphism. If f is expansive, there exists $\epsilon $ such that $o(f) = \left [g_{f,\epsilon }(n)\right ]= \left [s_{f,\epsilon }(n)\right ]$ . In particular, $o(f)\in \mathbb {O}$ .
To prove this we will revisit the arguments of [Reference Manning25]. We will start by pointing out the following two lemmas, whose proofs can be found in [Reference Manning25].
Lemma B.5. Let M be a compact metric space and $f:M\to M$ an expansive homeomorphism with $\epsilon _0$ an expansivity constant of f. If $\delta <\epsilon <\epsilon _0$ , then there exist $k>0$ and $n_0>2k$ such that if $x,y\in M$ verify
for some $n\geq n_0$ , then
Lemma B.6. Let M be a compact metric space and $f:M\to M$ a continuous map. If $n_1,\dotsc ,n_j$ are positive integers and $\epsilon>0$ , then
Proof of Proposition B.4. Let us take $\epsilon <\epsilon _0$ (the expansivity constant) and $\epsilon ^{\prime }<\epsilon /4$ . We now apply Lemma B.5 to f with $\delta =\epsilon ^{\prime }$ and obtain k and $n_0$ . If $n\geq n_0-2k$ , consider E an $(n,\epsilon ^{\prime })$ -separated set with $\#E=s_{f,\epsilon ^{\prime }}(n)$ . By Lemma B.5, we know that $f^{-k}(E)$ is an $(n+2k,\epsilon )$ -separated set. This implies that $s_{f,\epsilon ^{\prime }}(n)\leq s_{f,\epsilon }(n+2k)$ , and by Lemmas 2.2 and B.5 we deduce that
In particular, $\left [g_{f,\epsilon ^{\prime }}(n)\right ]\leq \left [g_{f,\epsilon /4}(n)\right ]$ , and by taking the supremum over $\epsilon ^{\prime }$ we infer that $o(f)\leq \left [g_{f,\epsilon /4}(n)\right ]$ . Since clearly $\left [g_{f,\epsilon /4}(n)\right ]\leq o(f)$ , we conclude that $o(f)= \left [g_{f,\epsilon /4}(n)\right ]$ .□
Acknowledgments
The first author was supported by CAPES, and would like to thank UFRJ, since this work started at his postdoctoral position there. The second author was supported by the NSF via grant DMS-1956022.
Competing Interest
None.