Trade-offs between cost and information in cellular prediction

Age J. Tjalma,1 Vahe Galstyan,1 Jeroen Goedhart,1 Lotte Slim,1 Nils B. Becker,2 and Pieter Rein ten Wolde1, ∗
1AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands
2Theoretical Systems Biology, German Cancer Research Center, 69120 Heidelberg, Germany
(Dated: January 11, 2023)

Living cells can leverage correlations in environmental fluctuations to predict the future environment and mount a response ahead of time. To this end, cells need to encode the past signal into the output of the intracellular network from which the future input is predicted. Yet, storing information is costly while not all features of the past signal are equally informative about the future input signal. Here, we show, for two classes of input signals, that cellular networks can reach the fundamental bound on the predictive information as set by the information extracted from the past signal: push-pull networks can reach this information bound for Markovian signals, while networks that take a temporal derivative can reach the bound for predicting the future derivative of non-Markovian signals. However, the bits of past information that are most informative about the future signal are also prohibitively costly. As a result, the optimal system that maximizes the predictive information for a given resource cost is, in general, not at the information bound. Applying our theory to the chemotaxis network of Escherichia coli reveals that its adaptive kernel is optimal for predicting future concentration changes over a broad range of background concentrations, and that the system has been tailored to predicting these changes in shallow gradients.
Keywords: prediction, information bottleneck, sensing, resource allocation

Single-celled organisms live in a highly dynamic environment to which they continually have to respond and adapt. To this end, they employ a range of response strategies, tailored to the temporal structure of the environmental variations. When these variations are highly regular, such as the daily light variations, it becomes beneficial to develop a clock from which the time, and hence the current and future environment, can be inferred [1, 2]. In the other limit, when the fluctuations are entirely unpredictable, cells have no choice but to resort to either the strategy of detect-and-respond or the bet-hedging strategy of stochastic switching between different phenotypes [3]. Yet arguably the most fascinating strategy lies in between these two extremes. When the environmental fluctuations happen with some regularity, it becomes feasible to predict the future environment and initiate a response ahead of time. While it is commonly believed that only higher organisms can predict the future, experiments have vividly demonstrated that even single-celled organisms can leverage temporal correlations in environmental fluctuations in order to predict, e.g., future nutrient levels [4, 5].

The ability to predict future signals can provide a fitness benefit [6]. The capacity to anticipate changes in oxygen levels [4], or the arrival of sugars or stress signals [5], can increase the growth rate of single-celled organisms; modeling has revealed that prediction can enhance bacterial chemotaxis [7]. Yet, a predict-and-anticipate strategy is only advantageous if the cell can reliably predict the future on timescales that are longer than the time it takes to mount a response. What fundamentally limits the accuracy of cellular prediction remains, however, poorly understood.
∗ p.t.wolde@amolf.nl

While the cell needs to predict the future environment, it can only sense the present and remember the past (Fig. 1A). Consequently, for a given amount of information the cell can store about the present and past signal, there is a maximum amount of information it can possibly have about the future [6, 8] (Fig. 1C-I). This information bound is determined by the temporal structure of the environmental fluctuations [8, 9].

How close cells can come to this bound depends on the design of the intracellular biochemical network that senses and processes the environmental signals (Fig. 1B). To maximize the predictive power the cell must use its memory effectively: it should extract only those characteristics of the present and past signal that are most informative about the future [7]. Whether it can do so is determined by the topology of the signaling network. Moreover, like any information processing device, biochemical networks require resources to be built and run. Molecular components are needed to construct the network, space is required to accommodate the components, time is needed to process the information, and energy is required to synthesize the components and operate the network [10]. These resources constrain the design and performance of any biochemical network, and the capacity to sense and process information is no exception (Fig. 1C-II).

Cellular signaling systems provide a unique opportunity for revealing the resource requirements for prediction. Cells live in a highly dynamic environment, with temporal statistics that are expected to vary markedly. Moreover, signaling networks have distinct topologies, which are likely tailored to the temporal statistics of the environment [7].
In addition, for cellular systems we can actually quantify the information processing capacity as a function of the resources that are necessary to build and run them—protein copies, time, and energy [10, 11].

arXiv:2301.03964v1 [physics.bio-ph] 10 Jan 2023

Cellular systems are thus ideal for elucidating the relationships between future and past information, system design (i.e., network topology) and resource constraints. Here, we derive the bound on the prediction precision as set by the information extracted from the past signal for two types of input signals. We will determine how close cellular networks can come to this bound, and how this depends on the topology of the network and the resources to build and run it.

We find that for the two classes of input signals studied, cellular networks exist that can reach the information bound, yet reaching the bound is exceedingly costly. The first class of input signals consists of Markovian signals. Using the Information Bottleneck Method (IBM) [8, 12], we first show that the system that reaches the information bound copies the most recent input signal into the output from which the future input is predicted. Push-pull networks consisting of chemical modification or GTPase cycles, which are ubiquitous in prokaryotic and eukaryotic cells [13, 14], should be able to reach the information bound, because they are at heart copying devices [10, 11]. Yet, copying the most recent input into the output is extremely costly, because the operating cost, as set by the chemical power to drive the cycle, diverges at high copying speed. More surprisingly, our results show that the predictive and past information can be raised simultaneously by moving away from the information bound, even when the operating cost is negligible: the optimal system that maximizes the predictive information for a given protein synthesis cost is, in general, not at the information bound.
The number of bits of past information per protein cost can be raised by increasing the integration time. While this decreases the predictive power per bit of past information, thereby moving the system away from the information bound, it can increase the total predictive information per protein cost. Our analysis thus highlights that not all bits of past information are equally costly, nor equally predictive.

Living cells that navigate their environment typically experience signals with persistence, as generated by their own motion, which motivated us to study a simple class of non-Markovian signals. Moreover, these cells can typically detect changes in the concentration over a range of background concentrations that is orders of magnitude larger than the change in the concentration over the orientational correlation time of their movement. Our analysis reveals that in such a scenario the optimal kernel that allows the system to reach the information bound on predicting the future input derivative is a perfectly adaptive, derivative-taking kernel, precisely as the bacterium E. coli employs [15]. We again find, however, that reaching the information bound is prohibitively costly. The reason is that taking an instantaneous derivative, which is the characteristic of the input that is most informative about the future derivative, reduces the gain to zero because the system instantly adapts; the response becomes thwarted by biochemical noise. The optimal system that maximizes the predictive information under a resource constraint thus emerges from a trade-off between taking a derivative that is recent and one that is reliable. Finally, our analysis reveals that the E. coli chemotaxis system has been optimally designed to predict future concentration changes in shallow gradients.

RESULTS

We focus on cellular signaling systems that respond linearly to changes in the input signal [11, 16–19].
These systems not only allow for analytical results, but also describe information transmission often remarkably well [19–22]. The output of these systems can be written as

x(t) = ∫_{−∞}^{t} dt′ k(t − t′) ℓ(t′) + ηx(t),   (1)

where k(t) is the linear response function, ℓ(t) the input signal, and ηx(t) describes the noise in the output. We will consider stationary signals with different temporal correlations, obeying Gaussian statistics.

Any prediction about the future state of the environment must be based on information obtained from its past (Fig. 1C-I). In particular, the cell needs to predict the input ℓτ ≡ ℓ(t + τ) at a time τ into the future from the current output x0 ≡ x(t), which itself depends on the input signal in the past, Lp ≡ (ℓ(t), ℓ(t′), ...), with t > t′ > ... . The (qualitative) shape of the integration kernel k(t), e.g. exponential, adaptive or oscillatory, is determined by the topology of the signaling network [7]. The kernel shape describes how the past signal is mapped onto the current output, and hence which characteristics of the past signal the cell uses to predict the future signal. To maximize the accuracy of prediction, the cell should extract those features that are most informative about the future signal. These depend on the statistics of the input signal.

Deriving the upper bound on the predictive information as set by the past information is an optimisation problem, which can be solved using the IBM [8]. It entails the maximization of an objective function L:

max_{P(x0|Lp)} [L ≡ I(x0; ℓτ) − γ I(x0; Lp)].   (2)

Here, Ipred ≡ I(x0; ℓτ) is the predictive information, which is the mutual information between the system's current output x0 and the future ligand concentration ℓτ. The past information Ipast ≡ I(x0; Lp) is the mutual information between x0 and the trajectory of past ligand concentrations Lp.
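The convolution in Eq. 1 is straightforward to evaluate numerically. Below is a minimal sketch of ours (not code from the paper), assuming an exponential kernel and a toy sinusoidal input; all parameter values are illustrative choices:

```python
import numpy as np

# Discretized version of Eq. 1 with an exponential kernel
# k(t) = exp(-t/tau_r)/tau_r, normalized so that its integral is 1.
rng = np.random.default_rng(0)
dt, n = 0.01, 5000
tau_r = 0.5
t = np.arange(n) * dt

ell = 1.0 + 0.1 * np.sin(2 * np.pi * t / 5.0)   # toy input signal l(t)
kernel = np.exp(-t / tau_r) / tau_r             # causal response function k(t)
x_det = np.convolve(kernel, ell)[:n] * dt       # deterministic convolution term
x = x_det + rng.normal(0.0, 0.01, n)            # plus the output noise eta_x
# After the initial transient, x tracks l(t) low-pass filtered over tau_r:
# the oscillation survives but is attenuated and phase-lagged.
```

The attenuation of an exponential kernel follows the familiar first-order filter gain 1/√(1 + (ωτr)²), which is why a short τr (fast readout) transmits the input almost unchanged.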
The Lagrange multiplier γ sets the relative cost of storing past information over obtaining predictive information. Given a value of γ, the objective function in Eq. 2 is maximized by optimizing the conditional probability distribution of the output given the past input trajectory, P(x0|Lp). For the linear systems considered here, this corresponds to optimizing the mapping of the past input signal onto the current output via the

FIG. 1. Cells use biochemical networks to remember the past and predict the future. (A) Cells compress the past input into the dynamics of the signalling network from which the future input is then predicted. (B) The optimal topology of the network for predicting the future signal depends on the temporal statistics of the input signal. Push-pull networks, consisting of chemical modification cycles or GTPase cycles, can optimally predict the future value of Markovian signals, with correlation time τℓ; derivative-taking networks, like the E. coli chemotaxis system, can optimally predict the future derivative of non-Markovian signals, with correlation time τv. The push-pull network consists of a receptor that drives a downstream phosphorylation cycle. The ligand binds the receptor with a correlation time τc. The push-pull network, driven by ATP turnover, integrates the receptor with an integration time τr. The chemotaxis system is a push-pull network, yet augmented with negative feedback on the receptor activity via methylation on a timescale τm, as indicated by the dashed grey line. The total resource cost consists of a maintenance cost of receptor and readout synthesis at the growth rate λ, and an operating cost of driving the cycle.
(C) The predictive information on the future signal Ipred is fundamentally bounded by how much information Ipast it has about the past signal (panel I), which in turn is limited by the resources necessary to build and operate the biochemical network (panel II) [6].

integration kernel k(t). Since our model obeys Gaussian statistics, we use the Gaussian IBM to derive the optimal kernel kopt(t) and the information bound, defined to be the maximum predictive information as set by the past information [12] (see Appendix C).

Markovian signals

Optimal prediction of Markovian signals: biochemical copying

Arguably the most elementary type of signal, albeit perhaps the hardest to predict, is a Markovian signal. We consider a Markovian signal ℓ(t), of which the deviations δℓ(t) = ℓ(t) − ℓ̄ from its mean ℓ̄ follow an Ornstein-Uhlenbeck (OU) process:

δℓ̇ = −δℓ(t)/τℓ + ηℓ(t),   (3)

where τℓ is the correlation time of the fluctuations, and ηℓ(t) is Gaussian white noise, ⟨ηℓ(t)ηℓ(t′)⟩ = 2σℓ²/τℓ δ(t − t′), with σℓ² the amplitude of the signal fluctuations. This input signal obeys Gaussian statistics, characterized by ⟨δℓ(0)δℓ(t)⟩ = σℓ² exp(−t/τℓ). The optimal mapping is therefore a linear one. Utilizing the Gaussian IBM framework [12], we find that the optimal integration kernel is given by (see Appendix C 2)

kopt(t − t′) = a δ(t − t′).   (4)

This optimal integration kernel corresponds to a signaling system that copies the current input into the output. This is intuitive, since for a Markovian signal there is no additional information in the past signal that is not already contained in the present one. The prefactor a determines the gain ∂x̄/∂ℓ̄, which together with the noise strength σηx² (Eq. 1) and the signal amplitude σℓ² set the magnitude of the past and predictive information, Ipast and Ipred, respectively (see Appendix C 1).

Fig. 2-I shows the maximum predictive information as set by the past information.
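The OU input of Eq. 3 and its stated autocorrelation can be checked with a short Euler-Maruyama simulation. This is a sketch of ours with illustrative parameter values, not taken from the paper:

```python
import numpy as np

# Euler-Maruyama simulation of the OU input signal of Eq. 3:
#   d(dl) = -dl/tau_l dt + eta dt,  <eta(t)eta(t')> = 2 sigma_l^2/tau_l delta(t-t')
rng = np.random.default_rng(0)
tau_l, sigma_l = 1.0, 1.0
dt, n = 1e-3, 1_000_000

dl = np.zeros(n)
noise = rng.normal(0.0, np.sqrt(2 * sigma_l**2 / tau_l * dt), n - 1)
for i in range(n - 1):
    dl[i + 1] = dl[i] - dl[i] / tau_l * dt + noise[i]

# The stationary statistics should obey <dl(0) dl(t)> = sigma_l^2 exp(-t/tau_l):
half = n // 2                        # discard the transient
lag = int(tau_l / dt)                # lag of one correlation time
var = dl[half:].var()
ratio = np.mean(dl[half:-lag] * dl[half + lag:]) / var
print(var, ratio)                    # expect roughly 1.0 and exp(-1) ~ 0.37
```

The exponential decay of the autocorrelation is exactly the property that makes the δ-function kernel of Eq. 4 optimal: no lagged sample adds information beyond the current value.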
This information bound applies to any linear system that needs to predict a Markovian signal. How close can biochemical systems come to this bound?

Push-pull networks can be at the information bound, yet increase the predictive and past information by moving away from it

Although the upper bound on the accuracy of prediction is determined by the signal statistics, how close cells can come to this bound depends on the topology of the cellular signaling system, and the resources devoted to building and operating it. A network motif that could reach the information bound for Markovian signals is the push-pull network (Fig. 2), because it is at heart a copying device: it samples the input by copying the state of

FIG. 2. The optimal push-pull network is not at the information bound. Panel I: The black line is the information bound that maximizes the predictive information Ipred = I(x0; ℓτ) for a given past information Ipast = I(x0; Lp). The red curve shows Ipred against Ipast for systems in which Ipred has been maximized for a given resource cost C = RT + XT. The blue curve shows Ipred versus Ipast for systems where Ipast has been maximized for a given C. Panel II shows Ipast against C for the corresponding systems. The forecast interval is τ = τℓ. The optimization parameters are the ratio XT/RT, τr, p and f (see Appendix E). Parameter values: (σℓ/ℓ̄)² = 10⁻², τc/τℓ = 10⁻².

the input, e.g. the ligand-binding state of a receptor or the activation state of a kinase, into the activation state of the output, e.g. the phosphorylation state of the readout [10, 11, 23].

We model the push-pull network in the linear-noise approximation:

δṘL = b δℓ(t) − δRL(t)/τc + ηRL(t),   (5)

δẋ∗ = γ δRL(t) − δx∗(t)/τr + ηx(t).
(6)

Here, δRL represents the number of ligand-bound receptors and δx∗ the number of modified readout molecules, defined as deviations from their mean values; b and γ are parameters that depend on the number of receptor and readout molecules, RT and XT respectively, the fraction of ligand-bound receptors p and the fraction of active readout molecules f; ηRL and ηx are Gaussian white noise terms (see Appendix E). Key parameters are the correlation time of receptor-ligand binding, τc, and the relaxation time of x∗, τr. The latter determines for how long x∗ carries information on the ligand-binding state of the receptor and thus sets the integration time. The readout-modification dynamics yield an exponential integration kernel k(t) ∝ exp(−t/τr), which in the limit τr → 0 reduces to a δ-function, hinting that the system may reach the information bound.

How much information cells can extract from the past signal depends on the resources devoted to building and operating the network (Fig. 2-II). We define the total resource cost to be

C = λ(RT + XT) + c1 XT ∆µ/τr.   (7)

The first term expresses the fact that over the course of the cell cycle all components need to be duplicated, which means that they have to be synthesized at a speed that is at least the growth rate λ. The second term describes the chemical power that is necessary to run the push-pull network [10, 11]; it depends on the flux through the network, XT/τr, and the free-energy drop ∆µ over a cycle, e.g. the free energy of ATP hydrolysis in the case of a phosphorylation cycle. The coefficient c1 describes the relative energetic cost of synthesising the components during the cell cycle versus that of running the system. For simplicity, we first consider the scenario in which the cost is dominated by that of protein synthesis, setting c1 → 0. While in this scenario RT + XT is constrained, XT/RT and other system parameters are free for optimization.
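The two terms of Eq. 7 can be tabulated directly. A minimal sketch; the values of λ, c1 and ∆µ below are illustrative choices of ours, not fitted parameters:

```python
def resource_cost(R_T, X_T, tau_r, growth_rate=1.0, c1=1e-3, dmu=20.0):
    """Eq. 7: maintenance (synthesis at the growth rate) plus operating
    cost (chemical power: cycle flux X_T/tau_r times the drop dmu).
    All parameter values are illustrative only."""
    return growth_rate * (R_T + X_T) + c1 * X_T * dmu / tau_r

# The operating term diverges as tau_r -> 0, the very limit in which
# the instantaneous-copying information bound would be reached:
for tau_r in (1.0, 0.1, 0.01):
    print(tau_r, resource_cost(R_T=5000, X_T=5000, tau_r=tau_r))
# 10100.0, then 11000.0, then 20000.0: the cost grows as 1/tau_r
```

This 1/τr divergence is the quantitative reason, discussed below, why including the chemical power keeps the push-pull network strictly away from the information bound.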
The available resources put a hard bound on the information Ipast that can be extracted from the past signal, which in turn sets a hard limit on the predictive information Ipred (Fig. 1C). To maximize the predictive information, it therefore seems natural to maximize the past information Ipast for a given resource cost C. The blue line in Fig. 2-II shows the result for the push-pull network. We then compute the corresponding predictive information for the systems along this line, which is the blue line in Fig. 2-I. Strikingly, the resulting information curve lies far below the information bound, i.e. the upper bound on the predictive information as set by the past information (black line, Fig. 2-I). This shows that systems that maximize past information under a resource constraint do not in general also maximize predictive information. It implies that not all bits of past information are equally predictive about the future.

Precisely because not all bits of past information are equally predictive about the future, it is paramount to directly maximize the predictive information for a given resource cost in order to obtain the most efficient prediction device. This yields the red lines in panels I and II in Fig. 2. It can be seen that the predictive information is higher while the past information is lower, as compared to the information curves of the systems optimized for maximizing the past information under a resource constraint (blue lines). It reflects the idea that not all bits are equally predictive. More surprisingly, while the bound on the predictive information as set by the resource cost (red line, panel I) is close to the bound on the predictive information as set by the past information (black line), it does remain lower. This is surprising, because the push-pull network is a copying device [10, 23], which can, as we will also show below, reach the latter bound.
These two observations together imply that not all bits of past information are equally costly. If they were, the cell would select under the two constraints the same bits based on their predictive information content, and the bound on the predictive information as set by the resource cost would overlap with that as set by the past information.

We thus find that not all bits of past information are equally predictive, nor equally costly. As we show next, this implies that the optimal information processing system faces a trade-off between using those bits of past information that are most informative about the future and those that are cheapest.

Trade-off between cost and predictive power per bit

To understand the connection between predictive and past information, and resource cost, we map out the region in the information plane that can be reached given a resource constraint C (Fig. 3A, green region). We immediately make two observations. Firstly, the system can indeed reach the information bound. Secondly, the system can increase both the past and the predictive information by moving away from the bound. To elucidate these two observations, we investigate the system along the isocost line of C = 10⁴, which together with the information bound envelopes the accessible region for the maximum resource cost C ≤ 10⁴.

Along the isocost line, the ratio of the number of readout over receptor molecules is XT/RT = 2 √(p/(1 − p)) √(1 + τr/τc) (see Appendix E 3). This can be understood intuitively using the optimal resource allocation principle [10]. It states that in a sensing system that employs its proteins optimally, the total number of independent concentration measurements at the level of the receptor during the integration time τr, RT(1 + τr/τc), equals the number of readout molecules XT that store these measurements, so that neither the receptors nor the readout molecules are in excess.
This design principle specifies, for a given integration time τr, the ratio XT/RT at which the readout molecules sample each receptor molecule roughly once every receptor correlation time τc.

While the optimal allocation principle gives the optimal ratio XT/RT of the number of readouts over receptors for a given integration time τr, it does not prescribe what the optimal integration time τr^opt, and hence the (globally) optimal ratio XT^opt/RT^opt, is that maximizes Ipred for a given resource constraint C = RT + XT. Fig. 3B shows that as the distance θ along the isocost line is increased, τr and hence XT/RT increase monotonically. Near the information bound, corresponding to θ = 0, the integration time τr is zero and the number of readout molecules equals the number of receptor molecules: XT = RT. In this limit, the push-pull network is an instantaneous responder, with an integration kernel given by Eq. 4; only the finite receptor correlation time τc prevents the system from fully reaching the information bound. Yet, as θ increases and the system moves away from the bound, the predictive and past information first rise along the contour, and thus with XT/RT and τr, before they eventually both fall.

To understand why the predictive and past information first rise and then fall with XT/RT and τr, we note that each readout molecule constitutes one physical bit and that its binary state (phosphorylated or not) encodes at most 1 bit of information on the ligand concentration. The number of readout molecules XT thus sets a hard upper bound on the sensing precision and hence the predictive information. To raise this bound, XT must be increased. For a given resource constraint C = RT + XT, XT can only be increased if the number of receptors RT is simultaneously decreased.
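The sample-counting logic of the allocation principle, with the number of stored measurements limited both by the readout pool XT and by the number of independent receptor states RT(1 + τr/τc) [10], can be sketched numerically. All numbers below are illustrative choices of ours, not the paper's:

```python
def max_independent_samples(R_T, X_T, tau_r, tau_c):
    """Maximum number of independent receptor samples the readout can
    store: min(X_T, R_T * (1 + tau_r/tau_c)), cf. Ref. [10]."""
    return min(X_T, R_T * (1 + tau_r / tau_c))

# Under a fixed protein budget C = R_T + X_T, time averaging (raising
# tau_r while shifting proteins from receptors to readouts) raises the
# sample count.
C, tau_c = 10_000, 1.0
counts = []
for tau_r in (0.0, 1.0, 4.0):
    R_T = C / (2 + tau_r / tau_c)      # balanced: X_T = R_T (1 + tau_r/tau_c)
    X_T = C - R_T
    counts.append(max_independent_samples(R_T, X_T, tau_r, tau_c))
print([round(c) for c in counts])      # grows: [5000, 6667, 8333]
```

The sample count keeps growing with τr; what this sketch does not capture is that older samples are less predictive, which is what ultimately caps the useful integration time.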
However, the cell infers the concentration not from the readout molecules directly, but via the receptor molecules: a readout molecule is a sample of the receptor that provides at most 1 bit of information about the ligand-binding state of a receptor molecule, which in turn provides at most 1 bit of information about the input signal. To raise the lower bound on the predictive information, the information on the input must increase at both the receptor and the readout level.

To elucidate how this can be achieved, we note that the maximum number of independent receptor samples and hence concentration measurements is given by N_I^max = min(XT, RT(1 + τr/τc)) [10]. For θ > 0, the system can increase N_I^max if, and only if, XT and RT(1 + τr/τc) can be raised simultaneously. This can be achieved, while obeying the constraint C = XT + RT, by decreasing RT yet increasing τr (Fig. 3B). This is the mechanism of time averaging, which makes it possible to increase the number of independent receptor samples [11], and explains why both the predictive and the past information initially increase (Fig. 3C). However, as τr is raised further, the receptor samples become older: the readout molecules increasingly reflect receptor states in the past that are less informative about the future ligand concentration. The collected bits of past information have become less predictive about the future (Fig. 3C). For a given resource cost, the cell thus faces a trade-off between maximizing the number of physical bits of past information (i.e. the receptor samples XT) and the predictive information per bit. This antagonism gives rise to an optimal integration time τr^opt that maximizes the total predictive information Ipred (Fig. 3C).

Interestingly, while Ipred decreases beyond τr^opt, the past information Ipast first continues to rise because N_I^max still increases.
However, when the integration time becomes longer than the input signal correlation time, the correlation between input and output will be lost and Ipast will fall too.

Chemical power prevents the system from reaching the information bound

So far, we have only considered the cost of maintaining the cellular system, the protein cost C = RT + XT. Yet, running a push-pull network also requires energy. As Eq. 7 shows, the running cost scales with the flux

FIG. 3. The push-pull network maximizes the predictive power under a resource constraint by moving away from the information bound. (A) The region of accessible predictive information Ipred = I(x0; ℓτ) and past information Ipast = I(x0; Lp) in the push-pull network under a resource constraint C ≤ (RT + XT), for the Markovian signals specified by Eq. 3 (green). The black line is the information bound at which Ipred is maximized for a given Ipast. The push-pull network can be at the information bound (black points), but maximizing Ipred for a resource constraint C moves the system away from it. The red and blue lines connect, respectively, the points where Ipred and Ipast are maximized along the green isocost lines (the contour lines of constant C); they correspond to the red and blue lines in Fig. 2, respectively. The accessible region of Ipred and Ipast for a given C has been obtained by optimizing over τr, p, f, and XT/RT. The forecast interval is τ = τℓ. (B) The integration time τr over the receptor correlation time τc, τr/τc, and the ratio of the number of readout and receptor molecules, XT/RT, as a function of the distance θ along the isocost line corresponding to C = 10⁴ in panel A; the red and blue points denote where Ipred and Ipast are maximized along the contour line, respectively. For θ → 0, τr → 0: the system is an instantaneous responder, which is essentially at the information boundary; as predicted by the optimal resource allocation principle, XT = RT.
The system can increase Ipred and Ipast by increasing τr and XT/RT. (C) While this decreases the predictive information Ipred per physical bit of past information, Ipred/XT (dashed line), increasing XT/RT does increase the number of physical bits per resource cost, XT/C (purple line). This trade-off gives rise to an optimal predictive information per resource cost, Ipred/C (red dot on solid black line). Parameter values unless specified: (σℓ/ℓ̄)² = 10⁻², τc/τℓ = 10⁻².

around the phosphorylation cycle, which is proportional to the inverse of the integration time, τr⁻¹. The power thus diverges for τr → 0. Since the information bound is reached precisely in this limit, it is clear that the chemical power prevents the push-pull network from reaching the bound (see Fig. 7 in the appendix).

Non-Markovian signals

Predicting the future change

The push-pull network can optimally predict Markovian signals, yet not all signals are expected to be Markovian. Especially organisms that navigate through an environment with directional persistence will sense a non-Markovian signal, as generated by their own motion. Moreover, when these organisms need to climb a concentration gradient, as E. coli does during chemotaxis, then knowing the change in the concentration is arguably more useful than knowing the concentration itself. Indeed, it is well known that the kernel of the E. coli chemotaxis system detects the (relative) change in the ligand concentration by taking a temporal derivative of the concentration [15]. However, as we will show here, the converse statement is more subtle. If the system needs to predict the (future) change in the signal, then the optimal kernel is not necessarily one that is based on the derivative only: in general, the optimal kernel uses a combination of the signal value and its derivative. However, the E.
coli chemotaxis system can respond to concentrations that vary between the dissociation constants of the inactive and active state of the receptors, which differ by several orders of magnitude [24]. This range of possible background concentrations is much larger than the typical concentration change over the orientational correlation time of the bacterium. As our analysis below reveals, in this regime the optimal kernel is a perfectly adaptive, derivative-taking kernel that is insensitive to the current signal value, precisely like that of the E. coli chemotaxis system [15, 25–28]. Our analysis thus predicts that this system has an adaptive kernel, because this is the optimal kernel for predicting concentration derivatives over a broad range of background concentrations.

To reveal the signal characteristics that control the shape of the optimal integration kernel, we will consider the family of signals that are generated by a harmonic oscillator:

δℓ̇ = v(t),   (8)

v̇ = −ω0² δℓ(t) − v(t)/τv + ηv(t),   (9)

where δℓ is the deviation of the ligand concentration from its mean ℓ̄, v its derivative, τv a relaxation time, ηv a Gaussian white noise term, and the frequency ω0² = σv²/σℓ² controls the variance σℓ² of the concentration and that of its derivative, σv².

Using the IBM framework it can be shown that the optimal encoding that allows the system to reach the information bound is based on a linear combination of the current concentration ℓ(t) and its derivative v(t), such that the output x(t) is given by (Appendix C 3):

x(t) = a δℓ(t)/σℓ + b v(t)/σv + ηx(t).   (10)

This can be understood by noting that while the signal of Eqs. 8 and 9 is non-Markovian in the space of ℓ, it is Markovian in ℓ and v: all the information on the future signal is thus contained in the current concentration and its derivative.
To maximize the predictive information Ipred = I(x0; vτ) between the current output x0 and the future derivative of the input vτ for a given amount of past information Ipast = I(x0; Lp), i.e., to reach the information bound for predicting the future signal derivative, the coefficients must obey

a_{\rm opt} = G\,\langle\delta\ell(0)\delta v(\tau)\rangle/(\sigma_\ell\sigma_v) \equiv G\rho_{\ell_0 v_\tau}, \qquad (11)
b_{\rm opt} = G\,\langle\delta v(0)\delta v(\tau)\rangle/\sigma_v^2 \equiv G\rho_{v_0 v_\tau}. \qquad (12)

Here, G is the gain, which together with the noise σ²ηx sets the scale of Ipred and Ipast; ρℓ0vτ is the cross-correlation coefficient between the current concentration value ℓ0 and the future concentration derivative vτ, and ρv0vτ that between the current and future derivative (Appendix C 3). These expressions can be understood intuitively: if the future signal derivative that needs to be predicted is correlated with the current signal derivative, it is useful to include the current signal derivative in the prediction strategy, leading to a non-zero value of b_opt. Perhaps more surprisingly, if the future signal derivative is also correlated with the current signal value, then the system can enhance the prediction accuracy by also including the current signal value, yielding a non-zero a_opt. Clearly, in general, to optimally predict the future signal change, the system should base its prediction on both the current signal value and its derivative.

The degree to which the system bases its prediction on the current value versus the current derivative depends on the relative magnitudes of a_opt and b_opt. In Appendix B 2, we show that when the concentration change over the timescale τv, σvτv, is much smaller than the range of possible concentrations σℓ that the bacterium can experience, i.e., when σvτv ≪ σℓ such that ω0 ≪ τv⁻¹, the cross-correlation coefficient ρℓ0vτ vanishes, such that a_opt becomes zero (see Eq. 11). The optimal kernel has then become a perfectly adaptive, derivative-taking kernel.
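The coefficients of Eqs. 11 and 12 can be evaluated numerically from the signal model of Eqs. 8 and 9, using the covariance and lagged-correlation relations of Appendix A (Eqs. A7 and A8). A sketch with illustrative parameters and G = 1:

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def opt_weights(omega0, tau_v=1.0, sigma_v=1.0, tau=1.0, G=1.0):
    """Evaluate Eqs. 11-12 for the harmonic-oscillator signal of Eqs. 8-9."""
    J = np.array([[0.0, 1.0],
                  [-omega0**2, -1.0 / tau_v]])
    B = np.diag([0.0, sigma_v * np.sqrt(2.0 / tau_v)])
    C = solve_continuous_lyapunov(J, -B @ B.T)  # stationary covariance (Eq. A7)
    C_tau = expm(J * tau) @ C                   # lagged correlations (Eq. A8)
    sig_l, sig_v = np.sqrt(C[0, 0]), np.sqrt(C[1, 1])
    a_opt = G * C_tau[1, 0] / (sig_l * sig_v)   # Eq. 11
    b_opt = G * C_tau[1, 1] / sig_v**2          # Eq. 12
    return a_opt, b_opt

# As omega0 -> 0 (concentration range >> typical change), a_opt vanishes
# and the optimal kernel becomes purely derivative-taking.
print(opt_weights(0.3))
print(opt_weights(1e-3))
```

Running this with decreasing ω0 shows a_opt shrinking towards zero while b_opt approaches e^{−τ/τv}, in line with the limit derived in Appendix B 2.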
We emphasize that while we have derived this result for the class of signals defined by Eqs. 8 and 9, the idea is far more generic. In particular, while we do not know the temporal structure of the ligand statistics that E. coli experiences, we do know that it can detect concentration changes over a range of background concentrations that is much wider than the typical concentration change over a run, such that the correlation between the concentration value and its future change is likely to be very small. As our analysis shows, a perfectly adaptive kernel then emerges naturally from the requirement to predict the future concentration change.

While the class of signals specified by Eqs. 8 and 9 is arguably limited, it does describe the biologically important regime of chemotaxis in shallow gradients. In the limit that ω0 ≪ τv⁻¹, Eq. 9 reduces to v̇ = −v/τv + ηv. In shallow gradients, the stimulus only weakly affects the swimming behavior, such that the perceived signal is mostly determined by the intrinsic orientational dynamics of the bacterium in the absence of a gradient. In this regime, the temporal statistics of the concentration derivative v is completely determined by the steepness of the concentration gradient g and the swimming statistics of the bacterium in the absence of a gradient:

\langle\delta v(0)\delta v(\tau)\rangle = g^2\bar{\ell}^2\langle\delta v_x(0)\delta v_x(\tau)\rangle \simeq g^2\bar{\ell}^2\sigma_{v_x}^2 e^{-\tau/\tau_{v_x}}, \qquad (13)

where ⟨δvx(0)δvx(τ)⟩ is the autocorrelation function of the (positional) velocity of the bacterium in the absence of a gradient. It is a characteristic of the bacterium, not of the environment, and has been measured to decay exponentially with a correlation time τvx [18], precisely as our model, with τv = τvx, predicts. This correlation time is on the order of the typical run time of the bacterium in the absence of a gradient, τv ∼ 0.9 s [18].
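As a worked example of Eq. 13, the variance of the perceived concentration derivative can be computed from the values quoted in Fig. 4:

```python
# Worked example of Eq. 13: variance of the perceived concentration
# derivative for the values quoted in Fig. 4 (g = 4 mm^-1, mean
# concentration 100 uM, sigma_vx^2 = 157.1 um^2 s^-2 [18]).
g = 4e-3           # gradient steepness, 1/um
l_mean = 100.0     # mean ligand concentration, uM
sigma_vx2 = 157.1  # velocity variance of the bacterium, um^2/s^2

sigma_v2 = g**2 * l_mean**2 * sigma_vx2
print(sigma_v2)    # in uM^2/s^2
```

This gives σv² ≈ 25 µM²/s², i.e., typical concentration changes of a few µM per second over a run, small compared to the hundreds of µM spanned by the background concentration.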
Finite resources prevent the chemotaxis system from taking an instantaneous derivative and reaching the information bound

The above analysis indicates that the chemotaxis system seems ideally designed to predict the future concentration change, because its integration kernel is nearly perfectly adaptive [15, 25–28]. But how close can this system come to the information bound for the non-Markovian signals specified by Eqs. 8 and 9?

To address this, we consider a molecular model that can accurately describe the response of the chemotaxis system to a wide range of time-varying signals [29–32]. In this model, the receptors are partitioned into clusters, each of which is described via a Monod-Wyman-Changeux model [33]. While each receptor can switch between an active and an inactive conformational state, the energetic cost of having different conformations in the same cluster is prohibitively large. Each cluster is thus either active or inactive. Ligand binding favors the inactive state while methylation does the opposite. Lastly, active receptor clusters can phosphorylate the downstream messenger protein CheY via the associated kinase CheA.

Linearizing around the steady state, we obtain:

\delta a_i(t) = \alpha\,\delta m_i(t) - \beta\,\delta\ell(t), \qquad (14)
\delta\dot{m}_i = -\delta a_i(t)/(\alpha\tau_m) + \eta_{m_i}(t), \qquad (15)
\delta\dot{x}^* = \gamma\sum_{i=1}^{R_T}\delta a_i(t) - \delta x^*(t)/\tau_r + \eta_x(t). \qquad (16)

Here, δai(t) and δmi(t) are the deviations of the activity and methylation level of receptor cluster i from their steady-state values, and RT is the total number of receptor clusters; δℓ(t) and δx*(t) are, respectively, the deviations of the ligand and CheYp concentration from their steady-state values; τm and τr are the timescales of receptor methylation and CheYp dephosphorylation; ηmi and ηx are independent Gaussian white noise sources. In Eq. 14, we have assumed that ligand binding is much faster than the other timescales in the system, so that it can be integrated out.
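The adaptive, derivative-taking character of this linearized receptor module can be illustrated directly; the sketch below integrates Eqs. 14 and 15 with the noise terms omitted, the readout (Eq. 16) left out, and purely illustrative parameter values:

```python
import numpy as np

# Sketch of the linearized receptor module (Eqs. 14-15, noise omitted,
# illustrative parameters). A concentration step elicits an activity
# change that decays on the methylation timescale tau_m (perfect
# adaptation); the steady response to a ramp dl = v*t is -beta*v*tau_m.
alpha, beta = 2.0, 1.0
dt = 1e-3

def activity(tau_m, signal, T):
    """Integrate Eqs. 14-15 (deterministic part) for a signal dl(t)."""
    m = 0.0
    a = 0.0
    for i in range(int(T / dt)):
        a = alpha * m - beta * signal(i * dt)  # Eq. 14
        m += -a / (alpha * tau_m) * dt         # Eq. 15, noise omitted
    return a

# Step response: the activity has relaxed back to ~0 after T >> tau_m.
a_step = activity(10.0, lambda t: 1.0, 200.0)
# Ramp responses: the steady activity -beta*v*tau_m grows with tau_m.
v = 0.5
a_ramp_5 = activity(5.0, lambda t: v * t, 200.0)
a_ramp_10 = activity(10.0, lambda t: v * t, 200.0)
print(a_step, a_ramp_5, a_ramp_10)
```

The step response adapts perfectly back to zero, while the steady ramp response −βvτm shows that the gain of the derivative-taking kernel grows with the methylation time τm, the trade-off analyzed below.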
There is therefore no need to time average receptor-ligand binding noise, which means that, in the absence of running costs, the optimal receptor integration time τr is zero. In what follows, we set τr to the value measured experimentally, τr ≈ 100 ms [10, 34]. We consider the non-Markovian signals specified by Eqs. 8 and 9 in the physiologically relevant limit ω0 → 0, such that the optimal kernel is perfectly adaptive, like that of E. coli. For these signals, we determine the accessible region of Ipast and Ipred under a resource constraint C = RT + XT (see Fig. 4) by optimizing over the methylation time τm and the ratio of readout over receptor molecules, XT/RT. The forecast interval τ is set to τv, but we emphasize that the optimal design is independent of the value of τ (see Appendix F 4).

Fig. 4A shows that the chemotaxis system is, in general, not at the information bound that maximizes the predictive information Ipred = I(x0; vτ) for a given past information Ipast = I(x0; Lp). The optimal systems that maximize Ipred under a resource constraint C, marked by the red dots, are indeed markedly away from the information bound. Yet, as the resource constraint is relaxed and C is increased, the optimal system moves towards the bound. Panel B shows that the methylation time τm rises along the three respective isocost lines of panel A. It highlights that there exists an optimal methylation time τm^opt that maximizes the predictive information Ipred. Moreover, τm^opt decreases as the resource constraint is relaxed. Along the respective isocost lines, XT/RT varies only mildly (see Fig. 9 in the appendix).

These observations can be understood by noting that the system faces a trade-off between taking a derivative that is recent versus one that is robust.
All the information on the future derivative, which the cell aims to predict, is contained in the current derivative of the signal; measuring the current derivative would allow the system to reach the information bound. However, computing this most recent, instantaneous derivative is extremely costly. The cell takes the temporal derivative of the ligand concentration at the level of the receptor via two antagonistic reactions that occur on two distinct timescales: ligand binding rapidly deactivates the receptor, while methylation slowly reactivates it [30]. The receptor's ligand occupancy thus encodes the current concentration, the methylation level stores the average concentration over the past τm, and the receptor activity reflects the difference between the two: the temporal derivative of the signal over the timescale τm. To obtain an instantaneous derivative, τm must go to zero. However, this dramatically reduces the gain; in fact, in this limit, the gain is zero, because the receptor activity instantly adapts to the change in the ligand concentration. Since the push-pull network downstream of the receptor is a device that samples the receptor stochastically [10, 36], the gain, i.e., the change in the receptor activity due to the signal, must be raised to lift the signal above the sampling noise. This requires a finite methylation time τm: as we show in Appendix F 3, the gain increases monotonically with τm. The trade-off between a recent derivative and a reliable one gives rise to an optimal methylation time τm^opt that maximizes the predictive information for a given resource cost.

The same analysis also explains why the optimal methylation time τm^opt decreases and the predictive information increases when the resource constraint is relaxed. The sampling noise in estimating the average receptor activity decreases as the number of readout molecules increases [10, 36].
A smaller gain is thus required to lift the signal above the sampling noise. In addition, a larger number of receptors decreases the noise in the methylation level, which also allows for a smaller gain, and hence a smaller methylation time. These two effects together explain why τm^opt decreases and Ipred increases with C.

Fig. 4A also shows that the past information Ipast = I(x0; Lp) does not return to zero along the contour line of constant resource cost. Along the contour line, the methylation time τm rises (Fig. 4B). While the predictive information Ipred exhibits an optimal methylation time τm^opt, the past information Ipast continues to rise with τm, because the system increasingly becomes a copying device, rather than one that takes a temporal derivative.

Comparison with experiment

To test our theory, we study the predictive power of the E. coli chemotaxis system as a function of the steepness of the ligand concentration gradient, keeping the resource constraint at the biologically relevant value of C = RT + XT = 10⁴ [35]. Panel C of Fig. 4 shows Ipred and Ipast for cells swimming in an exponential concentration gradient ℓ(x) = ℓ0·e^{gx}, for different values of the gradient steepness g. Along the green iso-steepness lines, τm is varied and XT/RT is optimized to maximize Ipred and Ipast, with the red dots marking τm^opt, while along the blue line τm and XT and RT are fixed at their experimentally measured values [29, 30, 35].

FIG. 4. Finite resources prevent the chemotaxis system from reaching the information bound. (A) The region of accessible predictive information Ipred = I(x0; vτ) and past information Ipast = I(x0; Lp) for the chemotaxis system under a resource constraint C = RT + XT, for the non-Markovian signals specified by Eqs. 8 and 9 (green). The black line shows the information bound at which Ipred is maximized for a given Ipast. The chemotaxis system is not at the information bound, but it does move towards it as C is increased. The red line connects the red points where Ipred is maximized for a given resource cost C. The accessible region of Ipred and Ipast under a given resource constraint C = RT + XT is obtained by optimizing over the methylation time τm and the ratio of readout over receptor molecules XT/RT. The forecast interval is τ = τv. (B) The methylation time τm over the input correlation time τv as a function of the distance θ along the three respective isocost lines shown in panel A. The methylation time τm increases along the isocost line, but there exists an optimal τm that maximizes the predictive information, marked by the red points; θ → 0 corresponds to the origin of panel A, (Ipred, Ipast) = (0, 0); the points where θ = 0.2 along the isocost lines of panel A are marked with a bar. As the resource constraint is relaxed (higher C), the optimal τm decreases: the system moves towards the information bound, where it takes an instantaneous derivative, corresponding to τr, τm → 0. (C) The contour lines of Ipred and Ipast for increasing values of the steepness g of an exponential ligand concentration gradient ℓ(x) = ℓ0·e^{gx}, keeping the total resource cost fixed at C = RT + XT = 10⁴; τm and XT/RT have been optimized. The maximal predictive information Ipred under the resource constraint C (marked by the red points) increases with the gradient steepness. The blue line shows Ipred and Ipast for the E. coli chemotaxis system with τm = 10 s and XT = RT = 5000 fixed at their measured values [35]. Our analysis predicts that this system has been optimized to detect shallow gradients. Parameter values unless specified: τr = 100 ms [10, 34]; τv = 0.9 s and σv² = g²ℓ̄²σvx², with ℓ̄ = 100 µM and σvx² = 157.1 µm²s⁻² [18]; ω0 → 0; g is given in units of mm⁻¹; in A, g = 4/mm.
Clearly, both the predictive and the past information rise as the gradient steepness g increases: a steeper concentration gradient yields a larger change in the concentration, and thus a stronger signal.

More interestingly, in the optimal system Ipred rises much faster with Ipast (red line) than in the E. coli system (blue line). A steeper gradient g yields a stronger input signal, which lifts the signal further above the sampling noise. This allows the optimal system to take a more recent derivative, with a smaller τm, which is more informative about the future. In contrast, the methylation time τm of the E. coli chemotaxis system is fixed. As Fig. 4C shows, this value is beneficial for detecting shallow gradients, g ≲ 0.2 mm⁻¹. Moreover, in this regime, not only Ipred but also Ipast is close to the respective value for the optimal system. For steeper gradients, Ipast becomes much higher in the E. coli system than in the optimal one, even though Ipred remains lower. The bacterium increasingly collects information that is less informative about the future. Taken together, these results strongly suggest that the system has been optimized to predict future concentration changes in shallow gradients, which necessitate a relatively long methylation time.

DISCUSSION

Cellular systems need to predict the future signal by capitalizing on information that is contained in the past signal. To this end, they need to encode the past signal into the dynamics of the intracellular biochemical network from which the future input is inferred. To maximize the predictive information for a given amount of information that is extracted, the cell should store those signal characteristics that are most informative about the future signal.
For a Markovian signal obeying an Ornstein-Uhlenbeck process this is the current signal value, while for the non-Markovian signal corresponding to an underdamped particle in a harmonic well, it is the current signal value and its derivative. As we have seen here, cellular systems are able to extract these signal characteristics: the push-pull network can copy the current input into the output, while the chemotaxis network can take an instantaneous derivative. We have thus demonstrated that, at least for two classes of signals, cellular systems are in principle able to extract the most predictive information, allowing them to reach the information bound.

Yet, our analysis also shows that extracting the most relevant information can be exceedingly costly. To copy the most recent input signal into the output, the integration time of the push-pull network needs to go to zero, which means that the chemical power diverges. Moreover, taking an instantaneous derivative reduces the gain to zero, such that the signal is no longer lifted above the inevitable intrinsic biochemical noise of the signalling system. In fact, taking into account the chemical power cost of driving the adaptation cycle [27, 37] would push the system even further away from the information bound.

While information is a resource (the cell cannot predict the future without extracting information from the past signal), the principal resources that have a direct cost are time, building blocks and energy. The predictive information per protein and energy cost is therefore most likely a more relevant fitness measure than the predictive information per past information. Our analysis reveals that, in general, it is not optimal to operate at the information bound: cells can increase the predictive information for a given resource constraint by moving away from the bound.
Increasing the integration time in the push-pull network reduces the chemical power and makes it possible to take more concentration measurements per protein copy. And increasing the methylation time in the chemotaxis system increases the gain. Both enable the system to extract more information from the past signal. Yet, increasing the integration time or the methylation time also means that the information that has been collected is less informative about the future signal. This interplay gives rise to an optimal integration and methylation time, which maximize the predictive information for a given resource constraint. This argument also explains why the respective systems move towards the information bound when the resource constraint is relaxed: increasing the number of receptor and readout molecules allows the system to take more instantaneous concentration measurements, which makes time averaging less important, thus reducing the integration time. Increasing the number of readout molecules also reduces the error in sampling the receptor state. This makes it easier to detect a change in the receptor activity resulting from the signal, thus allowing for a smaller dynamical gain and a shorter methylation time.

Information theory shows that the amount of transmitted information depends not only on the characteristics of the information processing system, but also on the statistics of the input signal. While much progress has been made in characterizing cellular signalling systems, the statistics of the input signal is typically not known, with a few notable exceptions [38]. Here, we have focused on two classes of input signals, but it seems likely that the signals encountered by natural systems are much more diverse. It will be interesting to extend our analysis to signals with a richer temporal structure [9], and to see whether cellular systems exist that can optimally encode these signals for prediction.
Finally, while we have analyzed the design of cellular signaling networks to optimally predict future signals, we have not addressed the utility of information for function or behavior. It is clear that many functional or behavioral tasks, like chemotaxis [18], require information, but what the relevant bits of information are is poorly understood [7]. Moreover, cells ultimately employ their resources (protein copies, time, and energy) for function or behavior, not for processing information per se. Here, we have shown that maximizing predictive information under a resource constraint, C → Ipast → Ipred, does not necessarily imply maximizing past information. This hints that optimizing a functional or behavioral task under a resource constraint, C → Ipred → function, may not imply maximizing the predictive information necessary to carry out this task.

ACKNOWLEDGMENTS

We thank Jenny Poulton, Manuel Reinhardt, Michael Vennettilli and Daan de Groot for many useful discussions. This work is part of the Dutch Research Council (NWO) and was performed at the research institute AMOLF. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 885065).

Appendix A: General

1. Linear signalling networks

Since the systems studied in the main text have a single steady state, we will study them in the linear-noise approximation [39]. For non-linear systems, the quality of the approximation improves with system size, but it can already be remarkably good for systems with only 10 copies [20, 22, 40]. In the linear-noise approximation, we expand the rate equations to first order around the steady state of the mean-field chemical rate equations, and compute the noise at this steady state.
In this approximation the network dynamics are a multidimensional Ornstein-Uhlenbeck (OU) process:

\dot{\delta y} = G\,\delta s(t) + J\,\delta y(t) + B\,\xi(t), \qquad (A1)

where δs(t) is a length-k vector of input signals and δy is the length-n vector of all network species, both defined in terms of deviations from their mean. The vector ξ(t) describes the m independent white-noise processes associated with the m network reactions; they have zero mean, unit variance, and are delta correlated: ⟨ξi(t)⟩ = 0, ⟨ξi(t)ξj(t′)⟩ = δijδ(t − t′), with δij the Kronecker delta. The n × n matrix J is the Jacobian of the network, the n × k signal gain matrix G describes the strength with which each signal directly impacts each species, and the n × m matrix B contains the noise strengths. The eigenvalues of the Jacobian J must have negative real parts for the system to be stable, and we require all signals to be stationary.

2. Integration kernels, power spectra, and correlation functions

We continue by deriving the stationary auto-correlation matrix of a multidimensional OU process, such as Eq. A1, via the network's power spectra. The power spectrum of a real-valued random process x(t) is the squared modulus of its Fourier transform: Sx(ω) = ⟨δx̃(−ω)δx̃(ω)⟩, and similarly Sx→y(ω) = ⟨δx̃(−ω)δỹ(ω)⟩. Throughout this work we use the following conventions for the Fourier transform and its inverse: F{f(t)} ≡ f̃(ω) = ∫_{−∞}^{∞} dt f(t) e^{−iωt} and F⁻¹{f̃(ω)} = (2π)⁻¹ ∫_{−∞}^{∞} dω f̃(ω) e^{iωt} = f(t). To obtain the correlation functions from the power spectra we invoke the Wiener-Khinchin theorem.

The general solution to Eq. A1 is

\delta y(t) = \int_{-\infty}^{t} dt'\, e^{J(t-t')}\left(G\,\delta s(t') + B\,\xi(t')\right), \qquad (A2)

which shows the two contributions to the time-dependent solution: that of the external signal and that of the internal noise. The n × k matrix e^{J(t−t′)}G contains the integration kernels; its (i, j)th entry determines how the jth signal affects the ith system component over time.
The n × m matrix e^{J(t−t′)}B is similar, but contains the functions that map the noise terms onto the system components. These matrices can be obtained by taking the Fourier transform of Eq. A1 and solving for δỹ(ω):

i\omega\,\delta\tilde{y}(\omega) = G\,\delta\tilde{s}(\omega) + J\,\delta\tilde{y}(\omega) + B\,\tilde{\xi}(\omega), \qquad (A3)
\delta\tilde{y}(\omega) = (i\omega I_n - J)^{-1}\left(G\,\delta\tilde{s}(\omega) + B\,\tilde{\xi}(\omega)\right). \qquad (A4)

Using the convolution theorem to take the Fourier transform of Eq. A2, and comparing the result to Eq. A4, now shows that F{e^{J(t−t′)}} = (iωI_n − J)⁻¹. We obtain for the power spectra of the network components

S_y(\omega) = \langle\delta\tilde{y}(-\omega)\,\delta\tilde{y}(\omega)^T\rangle = G(-\omega)S_s(\omega)G(\omega)^T + |N(\omega)|^2, \qquad (A5)

with the matrix of frequency-dependent gains G(ω) ≡ (iωI_n − J)⁻¹G and the frequency-dependent noise N(ω) ≡ (iωI_n − J)⁻¹B. The cross terms vanish because the fluctuations of the external signal are uncorrelated from the internal noise. Furthermore, the power spectrum of a white-noise process is constant, and all the noise terms are independent of one another, such that the spectral density of the noise vector is the identity matrix, ⟨ξ̃(−ω)ξ̃(ω)ᵀ⟩ = I_m. We also need to consider the cross-spectra between the signals and the network components; specifically, we will need the spectra from the network to the signals:

S_{y\to s}(\omega) = \langle\delta\tilde{y}(-\omega)\,\delta\tilde{s}(\omega)^T\rangle = G(-\omega)S_s(\omega). \qquad (A6)

From Eq. A5 and Eq. A6 we can obtain all necessary correlation functions and (co)variances by taking the inverse Fourier transform of the component of interest (for a variance we can directly set t = 0). The advantage of using this form is that the contributions of each signal and of the noise terms appear separately. When we are, for example, interested in a variance that is caused only by noise, we can omit the terms depending on the signal power spectra, and vice versa. Moreover, the power spectra are usually simpler in form than the corresponding correlation functions. The covariance and auto-correlation matrices can also be found by solving Eq.
A2 directly in the time domain; the solutions are shown here for completeness. For a derivation, see for example the work by Vennettilli et al. [41]. In this case it is most convenient to include the signals as system components; we thus have a new Jacobian J′ and a new noise strength matrix B′ which include all network components and the signals themselves. The covariance matrix C is then obtained by solving the Lyapunov equation

J'C + CJ'^T + B'B'^T = 0, \qquad (A7)

and the correlation matrix is given by

C(\tau) = e^{J'\tau}C \quad \text{for } \tau > 0. \qquad (A8)

Appendix B: Signals and statistics

1. Markovian signal

For the Markovian ligand concentration dynamics we use a one-dimensional OU process,

\delta\dot{\ell} = -\delta\ell/\tau_\ell + \eta_\ell(t), \qquad (B1)

where the ligand concentration is defined in terms of the deviation from its mean, δℓ = ℓ(t) − ℓ̄. The correlation time is given by τℓ, and the noise ηℓ(t) is derived from a unit white-noise process, ηℓ(t) ≡ σℓ√(2/τℓ)ξ(t), such that ⟨ηℓ(t)ηℓ(t′)⟩ = (2σℓ²/τℓ)δ(t − t′). Using Eq. A7 and Eq. A8 we obtain for the steady-state auto-correlation:

\langle\delta\ell(\tau)\delta\ell(0)\rangle = \sigma_\ell^2 e^{-\tau/\tau_\ell}. \qquad (B2)

2. Non-Markovian signal

Not all ligand concentration trajectories encountered by cells are expected to be Markovian. For example, E. coli swims through its environment with a velocity that exhibits persistence. This leads to an auto-correlation function of the concentration derivative which does not decay instantaneously [18]. To model such a persistent signal, we use the classical model of a particle in a harmonic well,

\delta\dot{\ell} = v(t), \qquad \dot{v} = -\omega_0^2\,\delta\ell(t) - v(t)/\tau_v + \eta_v(t), \qquad (B3)

where ω0 = √(k/m), with k the spring constant and m the mass of the particle, τv is a relaxation timescale, and ηv(t) = σv√(2/τv)ξ(t), with ξ(t), as used throughout, a Gaussian white-noise process of unit variance, and σv the standard deviation of v.
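The Lyapunov route of Eqs. A7 and A8 is straightforward to evaluate numerically. The sketch below (illustrative parameters) computes the stationary covariance and lagged correlations of the harmonic-well signal of Eq. B3 and compares one element with the closed form derived below (Eq. B7):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Numerical evaluation of Eqs. A7 and A8 for the harmonic-well signal of
# Eq. B3 (illustrative parameters, overdamped: 1/tau_v > 2*omega0).
omega0, tau_v, sigma_v = 0.4, 1.0, 1.0
mu = 1.0 / tau_v
rho = np.sqrt(mu**2 / 4.0 - omega0**2)

J = np.array([[0.0, 1.0],
              [-omega0**2, -mu]])
B = np.array([[0.0, 0.0],
              [0.0, sigma_v * np.sqrt(2.0 / tau_v)]])

# Lyapunov equation (Eq. A7): J C + C J^T + B B^T = 0.
C = solve_continuous_lyapunov(J, -B @ B.T)

# Lagged correlation matrix (Eq. A8) versus the closed-form vv element
# of the auto-correlation matrix (Eq. B7).
tau = 2.0
C_tau = expm(J * tau) @ C
vv_closed = sigma_v**2 * np.exp(-mu * tau / 2.0) * (
    np.cosh(rho * tau) - mu / (2.0 * rho) * np.sinh(rho * tau))
print(C[0, 0], C_tau[1, 1], vv_closed)
```

The numerically obtained covariance reproduces σℓ² = σv²/ω0² and σℓv = 0 (Eq. B6), and C(τ) matches the closed-form auto-correlation matrix.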
If the signal obeyed the fluctuation-dissipation relation, we would have mσv² = kBT; but since the biochemical signal could very well be generated via an active process, this relation need not hold. The process can be expressed as a two-dimensional OU process with

J = \begin{pmatrix} 0 & 1 \\ -\omega_0^2 & -1/\tau_v \end{pmatrix}, \qquad (B4)

B = \begin{pmatrix} 0 & 0 \\ 0 & \sigma_v\sqrt{2/\tau_v} \end{pmatrix}. \qquad (B5)

Using Eq. A7, we find for the covariance matrix

C = \begin{pmatrix} \sigma_\ell^2 & \sigma_{\ell v} \\ \sigma_{\ell v} & \sigma_v^2 \end{pmatrix} = \sigma_v^2 \begin{pmatrix} 1/\omega_0^2 & 0 \\ 0 & 1 \end{pmatrix}. \qquad (B6)

Using Eq. A8 we obtain the auto-correlation matrix in the overdamped regime, τv⁻¹ > 2ω0:

C(\tau) = \begin{pmatrix} \langle\delta\ell(\tau)\delta\ell(0)\rangle & \langle\delta\ell(\tau)\delta v(0)\rangle \\ \langle\delta v(\tau)\delta\ell(0)\rangle & \langle\delta v(\tau)\delta v(0)\rangle \end{pmatrix} = \begin{pmatrix} \sigma_\ell^2 e^{-\mu\tau/2}\left[\cosh(\rho\tau) + \frac{\mu}{2\rho}\sinh(\rho\tau)\right] & \sigma_v^2 e^{-\mu\tau/2}\,\frac{1}{\rho}\sinh(\rho\tau) \\ -\sigma_v^2 e^{-\mu\tau/2}\,\frac{1}{\rho}\sinh(\rho\tau) & \sigma_v^2 e^{-\mu\tau/2}\left[\cosh(\rho\tau) - \frac{\mu}{2\rho}\sinh(\rho\tau)\right] \end{pmatrix}, \qquad (B7)

where ρ = √(µ²/4 − ω0²), with µ = τv⁻¹. The range of ligand concentrations which E. coli might encounter is very large, based on the dissociation constants of the inactive and active receptor conformations, which for the Tar-MeAsp receptor-ligand combination are K_D^I = 18 µM and K_D^A = 2900 µM, respectively [42, 43]. This suggests that the variance in the ligand concentration is very large relative to that of the derivative of the ligand concentration, which is set by the swimming behaviour of the cell. For this reason we specifically focus on the limit ω0 → 0, which corresponds to a vanishingly small spring constant, or a harmonic potential that becomes extremely wide. The variance σℓ² in the concentration then diverges; the normalized correlation functions in this limit are

\lim_{\omega_0\to 0} \begin{pmatrix} \frac{\langle\delta\ell(\tau)\delta\ell(0)\rangle}{\sigma_\ell^2} & \frac{\langle\delta\ell(\tau)\delta v(0)\rangle}{\sigma_\ell\sigma_v} \\ \frac{\langle\delta v(\tau)\delta\ell(0)\rangle}{\sigma_\ell\sigma_v} & \frac{\langle\delta v(\tau)\delta v(0)\rangle}{\sigma_v^2} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & e^{-\mu\tau} \end{pmatrix}. \qquad (B8)

Appendix C: Information bottleneck framework and solutions

Anticipating future environmental conditions allows for timely adaptation.
However, storing information costs resources such as proteins, energy and time, and not all information in the past ligand concentrations will be relevant for predicting the signal's future state. Assuming that resources are in limited supply, this means that cells must be efficient in which, and how much, information they store. This is elegantly captured in the Information Bottleneck Method (IBM), which describes the problem of maximizing the information on the future signal while minimizing the information on the past signal that is stored in the network output, from which the future input is predicted [8]. The objective function for the prediction of a variable of interest zτ ≡ z(t + τ) is:

\max_{P(x_0|L_p)} : \quad \mathcal{L} = I(x_0; z_\tau) - \gamma\, I(x_0; L_p). \qquad (C1)

The value of the sensing-system output at the current time t is x0 ≡ x(t). The variable of interest zτ at a future time t + τ is the future concentration ℓτ ≡ ℓ(t + τ) for the Markovian signal, and the future concentration derivative vτ ≡ v(t + τ) for the non-Markovian signal. Since the system of interest needs to predict one signal characteristic (either the future signal value or its derivative), one output component is sufficient for encoding the required information, as we describe in more detail below. The vector Lp = (δℓ(0), δℓ(−∆t), . . . , δℓ(−(N − 1)∆t))ᵀ is the past trajectory of ligand concentrations of length N, discretized with timestep ∆t. The mutual information between the current system output and the future property of interest is the predictive information, Ipred ≡ I(x0; zτ), and the mutual information between the current system output and the past ligand concentration trajectory is the past information, Ipast ≡ I(x0; Lp). The Lagrange multiplier γ sets the relative cost of storing past information over obtaining predictive information. Given a value of γ, Eq. C1 is maximized by optimizing the mapping of the past ligand concentration trajectory Lp onto the current output x0.
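For intuition, the two informations entering Eq. C1 can be written in closed form for a scalar Gaussian toy model, not analyzed in the paper, in which the output samples only the current concentration of the Markovian signal; all parameter values are illustrative:

```python
import numpy as np

# Toy Gaussian model for the two informations in Eq. C1: the output
# samples only the current concentration, x0 = a*dl(0) + xi.
# Illustrative parameters.
a, sigma_l, sigma_xi = 2.0, 1.0, 1.0
tau, tau_l = 1.0, 2.0

var_x = a**2 * sigma_l**2 + sigma_xi**2
# Past information: for a Markovian signal, x0 depends on L_p only
# through l(0), so I(x0; L_p) = I(x0; l(0)).
I_past = 0.5 * np.log2(var_x / sigma_xi**2)
# Predictive information: the correlation with l(tau) decays as
# exp(-tau/tau_l) (Eq. B2).
rho2 = (a**2 * sigma_l**2 / var_x) * np.exp(-2.0 * tau / tau_l)
I_pred = -0.5 * np.log2(1.0 - rho2)
print(I_past, I_pred)
```

As expected, Ipred is strictly smaller than Ipast, and both grow with the gain-to-noise ratio a²σℓ²/σξ².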
Since, by the data processing inequality, we have Ipast ≥ Ipred, for γ = 1 the objective function is maximized by Ipast = Ipred = 0. As γ is decreased, both the past and the predictive information increase, and the parametric curve that arises in the (Ipast, Ipred) plane is the information bound. For γ = 0 there is no cost to storing past information. The predictive information is then limited only by the amount of information contained in the past about the future signal property: Ipred ≤ I(Lp; zτ).

1. Gaussian information bottleneck

In general, Eq. C1 can be difficult to solve, as all mappings from Lp to x0 are allowed. However, the problem becomes analytically tractable when the joint probability distribution of Lp and zτ is a multivariate Gaussian. Here, we follow the procedure of Chechik and coworkers to obtain this mapping [12]. In the Gaussian model, the optimal mapping from Lp to x0 is a linear one [12]:

x_0 = A L_p + \xi; \qquad \xi \sim \mathcal{N}(0, \sigma_\xi^2), \qquad (C2)

where A is a row vector which determines how strongly each entry of Lp contributes to the scalar output x0 at any point in time. The random variable ξ is the noise added to the signal due to the stochastic nature of the mapping; it is a Gaussian random variable, independent of Lp, with zero mean and variance σξ². Finding the optimal mapping from Lp to x0 corresponds to finding the optimal combination of A and σξ². It can be shown that for any pair (A, σξ²), there exists a pair (A′, 1) which yields the same values for Ipast and Ipred after maximization of Eq. C1 [12]. Therefore, we can set σξ² = 1 without altering the information curve.

To obtain the information bound, we rewrite Eq.
C1 using the definition of the mutual information between Gaussian random variables:

\mathcal{L} = \frac{1}{2}\log\left(\sigma_x^2/\sigma_{x|z}^2\right) - \gamma\,\frac{1}{2}\log\left(\sigma_x^2/\sigma_{x|L}^2\right),    (C3)

with the total variance σ_x² in the output x_0, the output variance conditional on the future signal property σ²_{x|z} ≡ σ²_{x|z_τ}, and the output variance conditional on the complete history of ligand concentrations σ²_{x|L} ≡ σ²_{x|L_p}. The latter is just the variance caused by the intrinsic noise, σ²_{x|L} = σ_ξ² = 1. The total variance in x_0 can be expressed in terms of the mapping vector A and the variance in the past signal using Eq. C2, σ_x² = AΣ_LA^T + 1, where Σ_L ≡ Σ_{L_p} is the covariance matrix of the past ligand concentration trajectory L_p. To express the output variance conditional on the future signal property z_τ we use the Schur complement formula, which in general form reads:

\Sigma_{x|y} = \Sigma_x - \Sigma_{xy}\Sigma_y^{-1}\Sigma_{yx},    (C4)

where Σ_{yx} = Σ_{xy}^T. Using this formula to rewrite σ²_{x|z}, and then using the linear relation from Eq. C2 again, we obtain σ²_{x|z} = AΣ_{L|z}A^T + 1.

Filling in the expressions for the variances in \mathcal{L} (Eq. C3) gives:

\mathcal{L} = \frac{1}{2}\left[(1-\gamma)\log\left|A\Sigma_L A^T + 1\right| - \log\left|A\Sigma_{L|z}A^T + 1\right|\right].    (C5)

For any symmetric matrix C we have \frac{\delta}{\delta A}\log|ACA^T| = \left(ACA^T\right)^{-1} 2AC, such that we obtain for the derivative of \mathcal{L} with respect to A:

\frac{\delta\mathcal{L}}{\delta A} = (1-\gamma)\frac{A\Sigma_L}{A\Sigma_L A^T + 1} - \frac{A\Sigma_{L|z}}{A\Sigma_{L|z}A^T + 1}.    (C6)

In our case A is a row vector, and both denominators are thus scalars. We find the maximum of \mathcal{L} by equating its derivative to zero, which gives:

A\Sigma_{L|z}\Sigma_L^{-1} = (1-\gamma)\frac{A\Sigma_{L|z}A^T + 1}{A\Sigma_L A^T + 1}\,A.    (C7)

For this equality to hold, A must either be identically 0, or a left eigenvector of the matrix \Sigma_{L|z}\Sigma_L^{-1} with eigenvalue:

\lambda = (1-\gamma)\frac{A\Sigma_{L|z}A^T + 1}{A\Sigma_L A^T + 1}.
(C8)

Here, we note that if the signal statistics are sufficiently rich and the prediction complexity sufficiently large (because, for example, multiple signal characteristics need to be predicted), then the matrix Σ_{L|z}Σ_L^{-1} has multiple eigenvectors with non-trivial eigenvalues 0 < λ_i < 1 [12]. This reflects the idea that storing the past information that is necessary to enable this complex prediction task may require multiple output components, i.e. an output vector x, where each output component has an integration kernel given by one of the eigenvectors of Σ_{L|z}Σ_L^{-1} [12]. However, for Markovian signals only one eigenvector with non-trivial eigenvalue 0 < λ < 1 emerges, which means that one output component is sufficient to encode the required information. For the non-Markovian signals studied here, Σ_{L|z}Σ_L^{-1} has two such eigenvectors if both the future value and its derivative need to be predicted (and z = (ℓ_τ, v_τ)); to optimally predict both features from the current output, two output components are then required, provided I_past is sufficiently large. However, here we consider the scenario that only the future derivative needs to be predicted, in which case only one non-trivial eigenvector emerges, and one output component is sufficient for encoding the required information. We leave the problem of predicting multiple signal features via multiple output components for future work.

We can define the optimal mapping A = ||A||ν, where ν is the normalized left eigenvector of Σ_{L|z}Σ_L^{-1} corresponding to its smallest eigenvalue, 0 < λ < 1. The magnitude can be found by solving Eq. C8 for ||A||, using from Eq. C7 that λνΣ_Lν^T = νΣ_{L|z}ν^T. This gives for the optimal mapping:

A^{opt} = \begin{cases} \sqrt{\dfrac{1-\gamma-\lambda}{\nu_1\Sigma_L\nu_1^T\,\lambda\gamma}}\;\nu_1 & \text{for } 0 < \lambda < 1-\gamma, \\[4pt] 0 & \text{for } 1-\gamma \le \lambda \le 1. \end{cases}    (C9)

We can substitute ||A||^2 = (1-\gamma-\lambda)/(\nu\Sigma_L\nu^T\lambda\gamma) in the definitions of the mutual information to express them in terms of λ and γ.
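The optimal mapping of Eqs. C7–C9 can be constructed explicitly for a small example and compared against the closed-form expressions for I_past and I_pred derived next (Eqs. C10 and C11). The sketch below, with hypothetical covariance values, builds W = Σ_{L|z}Σ_L^{-1}, extracts its smallest left eigenvalue and eigenvector, forms A^{opt}, and evaluates both informations directly with unit output noise:

```python
import numpy as np

def gaussian_ib(sigma_L, c, sigma2_z, gamma):
    """Construct the optimal Gaussian IB mapping (Eqs. C7-C9).

    sigma_L : covariance matrix of the past trajectory L_p
    c       : cross-covariance vector between L_p and the scalar target z
    sigma2_z: variance of z
    gamma   : IB trade-off parameter, with 0 < lambda < 1 - gamma assumed
    Returns (Ipast_direct, Ipast_formula, Ipred_direct, Ipred_formula) in nats.
    """
    # Conditional covariance via the Schur complement (Eq. C4).
    sigma_Lz = sigma_L - np.outer(c, c) / sigma2_z
    W = sigma_Lz @ np.linalg.inv(sigma_L)
    # Left eigenvectors of W are right eigenvectors of W^T.
    vals, vecs = np.linalg.eig(W.T)
    i = np.argmin(vals.real)
    lam = vals[i].real
    nu = vecs[:, i].real
    nu /= np.linalg.norm(nu)
    # Optimal mapping magnitude (Eq. C9).
    A = np.sqrt((1 - gamma - lam) / (nu @ sigma_L @ nu * lam * gamma)) * nu
    # Direct evaluation of the informations with unit intrinsic noise.
    Ipast = 0.5 * np.log(A @ sigma_L @ A + 1)
    Ipred = Ipast - 0.5 * np.log(A @ sigma_Lz @ A + 1)
    # Closed forms (Eqs. C10 and C11).
    Ipast_f = 0.5 * np.log((1 - gamma) / gamma * (1 - lam) / lam)
    Ipred_f = 0.5 * np.log((1 - gamma) / lam)
    return Ipast, Ipast_f, Ipred, Ipred_f

# Hypothetical example statistics.
Ipast, Ipast_f, Ipred, Ipred_f = gaussian_ib(
    np.array([[1.0, 0.5], [0.5, 1.0]]), np.array([0.6, 0.3]), 1.0, 0.1)
```

The direct and closed-form values agree, and I_pred ≤ I_past, consistent with the data processing inequality.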
For the past information we obtain:

I_{past} = \frac{1}{2}\log\left(||A||^2\nu\Sigma_L\nu^T + 1\right) = \frac{1}{2}\log\left(\frac{1-\gamma}{\gamma}\,\frac{1-\lambda}{\lambda}\right).    (C10)

And for the predictive information:

I_{pred} = \frac{1}{2}\log\left(||A||^2\nu\Sigma_L\nu^T + 1\right) - \frac{1}{2}\log\left(||A||^2\nu\Sigma_{L|z}\nu^T + 1\right) = I_{past} - \frac{1}{2}\log\left(\frac{1-\lambda}{\gamma}\right) = \frac{1}{2}\log\left(\frac{1-\gamma}{\lambda}\right).    (C11)

2. Markovian signal

To obtain the information bound for prediction of the future ligand concentration of a Markovian signal, we need to determine the eigenvalues and eigenvectors of the matrix (see Eqs. C7 and C8)

W = \Sigma_{L|\ell_\tau}\Sigma_L^{-1}.    (C12)

Using the Schur complement formula (Eq. C4) to rewrite the conditional matrix gives \Sigma_{L|\ell_\tau} = \Sigma_L - \Sigma_{L\ell_\tau}\Sigma_{L\ell_\tau}^T/\sigma_\ell^2. Then defining the normalized matrices R_L = \Sigma_L/\sigma_\ell^2 and R_{L\ell_\tau} = \Sigma_{L\ell_\tau}/\sigma_\ell^2 we find

W = I_N - R_{L\ell_\tau}R_{L\ell_\tau}^T R_L^{-1},    (C13)

where N is the length of the input trajectory L_p. The correlation matrix of the past trajectory is symmetric with entries R_L^{(i,j)} = \exp(-|i-j|\Delta t/\tau_\ell), where Δt is the discretization timestep of the past trajectory L_p and i ranges from 1 to N. This is a Kac-Murdock-Szegő matrix, and its inverse is known:

R_L^{-1} = \frac{1}{1-e^{-2\Delta t/\tau_\ell}}
\begin{pmatrix}
1 & -e^{-\Delta t/\tau_\ell} & 0 & \cdots & \cdots & 0 \\
-e^{-\Delta t/\tau_\ell} & 1+e^{-2\Delta t/\tau_\ell} & -e^{-\Delta t/\tau_\ell} & \cdots & \cdots & 0 \\
0 & -e^{-\Delta t/\tau_\ell} & 1+e^{-2\Delta t/\tau_\ell} & \ddots & & \vdots \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & \cdots & -e^{-\Delta t/\tau_\ell} & 1+e^{-2\Delta t/\tau_\ell} & -e^{-\Delta t/\tau_\ell} \\
0 & \cdots & \cdots & 0 & -e^{-\Delta t/\tau_\ell} & 1
\end{pmatrix}.    (C14)

Note that the inverse matrix is tridiagonal. The length-N cross-correlation vector between the past trajectory and the future concentration has entries R_{L\ell_\tau}^{(i)} = \exp(-(\tau + (i-1)\Delta t)/\tau_\ell). The product of the correlation matrices is surprisingly simple:

R_{L\ell_\tau}R_{L\ell_\tau}^T R_L^{-1} = e^{-2\tau/\tau_\ell}
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
e^{-\Delta t/\tau_\ell} & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
e^{-(N-1)\Delta t/\tau_\ell} & 0 & \cdots & 0
\end{pmatrix}.    (C15)

Using this result we can straightforwardly determine the eigenvalues.
The characteristic equation |W - \lambda I_N| = 0 reads

\left|\begin{pmatrix}
1-\lambda-e^{-2\tau/\tau_\ell} & 0 & \cdots & 0 \\
-e^{-(2\tau+\Delta t)/\tau_\ell} & 1-\lambda & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
-e^{-(2\tau+(N-1)\Delta t)/\tau_\ell} & 0 & \cdots & 1-\lambda
\end{pmatrix}\right| = 0.    (C16)

The only contribution to the determinant comes from the diagonal, and the only non-trivial eigenvalue is thus λ = 1 - e^{-2τ/τ_ℓ}. The optimal mapping is thus onto a one-dimensional scalar output x_0. The corresponding left eigenvector is given by

\nu_1 W = \left(1 - e^{-2\tau/\tau_\ell}\right)\nu_1,    (C17)

which holds for \nu_1 = \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}. The optimal mapping for the prediction of a one-dimensional OU process is thus to copy its most recent value. This agrees with intuition: for any Markovian process, all the information about the future signal is contained in the most recent value. For a continuous input signal (rather than a discretized signal), and a continuous integration kernel k(t) (rather than a mapping vector A), this means that the optimal integration kernel is k^{opt}(t) = aδ(t).

3. Non-Markovian signal

To find the optimal mapping for the prediction of the derivative of a non-Markovian signal, based on its history of ligand concentrations, we need to find the eigenvalues and eigenvectors of the matrix

W = \Sigma_{L|v_\tau}\Sigma_L^{-1} = I_N - \frac{1}{\sigma_v^2}\Sigma_{Lv_\tau}\Sigma_{Lv_\tau}^T\Sigma_L^{-1}.    (C18)

The covariance matrix of the past trajectory is symmetric with entries \Sigma_L^{(i,j)} = ⟨δℓ(0)δℓ(|i-j|Δt)⟩, where both i and j range from 1 to N, the past trajectory length. The covariance vector between the past trajectory and the future derivative has entries \Sigma_{Lv_\tau}^{(i)} = ⟨δℓ(0)δv(τ + (i-1)Δt)⟩. Both the concentration auto-correlation function and the concentration-to-future-derivative cross-correlation function are given in Eq. B7.

To better understand the optimal mapping of this signal we numerically investigate the eigenvalues of the matrix W. For the prediction of v_τ, there is only one non-trivial eigenvalue. As for the Markovian signal, this shows that for the prediction of the derivative of this non-Markovian signal, the optimal mapping is always onto a scalar output.
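The eigenstructure derived for the Markovian signal (Eqs. C13–C17) is easy to verify numerically. The sketch below, with hypothetical values for N, Δt, τ and τ_ℓ, builds the Kac-Murdock-Szegő correlation matrix, checks that its inverse is tridiagonal (Eq. C14), and confirms that W has a single non-trivial eigenvalue 1 − e^{−2τ/τ_ℓ} with left eigenvector proportional to (1, 0, …, 0):

```python
import numpy as np

N, dt, tau_l, tau = 5, 0.1, 1.0, 0.3   # hypothetical discretization values
i = np.arange(N)
R_L = np.exp(-np.abs(i[:, None] - i[None, :]) * dt / tau_l)  # KMS matrix
c = np.exp(-(tau + i * dt) / tau_l)                          # R_{L l_tau}
W = np.eye(N) - np.outer(c, c) @ np.linalg.inv(R_L)          # Eq. C13

# The inverse of the KMS matrix is tridiagonal (Eq. C14):
# all entries further than one step off the diagonal vanish.
R_inv = np.linalg.inv(R_L)
assert np.allclose(R_inv[np.abs(i[:, None] - i[None, :]) > 1], 0)

# Left eigenvectors of W are right eigenvectors of W^T.
vals, vecs = np.linalg.eig(W.T)
k = np.argmin(vals.real)
lam = vals[k].real
nu = np.abs(vecs[:, k].real)
assert np.isclose(lam, 1 - np.exp(-2 * tau / tau_l))   # Eq. C16
assert np.allclose(nu / nu.max(), np.eye(N)[0])        # nu ~ (1, 0, ..., 0)
```

All other eigenvalues of W equal one, so only a single output component carries predictive information, as stated in the text.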
The non-trivial eigenvalue λ decreases with the discretization timestep Δt and is minimal for Δt → 0 (Fig. 5). In this limit, λ has the same magnitude for any N ≥ 2, see Fig. 5. A smaller eigenvalue λ corresponds to larger past and predictive information and a larger ratio I_pred/I_past (Eqs. C10 and C11), for any given value of the Lagrange multiplier γ. For the optimal mapping we must thus have N ≥ 2 and Δt → 0, where N sets both the past trajectory and the mapping vector length. Because increasing the length above two does not yield an improvement in the value of λ, we focus on N = 2.

The fact that to reach the optimum we must have N = 2 and Δt → 0 shows that the optimal kernel A takes an instantaneous measurement of a combination of the most recent ligand concentration and its derivative. This can be understood as follows: for a trajectory of length two, the mapping vector also has length two, A = ||A||(\hat{w}_1, \hat{w}_2), with \hat{w}_1^2 + \hat{w}_2^2 = 1. We can then express the linear mapping of L_p to x_0 (Eq. C2) as:

x_0 = ||A||\left[(\hat{w}_1 + \hat{w}_2)\,\delta\ell(0) - \hat{w}_2\Delta t\,\frac{\delta\ell(0) - \delta\ell(-\Delta t)}{\Delta t}\right] + \xi.    (C19)

This expression shows that, as Δt → 0, the two entries of A combine both the most recent signal value and the most recent derivative to generate x_0. This is intuitive because the signal is completely defined by its concentration and derivative (Eq. B3). For this reason, and to obtain analytical insight into the optimal weights, we inspect the final two entries of the past ligand concentration trajectory in the limit Δt → 0, which defines the past signal in terms of its most recent concentration and derivative:

S_0 \equiv \begin{pmatrix} \delta\ell(0) & v(0) \end{pmatrix}^T.    (C20)

FIG. 5. The smallest eigenvalue of the IB matrix is minimal for N ≥ 2 and ∆t → 0. A smaller eigenvalue corresponds to a larger ratio I_pred/I_past for any given value of the Lagrange multiplier γ.
Parameters: friction timescale τ_v^{-1} = 0.862 s^{-1} as determined in [18], prediction interval τ = τ_v, and ω_0 = 0.4 s^{-1} such that the system is slightly overdamped.

Because the signal is Markovian in the joint properties δℓ and v, the vector S_0 contains the same information as the trajectory L_p. The past information is now the mutual information between x_0 and S_0, i.e. I_past = I(x_0; S_0). The output x_0 can then also be written as a projection of S_0 via the alternative mapping vector \tilde{A} = ||A||(\hat{a}, \hat{b}):

x_0 = ||A||\left(\hat{a}\,\delta\ell(0) + \hat{b}\,v(0)\right) + \xi.    (C21)

Comparison with Eq. C19 shows how the components of \tilde{A} relate back to those in A:

\hat{w}_1 = \hat{a} + \hat{b}/\Delta t,    (C22)
\hat{w}_2 = -\hat{b}/\Delta t.    (C23)

To obtain the optimal mapping vector \tilde{A}, the matrix of signal statistics of which the eigenvalues and eigenvectors need to be determined is

W = \Sigma_{s|v_\tau}\Sigma_s^{-1},    (C24)

with

\Sigma_s = \begin{pmatrix} \sigma_\ell^2 & 0 \\ 0 & \sigma_v^2 \end{pmatrix},    (C25)

\Sigma_{s|v_\tau} = \Sigma_s - \frac{1}{\sigma_v^2}\Sigma_{sv_\tau}\Sigma_{sv_\tau}^T,    (C26)

\Sigma_{sv_\tau} = \begin{pmatrix} \langle\delta\ell(0)\delta v(\tau)\rangle \\ \langle\delta v(0)\delta v(\tau)\rangle \end{pmatrix}.    (C27)

We thus obtain

W = I - \begin{pmatrix}
\dfrac{\langle\delta\ell(0)\delta v(\tau)\rangle^2}{\sigma_\ell^2\sigma_v^2} & \dfrac{\langle\delta\ell(0)\delta v(\tau)\rangle\langle\delta v(0)\delta v(\tau)\rangle}{\sigma_v^4} \\[6pt]
\dfrac{\langle\delta\ell(0)\delta v(\tau)\rangle\langle\delta v(0)\delta v(\tau)\rangle}{\sigma_\ell^2\sigma_v^2} & \dfrac{\langle\delta v(0)\delta v(\tau)\rangle^2}{\sigma_v^4}
\end{pmatrix}.    (C28)

This matrix has one non-trivial eigenvalue, \lambda = 1 - \frac{\langle\delta v(0)\delta v(\tau)\rangle^2}{\sigma_v^4} - \frac{\langle\delta\ell(0)\delta v(\tau)\rangle^2}{\sigma_\ell^2\sigma_v^2}, which depends on the normalized correlation functions between, on the one hand, the current concentration or derivative and, on the other hand, the future derivative. The corresponding left eigenvector is

\nu_1 = Q^{-1}\begin{pmatrix} \dfrac{1}{\sigma_\ell}\dfrac{\langle\delta\ell(0)\delta v(\tau)\rangle}{\sigma_\ell\sigma_v} & \dfrac{1}{\sigma_v}\dfrac{\langle\delta v(0)\delta v(\tau)\rangle}{\sigma_v^2} \end{pmatrix},    (C29)

where Q normalizes the vector. Using the linear mapping x_0 = ||A||\nu_1 S_0 + \xi, and defining G \equiv ||A||/Q, shows that the optimal output should be generated as follows:

x_0^{opt} = G\left(\frac{\langle\delta\ell(0)\delta v(\tau)\rangle}{\sigma_\ell\sigma_v}\frac{\delta\ell(0)}{\sigma_\ell} + \frac{\langle\delta v(0)\delta v(\tau)\rangle}{\sigma_v^2}\frac{v(0)}{\sigma_v}\right) + \xi.    (C30)
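The rank-one structure of the matrix in Eq. C28 makes its spectrum easy to check numerically: the non-trivial eigenvalue and the left eigenvector of Eq. C29 follow from Σ_s^{-1}Σ_{sv_τ}. A sketch with generic, hypothetical values for the signal statistics:

```python
import numpy as np

sl, sv = 2.0, 1.5     # hypothetical sigma_l, sigma_v
clv, cvv = 0.8, 0.9   # hypothetical <dl(0) dv(tau)>, <dv(0) dv(tau)>

Sigma_s = np.diag([sl**2, sv**2])          # Eq. C25
u = np.array([clv, cvv])                   # Eq. C27
W = np.eye(2) - np.outer(u, u) @ np.linalg.inv(Sigma_s) / sv**2

# Left eigenvectors of W are right eigenvectors of W^T.
vals, vecs = np.linalg.eig(W.T)
k = np.argmin(vals.real)
lam = vals[k].real
nu = vecs[:, k].real
nu /= np.linalg.norm(nu)

# Non-trivial eigenvalue quoted below Eq. C28.
lam_pred = 1 - cvv**2 / sv**4 - clv**2 / (sl**2 * sv**2)
# Left eigenvector of Eq. C29 (before normalization by Q).
nu_pred = np.array([clv / (sl**2 * sv), cvv / sv**3])
nu_pred /= np.linalg.norm(nu_pred)

assert np.isclose(lam, lam_pred)
assert np.isclose(abs(nu @ nu_pred), 1.0)   # same direction up to sign
```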
Clearly, the optimal mapping depends on the (normalized) cross-correlation coefficient ρ_{ℓ_0v_τ} ≡ ⟨δℓ(0)δv(τ)⟩/(σ_ℓσ_v) between the current concentration δℓ(0) and the future derivative δv(τ), and the cross-correlation coefficient ρ_{v_0v_τ} between the current derivative δv(0) and the future derivative δv(τ). Indeed, to optimally predict the future derivative, the cell should also use the current concentration and not only its current derivative. However, in the limit that the range of concentrations sensed becomes very large, corresponding to ω_0 → 0, the current concentration is no longer correlated with the future derivative, and ρ_{ℓ_0v_τ} → 0 (Eq. B8). In this limit, \hat{a} = 0 and \hat{b} = 1, and the kernel becomes a perfectly adaptive, derivative-taking kernel:

\lim_{\omega_0 \to 0} x_0^{opt} = ||A||\,v(0) + \xi.    (C31)

If we translate this back to the vector ||A||(\hat{w}_1, \hat{w}_2), operating on a ligand concentration trajectory L_p, the optimal weights become \hat{w}_1 = -\hat{w}_2.

Appendix D: Past and predictive information for linear signalling networks

In order to address how close biochemical networks can come to the information bounds derived above, we here describe how we obtain the past and predictive information for any linear (biochemical) network. We then use the resulting general expressions to compute the past and predictive information for the push-pull network and the chemotaxis system of the main text.

For any linear network the output can be written as

\delta x(t) = \int_{-\infty}^{t} ds\, k(t-s)\,\delta\ell(s) + \eta_x(t).    (D1)

The mapping kernel k(t) is a property of the network and describes how the input signal is mapped onto the output. The noise term η_x(t) is a sum of convolutions over all white-noise processes in the network and the corresponding network mapping functions, see Eq. A2.
The variance in the output can generally be split into a part caused by the signal and a part caused by the noise:

\sigma_x^2 = \int_{-\infty}^{t} ds \int_{-\infty}^{t} ds'\, k(t-s)k(t-s')\langle\delta\ell(s)\delta\ell(s')\rangle + \sigma_{\eta_x}^2 = \sigma_{x|\eta}^2 + \sigma_{x|L}^2,    (D2)

where σ²_{x|η} is the signal variance, i.e. the output variance when all noise terms are fixed, and σ²_{x|L} is the noise variance, i.e. the output variance when the complete history of the signal is fixed. Using this decomposition we find for the past information, which is the mutual information between the current output and the complete signal history,

I_{past}(x_0; L_p) = \frac{1}{2}\log\left(\frac{\sigma_x^2}{\sigma_{x|L}^2}\right) = \frac{1}{2}\log(1 + \mathrm{SNR}),    (D3)

where the signal-to-noise ratio is defined as SNR = σ²_{x|η}/σ²_{x|L}. Using the same definition of the mutual information when deriving the predictive information between the current output and the future ligand concentration, we obtain

I_{pred}(x_0; \ell_\tau) = \frac{1}{2}\log\left(\frac{\sigma_x^2}{\sigma_{x|\ell_\tau}^2}\right) = \frac{1}{2}\log\left(1 + \frac{\sigma_{x|\eta}^2}{\sigma_{x|L}^2}\right) - \frac{1}{2}\log\left(1 + \frac{\sigma_{x|\eta}^2 - \langle\delta x(0)\delta\ell(\tau)\rangle^2/\sigma_\ell^2}{\sigma_{x|L}^2}\right) = I_{past} - \frac{1}{2}\log(1 + \mathrm{cSNR}).    (D4)

In the second equality we used the Schur complement formula, Eq. C4, to decompose the variance in the output conditioned on the future signal: σ²_{x|ℓ_τ} = σ_x² - ⟨δx(0)δℓ(τ)⟩²/σ_ℓ². The quantity σ²_{x|η} - ⟨δx(0)δℓ(τ)⟩²/σ_ℓ² can be understood as follows: the first term σ²_{x|η} is the contribution to the total variance of the output σ_x² that comes from the signal variations, while the second term quantifies the variance in the output that is correlated with the future input. The difference is thus the variance in the output coming from the signal variations that are not correlated with the future input. The ratio in the second logarithm can thus be understood as a conditional SNR (cSNR) that quantifies the part of the signal-to-noise ratio that does not contain information about the future signal. This becomes clearer when considering its form in terms of the mapping kernel and the signal correlation functions.
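The decomposition in Eqs. D3 and D4 can be cross-checked against the direct Schur-complement expression for I_pred (Eq. D6) for any discretized linear network. A minimal sketch, with a hypothetical three-tap kernel, OU-like signal statistics, and unit intrinsic noise:

```python
import numpy as np

# Hypothetical kernel weights and discretized signal statistics.
k = np.array([0.7, 0.2, 0.1])
i = np.arange(3)
Sigma_L = np.exp(-np.abs(i[:, None] - i[None, :]) * 0.2)  # sigma_l^2 = 1
c = np.exp(-(0.5 + i * 0.2))   # <dl(-i dt) dl(tau)> with tau/tau_l = 0.5
s2_l = 1.0

s2_sig = k @ Sigma_L @ k       # sigma^2_{x|eta}: variance from the signal
s2_noise = 1.0                 # sigma^2_{x|L}: intrinsic noise variance
s2_x = s2_sig + s2_noise       # total output variance (Eq. D2)
cov_xl = k @ c                 # <dx(0) dl(tau)>

I_past = 0.5 * np.log(1 + s2_sig / s2_noise)               # Eq. D3
cSNR = (s2_sig - cov_xl**2 / s2_l) / s2_noise
I_pred_D4 = I_past - 0.5 * np.log(1 + cSNR)                # Eq. D4
I_pred_D6 = -0.5 * np.log(1 - cov_xl**2 / (s2_x * s2_l))   # Eq. D6

assert np.isclose(I_pred_D4, I_pred_D6)
assert 0 < I_pred_D6 <= I_past
```

The two routes to I_pred agree, which is the consistency that Eqs. D4 and D6 express analytically.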
For any linear signalling network we have

\sigma_{x|\eta}^2 - \frac{\langle\delta x(0)\delta\ell(\tau)\rangle^2}{\sigma_\ell^2} = \int_{-\infty}^{0} ds \int_{-\infty}^{0} ds'\, k(-s)k(-s')\left[\langle\delta\ell(s)\delta\ell(s')\rangle - \frac{\langle\delta\ell(\tau)\delta\ell(s')\rangle\langle\delta\ell(\tau)\delta\ell(s)\rangle}{\sigma_\ell^2}\right],    (D5)

where the term in brackets is the conditional variance of the past signal trajectory given a future value, Σ_{L|ℓ_τ}. The form in Eq. D4 thus tells us that the predictive information is equal to the past information minus the bits that do not contain information about the future ligand concentration. This difference is indeed the part of the past information that does contain predictive information about the future signal.

Although the expression above (Eq. D4) nicely relates the past and predictive information, a more straightforward way of obtaining the predictive information is by expressing it directly in terms of the correlation between the current output and the future ligand concentration:

I_{pred}(x_0; \ell_\tau) = \frac{1}{2}\log\left(\frac{\sigma_x^2}{\sigma_{x|\ell_\tau}^2}\right) = -\frac{1}{2}\log\left(1 - \frac{\langle\delta x(0)\delta\ell(\tau)\rangle^2}{\sigma_x^2\sigma_\ell^2}\right),    (D6)

where we again used the Schur complement formula to rewrite σ²_{x|ℓ_τ}. Written this way, we see that the predictive information depends on the normalized correlation between the current network output and the future ligand concentration. We can simply exchange the future ligand concentration for the future derivative when considering the chemotaxis network.

To compute the past information for linear signalling networks we use Eq. D3, and we thus need to compute the SNR. To compute the predictive information for the prediction of a future ligand concentration, we need to compute the 'future correlation function' ⟨δx(0)δℓ(τ)⟩. For the prediction of the future derivative we need ⟨δx(0)δv(τ)⟩.

Appendix E: Push-pull network

We consider a push-pull network that consists of a phosphorylation-dephosphorylation cycle downstream of a receptor. When bound to ligand, the receptor itself or its associated kinase, such as CheA in E.
coli, catalyzes the phosphorylation of a readout protein x, like CheY. Active readout molecules x* can decay spontaneously or be deactivated by an enzyme (phosphatase), such as CheZ in E. coli. This cycle is driven by the turnover of fuel such as ATP. We recognize that inside the living cell, the chemical driving is typically large: for example, the free energy of ATP hydrolysis is about 20 k_BT, which means that the system essentially operates in the irreversible regime [10, 36]. This system consists of the following reactions:

R + L \underset{k_-}{\overset{k_+}{\rightleftharpoons}} RL,    (E1)
RL + x \overset{k_f}{\longrightarrow} RL + x^*,    (E2)
x^* \overset{k_r}{\longrightarrow} x.    (E3)

Both the total number of receptors R_T = R + RL and the total number of readout molecules X_T = x + x^* are conserved moieties. The chemical Langevin equations of this system are:

\dot{RL} = [R_T - RL(t)]\,\ell(t)k_+ - RL(t)k_- + B_c(RL, \ell)\xi_c(t),    (E4)
\dot{x}^* = [X_T - x^*(t)]\,RL(t)k_f - x^*(t)k_r + B_x(RL, x^*)\xi_x(t),    (E5)

where RL is the number of bound receptors, x* the number of phosphorylated readout molecules, and ξ_i denote independent Gaussian white-noise processes with unit variance, ⟨ξ_i(t)ξ_j(t′)⟩ = δ_{ij}δ(t − t′). The noise strengths are B_c(RL, \ell) = \sqrt{(R_T - RL(t))\ell(t)k_+ + RL(t)k_-} and B_x(RL, x^*) = \sqrt{(X_T - x^*(t))RL(t)k_f + x^*(t)k_r}. The steady-state fraction of ligand-bound receptors is p ≡ \overline{RL}/R_T = \bar\ell/(\bar\ell + K_D), with dissociation constant K_D = k_-/k_+, and the steady-state fraction of phosphorylated readout molecules is f ≡ \bar{x}^*/X_T = pR_T/(pR_T + k_r/k_f).

In the linear-noise approximation, expanding Eqs. E4 and E5 to first order around their steady state, the equations become

\delta\dot{RL} = b\,\delta\ell(t) - \delta RL(t)/\tau_c + \eta_c(t),    (E6)
\delta\dot{x}^* = \gamma\,\delta RL(t) - \delta x^*(t)/\tau_r + \eta_x(t).    (E7)

The parameters b = R_Tp(1-p)/(\bar\ell\tau_c) and γ = X_Tf(1-f)/(R_Tp\tau_r) are effective rates of receptor-ligand binding and readout phosphorylation, respectively. The decay rate of correlations in the receptor-ligand binding state is \tau_c^{-1} = \bar\ell k_+ + k_-, and that of the readout phosphorylation state is \tau_r^{-1} = pR_Tk_f + k_r.
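The steady-state fractions p and f quoted above follow from setting the deterministic parts of Eqs. E4 and E5 to zero. A quick numerical sketch, with hypothetical rate constants, integrates the noiseless rate equations and checks the fixed point:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants and copy numbers.
kon, koff, kf, kr = 1.0, 2.0, 0.05, 1.0
RT, XT, ell = 100.0, 500.0, 4.0

def rhs(t, y):
    """Deterministic parts of Eqs. E4 and E5 at fixed ligand level ell."""
    RL, xs = y
    return [(RT - RL) * ell * kon - RL * koff,
            (XT - xs) * RL * kf - xs * kr]

sol = solve_ivp(rhs, (0, 200), [0.0, 0.0], rtol=1e-10, atol=1e-10)
RL_ss, xs_ss = sol.y[:, -1]

KD = koff / kon
p = ell / (ell + KD)                    # steady-state receptor occupancy
f = p * RT / (p * RT + kr / kf)         # steady-state phosphorylated fraction
assert np.isclose(RL_ss / RT, p, rtol=1e-6)
assert np.isclose(xs_ss / XT, f, rtol=1e-6)
```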
The rescaled white-noise processes have strengths ⟨η_c²⟩ = B̄_c² = 2R_Tp(1-p)/τ_c and ⟨η_x²⟩ = B̄_x² = 2X_Tf(1-f)/τ_r.

1. Model statistics

The relevant quantity for computing the past information is the variance in the output, decomposed into the part caused by signal variation and the part caused by noise. To compute the predictive information we further need the correlation function between the current output and a future ligand concentration, ⟨δℓ(τ)δx*(0)⟩. These quantities can be obtained via their Fourier transforms, as in Eqs. A5 and A6. The matrices describing the properties of the signalling network are, as defined below Eq. A1,

G = \begin{pmatrix} b \\ 0 \end{pmatrix},    (E8)

\mathcal{J} = \begin{pmatrix} -\tau_c^{-1} & 0 \\ \gamma & -\tau_r^{-1} \end{pmatrix},    (E9)

B = \begin{pmatrix} \sqrt{\langle\eta_c^2\rangle} & 0 \\ 0 & \sqrt{\langle\eta_x^2\rangle} \end{pmatrix} = \begin{pmatrix} \sqrt{2R_Tp(1-p)/\tau_c} & 0 \\ 0 & \sqrt{2X_Tf(1-f)/\tau_r} \end{pmatrix}.    (E10)

A useful property of the network is the matrix exponential of its Jacobian, which in Fourier space is (see Eqs. A2 and A4)

\mathcal{F}\{e^{\mathcal{J}t}\}(\omega) = (i\omega I_2 - \mathcal{J})^{-1} = \begin{pmatrix} \dfrac{1}{1/\tau_c + i\omega} & 0 \\[6pt] \dfrac{\gamma}{(1/\tau_c + i\omega)(1/\tau_r + i\omega)} & \dfrac{1}{1/\tau_r + i\omega} \end{pmatrix}.    (E11)

We then have G(ω) = \mathcal{F}\{e^{\mathcal{J}t}\}(\omega)G and N(ω) = \mathcal{F}\{e^{\mathcal{J}t}\}(\omega)B, see also Eqs. A4 and A5. The integration kernel that maps the ligand concentration onto the output of the push-pull network, see Eq. D1, is given by the inverse Fourier transform of the second entry of G(ω), which is the frequency-dependent gain \tilde{g}_{\ell\to x}(\omega) from ℓ to x:

k(t) \equiv \mathcal{F}^{-1}\{\tilde{g}_{\ell\to x}(\omega)\} = b\gamma\tau_c\tau_r\frac{1}{\tau_r - \tau_c}\left(e^{-t/\tau_r} - e^{-t/\tau_c}\right) = \frac{X_Tf(1-f)(1-p)}{\bar\ell}\frac{1}{\tau_r - \tau_c}\left(e^{-t/\tau_r} - e^{-t/\tau_c}\right).    (E12)

The so-called static gain of the network is the integral of this kernel over all time, \bar{g}_{\ell\to x} \equiv \int_0^\infty k(t)\,dt = X_Tf(1-f)(1-p)/\bar\ell. This parameter quantifies how much a step change in the input concentration changes the steady-state level of the output: \bar{g}_{\ell\to x} = \partial\bar{x}^*/\partial\bar\ell. We will use this parameter in the statistical quantities that follow.
The +static gain is also given by ¯gℓ→x = ¯gℓ→RL¯gRL→x, with ¯gℓ→RL = p(1 − p)RT/¯ℓ the static gain from ¯ℓ to RL and +¯gRL→x = f(1 − f)XT/(pRT) the static gain from RL to x∗. +We model the Markovian ligand concentration as a 1-dimensional OU process Eq. B1, which has the following +power spectrum +Sℓ(ω) = ⟨|δℓ(ω)|2⟩ = +2σ2 +ℓ/τℓ +1/τ 2 +ℓ + ω2 . +(E13) +This yields the following expression for the power spectra (see Eq. A5): +G(−ω)Sℓ(ω)G(ω)T = b2 +� +� +1 +1/τc2+ω2 +γ +1 +1/τr−iω +1 +1/τc2+ω2 +γ +1 +1/τr+iω +1 +1/τc2+ω2 γ2 +1 +1/τr2+ω2 +1 +1/τc2+ω2 +� +� +2σ2 +ℓ/τℓ +1/τ 2 +ℓ + ω2 +(E14) +|N(ω)|2 = ⟨η2 +c⟩ +� +� +� +1 +1/τc2+ω2 +γ +1 +1/τr−iω +1 +1/τc2+ω2 +γ +1 +1/τr+iω +1 +1/τc2+ω2 γ2 +1 +1/τr2+ω2 +1 +1/τc2+ω2 + ⟨η2 +x⟩ +⟨η2c⟩ +1 +1/τr2+ω2 +� +� +� +(E15) + +21 +We thus have for the power spectrum of the read-out: +Sx(ω) = ˜g2 +ℓ→x(ω)Sℓ(ω) + N 2 +x(ω) += +2b2γ2σ2 +ℓ/τℓ +(1/τr2 + ω2)(1/τc2 + ω2)(1/τ 2 +ℓ + ω2) + +γ2⟨η2 +c⟩ +(1/τr2 + ω2)(1/τc2 + ω2) + +⟨η2 +x⟩ +1/τr2 + ω2 , +(E16) +The variance in the read-out σ2 +x = 1/(2π) +� ∞ +−∞ Sx(ω) is hence given by +σ2 +x = σ2 +x|η + σ2 +x|L += ¯g2 +ℓ→x +1 + τr/τℓ + τr/τc +(1 + τc/τℓ)(1 + τr/τℓ)(1 + τr/τc)σ2 +ℓ + ¯g2 +RL→xRTp(1 − p) +1 +1 + τr/τc ++ XTf(1 − f), += ¯g2 +ℓ→x +1 + τr/τℓ + τr/τc +(1 + τc/τℓ)(1 + τr/τℓ)(1 + τr/τc) +� +�� +� +dynamical gain +σ2 +ℓ + XTf(1 − f) +� +1 + ¯gℓ→x +¯ℓ +RTp +1 +1 + τr/τc +� +, +(E17) +where ¯gRL→x = γτr = XTf(1 − f)/(RTp) is the static gain from the receptor to the readout. The expression above +gives insight into the role of the different network components in shaping the noise in the readout. It can be seen +that the contribution from the signal variance σ2 +ℓ to σ2 +x is determined by the static gain ¯g2 +ℓ→x, which is proportional +to XT, and a factor that only depends on ratios of timescales. Their product is the dynamical gain, which decreases +monotonically with τr. 
The intrinsic noise in the phosphorylation state of the readouts leads to the noise term X_Tf(1-f), which cannot be averaged out. The noise arising from ligand binding and unbinding increases with the static gain, but can be mitigated by increasing the number of receptors or the integration time τ_r. The latter strategy is what we call time averaging.

The signal-to-noise ratio SNR = σ²_{x|η}/σ²_{x|L} can straightforwardly be obtained from Eq. E17. This is the quantity that sets the magnitude of the past information, see Eq. D3. To determine the predictive information we need to compute the correlation function between the current output and the future ligand concentration, ⟨δx(0)δℓ(τ)⟩. This requires the cross-spectrum from output to ligand concentration, which is given by (Eq. A6)

\tilde{g}_{\ell\to x}(-\omega)S_\ell(\omega) = \frac{b\gamma}{(1/\tau_c - i\omega)(1/\tau_r - i\omega)}\frac{2\sigma_\ell^2/\tau_\ell}{1/\tau_\ell^2+\omega^2}.    (E18)

From this spectrum we obtain the required correlation function by taking the inverse Fourier transform:

\langle\delta x(0)\delta\ell(\tau)\rangle = \mathcal{F}^{-1}\{\tilde{g}_{\ell\to x}(-\omega)S_\ell(\omega)\} = \frac{\bar{g}_{\ell\to x}\sigma_\ell^2}{(1+\tau_c/\tau_\ell)(1+\tau_r/\tau_\ell)}e^{-\tau/\tau_\ell}.    (E19)

This correlation function thus decays exponentially with the prediction interval τ at a rate τ_ℓ^{-1}, just like the signal autocorrelation. The (squared) correlation coefficient, which sets I_pred, is given by ⟨δx(0)δℓ(τ)⟩²/(σ_ℓ²σ_x²) = ρ²_{ℓx}e^{-2τ/τ_ℓ}, with the (squared) instantaneous correlation coefficient (for convenience given as its inverse)

\rho_{\ell x}^{-2} = \frac{\bar\ell^2}{\sigma_\ell^2}\left(1+\frac{\tau_r}{\tau_\ell}\right)^2\left(1+\frac{\tau_c}{\tau_\ell}\right)^2\left[\frac{1}{X_Tf(1-f)(1-p)^2} + \frac{1}{R_Tp(1-p)(1+\tau_r/\tau_c)} + \frac{\sigma_\ell^2}{\bar\ell^2}\frac{1+\tau_r/\tau_\ell+\tau_r/\tau_c}{(1+\tau_c/\tau_\ell)(1+\tau_r/\tau_\ell)(1+\tau_r/\tau_c)}\right].    (E20)

When the right-hand side is minimized, the correlation is thus maximized. This expression shows that increasing X_T and R_T always increases the instantaneous correlation coefficient, and that the fraction of phosphorylated readout molecules in steady state that maximizes the correlation coefficient is f = 1/2.
2. Past and predictive information of the push-pull network

Using the quantities computed above, we can determine both the past and the predictive information. For the past information we use Eq. D3, with the SNR from Eq. E17:

\mathrm{SNR} = \frac{\sigma_{x|\eta}^2}{\sigma_{x|L}^2} = (1-p)\frac{\sigma_\ell^2}{\bar\ell^2}\frac{1+\tau_r/\tau_\ell+\tau_r/\tau_c}{(1+\tau_c/\tau_\ell)(1+\tau_r/\tau_\ell)(1+\tau_r/\tau_c)}\left[\frac{1}{X_Tf(1-f)(1-p)} + \frac{1}{R_Tp(1+\tau_r/\tau_c)}\right]^{-1}.

The predictive information is a function of the correlation between the current output and the future ligand concentration, Eq. D6. This correlation can be decomposed into the instantaneous correlation coefficient and an exponential decay on the timescale of the ligand concentration fluctuations, Eq. E19. We thus obtain for the predictive information

I_{pred}(x_0; \ell_\tau) = -\frac{1}{2}\log\left(1 - \rho_{\ell x}^2 e^{-2\tau/\tau_\ell}\right).    (E21)

The instantaneous correlation coefficient ρ²_{ℓx} is given in Eq. E20. From Eq. E21 it also becomes clear that while the value of the predictive information depends on the forecast interval τ, the optimal design of the network that maximizes the predictive information, determined by the optimal ratio X_T/R_T, the optimal integration time τ_r, and the optimal ligand-bound receptor fraction p, does not depend on the forecast interval τ.

3. Optimal resource allocation

Increasing the number of receptor or readout molecules always increases the precision with which the cell can predict a signal (see Eq. E20). However, when the total resource pool is constrained, the cell has to choose whether it makes more receptors or more readout molecules. To find the optimal ratio of readout to receptor molecules, we can use the constraint C = AR_T + BX_T to express X_T and R_T in terms of the total cost C and the ratio X_T/R_T:

X_T = C\frac{X_T/R_T}{A + B\,X_T/R_T},    (E22)
R_T = C\frac{1}{A + B\,X_T/R_T}.    (E23)

The factors A and B set the cost of receptors and readout molecules, respectively.
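The constrained optimum derived next (Eq. E24) can be cross-checked numerically by scanning the ratio X_T/R_T at fixed total cost. A sketch with A = B = 1, f = 1/2 and hypothetical values for p and τ_r/τ_c:

```python
import numpy as np

# Hypothetical parameter values; A = B = 1 and f = 1/2 as in the text.
p, f, tau_r_over_tau_c, C = 0.2, 0.5, 4.0, 1000.0

ratios = np.linspace(0.05, 20, 20000)
XT = C * ratios / (1 + ratios)          # Eq. E22 with A = B = 1
RT = C / (1 + ratios)                   # Eq. E23 with A = B = 1
# The XT- and RT-dependent part of 1/rho^2 in Eq. E20; minimizing it
# maximizes the instantaneous correlation coefficient.
cost = (1 / (XT * f * (1 - f) * (1 - p)**2)
        + 1 / (RT * p * (1 - p) * (1 + tau_r_over_tau_c)))
best = ratios[np.argmin(cost)]

# Analytic optimum, Eq. E24.
analytic = 2 * np.sqrt(p / (1 - p)) * np.sqrt(1 + tau_r_over_tau_c)
assert np.isclose(best, analytic, rtol=1e-3)
```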
Substituting these expressions for X_T and R_T into the expression for the correlation coefficient between the output and the ligand concentration (Eq. E20), setting the derivative of the resulting expression with respect to X_T/R_T to zero, and solving for X_T/R_T gives

(X_T/R_T)^{opt} = \sqrt{\left(1+\frac{\tau_r}{\tau_c}\right)\frac{p}{1-p}\frac{1}{f(1-f)}\frac{A}{B}} = 2\sqrt{\frac{p}{1-p}}\sqrt{1+\frac{\tau_r}{\tau_c}},    (E24)

where for the second equality we used A = B = 1 and f = f^{opt} = 1/2. This is the optimal ratio of readout to receptor molecules in the push-pull network, given an integration time τ_r and a steady-state fraction of ligand-bound receptors p.

Perhaps surprisingly, this optimal ratio (X_T/R_T)^{opt} maximizes, for a given τ_r and p, not only the predictive information, but also the past information. This is because the ratio X_T/R_T determines, together with τ_r and p, the interval ∆ for sampling the ligand-binding state of the receptor: when the ratio X_T/R_T obeys Eq. E24, the readout molecules sample each receptor molecule roughly once every correlation time, ∆ ∼ τ_c [10, 36]. Eq. E24 is thus a statement about optimally extracting the information that is encoded in the receptor-ligand binding history, concerning both the past information and the predictive information. This is illustrated in Fig. 6.

4. Operating costs diverge when approaching the information bound

The precision of any sensing device is limited by the resources that are devoted to it. The cost function we consider in this work is

C = \lambda(R_T + X_T) + c_1 X_T\Delta\mu/\tau_r.    (E25)

The first term is the maintenance cost; this is the cost of producing new network components at the growth rate λ. The second term is the operating cost and describes the chemical power that is necessary to run the network; it depends on the flux through the network, X_T/τ_r, and the free-energy drop ∆µ over a full cycle of phosphorylation and dephosphorylation, given by the free energy of ATP hydrolysis.
The coefficient c_1 describes the relative energetic cost of synthesising the components during the cell cycle versus that of running the system. In the main text we consider the case where c_1 → 0. Here we investigate how close cells can come to the information bound when c_1 is finite, thus including the chemical power cost of running the network.

It is clear from Eq. E25 that for finite c_1 the operating cost diverges when τ_r → 0. Because the optimal IBM solutions are instantaneous, this is precisely the limit in which the network must operate to reach the information bound.

FIG. 6. The past and predictive information are maximized by the same ratio X_T/R_T and fraction p. The information plane, showing the information bound in black and the isocost line C = 10^4 in gray. To construct the coloured lines in this figure, the ratio X_T/R_T has been varied from zero to a value beyond the optimal value that maximizes I_past and I_pred. This is done for several values of the receptor occupancy p (p = 0.1 in red, p = 0.2 in blue, p = 0.4 in orange), and for several values of τ_r (indicated in the figure). When X_T/R_T reaches its optimal value, both I_past and I_pred are maximal. When the ratio is increased further, the system moves back to the origin via the same coordinates. Only the integration time τ_r meaningfully distinguishes between strategies that maximize predictive or past information, or that approach the information bound. The reason is that X_T/R_T, together with τ_r and p, controls the optimal extraction of information that is encoded in the receptor-ligand binding history, concerning both I_past and I_pred. The gray isocost line is obtained by varying τ_r, while maximizing for each τ_r the correlation coefficient given by Eq. E20; the latter is done by substituting Eq. E24 into Eq.
E20 and numerically optimizing the resulting expression over p. The isocost line gives the region of I_past and I_pred that is accessible for a given resource cost C. Parameter values are A = B = 1, f = 1/2, (σ_ℓ/\bar\ell)² = 10^{-2}, τ_c/τ_ℓ = 10^{-2}.

As a consequence, when we consider the operating costs, the push-pull network can only be at the information bound when (I_past, I_pred) → (0, 0) or C → ∞ (Fig. 7A). The system can mitigate the operating costs by decreasing X_T, because this decreases the flux through the cycle. However, this also decreases the gain and thus, eventually, any information transduced through the network. In the limit that both X_T and τ_r approach zero, the system approaches the information bound at the origin, see Fig. 7A and B. More generally, when the running costs are taken into account, the system time-averages more (i.e., τ_r rises), because frequent measurements are now even more costly. Still, τ_r decreases as the total resource availability C grows.

Appendix F: Chemotaxis network

The evidence is mounting that in the E. coli chemotaxis system, receptors cooperatively control the activity of the kinase CheA [29, 44–46]. Furthermore, the kinase activity is adaptive due to the methylation of inactive receptors [15, 47]. A widely used approach to describe the effects of receptor cooperativity and methylation on kinase activity has been to employ the Monod-Wyman-Changeux (MWC) model [18, 24, 29, 33, 42, 43, 48, 49]. We follow this approach and, more specifically, model the chemotaxis system as described by Tu and colleagues [30]. In this model, each receptor can switch between an active and an inactive conformational state. Moreover, receptors are partitioned into clusters of equal size N. In the spirit of the MWC model, receptors within a cluster switch conformation in concert, so that each cluster is either active or inactive [33].
Furthermore, it is assumed that receptor-ligand binding and conformational switching are faster than the other timescales in the system. The probability for the kinase, i.e. the receptor cluster, to be active is then described by

a(ℓ, m) = 1 / (1 + exp(∆FT(ℓ, m))),    (F1)

where ∆FT(ℓ, m) is the total free-energy difference between the active and inactive state, which is a function of the ligand concentration ℓ(t) and the methylation level of the cluster m(t). The simplest model, adopted here, assumes a linear dependence of the total free-energy difference on the free-energy differences arising from ligand binding and methylation:

∆FT(ℓ, m) = −∆E0 + N(∆Fℓ(ℓ) + ∆Fm(m)),    (F2)

FIG. 7. Due to diverging operating costs the push-pull network only reaches the information bound for infinite resource availability. (A) In green, the region of accessible predictive and past information in the push-pull network under a resource constraint C = λ(RT + XT) + c1XT∆µ/τr, with λ = 1 and c1 = 1/∆µ, corresponding to a cell doubling time of roughly 20 min [10]. The black line is the information bound; the red and blue dots mark the points where Ipred and Ipast are maximized, respectively, under a resource constraint C; the red and blue lines connect these points, respectively, for increasing C. The accessible region for C ≤ 10^4 and the isocost lines for C = 10^3 and C = 10^5 have been obtained as described under Fig. 6. The forecast interval has been set to one signal correlation time in the future: τ = τℓ. (B) The integration time over the receptor correlation time, τr/τc, and the ratio of the number of readout and receptor molecules, XT/RT, as a function of the distance θ along the isocost line for C = 10^4 in panel A. For θ → 0, both τr and XT go to zero, thus reducing both Ipast and Ipred to zero. Other parameter values in both panels are f = f^opt = 1/2, (σℓ/¯ℓ)² = 10^-2, τc/τℓ = 10^-2.
where the free-energy difference due to ligand binding is

∆Fℓ(ℓ) = ln(1 + ℓ(t)/K_D^I) − ln(1 + ℓ(t)/K_D^A).    (F3)

Between the two states the cluster has a different dissociation constant, denoted K_D^I for the inactive state and K_D^A for the active state. The free-energy difference due to methylation has been experimentally shown to depend approximately linearly on the methylation level [29]:

∆Fm(m) = ˜α(¯m − m(t)).    (F4)

We assume that inactive receptors are irreversibly methylated, and active receptors irreversibly demethylated, with zero-order ultrasensitive kinetics [30, 31, 50]. The dynamics of the methylation level of the ith receptor cluster is then given by

ṁi = (1 − ai(ℓ, mi))kR − ai(ℓ, mi)kB + Bm^(i)(ai)ξ(t),    (F5)

with Bm^(i)(ai) = √((1 − ai(ℓ, mi))kR + ai(ℓ, mi)kB) and unit white noise ξ(t). These dynamics indeed give rise to perfect adaptation, since from this equation we find that the steady-state cluster activity is given by p ≡ ¯a = 1/(1 + kB/kR), independent of the ligand concentration.

Finally, active receptors catalyze the phosphorylation of readout molecules, and phosphorylated readout molecules decay at a constant rate. We have

ẋ∗ = Σ_{i=1}^{RT} ai(t)(XT − x∗(t))kf − x∗(t)kr + Bx(ai, x∗)ξ(t),    (F6)

where RT is the total number of receptor clusters. The steady-state fraction of phosphorylated readouts is given by f ≡ ¯x∗/XT = (1 + kr/(kf RT p))^-1.

1. Linear dynamics

We again make a first-order approximation around the steady state, defining all variables in terms of deviations from their mean: δℓ(t) = ℓ(t) − ¯ℓ, δm(t) = m(t) − ¯m and δa(t) = a(t) − p. The linear form of this model has previously been studied in, for example, [30] and [31]. For the linear dynamics of the ith cluster activity we obtain

δai(t) = αδmi(t) − βδℓ(t),    (F7)

with α = ˜αNp(1 − p) and β = κNp(1 − p), where κ = (¯ℓ + K_D^I)^-1 − (¯ℓ + K_D^A)^-1.
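The MWC input layer (Eqs. F1–F4) and the linear coefficients of Eq. F7 can be checked with a short numerical sketch. Parameter values follow Table I below (K_D^I = 18 µM, K_D^A = 2900 µM, N = 6, ˜α = 2 in units of kBT); the free-energy offset ∆E0 and the reference methylation level ¯m are illustrative choices set to zero here.

```python
import math

# Sketch of the MWC input layer, Eqs. F1-F4, and a finite-difference check
# of the linearization in Eq. F7. Delta E0 = 0 and m_bar = 0 are
# illustrative assumptions; the other values follow Table I.
KI, KA, N, ALPHA_T = 18.0, 2900.0, 6, 2.0

def activity(ell, m):
    """Cluster activity a(l, m), Eqs. F1-F4 (with Delta E0 = 0, m_bar = 0)."""
    dF = N * (math.log(1 + ell / KI) - math.log(1 + ell / KA) - ALPHA_T * m)
    return 1.0 / (1.0 + math.exp(dF))

def adapted_m(ell, p):
    """Methylation level at which a(l, m) = p; Eq. F5 relaxes to this state,
    since (1 - a) k_R = a k_B implies a = 1/(1 + k_B/k_R) = p."""
    dF_ell = math.log(1 + ell / KI) - math.log(1 + ell / KA)
    return (dF_ell - math.log((1 - p) / p) / N) / ALPHA_T

p, ell0 = 0.5, 100.0
# Perfect adaptation: the adapted activity equals p at any background level.
for ell in (10.0, 100.0, 1000.0):
    assert abs(activity(ell, adapted_m(ell, p)) - p) < 1e-12

# Linear response around the adapted state (Eq. F7):
# da = alpha dm - beta dl, alpha = alpha~ N p(1-p), beta = kappa N p(1-p).
m0, h = adapted_m(ell0, p), 1e-6
alpha_num = (activity(ell0, m0 + h) - activity(ell0, m0 - h)) / (2 * h)
beta_num = -(activity(ell0 + h, m0) - activity(ell0 - h, m0)) / (2 * h)
kappa = 1 / (ell0 + KI) - 1 / (ell0 + KA)
print(alpha_num, ALPHA_T * N * p * (1 - p))   # both ~ 3.0
print(beta_num, kappa * N * p * (1 - p))
```

The numerical derivatives agree with the analytic coefficients α and β, confirming that the linearization is consistent with the full MWC activity.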
For the methylation of the ith cluster and for the readout dynamics we then obtain, as a function of δa(t),

δṁi = −δai(t)/(ατm) + ηmi(t),    (F8)

δẋ∗ = γ Σ_{i=1}^{RT} δai(t) − δx∗(t)/τr + ηx(t),    (F9)

where we have introduced the relaxation times τm = (α(kR + kB))^-1 for methylation and τr = (RT p kf + kr)^-1 for phosphorylation. We have further defined the rate at which an active cluster phosphorylates the readout CheY: γ = XT f(1 − f)/(p RT τr). Substituting the expression for δai in Eq. F7 into Eqs. F8 and F9, and expressing the dynamics in terms of the methylation on all clusters, gives

d/dt (Σ_{i=1}^{RT} δmi) = −Σ_{i=1}^{RT} δmi/τm + qδℓ(t)/(ατm) + ηm(t),    (F10)

δẋ∗ = −δx∗(t)/τr − γqδℓ(t) + γα Σ_{i=1}^{RT} δmi(t) + ηx(t),    (F11)

with q = RTβ (see Eq. F7 for β). The rescaled white noise ηm is the sum of the methylation noise on all receptor clusters, ⟨η²m⟩ = 2RT p(1 − p)/(ατm), where we have assumed that the methylation noise on the respective receptor clusters is independent. The phosphorylation noise has strength ⟨η²x⟩ = 2XT f(1 − f)/τr.

2. Parameter values

A large body of work has studied the parameters of the MWC model for the E. coli chemotaxis system. We have listed the parameters relevant for our model in Table I. We choose the background concentration ¯ℓ to be in between K_D^I and K_D^A, at ¯ℓ = 100 µM.

In this work we analyze the impact of the methylation timescale τm, and of the numbers of receptor clusters and readout molecules RT and XT, on the past and predictive information. We therefore do not set them to fixed values; experimental estimates are listed in Table II.

3. Model statistics

Again we take the power-spectrum route to determine the variance in the network output, the SNR, and the correlation coefficient between the current output and the future signal. We consider the system to sense the non-Markovian ligand concentration defined in Eq. B3.
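Before turning to these statistics, the adaptive step response implied by Eqs. F10 and F11 can be illustrated with a minimal forward-Euler sketch of the deterministic part; the values of q, α and γ below are illustrative rather than the Table I values.

```python
# Deterministic part of the linearized dynamics, Eqs. F10-F11: after a step
# change dl in the ligand concentration, the summed methylation deviation
# relaxes to q dl / alpha while the readout deviation dx* returns to zero,
# i.e. the linear model adapts perfectly. Parameter values are illustrative.
tau_m, tau_r = 10.0, 0.1
alpha, gamma, q = 3.0, 0.5, 2.0
dl = 1.0

dM, dx = 0.0, 0.0          # sum_i dm_i and dx*
dt = 1e-3
peak = 0.0
for _ in range(int(200.0 / dt)):   # integrate for 200 s >> tau_m
    dM += dt * (-dM / tau_m + q * dl / (alpha * tau_m))
    dx += dt * (-dx / tau_r - gamma * q * dl + gamma * alpha * dM)
    peak = min(peak, dx)           # the transient response is negative
print(dM, q * dl / alpha)  # both ~ 0.667
print(dx)                  # ~ 0: perfect adaptation
print(peak < -1e-3)        # a transient response did occur
```

The transient dip followed by a return to baseline is the linear-model counterpart of the adaptive kernel derived below.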
TABLE I. Measured E. coli chemotaxis parameter values.
Parameter   Value       Source            Description
K_D^I       18 µM       [42, 43]          MeAsp-Tar dissociation constant, inactive receptor
K_D^A       2900 µM     [42, 43]          MeAsp-Tar dissociation constant, active receptor
N           ∼6          [29, 42, 43, 51]  Number of receptors per cluster
˜α          2 kBT       [29]              Free-energy change per added methyl group
p           1/3, 1/2    [29, 43]          Steady-state activity at 22°C, 32°C
τr          ∼0.1 s      [10, 18, 51]      Phosphorylation timescale

TABLE II. Approximate E. coli chemotaxis timescales and abundances.
Parameter   Value          Source        Description
τm          ∼10 s          [15, 18, 29]  Adaptation time
Tsr+Tar     14000, 3300    [35]          Rich medium; RP437, OW1 strain
Tsr+Tar     24000, 37000   [35]          Minimal medium; RP437, OW1 strain
CheY        8200, 1400     [35]          Rich medium; RP437, OW1 strain
CheY        6300, 14000    [35]          Minimal medium; RP437, OW1 strain

Such a signal is characterized by both its concentration and its derivative, and the (cross-)power spectra of these properties are

Ss(ω) = [[Sℓ(ω), Sℓ→v(ω)], [Sv→ℓ(ω), Sv(ω)]] = [[Sℓ(ω), iωSℓ(ω)], [−iωSℓ(ω), ω²Sℓ(ω)]],    (F12)

with

Sℓ(ω) = (2σv²/τv) / [(ω² + ((2τv)^-1 + ρ)²)(ω² + ((2τv)^-1 − ρ)²)],    (F13)

where ρ = √((4τv²)^-1 − ω0²). The chemotaxis signalling network is fully determined by the following matrices (Eq. A1):

G = q [[1/(ατm), 0], [−γ, 0]],    (F14)

J = [[−1/τm, 0], [αγ, −1/τr]],    (F15)

B = [[√⟨η²m⟩, 0], [0, √⟨η²x⟩]].    (F16)

The Fourier transform of the matrix exponential of the Jacobian is

F{e^(Jt)} = (iωIn − J)^-1 = [[1/(1/τm + iω), 0], [αγ/((1/τm + iω)(1/τr + iω)), 1/(1/τr + iω)]],    (F17)

which allows us to determine the gain matrix via G(ω) = F{e^(Jt)}(ω)G, and the noise matrix using N(ω) = F{e^(Jt)}(ω)B; see also Eq. A4 and Eq. A5.

To gain more insight into the way in which the network maps the signal onto its output, we first study the integration kernels of the system.
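As a quick sanity check, the analytic inverse in Eq. F17 can be compared against a direct numerical matrix inversion; the parameter values in this sketch are illustrative.

```python
import numpy as np

# Numerical check of Eq. F17: F{exp(J t)}(w) = (i w I - J)^(-1) for the
# Jacobian J of Eq. F15. The values of tau_m, tau_r, alpha and gamma are
# illustrative.
tau_m, tau_r, alpha, gamma = 10.0, 0.1, 3.0, 0.5
J = np.array([[-1 / tau_m, 0.0],
              [alpha * gamma, -1 / tau_r]])

def F_expJ(w):
    """Element-wise analytic form of Eq. F17."""
    dm, dr = 1 / tau_m + 1j * w, 1 / tau_r + 1j * w
    return np.array([[1 / dm, 0.0],
                     [alpha * gamma / (dm * dr), 1 / dr]])

w = 2.0
numeric = np.linalg.inv(1j * w * np.eye(2) - J)
err = np.max(np.abs(numeric - F_expJ(w)))
print(err)  # tiny (machine precision)
```

Because J is lower triangular, the inverse can be written down explicitly, which is what Eq. F17 exploits.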
The integration kernel from ligand concentration to output is given by the inverse Fourier transform of element (1, 2) of the gain matrix G(ω), which is

k(t) ≡ F^-1{˜gℓ→x(ω)} = κNf(1 − f)(1 − p)XT (1/(1 − τr/τm)) [(1/τm)e^(−t/τm) − (1/τr)e^(−t/τr)],    (F18)

with κ = (¯ℓ + K_D^I)^-1 − (¯ℓ + K_D^A)^-1. Due to the adaptive nature of the network, the static gain from ligand concentration to output is zero: ¯gℓ→x = ∫0^∞ k(t)dt = 0; the long-time response to a step change in the input is zero. The kernel thus does not map the input concentration directly onto the output, but instead takes a (time-averaged) derivative of the input (Fig. 8A). It is therefore useful to consider the kernel that maps the signal derivative onto the output. This kernel can be found by rearranging the expression for the output of a linear signalling network, Eq. D1. Disregarding the noise terms and integrating by parts gives

∫_{−∞}^0 k(−t)ℓ(t)dt = K(−t)ℓ(t)|_{−∞}^0 − ∫_{−∞}^0 K(−t)v(t)dt,    (F19)

where v(t) ≡ ℓ̇ and K(t) is the primitive of k(t). To make progress we first determine K(t):

K(t) = κNf(1 − f)(1 − p)XT (1/(1 − τr/τm)) [−e^(−t/τm) + e^(−t/τr)].    (F20)

The form of K(t) is that of a simple exponential kernel with a delay (Fig. 8B). We thus have both K(0) = 0 and K(∞) = 0. It is now clear that the convolution over the ligand concentration simply maps onto a convolution over its derivative:

∫_{−∞}^0 k(−t)ℓ(t)dt = −∫_{−∞}^0 K(−t)v(t)dt.    (F21)

The static gain of K(t) is ¯gv→x = ∫0^∞ K(t)dt = qγτrτm = κNXT(1 − p)f(1 − f)τm. The gain thus increases with the number of receptors per cluster, N, the number of readout molecules, XT, and, notably, with the adaptation time τm. This static gain from signal derivative to network output is a useful quantity which we will use to describe the other statistics of the network below.
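These kernel properties can be verified numerically. In the sketch below the overall prefactor κNf(1 − f)(1 − p)XT is set to one, τr and τm follow Fig. 8, and the static gains are checked up to this prefactor and the overall sign convention.

```python
import math

# Numerical sketch of the kernels in Eqs. F18 and F20, with the prefactor
# kappa N f(1-f)(1-p) X_T set to one (an illustrative normalization).
tau_m, tau_r = 10.0, 0.1
pref = 1.0 / (1.0 - tau_r / tau_m)

def k(t):   # Eq. F18
    return pref * (math.exp(-t / tau_m) / tau_m - math.exp(-t / tau_r) / tau_r)

def K(t):   # Eq. F20, a primitive of k(t) with K(0) = K(inf) = 0
    return pref * (-math.exp(-t / tau_m) + math.exp(-t / tau_r))

# composite trapezoid rule on [0, 200 s]
dt, T = 1e-3, 200.0
n = int(T / dt)
int_k = dt * (sum(k(i * dt) for i in range(1, n)) + 0.5 * (k(0) + k(T)))
int_K = dt * (sum(K(i * dt) for i in range(1, n)) + 0.5 * (K(0) + K(T)))

print(int_k)        # ~ 0: adaptive kernel, zero static gain l -> x*
print(abs(int_K))   # ~ tau_m: |static gain| from derivative to output
# K is indeed a primitive of k: K'(t) = k(t)
h, t0 = 1e-6, 1.0
print((K(t0 + h) - K(t0 - h)) / (2 * h) - k(t0))  # ~ 0
```

The vanishing integral of k(t) expresses perfect adaptation, while the integral of K(t) grows linearly with τm, consistent with the expression for ¯gv→x above.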
FIG. 8. Integration kernel and power spectra. (A) The integration kernel k(t) (plotted as −k(t)) takes a temporal derivative by weighing the most recent signal values with an opposite sign from the preceding ones. (B) The integration kernel K(t) from the derivative of the input concentration to the network output. The kernel K(t) is the primitive of k(t), and its static gain is proportional to the adaptation timescale τm. (C) The frequency-dependent gain ˜g²ℓ→x(ω), the frequency-dependent noise N²x(ω), and their ratio, as a function of frequency. The chemotaxis network is a band-pass filter; the frequencies that are passed through are set by τr on the high end and τm on the low end. At low frequencies, the methylation noise dominates. Parameters used in all panels: τr = 0.1 s and τm = 10 s. Model parameters are ˜α = 2, N = 6, K_D^I = 18 µM, K_D^A = 2900 µM, ¯ℓ = 100 µM, p = f = 0.5.

To compute the past and predictive information, we need to determine the variance in the output, the SNR, and the correlation between the current output and the future ligand derivative. To that end we require the power spectrum of the output, and the cross-spectrum from output to future derivative. For the power spectrum of the output we use Eq. A5 to find

Sx(ω) = [q²γ²ω² / ((τr^-2 + ω²)(τm^-2 + ω²))] Sℓ(ω) + α²γ²⟨η²m⟩ / ((τr^-2 + ω²)(τm^-2 + ω²)) + ⟨η²x⟩ / (τr^-2 + ω²).    (F22)

From this power spectrum we can see that the network is a band-pass filter, in which the gain is maximal in the frequency range τm^-1 < ω < τr^-1. Both for ω ≫ τr^-1 and for ω ≪ τm^-1 the gain goes to 0. On long timescales the methylation noise dominates (Fig. 8C).
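The band-pass character can be made precise: the gain factor ω²/[(τr^-2 + ω²)(τm^-2 + ω²)] in Eq. F22 peaks at the geometric mean of the two corner frequencies, ω* = 1/√(τr τm), as a short sketch confirms (timescales as in Fig. 8).

```python
import math

# The frequency-dependent gain factor in Eq. F22,
# w^2 / ((tau_r^-2 + w^2)(tau_m^-2 + w^2)), is a band-pass filter.
# Setting its derivative to zero gives the peak at w* = 1/sqrt(tau_r tau_m).
tau_m, tau_r = 10.0, 0.1

def gain_factor(w):
    return w**2 / ((tau_r**-2 + w**2) * (tau_m**-2 + w**2))

# dense logarithmic scan from 1e-3 to 1e3 s^-1
ws = [10 ** (i / 500) for i in range(-1500, 1501)]
w_peak = max(ws, key=gain_factor)
print(w_peak, 1 / math.sqrt(tau_r * tau_m))  # both ~ 1 s^-1
```

With τr = 0.1 s and τm = 10 s the pass-band is centred at about 1 s^-1, matching Fig. 8C.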
The cross-power spectrum between the current output and the future ligand derivative is given by element (2, 2) of the matrix G(−ω)Ss(ω), which is (see also Eq. A6)

Sx→v(ω) = qγ (−ω²Sℓ(ω)) / [(τm^-1 − iω)(τr^-1 − iω)].    (F23)

In the main text, we argue that the biologically relevant regime of the input signal is the limit ω0 → 0. We therefore present the network statistics below in this limit. We start by determining the variance in the readout, via the inverse Fourier transform of its power spectrum (Eq. F22):

lim(ω0→0) σ²x = ¯g²v→x [(1 + τr/τv + τr/τm) / ((1 + τm/τv)(1 + τr/τv)(1 + τr/τm))] σ²v + ¯g²a→x αRTp(1 − p) / (1 + τr/τm) + XTf(1 − f)
             = ¯g²v→x [(1 + τr/τv + τr/τm) / ((1 + τm/τv)(1 + τr/τv)(1 + τr/τm))] σ²v + XTf(1 − f) [1 + ¯gv→x ˜α(1 − p) / (RTκτm(1 + τr/τm))],    (F24)

where the factor multiplying σ²v is the dynamical gain, ¯ga→x = γτr = XTf(1 − f)/(RTp) is the static gain from receptor activity to readout, and we have used the definition α = ˜αNp(1 − p). Because there is no receptor-ligand binding noise, there is also no time averaging as in the push-pull network (and hence no factor depending on τr/τc). There is methylation noise on a timescale τm, but this cannot be time-averaged effectively because the integration time τr of the push-pull network is shorter than the receptor methylation timescale τm. The methylation noise can only be averaged out significantly by increasing RT. The contribution of the variance in the signal derivative, σ²v, to the output variance σ²x depends on the dynamical gain, which is the product of the static gain ¯g²v→x and a factor that depends only on ratios of timescales. The dynamical gain is maximized for τr → 0 and τm → ∞, which is intuitive since subtracting a signal from an earlier one reduces the amplification of the signal. Hence, when the system has too few XT molecules to lift the signal above the noise, τm must be increased to raise the gain. Only when XT is sufficiently large can τm be reduced.
This allows the system to take more recent derivatives. The signal-to-noise ratio SNR = σ²x|η/σ²x|L can straightforwardly be obtained from Eq. F24. For the covariance between the current output and the future derivative we have

lim(ω0→0) ⟨δx(0)δv(τ)⟩ = F^-1{Sx→v(ω)} = −¯gv→x σ²v e^(−τ/τv) / [(1 + τm/τv)(1 + τr/τv)].    (F25)

The variance in Eq. F24 can be used to obtain the normalized correlation function ⟨δx(0)δv(τ)⟩/(σxσv).

4. Past and predictive information of the chemotaxis network

The past and predictive information are straightforward to compute from the quantities above. The definition of the past information is the same as for the push-pull network, and is given by Eq. D3. Using Eq. F24, the SNR is now given by

SNR = σ²x|η/σ²x|L = κ²Nτm²σv² [(1 + τr/τv + τr/τm) / ((1 + τm/τv)(1 + τr/τv)(1 + τr/τm))] / [1/(NXTf(1 − f)(1 − p)²) + ˜α/(RT(1 + τr/τm))],

where κ = (¯ℓ + K_D^I)^-1 − (¯ℓ + K_D^A)^-1. The predictive information is found in the same manner as in Eq. D6, but now it is a function of the correlation between the current output and the future derivative of the ligand concentration. This correlation can be decomposed into the instantaneous correlation coefficient and an exponential decay on the timescale of the fluctuations of the derivative of the concentration, Eq. F25. Specifically, the predictive information is given by

Ipred(x0; vτ) = −(1/2) log(1 − ρ²ℓv e^(−2τ/τv)).    (F26)

The instantaneous correlation coefficient ρ²ℓv can be found using Eq. F25 and Eq. F24. From Eq. F26 it is clear that, just like for the push-pull network, the optimal design of the network that maximizes the predictive information, set by the optimal ratio XT/RT and the optimal adaptation time τm, does not depend on the forecast interval τ. The forecast interval only affects the magnitude of the predictive information.

5.
Optimal allocation

We can determine the optimal ratio (XT/RT)^opt that maximizes either the past or the predictive information, given all other network parameters, most notably τm. Just as for the push-pull network, however, we find that the optimal ratio (XT/RT)^opt is the same regardless of whether the past or the predictive information is maximized. This is again because the information on the future signal (be it the value or the derivative) is encoded in the receptor occupancy, while the ratio XT/RT controls the interval at which the downstream readout samples the receptor to estimate its occupancy. Nonetheless, the optimal methylation timescale τm^opt that maximizes either the past or the predictive information is different: maximizing the predictive information requires a more recent derivative, and hence a shorter τm, than obtaining the past information.

Given τm and all other parameters, the optimal ratio of the number of readout molecules over receptor clusters is, using C = RT + XT,

(XT/RT)^opt = √[(1/α)(1/(f(1 − f)))(p/(1 − p))] √(1 + τr/τm) = 2√(2/N) √(1 + τr/τm),    (F27)

where in the second equality we have used α = ˜αNp(1 − p), ˜α = 2, and f = p = 0.5. Because for the chemotaxis network τr < τm, the ratio τr/τm only varies between 0 and 1. For this reason, the optimal ratio (XT/RT)^opt depends only weakly on τm, and does not vary strongly along the isocost lines of Fig. 4A in the main text; see Fig. 9.

FIG. 9. The optimal allocation ratio XT/RT varies only slightly along the isocost lines of Fig. 4A in the main text. The optimal ratio XT/RT as a function of the distance θ along the isocost lines of Fig. 4A of the main text; dotted line C = 10^2, solid line C = 10^4, dashed line C = 10^6. The red dots mark the points where the predictive information is maximal.
Along the isocost lines XT/RT varies much more weakly than for the push-pull network; for resource availability C ≤ 10^4 the ratio is almost constant. Parameters used: g = 4 mm^-1, τr = 0.1 s, K_D^I = 18 µM, K_D^A = 2900 µM, N = 6, ˜α = 2, p = f = 0.5, ¯ℓ = 100 µM.

[1] M. Monti, D. K. Lubensky, and P. R. ten Wolde, Robustness of Clocks to Input Noise, Physical Review Letters 121, 078101 (2018).
[2] W. Pittayakanchit, Z. Lu, J. Chew, M. J. Rust, and A. Murugan, Biophysical clocks face a trade-off between internal and external noise resistance, eLife 7 (2018).
[3] E. Kussell and S. Leibler, Phenotypic Diversity, Population Growth, and Information in Fluctuating Environments, Science, 10.1126/science.1114383 (2005).
[4] I. Tagkopoulos, Y.-C. Liu, and S. Tavazoie, Predictive Behavior Within Microbial Genetic Networks, Science, 10.1126/science.1154456 (2008).
[5] A. Mitchell, G. H. Romano, B. Groisman, A. Yona, E. Dekel, M. Kupiec, O. Dahan, and Y. Pilpel, Adaptive prediction of environmental changes by microorganisms, Nature 460, 220 (2009).
[6] W. Bialek, Biophysics: Searching for Principles (Princeton University Press, Woodstock, Oxfordshire, 2012).
[7] N. B. Becker, A. Mugler, and P. R. ten Wolde, Optimal Prediction by Cellular Signaling Networks, Physical Review Letters 115, 1 (2015).
[8] N. Tishby, F. C. Pereira, and W. Bialek, The information bottleneck method, Proceedings of the 37th Allerton Conference on Communication and Computation (1999).
[9] V. Sachdeva, T. Mora, A. M. Walczak, and S. E.
Palmer, Optimal prediction with resource constraints using the information bottleneck, PLOS Computational Biology 17, e1008743 (2021).
[10] C. C. Govern and P. R. ten Wolde, Optimal resource allocation in cellular sensing systems, Proceedings of the National Academy of Sciences 111, 17486 (2014).
[11] G. Malaguti and P. R. ten Wolde, Theory for the optimal detection of time-varying signals in cellular sensing systems, eLife 10, e62574 (2021).
[12] G. Chechik, A. Globerson, N. Tishby, and Y. Weiss, Information Bottleneck for Gaussian Variables, Journal of Machine Learning Research 6, 165 (2005).
[13] A. Goldbeter and D. E. Koshland, An amplified sensitivity arising from covalent modification in biological systems, Proceedings of the National Academy of Sciences 78, 6840 (1981).
[14] U. Alon, An Introduction to Systems Biology: Design Principles of Biological Circuits (Chapman and Hall/CRC, New York, 2006).
[15] J. E. Segall, S. M. Block, and H. C. Berg, Temporal comparisons in bacterial chemotaxis, Proceedings of the National Academy of Sciences of the United States of America 83, 8987 (1986).
[16] C. C. Govern and P. R. ten Wolde, Optimal resource allocation in cellular sensing systems, Proceedings of the National Academy of Sciences of the United States of America 111, 17486 (2014).
[17] M. Hinczewski and D. Thirumalai, Cellular Signaling Networks Function as Generalized Wiener-Kolmogorov Filters to Suppress Noise, Physical Review X 4, 3 (2014).
[18] H. H. Mattingly, K. Kamino, B. B. Machta, and T. Emonet, Escherichia coli chemotaxis is information limited, Nature Physics 17, 1426 (2021).
[19] T.-L. Wang, B. Kuznets-Speck, J. Broderick, and M.
Hinczewski, The price of a bit: energetic costs and the evolution of cellular signaling, bioRxiv, 2020.10.06.327700 (2022).
[20] S. Tanase-Nicola, P. B. Warren, and P. R. ten Wolde, Signal detection, modularity, and the correlation between extrinsic and intrinsic noise in biochemical networks, Phys. Rev. Lett. 97, 068102 (2006).
[21] E. Ziv, I. Nemenman, and C. H. Wiggins, Optimal signal processing in small stochastic biochemical networks, PLoS ONE 2, e1077 (2007).
[22] W. De Ronde, F. Tostevin, and P. R. ten Wolde, Effect of feedback on the fidelity of information transmission of time-varying signals, Phys. Rev. E 82, 031914 (2010).
[23] T. E. Ouldridge, C. C. Govern, and P. R. ten Wolde, Thermodynamics of computational copying in biochemical systems, Physical Review X 7, 021004 (2017).
[24] J. E. Keymer, R. G. Endres, M. Skoge, Y. Meir, and N. S. Wingreen, Chemosensing in Escherichia coli: Two regimes of two-state receptors, Proceedings of the National Academy of Sciences 103, 1786 (2006).
[25] N. Barkai and S. Leibler, Robustness in simple biochemical networks, Nature 387, 913 (1997).
[26] R. G. Endres and N. S. Wingreen, Precise adaptation in bacterial chemotaxis through "assistance neighborhoods", Proceedings of the National Academy of Sciences 103, 13040 (2006).
[27] G. Lan, P. Sartori, S. Neumann, V. Sourjik, and Y. Tu, The energy–speed–accuracy trade-off in sensory adaptation, Nature Physics 8, 422 (2012).
[28] P. Sartori and Y. Tu, Free Energy Cost of Reducing Noise while Maintaining a High Sensitivity, Physical Review Letters 115, 118102 (2015), arXiv:1505.07413.
[29] T. S. Shimizu, Y. Tu, and H. C. Berg, A modular gradient-sensing network for chemotaxis in Escherichia coli revealed by responses to time-varying stimuli, Molecular Systems Biology 6, 1 (2010).
[30] Y. Tu, T. S. Shimizu, and H.
C. Berg, Modeling the chemotactic response of Escherichia coli to time-varying stimuli, Proceedings of the National Academy of Sciences of the United States of America 105, 14855 (2008).
[31] F. Tostevin and P. R. ten Wolde, Mutual information between input and output trajectories of biochemical networks, Physical Review Letters 102, 1 (2009).
[32] M. Reinhardt, G. Tkačik, and P. R. ten Wolde, Path Weight Sampling: Exact Monte Carlo Computation of the Mutual Information between Stochastic Trajectories, arXiv:2203.03461 (2022).
[33] J. Monod, J. Wyman, and J.-P. Changeux, On the nature of allosteric transitions: A plausible model, Journal of Molecular Biology 12, 88 (1965).
[34] V. Sourjik and H. C. Berg, Binding of the Escherichia coli response regulator CheY to its target measured in vivo by fluorescence resonance energy transfer, Proceedings of the National Academy of Sciences of the United States of America 99, 12669 (2002).
[35] M. Li and G. L. Hazelbauer, Cellular stoichiometry of the components of the chemotaxis signaling complex, Journal of Bacteriology 186, 3687 (2004).
[36] G. Malaguti and P. R. ten Wolde, Theory for the optimal detection of time-varying signals in cellular sensing systems, eLife 10, 1 (2021).
[37] P. Sartori and Y. Tu, Free Energy Cost of Reducing Noise while Maintaining a High Sensitivity, Physical Review Letters 115, 118102 (2015).
[38] J. O. Dubuis, G. Tkačik, E. F. Wieschaus, T. Gregor, and W. Bialek, Positional information, in bits, Proceedings of the National Academy of Sciences of the United States of America 110, 16301 (2013).
[39] N. Van Kampen, Stochastic Processes in Physics and Chemistry (North Holland, Amsterdam, 1992).
[40] E. Ziv, I. Nemenman, and C. H.
Wiggins, Optimal signal processing in small stochastic biochemical networks, PLoS ONE 2, e1077 (2007).
[41] M. Vennettilli, S. Saha, U. Roy, and A. Mugler, Precision of Protein Thermometry, Physical Review Letters 127, 098102 (2021).
[42] V. Sourjik and H. C. Berg, Functional interactions between receptors in bacterial chemotaxis, Nature 428, 437 (2004).
[43] B. A. Mello and Y. Tu, Effects of adaptation in maintaining high sensitivity over a wide range of backgrounds for Escherichia coli chemotaxis, Biophysical Journal 92, 2329 (2007).
[44] J. R. Maddock and L. Shapiro, Polar Location of the Chemoreceptor Complex in the Escherichia coli Cell, Science 259, 1717 (1993).
[45] T. A. J. Duke and D. Bray, Heightened sensitivity of a lattice of membrane receptors, Proceedings of the National Academy of Sciences 96, 10104 (1999).
[46] J. M. Keegstra, K. Kamino, F. Anquez, M. D. Lazova, T. Emonet, and T. S. Shimizu, Phenotypic diversity and temporal variability in a bacterial signaling network revealed by single-cell FRET, eLife 6, 708 (2017).
[47] J. S. Parkinson, G. L. Hazelbauer, and J. J. Falke, Signaling and sensory adaptation in Escherichia coli chemoreceptors: 2015 update, Trends in Microbiology 23, 257 (2015).
[48] B. A. Mello and Y. Tu, An allosteric model for heterogeneous receptor complexes: Understanding bacterial chemotaxis responses to multiple stimuli, Proceedings of the National Academy of Sciences of the United States of America 102, 17354 (2005).
[49] K. Kamino, J. M. Keegstra, J. Long, T. Emonet, and T. S.
Shimizu, Adaptive tuning of cell sensory diversity without changes in gene expression, Science Advances 6, eabc1087 (2020).
[50] T. Emonet and P. Cluzel, Relationship between cellular response and behavioral variability in bacterial chemotaxis, Proceedings of the National Academy of Sciences 105, 3304 (2008), arXiv:0705.4635.
[51] M. N. Levit, T. W. Grebe, and J. B. Stock, Organization of the Receptor-Kinase Signaling Array That Regulates Escherichia coli Chemotaxis, Journal of Biological Chemistry 277, 36748 (2002).