doc_id: string (lengths 2-10)
revision_depth: string (5 classes)
before_revision: string (lengths 3-309k)
after_revision: string (lengths 5-309k)
edit_actions: list
sents_char_pos: list
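Each record below pairs a before_revision string with an after_revision string, a list of edit_actions (type "R"/"D"/"A" with before/after text and character offsets into before_revision), and a sents_char_pos list of sentence-boundary offsets. The following is a minimal Python sketch of how these fields appear to fit together; the field semantics and the whitespace handling are assumptions read off the records themselves, not a documented API, so the reconstruction is only approximate.

```python
import json

def apply_edit_actions(before: str, edit_actions: list) -> str:
    # Rebuild an approximation of after_revision from before_revision.
    # start_char_pos/end_char_pos index into `before`; applying the actions
    # right-to-left keeps earlier offsets valid. "D" deletes the span,
    # "R" replaces it, "A" inserts at start_char_pos (start == end, before is null).
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # null -> "" for deletions
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

def split_sentences(text: str, sents_char_pos: list) -> list:
    # sents_char_pos appears to list sentence-boundary offsets into
    # before_revision (starting with 0); splitting at them and stripping
    # recovers the individual sentences.
    bounds = list(sents_char_pos) + [len(text)]
    return [text[a:b].strip() for a, b in zip(bounds, bounds[1:]) if text[a:b].strip()]

# Usage on one record (edit_actions may be stored as a JSON string):
# actions = json.loads(row["edit_actions"]) if isinstance(row["edit_actions"], str) else row["edit_actions"]
# approx_after = apply_edit_actions(row["before_revision"], actions)
# sentences = split_sentences(row["before_revision"], row["sents_char_pos"])
```

Note that the stored after_revision may differ from the reconstruction by minor whitespace normalisation around deleted spans, so exact string equality should not be assumed.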
1504.00310
2
This paper studies the utility maximization problem on the terminal wealth with both random endowments and proportional transaction costs. To deal with unbounded random payoffs from some illiquid claims, we propose to work with the acceptable portfolios defined via the consistent price system (CPS) such that the liquidation value processes stay above some stochastic thresholds. In the market consisting of one riskless bond and one risky asset, we obtain a type of the super-hedging result. Based on this characterization of the primal space, the existence and uniqueness of the optimal solution for the utility maximization problem are established using the convex duality analysis . As an important application of the duality theory , we provide some sufficient conditions for the existence of a shadow price process with random endowments in a generalized form as well as in the usual sense using acceptable portfolios.
This paper studies the utility maximization on the terminal wealth with random endowments and proportional transaction costs. To deal with unbounded random payoffs from some illiquid claims, we propose to work with the acceptable portfolios defined via the consistent price system (CPS) such that the liquidation value processes stay above some stochastic thresholds. In the market consisting of one riskless bond and one risky asset, we obtain a type of super-hedging result. Based on this characterization of the primal space, the existence and uniqueness of the optimal solution for the utility maximization problem are established using the duality approach . As an important application of the duality theorem , we provide some sufficient conditions for the existence of a shadow price process with random endowments in a generalized form as well as in the usual sense using acceptable portfolios.
[ { "type": "D", "before": "problem", "after": null, "start_char_pos": 44, "end_char_pos": 51 }, { "type": "D", "before": "both", "after": null, "start_char_pos": 80, "end_char_pos": 84 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 468, "end_char_pos": 471 }, { "type": "R", "before": "convex duality analysis", "after": "duality approach", "start_char_pos": 662, "end_char_pos": 685 }, { "type": "R", "before": "theory", "after": "theorem", "start_char_pos": 731, "end_char_pos": 737 } ]
[ 0, 138, 380, 493, 687 ]
1504.00353
1
Polar codes are a new class of capacity-achieving error-correcting codes with low encoding and decoding complexity. Their low-complexity decoding algorithms render them attractive for use in software-defined radio applications where computational resources are limited. In this work, we present low-latency software polar decoders that exploit modern processor capabilities. We show how adapting the algorithm at various levels can lead to significant improvements in latency and throughput, yielding polar decoders that are suitable for high-performance software-defined radio applications on modern desktop processors and embedded-platform processors. These proposed decoders have an order of magnitude lower latency compared to state of the art decoders, while maintaining comparable throughput. In addition, we present strategies and results for implementing polar decoders on graphical processing units. Finally, we show that the energy efficiency of the proposed decoders , running on desktop a processor, is comparable to state of the art software polar decoders.
Polar codes are a new class of capacity-achieving error-correcting codes with low encoding and decoding complexity. Their low-complexity decoding algorithms rendering them attractive for use in software-defined radio applications where computational resources are limited. In this work, we present low-latency software polar decoders that exploit modern processor capabilities. We show how adapting the algorithm at various levels can lead to significant improvements in latency and throughput, yielding polar decoders that are suitable for high-performance software-defined radio applications on modern desktop processors and embedded-platform processors. These proposed decoders have an order of magnitude lower latency and memory footprint compared to state-of-the-art decoders, while maintaining comparable throughput. In addition, we present strategies and results for implementing polar decoders on graphical processing units. Finally, we show that the energy efficiency of the proposed decoders is comparable to state-of-the-art software polar decoders.
[ { "type": "R", "before": "render", "after": "rendering", "start_char_pos": 157, "end_char_pos": 163 }, { "type": "R", "before": "compared to state of the art", "after": "and memory footprint compared to state-of-the-art", "start_char_pos": 719, "end_char_pos": 747 }, { "type": "D", "before": ", running on desktop a processor,", "after": null, "start_char_pos": 978, "end_char_pos": 1011 }, { "type": "R", "before": "state of the art", "after": "state-of-the-art", "start_char_pos": 1029, "end_char_pos": 1045 } ]
[ 0, 115, 269, 374, 653, 798, 908 ]
1504.00414
1
Charge hydration asymmetry (CHA) -- a characteristic dependence of hydration free energy on the sign of the solute charge -- quantifies the asymmetric response of water to electric field at microscopic level. Accurate estimates of CHA are critical for understanding of hydration effects ubiquitous in chemistry and biology. However, measuring hydration energies of charged species is fraught with significant difficulties, which lead to unacceptably large (up to 300\%) variation in the available estimates of the CHA effect. We circumvent these difficulties by developing a framework which allows us to extract and accurately estimate the intrinsic propensity of water to exhibit CHA from accurate experimental hydration free energies of neutral polar molecules. Specifically, from a set of 504 small molecules we identify two pairs that are analogous, with respect to CHA, to the K+/F- pair -- a classical probe for the effect. We use these "CHA-conjugate" molecule pairs to quantify the intrinsic charge-asymmetric response of water to the microscopic charge perturbations: the asymmetry of the response is strong, ~50\% of the average hydration free energy of these molecules. The ability of widely used classical water models to predict hydration energies of small molecules strongly correlates with their ability to predict CHA.
Charge hydration asymmetry (CHA) --a characteristic dependence of hydration free energy on the sign of the solute charge--quantifies the asymmetric response of water to electric field at microscopic level. Accurate estimates of CHA are critical for understanding hydration effects ubiquitous in chemistry and biology. However, measuring hydration energies of charged species is fraught with significant difficulties, which lead to unacceptably large (up to 300\%) variation in the available estimates of the CHA effect. We circumvent these difficulties by developing a framework which allows us to extract and accurately estimate the intrinsic propensity of water to exhibit CHA from accurate experimental hydration free energies of neutral polar molecules. Specifically, from a set of 504 small molecules we identify two pairs that are analogous, with respect to CHA, to the K+/F- pair--a classical probe for the effect. We use these "CHA-conjugate" molecule pairs to quantify the intrinsic charge-asymmetric response of water to the microscopic charge perturbations: the asymmetry of the response is strong, ~50\% of the average hydration free energy of these molecules. The ability of widely used classical water models to predict hydration energies of small molecules correlates with their ability to predict CHA.
[ { "type": "R", "before": "-- a", "after": "--a", "start_char_pos": 33, "end_char_pos": 37 }, { "type": "R", "before": "charge -- quantifies", "after": "charge--quantifies", "start_char_pos": 115, "end_char_pos": 135 }, { "type": "D", "before": "of", "after": null, "start_char_pos": 266, "end_char_pos": 268 }, { "type": "R", "before": "pair -- a", "after": "pair--a", "start_char_pos": 888, "end_char_pos": 897 }, { "type": "D", "before": "strongly", "after": null, "start_char_pos": 1280, "end_char_pos": 1288 } ]
[ 0, 208, 323, 525, 763, 929, 1180 ]
1504.00821
1
We introduce an extended version of oxDNA, a coarse-grained model of DNA designed to capture the thermodynamic, structural and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves, and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures such as DNA origami which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na + ]=0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.
We introduce an extended version of oxDNA, a coarse-grained model of DNA designed to capture the thermodynamic, structural and mechanical properties of single- and double-stranded DNA. By including explicit major and minor grooves, and by slightly modifying the coaxial stacking and backbone-backbone interactions, we improve the ability of the model to treat large (kilobase-pair) structures such as DNA origami which are sensitive to these geometric features. Further, we extend the model, which was previously parameterised to just one salt concentration ([Na ^+ ]=0.5M), so that it can be used for a range of salt concentrations including those corresponding to physiological conditions. Finally, we use new experimental data to parameterise the oxDNA potential so that consecutive adenine bases stack with a different strength to consecutive thymine bases, a feature which allows a more accurate treatment of systems where the flexibility of single-stranded regions is important. We illustrate the new possibilities opened up by the updated model, oxDNA2, by presenting results from simulations of the structure of large DNA objects and by using the model to investigate some salt-dependent properties of DNA.
[ { "type": "R", "before": "+", "after": "^+", "start_char_pos": 563, "end_char_pos": 564 } ]
[ 0, 184, 461, 690, 983 ]
1504.01152
1
We prove the uniqueness of an equilibrium solution to a general time-inconsistent LQ control problem under mild conditions which ensure the existence of a solution. This is the first positive result on the uniqueness of the solution to a time inconsistent dynamic decision problem in continuous-time setting. Key words. time-inconsistency, stochastic linear-quadratic control, uniqueness of equilibrium control, ] forward--backward stochastic differential equation, mean--variance portfolio selection .
In this paper, we continue our study on a general time-inconsistent stochastic linear--quadratic (LQ) control problem originally formulated in 6]. We derive a necessary and sufficient condition for equilibrium controls via a flow of forward--backward stochastic differential equations. When the state is one dimensional and the coefficients in the problem are all deterministic, we prove that the explicit equilibrium control constructed in \mbox{%DIFAUXCMD HJZ mean--variance portfolio selection model in a complete financial market where the risk-free rate is a deterministic function of time but all the other market parameters are possibly stochastic processes .
[ { "type": "R", "before": "We prove the uniqueness of an equilibrium solution to", "after": "In this paper, we continue our study on", "start_char_pos": 0, "end_char_pos": 53 }, { "type": "R", "before": "LQ control problem under mild conditions which ensure the existence of a solution. This is the first positive result on the uniqueness of the solution to a time inconsistent dynamic decision problem in continuous-time setting. Key words. time-inconsistency, stochastic linear-quadratic control, uniqueness of equilibrium control,", "after": "stochastic linear--quadratic (LQ) control problem originally formulated in", "start_char_pos": 82, "end_char_pos": 411 }, { "type": "A", "before": null, "after": "6", "start_char_pos": 412, "end_char_pos": 412 }, { "type": "A", "before": null, "after": ". We derive a necessary and sufficient condition for equilibrium controls via a flow of", "start_char_pos": 413, "end_char_pos": 413 }, { "type": "R", "before": "stochastic differential equation,", "after": "stochastic differential equations. When the state is one dimensional and the coefficients in the problem are all deterministic, we prove that the explicit equilibrium control constructed in \\mbox{%DIFAUXCMD HJZ", "start_char_pos": 432, "end_char_pos": 465 }, { "type": "A", "before": null, "after": "model in a complete financial market where the risk-free rate is a deterministic function of time but all the other market parameters are possibly stochastic processes", "start_char_pos": 501, "end_char_pos": 501 } ]
[ 0, 164, 308 ]
1504.01381
1
The effects of soft errors in processor cores have been widely studied in literature. On the contrary , little has been published about soft errors in uncore components, such as memory subsystem and I/O controllers, in a System-on-Chip (SoC). In this work, we study how soft errors in uncore components affect system-level behaviors. We have created a new mixed-mode simulation platform that combines simulators at two different levels of abstraction and achieves 20,000x speedup over RTL-only simulation. Using this platform, we present the first study of the system-level impact of soft errors inside various uncore components of a large-scale, multi-core SoC using the industrial-grade, open-source OpenSPARC T2 SoC design. Our results show that soft errors in uncore components can significantly impact system-level reliability. We also demonstrate that uncore soft errors can create major challenges for traditional system-level checkpoint-based recovery techniques. To overcome such recovery challenges, we present a new replay recovery technique for uncore components belonging to the memory subsystem. For the L2 cache controller and the DRAM controller components of OpenSPARC T2, our new technique reduces the probability that an application run results in an erroneous outcome due to soft errors by more than 100x with only 3.13\% and 5.69 \% chip-level area and power impact, respectively.
The effects of soft errors in processor cores have been widely studied . However , little has been published about soft errors in uncore components, such as memory subsystem and I/O controllers, of a System-on-a-Chip (SoC). In this work, we study how soft errors in uncore components affect system-level behaviors. We have created a new mixed-mode simulation platform that combines simulators at two different levels of abstraction , and achieves 20,000x speedup over RTL-only simulation. Using this platform, we present the first study of the system-level impact of soft errors inside various uncore components of a large-scale, multi-core SoC using the industrial-grade, open-source OpenSPARC T2 SoC design. Our results show that soft errors in uncore components can significantly impact system-level reliability. We also demonstrate that uncore soft errors can create major challenges for traditional system-level checkpoint recovery techniques. To overcome such recovery challenges, we present a new replay recovery technique for uncore components belonging to the memory subsystem. For the L2 cache controller and the DRAM controller components of OpenSPARC T2, our new technique reduces the probability that an application run fails to produce correct results due to soft errors by more than 100x with 3.32\% and 6.09 \% chip-level area and power impact, respectively.
[ { "type": "R", "before": "in literature. On the contrary", "after": ". However", "start_char_pos": 71, "end_char_pos": 101 }, { "type": "R", "before": "in a System-on-Chip", "after": "of a System-on-a-Chip", "start_char_pos": 216, "end_char_pos": 235 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "R", "before": "checkpoint-based", "after": "checkpoint", "start_char_pos": 935, "end_char_pos": 951 }, { "type": "R", "before": "results in an erroneous outcome", "after": "fails to produce correct results", "start_char_pos": 1257, "end_char_pos": 1288 }, { "type": "R", "before": "only 3.13\\% and 5.69", "after": "3.32\\% and 6.09", "start_char_pos": 1331, "end_char_pos": 1351 } ]
[ 0, 85, 242, 333, 506, 727, 833, 972, 1110 ]
1504.01857
1
The DebtRank algorithm has been increasingly investigated as a method to estimate the impact of shocks in financial networks, as it overcomes the limitations of the traditional default-cascade approaches. Here we formulate a dynamical "microscopic" theory of instability for financial networks by iterating balance sheet identities of individual banks and by assuming a simple rule for the transfer of shocks from borrowers to lenders. By doing so, we generalise the DebtRank formulation, both providing an interpretation of the effective dynamics in terms of basic accounting principles and preventing the underestimation of losses on certain network topologies. Depending on the structure of leverages the dynamics is either stable, in which case the asymptotic state can be computed analytically, or unstable, meaning that at least a bank will default. We apply this results to a network of roughly 200 among the largest European banks in the period 2008 - 2013. We show that network effects generate an amplification of exogenous shocks of a factor ranging between three (in normal periods) and six (during the crisis) , when we stress the system with a 0.5\% shock on external (i.e. non-interbank) assets for all banks.
The DebtRank algorithm has been increasingly investigated as a method to estimate the impact of shocks in financial networks, as it overcomes the limitations of the traditional default-cascade approaches. Here we formulate a dynamical "microscopic" theory of instability for financial networks by iterating balance sheet identities of individual banks and by assuming a simple rule for the transfer of shocks from borrowers to lenders. By doing so, we generalise the DebtRank formulation, both providing an interpretation of the effective dynamics in terms of basic accounting principles and preventing the underestimation of losses on certain network topologies. Depending on the structure of the interbank leverage matrix the dynamics is either stable, in which case the asymptotic state can be computed analytically, or unstable, meaning that at least one bank will default. We apply this framework to a dataset of the top listed European banks in the period 2008 - 2013. We find that network effects can generate an amplification of exogenous shocks of a factor ranging between three (in normal periods) and six (during the crisis) when we stress the system with a 0.5\% shock on external (i.e. non-interbank) assets for all banks.
[ { "type": "R", "before": "leverages the", "after": "the interbank leverage matrix the", "start_char_pos": 694, "end_char_pos": 707 }, { "type": "R", "before": "a", "after": "one", "start_char_pos": 835, "end_char_pos": 836 }, { "type": "R", "before": "results to a network of roughly 200 among the largest", "after": "framework to a dataset of the top listed", "start_char_pos": 870, "end_char_pos": 923 }, { "type": "R", "before": "show", "after": "find", "start_char_pos": 969, "end_char_pos": 973 }, { "type": "A", "before": null, "after": "can", "start_char_pos": 995, "end_char_pos": 995 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1124, "end_char_pos": 1125 } ]
[ 0, 204, 435, 663, 855, 965 ]
1504.02139
1
Protein distributions measured under a broad set of conditions in bacteria and yeast exhibit a universal skewed shape, with variances depending quadratically on means. For bacteria these properties are reproduced by protein accumulation and division dynamics across generations. We present a stochastic growth-and-division model with feedback which captures these observed properties. The limiting copy number distribution is calculated exactly, and a single parameter is found to determine the distribution shape and the variance-to-mean relation. Estimating this parameter from bacterial temporal data reproduces the measured universal distribution shape with high accuracy, and leads to predictions for future experiments.
Protein distributions measured under a broad set of conditions in bacteria and yeast were shown to exhibit a common skewed shape, with variances depending quadratically on means. For bacteria these properties were reproduced by temporal measurements of protein content, showing accumulation and division across generations. Here we present a stochastic growth-and-division model with feedback which captures these observed properties. The limiting copy number distribution is calculated exactly, and a single parameter is found to determine the distribution shape and the variance-to-mean relation. Estimating this parameter from bacterial temporal data reproduces the measured distribution shape with high accuracy, and leads to predictions for future experiments.
[ { "type": "R", "before": "exhibit a universal", "after": "were shown to exhibit a common", "start_char_pos": 85, "end_char_pos": 104 }, { "type": "R", "before": "are reproduced by protein", "after": "were reproduced by temporal measurements of protein content, showing", "start_char_pos": 198, "end_char_pos": 223 }, { "type": "D", "before": "dynamics", "after": null, "start_char_pos": 250, "end_char_pos": 258 }, { "type": "R", "before": "We", "after": "Here we", "start_char_pos": 279, "end_char_pos": 281 }, { "type": "D", "before": "universal", "after": null, "start_char_pos": 628, "end_char_pos": 637 } ]
[ 0, 167, 278, 384, 548 ]
1504.02280
1
We study historical dynamics of joint equilibrium distribution of stock returns in the US stock market using the Boltzmann distribution model being parametrized by external fields and pairwise couplings. Within Boltzmann learning framework for statistical inference, we analyze historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require use of binary variables, effect of this mapping of continuous returns to the discrete domain is studied. Properties of distributions of external fields and couplings as well as industry sector clustering structure are studied for different historical dates and moving window sizes. We show that discrepancies between them might be used as a precursor of financial instabilities.
We study historical dynamics of joint equilibrium distribution of stock returns in the U.S. stock market using the Boltzmann distribution model being parametrized by external fields and pairwise couplings. Within Boltzmann learning framework for statistical inference, we analyze historical behavior of the parameters inferred using exact and approximate learning algorithms. Since the model and inference methods require use of binary variables, effect of this mapping of continuous returns to the discrete domain is studied. The presented analysis shows that binarization preserves market correlation structure. Properties of distributions of external fields and couplings as well as industry sector clustering structure are studied for different historical dates and moving window sizes. We found that a heavy positive tail in the distribution of couplings is responsible for the sparse market clustering structure. We also show that discrepancies between the model parameters might be used as a precursor of financial instabilities.
[ { "type": "R", "before": "US", "after": "U.S.", "start_char_pos": 87, "end_char_pos": 89 }, { "type": "A", "before": null, "after": "The presented analysis shows that binarization preserves market correlation structure.", "start_char_pos": 525, "end_char_pos": 525 }, { "type": "A", "before": null, "after": "found that a heavy positive tail in the distribution of couplings is responsible for the sparse market clustering structure. We also", "start_char_pos": 706, "end_char_pos": 706 }, { "type": "R", "before": "them", "after": "the model parameters", "start_char_pos": 739, "end_char_pos": 743 } ]
[ 0, 203, 373, 524, 702 ]
1504.02435
1
We propose a new method, detrended partial cross-correlation analysis (DPXA) , to uncover the intrinsic power-law cross-correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis by taking into account the partial correlation analysis. We illustrate the performance of the method using bivariate fractional Brownian motions and multifractal binomial measures with analytical expressions and apply it to extract the intrinsic cross-correlation between crude oil and gold futures by considering the impact of the US dollar index .
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using classic detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross-correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multi-scale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross-correlation between crude oil and gold futures by taking into consideration the impact of the US dollar index . We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the MF-DCCA method fails .
[ { "type": "R", "before": "We propose a new method, detrended", "after": "When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using classic detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended", "start_char_pos": 0, "end_char_pos": 34 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 77, "end_char_pos": 78 }, { "type": "R", "before": "by taking into account the", "after": "that takes into account", "start_char_pos": 369, "end_char_pos": 395 }, { "type": "R", "before": "illustrate the performance of the method", "after": "demonstrate the method by", "start_char_pos": 429, "end_char_pos": 469 }, { "type": "R", "before": "and multifractal binomial measures with analytical expressions and apply it to", "after": "contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multi-scale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to", "start_char_pos": 514, "end_char_pos": 592 }, { "type": "R", "before": "considering", "after": "taking into consideration", "start_char_pos": 671, "end_char_pos": 682 }, { "type": "A", "before": null, "after": ". We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the MF-DCCA method fails", "start_char_pos": 717, "end_char_pos": 717 } ]
[ 0, 288, 425 ]
1504.02734
1
We examine the issue of sensitivity with respect to model parameters for the problem of utility maximization from final wealth in an incomplete Samuelson model and mainly for utility functions of power-type. The method consists in moving the parameters through change of measure, which we call a weak perturbation, in particular decoupling the usual wealth equation from the varying parameters. By rewriting the maximization problem in terms of a convex-analytical support function of a weakly-compact set, crucially leveraging on the recent work by Backhoff and Fontbona arXiv:1405.0251 , the previous formulation let us prove the Hadamard directional differentiability of the value function w.r.t. the drift and interest rate parameters, as well as for volatility matrices under a stability condition on their Kernel, and derive explicit expressions for the directional derivatives. We contrast our proposed weak perturbations against what we call strong perturbations, whereby the wealth equation is directly influenced by the changing parameters , and find that both points of view generally yield different sensitivities unless e.g. if initial parameters and their perturbations are deterministic.
We examine the issue of sensitivity with respect to model parameters for the problem of utility maximization from final wealth in an incomplete Samuelson model and mainly for utility functions of positive power-type. The method consists in moving the parameters through change of measure, which we call a weak perturbation, decoupling the usual wealth equation from the varying parameters. By rewriting the maximization problem in terms of a convex-analytical support function of a weakly-compact set, crucially leveraging on the work of Backhoff and Fontbona (SIFIN 2016) , the previous formulation let us prove the Hadamard directional differentiability of the value function w.r.t. the drift and interest rate parameters, as well as for volatility matrices under a stability condition on their Kernel, and derive explicit expressions for the directional derivatives. We contrast our proposed weak perturbations against what we call strong perturbations, where the wealth equation is directly influenced by the changing parameters . Contrary to conventional wisdom, we find that both points of view generally yield different sensitivities unless e.g. if initial parameters and their perturbations are deterministic.
[ { "type": "A", "before": null, "after": "positive", "start_char_pos": 196, "end_char_pos": 196 }, { "type": "D", "before": "in particular", "after": null, "start_char_pos": 316, "end_char_pos": 329 }, { "type": "R", "before": "recent work by", "after": "work of", "start_char_pos": 536, "end_char_pos": 550 }, { "type": "R", "before": "arXiv:1405.0251", "after": "(SIFIN 2016)", "start_char_pos": 573, "end_char_pos": 588 }, { "type": "R", "before": "whereby", "after": "where", "start_char_pos": 973, "end_char_pos": 980 }, { "type": "R", "before": ", and", "after": ". Contrary to conventional wisdom, we", "start_char_pos": 1051, "end_char_pos": 1056 } ]
[ 0, 208, 395, 885 ]
1504.03074
1
This paper will discuss the importance of the Black-Scholes equation and its applications in finance . Also, the ways to solve the Black-Scholes equation will be discuss in length .
The paper proposes a different method of solving a simplified version of the Black-Scholes equation. This paper will discuss the importance of the Black-Scholes equation and its applications in finance .
[ { "type": "A", "before": null, "after": "The paper proposes a different method of solving a simplified version of the Black-Scholes equation.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "D", "before": ". Also, the ways to solve the Black-Scholes equation will be discuss in length", "after": null, "start_char_pos": 102, "end_char_pos": 180 } ]
[ 0, 103 ]
1504.03238
1
In this article, we explore a class of tractable interest rate models that have the property that the prices of zero-coupon bonds can be expressed as polynomials of a state diffusion process. These models are, in a sense, generalisations of exponential polynomial models. Our main result is a classification of such models in the spirit of Filipovic's maximal degree theorem for exponential polynomial models .
In this article, we explore a class of tractable interest rate models that have the property that the price of a zero-coupon bond can be expressed as a polynomial of a state diffusion process. Our results include a classification of all such time-homogeneous single-factor models in the spirit of Filipovic's maximal degree theorem for exponential polynomial models , as well as an explicit characterisation of the set of feasible parameters in the case when the factor process is bounded. Extensions to time-inhomogeneous and multi-factor polynomial models are also considered .
[ { "type": "R", "before": "prices of", "after": "price of a", "start_char_pos": 102, "end_char_pos": 111 }, { "type": "R", "before": "bonds", "after": "bond", "start_char_pos": 124, "end_char_pos": 129 }, { "type": "R", "before": "polynomials", "after": "a polynomial", "start_char_pos": 150, "end_char_pos": 161 }, { "type": "R", "before": "These models are, in a sense, generalisations of exponential polynomial models. Our main result is", "after": "Our results include", "start_char_pos": 192, "end_char_pos": 290 }, { "type": "R", "before": "such", "after": "all such time-homogeneous single-factor", "start_char_pos": 311, "end_char_pos": 315 }, { "type": "A", "before": null, "after": ", as well as an explicit characterisation of the set of feasible parameters in the case when the factor process is bounded. Extensions to time-inhomogeneous and multi-factor polynomial models are also considered", "start_char_pos": 409, "end_char_pos": 409 } ]
[ 0, 191, 271 ]
1504.03614
1
In molecular mechanics, current generation potential energy functions provide a reasonably good compromise between accuracy and effectiveness. This paper firstly reviewed several most commonly used classical potential energy functions and their optimization methods used for energy minimization. To minimize a potential energy function, about 95\% efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods listed in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea; interesting results were found.
In molecular mechanics, current generation potential energy functions provide a reasonably good compromise between accuracy and effectiveness. This paper firstly reviewed several most commonly used classical potential energy functions and their optimization methods used for energy minimization. To minimize a potential energy function, about 95\% efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods listed in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea; interesting results (e.g. (i) the species that has the clearly and highly ordered S2-H2 loop usually owns a 3-10-helix in this loop, (ii) a "pi-circle" Y128-F175-Y218-Y163-F175-Y169-R164-Y128(-Y162) is around the S2-H2 loop of prion protein structures) were found.
[ { "type": "A", "before": null, "after": "(e.g. (i) the species that has the clearly and highly ordered S2-H2 loop usually owns a 3-10-helix in this loop, (ii) a \"pi-circle\" Y128-F175-Y218-Y163-F175-Y169-R164-Y128(-Y162) is around the S2-H2 loop of prion protein structures)", "start_char_pos": 896, "end_char_pos": 896 } ]
[ 0, 142, 295, 427, 600, 799, 875 ]
1504.03614
2
In molecular mechanics, current generation potential energy functions provide a reasonably good compromise between accuracy and effectiveness. This paper firstly reviewed several most commonly used classical potential energy functions and their optimization methods used for energy minimization. To minimize a potential energy function, about 95 \% efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods listed in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea; interesting results (e.g. (i) the species that has the clearly and highly ordered S2-H2 loop usually owns a 3-10-helix in this loop, (ii) a "pi-circle" Y128-F175-Y218-Y163-F175-Y169-R164-Y128(-Y162) is around the S2-H2 loop of prion protein structures) were found. \\%DIF > efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea. We focus on the \beta2-\alpha2 loop of prion protein structures, and we found (i) the species that has the clearly and highly ordered \beta2-\alpha2 loop usually owns a 3_{10}-helix in this loop, (ii) a "\pi-circle" Y128--F175--Y218--Y163--F175--Y169--R164--Y128(--Y162) is around the \beta2-\alpha2 loop.
In molecular mechanics, current generation potential energy functions provide a reasonably good compromise between accuracy and effectiveness. This paper firstly reviewed several most commonly used classical potential energy functions and their optimization methods used for energy minimization. To minimize a potential energy function, about 95 \\%DIF > efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea. We focus on the \beta2-\alpha2 loop of prion protein structures, and we found (i) the species that has the clearly and highly ordered \beta2-\alpha2 loop usually owns a 3_{10}-helix in this loop, (ii) a "\pi-circle" Y128--F175--Y218--Y163--F175--Y169--R164--Y128(--Y162) is around the \beta2-\alpha2 loop.
[ { "type": "D", "before": "\\% efforts are spent on the Lennard-Jones potential of van der Waals interactions; we also give a detailed review on some effective computational optimization methods listed in the Cambridge Cluster Database to solve the problem of Lennard-Jones clusters. From the reviews, we found the hybrid idea of optimization methods is effective, necessary and efficient for solving the potential energy minimization problem and the Lennard-Jones clusters problem. An application to prion protein structures is then done by the hybrid idea; interesting results (e.g. (i) the species that has the clearly and highly ordered S2-H2 loop usually owns a 3-10-helix in this loop, (ii) a \"pi-circle\" Y128-F175-Y218-Y163-F175-Y169-R164-Y128(-Y162) is around the S2-H2 loop of prion protein structures) were found.", "after": null, "start_char_pos": 346, "end_char_pos": 1141 } ]
[ 0, 142, 295, 428, 601, 800, 876, 1141, 1230, 1396, 1595, 1671 ]
1504.03895
1
A graph representation of the financial relations in a given monetary structure is proposed. It is argued that the graph of debt-liability relations is URLanized and simplified into a tree structure, around banks and a central bank. Indeed, this optimal graph allows to perform payments very easily as it amounts to the suppression of loops introduced by pending payments. Using this language of graphs to analyze the monetary system, we first examine the systems based on commodity money and show their incompatibility with credit. After dealing with the role of the state via its ability to spend and raise taxes, we discuss the chartalist systems based on pure fiat money, which are the current systems. We argue that in those cases, the Treasury and the central bank can be meaningfully consolidated. After describing the interactions of various autonomous currencies, we argue that fixed exchanged rates can never be maintained , and we discuss the controversial role of the IMF in international financial relations. We finally use graph representations to give our interpretation on open problems, such as the monetary aggregates, the sectoral financial balances and the endogenous nature of money. Indeed, once appropriately consolidated, graphs of financial relations allow to formulate easily unambiguous statements about the monetary arrangements .
The nature of monetary arrangements is often discussed without any reference to its detailed construction. We present a graph representation which allows for a clear understanding of modern monetary systems. First, we show that systems based on commodity money are incompatible with credit. We then study the current chartalist systems based on pure fiat money, and we discuss the consolidation of the central bank with the Treasury. We obtain a visual explanation about how commercial banks are responsible for endogenous money creation whereas the Treasury and the central bank are in charge of the total amount of net money. Finally we draw an analogy between systems based on gold convertibility and currency pegs to show that fixed exchange rates can never be maintained .
[ { "type": "R", "before": "A graph representation of the financial relations in a given monetary structure is proposed. It is argued that the graph of debt-liability relations is URLanized and simplified into a tree structure, around banks and a central bank. Indeed, this optimal graph allows to perform payments very easily as it amounts to the suppression of loops introduced by pending payments. Using this language of graphs to analyze the monetary system, we first examine the", "after": "The nature of monetary arrangements is often discussed without any reference to its detailed construction. We present a graph representation which allows for a clear understanding of modern monetary systems. First, we show that", "start_char_pos": 0, "end_char_pos": 455 }, { "type": "R", "before": "and show their incompatibility", "after": "are incompatible", "start_char_pos": 489, "end_char_pos": 519 }, { "type": "R", "before": "After dealing with the role of the state via its ability to spend and raise taxes, we discuss the", "after": "We then study the current", "start_char_pos": 533, "end_char_pos": 630 }, { "type": "R", "before": "which are the current systems. We argue that in those cases,", "after": "and we discuss the consolidation of the central bank with the Treasury. We obtain a visual explanation about how commercial banks are responsible for endogenous money creation whereas", "start_char_pos": 676, "end_char_pos": 736 }, { "type": "R", "before": "can be meaningfully consolidated. After describing the interactions of various autonomous currencies, we argue that fixed exchanged", "after": "are in charge of the total amount of net money. Finally we draw an analogy between systems based on gold convertibility and currency pegs to show that fixed exchange", "start_char_pos": 771, "end_char_pos": 902 }, { "type": "D", "before": ", and we discuss the controversial role of the IMF in international financial relations. We finally use graph representations to give our interpretation on open problems, such as the monetary aggregates, the sectoral financial balances and the endogenous nature of money. Indeed, once appropriately consolidated, graphs of financial relations allow to formulate easily unambiguous statements about the monetary arrangements", "after": null, "start_char_pos": 933, "end_char_pos": 1356 } ]
[ 0, 92, 232, 372, 532, 706, 804, 1021, 1204 ]
1504.03940
1
We discuss a model of protein conformations where conformations are combinations of fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to smallness of the epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence and discuss a model of the free energy of a protein where the free energy is equal to a sum of statistical potentials of the fragments. We discuss application of this model to the problem of prediction of the native conformation of a protein from its primary structure and to the description of the dynamics of a protein.
We discuss a model of protein conformations where conformations are combinations of short fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to smallness of the epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence and discuss model of free energy of a protein where the free energy is equal to a sum of statistical potentials of the fragments. We also consider contribution of contacts of fragments to the energy of protein conformation, and contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss analogy between this approach and deep learning methods. We discuss applications of this model to problem of prediction of the native conformation of a protein from its primary structure and to description of dynamics of a protein.
[ { "type": "A", "before": null, "after": "short", "start_char_pos": 84, "end_char_pos": 84 }, { "type": "R", "before": "a model of the", "after": "model of", "start_char_pos": 562, "end_char_pos": 576 }, { "type": "R", "before": "discuss application", "after": "also consider contribution of contacts of fragments to the energy of protein conformation, and contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss analogy between this approach and deep learning methods. We discuss applications", "start_char_pos": 689, "end_char_pos": 708 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 726, "end_char_pos": 729 }, { "type": "R", "before": "the description of the", "after": "description of", "start_char_pos": 826, "end_char_pos": 848 } ]
[ 0, 115, 312, 437, 685 ]
1504.03940
2
We discuss a model of protein conformations where conformations are combinations of short fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to smallness of the epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence and discuss model of free energy of a protein where the free energy is equal to a sum of statistical potentials of the fragments. We also consider contribution of contacts of fragments to the energy of protein conformation, and contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss analogy between this approach and deep learning methods. We discuss applications of this model to problem of prediction of the native conformation of a protein from its primary structure and to description of dynamics of a protein .
We discuss a model of protein conformations where the conformations are combinations of short fragments from some small set. For these fragments we consider a distribution of frequencies of occurrence of pairs (sequence of amino acids, conformation), averaged over some balls in the spaces of sequences and conformations. These frequencies can be estimated due to smallness of epsilon-entropy of the set of conformations of protein fragments. We consider statistical potentials for protein fragments which describe the mentioned frequencies of occurrence and discuss model of free energy of a protein where the free energy is equal to a sum of statistical potentials of the fragments. We also consider contribution of contacts of fragments to the energy of protein conformation, and contribution from statistical potentials of some hierarchical set of larger protein fragments. This set of fragments is constructed using the distribution of frequencies of occurrence of short fragments. We discuss applications of this model to problem of prediction of the native conformation of a protein from its primary structure and to description of dynamics of a protein . Modification of structural alignment taking into account statistical potentials for protein fragments is considered and application to threading procedure for proteins is discussed .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 50, "end_char_pos": 50 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 374, "end_char_pos": 377 }, { "type": "D", "before": "analogy between this approach and deep learning methods. We discuss", "after": null, "start_char_pos": 999, "end_char_pos": 1066 }, { "type": "A", "before": null, "after": ". Modification of structural alignment taking into account statistical potentials for protein fragments is considered and application to threading procedure for proteins is discussed", "start_char_pos": 1230, "end_char_pos": 1230 } ]
[ 0, 121, 318, 443, 685, 878, 987, 1055 ]
1504.04520
1
We analyze a non-symmetric feedback module being an extension for the repressilator, the first synthetic biological oscillator, which was invented by Elowitz and Leibler. We propose an alternative approach to model the dynamical behaviors , that is, a type-dependent spin system, this class of stochastic models was introduced by Fern\'andez et. al (2009), and are useful since take account to inherent variability of gene expression. We consider a mean-field dynamics for a type-dependent Ising model, and then study the empirical-magnetization vector in the thermodynamic limit . We apply a convergence result from jump processes to deterministic trajectories and present a bifurcation analysis for the associated dynamical system. We show that non-symmetric module under study can exhibit very rich behaviors , including the empirical oscillations described by repressilator.
We study an alternative approach to model the dynamical behaviors of biological feedback loop , that is, a type-dependent spin system, this class of stochastic models was introduced by Fern\'andez et. al (2009), and are useful since take account to inherent variability of gene expression. We analyze a non-symmetric feedback module being an extension for the repressilator, the first synthetic biological oscillator, invented by Elowitz and Leibler (2000). We consider a mean-field dynamics for a type-dependent Ising model, and then study the empirical-magnetization vector representing concentration of molecules . We apply a convergence result from stochastic jump processes to deterministic trajectories and present a bifurcation analysis for the associated dynamical system. We show that non-symmetric module under study can exhibit very rich behaviours , including the empirical oscillations described by repressilator.
[ { "type": "R", "before": "analyze a non-symmetric feedback module being an extension for the repressilator, the first synthetic biological oscillator, which was invented by Elowitz and Leibler. We propose an", "after": "study an", "start_char_pos": 3, "end_char_pos": 184 }, { "type": "A", "before": null, "after": "of biological feedback loop", "start_char_pos": 239, "end_char_pos": 239 }, { "type": "A", "before": null, "after": "analyze a non-symmetric feedback module being an extension for the repressilator, the first synthetic biological oscillator, invented by Elowitz and Leibler (2000). We", "start_char_pos": 439, "end_char_pos": 439 }, { "type": "R", "before": "in the thermodynamic limit", "after": "representing concentration of molecules", "start_char_pos": 555, "end_char_pos": 581 }, { "type": "A", "before": null, "after": "stochastic", "start_char_pos": 619, "end_char_pos": 619 }, { "type": "R", "before": "behaviors", "after": "behaviours", "start_char_pos": 805, "end_char_pos": 814 } ]
[ 0, 170, 435, 583, 736 ]
1504.04532
1
The main purpose of this work is to prove a theorem about the probability that a random mapping possesses a unique highest tree in its underlying graph. However, we hope that some of auxiliary statements that we present here can be useful for proving results appealing to the theory of critical Galton-Watson branching processes.
We prove the exact asymptotic 1-\Theta(\frac{1 a random mapping of n elements possesses a unique highest tree . The property of having a unique highest tree turned out to be crucial in the solution of the famous Road Coloring Problem as well as in the proof of the author's result about the probability of being synchronizable for a random automaton. Furthermore, some of auxiliary statements that we present here can be useful for proving results appealing to the theory of critical Galton-Watson branching processes.
[ { "type": "R", "before": "The main purpose of this work is to prove a theorem about the probability that", "after": "We prove the exact asymptotic 1-\\Theta(\\frac{1", "start_char_pos": 0, "end_char_pos": 78 }, { "type": "A", "before": null, "after": "of n elements", "start_char_pos": 96, "end_char_pos": 96 }, { "type": "R", "before": "in its underlying graph. However, we hope that", "after": ". The property of having a unique highest tree turned out to be crucial in the solution of the famous Road Coloring Problem as well as in the proof of the author's result about the probability of being synchronizable for a random automaton. Furthermore,", "start_char_pos": 129, "end_char_pos": 175 } ]
[ 0, 153 ]
1504.04532
2
We prove the exact asymptotic 1- \Theta(\frac{1 \left({{3}-827{288\pi}}}\right) for the probability that the underlying graph of a random mapping of n elements possesses a unique highest tree. The property of having a unique highest tree turned out to be crucial in the solution of the famous Road Coloring Problem \mbox{%DIFAUXCMD TRRCP08 as well as in the proof of the author's result about the probability of being synchronizable for a random automaton \mbox{%DIFAUXCMD RandSynch .
We prove the exact asymptotic 1- \left({\frac{2\pi{3}-827{288\pi}}}+o(1)\right)/ \sqrt{n for the probability that the underlying graph of a random mapping of n elements possesses a unique highest tree. The property of having a unique highest tree turned out to be crucial in the solution of the famous Road Coloring Problem as well as the generalization of this property in the proof of the author's result about the probability of being synchronizable for a random automaton .
[ { "type": "D", "before": "\\Theta(\\frac{1", "after": null, "start_char_pos": 33, "end_char_pos": 47 }, { "type": "A", "before": null, "after": "\\frac{2\\pi", "start_char_pos": 55, "end_char_pos": 55 }, { "type": "A", "before": null, "after": "+o(1)", "start_char_pos": 72, "end_char_pos": 72 }, { "type": "A", "before": null, "after": "/", "start_char_pos": 79, "end_char_pos": 79 }, { "type": "A", "before": null, "after": "\\sqrt{n", "start_char_pos": 80, "end_char_pos": 80 }, { "type": "D", "before": "\\mbox{%DIFAUXCMD TRRCP08", "after": null, "start_char_pos": 316, "end_char_pos": 340 }, { "type": "A", "before": null, "after": "the generalization of this property", "start_char_pos": 352, "end_char_pos": 352 }, { "type": "D", "before": "\\mbox{%DIFAUXCMD RandSynch", "after": null, "start_char_pos": 458, "end_char_pos": 484 } ]
[ 0, 193 ]
1504.04774
1
In this paper we study time-consistent risk measures for returns that are given by a GARCH(1,1) model. We present a construction of risk measures based on their static counterparts that overcomes the lack of time-consistency. We then study in detail our construction for the risk measures Value-at-Risk (VaR) and Average Value-at-Risk (AVaR). While in the VaR case we can derive an analytical formula for its time-consistent counterpart, in the AVaR case we derive lower and upper bounds to its time-consistent version. Furthermore, we incorporate techniques from Extreme Value Theory (EVT) to allow for a more tail-geared analysis of the corresponding risk measures. We conclude with an application of our results to stock prices to investigate the applicability of our results .
In this paper we study time-consistent risk measures for returns that are given by a GARCH(1,1) model. We present a construction of risk measures based on their static counterparts that overcomes the lack of time-consistency. We then study in detail our construction for the risk measures Value-at-Risk (VaR) and Average Value-at-Risk (AVaR). While in the VaR case we can derive an analytical formula for its time-consistent counterpart, in the AVaR case we derive lower and upper bounds to its time-consistent version. Furthermore, we incorporate techniques from Extreme Value Theory (EVT) to allow for a more tail-geared statistical analysis of the corresponding risk measures. We conclude with an application of our results to a data set of stock prices .
[ { "type": "A", "before": null, "after": "statistical", "start_char_pos": 623, "end_char_pos": 623 }, { "type": "R", "before": "stock prices to investigate the applicability of our results", "after": "a data set of stock prices", "start_char_pos": 719, "end_char_pos": 779 } ]
[ 0, 102, 225, 342, 519, 668 ]
1504.05470
1
Photosynthesis -- the conversion of sunlight to chemical energy -- is fundamental for supporting life on our planet. Despite its importance, the physical principles that underpin the primary steps of photosynthesis , from photon absorption to electronic charge separation, remain to be understood in full. Previously, electronic coherence within tightly-packed light-harvesting (LH) units or within individual reaction centers (RCs) has been recognized as an important ingredient for a complete understanding of the excitation energy transfer dynamics. However, the electronic coherence across RC and LH units has been consistently neglected as it does not play a significant role during these relatively slow transfer processes . Here, we turn our attention to the absorption process, which occurs on much shorter timescales. We demonstrate that the - often overlooked - spatially extended but short-lived excitonic delocalization across RC and LH units plays a relevant role in general photosynthetic systems, as it causes a redistribution of direct absorption towards the charge separation unit. Using the complete core complex of Rhodospirillum rubrum, we verify experimentally an 80 \% increase in the direct optical absorption of the RC in situ as compared to isolated RCs. Numerical calculations reveal that similar enhancements can be expected for a wide variety of photosynthetic units in both plant and bacterial systems, suggesting that this mechanism is conserved across species and providing a clear new design principle for light-harvesting nanostructures
The early steps of photosynthesis involve the photo-excitation of reaction centres (RCs) and light-harvesting (LH) units . Here, we show that the --historically overlooked-- excitonic delocalisation across RC and LH pigments results in a redistribution of dipole strengths that benefits the absorption cross section of the optical bands associated with the RC of several species. While we prove that this redistribution is robust to the microscopic details of the dephasing between these units in the purple bacterium Rhodospirillum rubrum, we are able to show that the redistribution witnesses a more fragile, but persistent, coherent population dynamics which directs excitations from the LH towards the RC units under incoherent illumination and physiological conditions. Stochastic optimisation allows us to delineate clear guidelines and develop simple analytic expressions, in order to achieve directed coherent population dynamics in artificial nano-structures.
[ { "type": "R", "before": "Photosynthesis -- the conversion of sunlight to chemical energy -- is fundamental for supporting life on our planet. Despite its importance, the physical principles that underpin the primary", "after": "The early", "start_char_pos": 0, "end_char_pos": 190 }, { "type": "R", "before": ", from photon absorption to electronic charge separation, remain to be understood in full. Previously, electronic coherence within tightly-packed light-harvesting (LH) units or within individual reaction centers", "after": "involve the photo-excitation of reaction centres", "start_char_pos": 215, "end_char_pos": 426 }, { "type": "R", "before": "has been recognized as an important ingredient for a complete understanding of the excitation energy transfer dynamics. However, the electronic coherence across RC and LH units has been consistently neglected as it does not play a significant role during these relatively slow transfer processes", "after": "and light-harvesting (LH) units", "start_char_pos": 433, "end_char_pos": 728 }, { "type": "R", "before": "turn our attention to the absorption process, which occurs on much shorter timescales. We demonstrate that the - often overlooked - spatially extended but short-lived excitonic delocalization", "after": "show that the --historically overlooked-- excitonic delocalisation", "start_char_pos": 740, "end_char_pos": 931 }, { "type": "R", "before": "units plays a relevant role in general photosynthetic systems, as it causes", "after": "pigments results in", "start_char_pos": 949, "end_char_pos": 1024 }, { "type": "R", "before": "direct absorption towards the charge separation unit. Using the complete core complex of Rhodospirillum rubrum, we verify experimentally an 80 \\% increase in the direct optical absorption of the RC in situ as compared to isolated RCs. Numerical calculations reveal that similar enhancements can be expected for a wide variety of photosynthetic units in both plant and bacterial systems, suggesting that this mechanism is conserved across species and providing a clear new design principle for light-harvesting nanostructures", "after": "dipole strengths that benefits the absorption cross section of the optical bands associated with the RC of several species. While we prove that this redistribution is robust to the microscopic details of the dephasing between these units in the purple bacterium Rhodospirillum rubrum, we are able to show that the redistribution witnesses a more fragile, but persistent, coherent population dynamics which directs excitations from the LH towards the RC units under incoherent illumination and physiological conditions. Stochastic optimisation allows us to delineate clear guidelines and develop simple analytic expressions, in order to achieve directed coherent population dynamics in artificial nano-structures.", "start_char_pos": 1045, "end_char_pos": 1569 } ]
[ 0, 116, 305, 552, 730, 826, 1098, 1279 ]
1504.06045
1
This paper proposes a control theoretic framework to model and analyze the URLanized pattern formation of molecular concentrations in biomolecular communication networks, where bio-nanomachines, or biological cells, communicate each other using cell-to-cell communication mechanism mediated by a diffusible signaling molecule . We first introduce a feedback model representation of the reaction-diffusion dynamics of biomolecular communication networks. A systematic local stability/instability analysis tool is then provided based on the root locus of the feedback system. Using the instability analysis , we analytically derive the conditions for the URLanized spatial pattern formation, or Turing pattern formation, of the bio-nanomachines. The theoretical results are demonstrated on a novel biochemical circuit called activator-repressor-diffuser system , and the Turing pattern formation is numerically confirmed. Finally, we show that the activator-repressor-diffuser system is a minimum biochemical circuit that admits URLanized patterns in biomolecular communication networks .
This paper proposes a control theoretic framework to model and analyze the URLanized pattern formation of molecular concentrations in biomolecular communication networks, emerging applications in synthetic biology. In biomolecular communication networks, bio-nanomachines, or biological cells, communicate with each other using a cell-to-cell communication mechanism mediated by a diffusible signaling molecule , thereby the dynamics of molecular concentrations are approximately modeled as a reaction-diffusion system with a single diffuser . We first introduce a feedback model representation of the reaction-diffusion system and provide a systematic local stability/instability analysis tool using the root locus of the feedback system. The instability analysis then allows us to analytically derive the conditions for the URLanized spatial pattern formation, or Turing pattern formation, of the bio-nanomachines. We propose a novel synthetic biocircuit motif called activator-repressor-diffuser system and show that it is one of the minimum biomolecular circuits that admit URLanized patterns over cell population .
[ { "type": "R", "before": "where", "after": "emerging applications in synthetic biology. In biomolecular communication networks,", "start_char_pos": 171, "end_char_pos": 176 }, { "type": "A", "before": null, "after": "with", "start_char_pos": 228, "end_char_pos": 228 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 246, "end_char_pos": 246 }, { "type": "A", "before": null, "after": ", thereby the dynamics of molecular concentrations are approximately modeled as a reaction-diffusion system with a single diffuser", "start_char_pos": 328, "end_char_pos": 328 }, { "type": "R", "before": "dynamics of biomolecular communication networks. A", "after": "system and provide a", "start_char_pos": 408, "end_char_pos": 458 }, { "type": "R", "before": "is then provided based on", "after": "using", "start_char_pos": 512, "end_char_pos": 537 }, { "type": "R", "before": "Using the instability analysis , we", "after": "The instability analysis then allows us to", "start_char_pos": 577, "end_char_pos": 612 }, { "type": "R", "before": "The theoretical results are demonstrated on a novel biochemical circuit", "after": "We propose a novel synthetic biocircuit motif", "start_char_pos": 747, "end_char_pos": 818 }, { "type": "R", "before": ", and the Turing pattern formation is numerically confirmed. Finally, we show that the activator-repressor-diffuser system is a minimum biochemical circuit that admits URLanized patterns in biomolecular communication networks", "after": "and show that it is one of the minimum biomolecular circuits that admit URLanized patterns over cell population", "start_char_pos": 862, "end_char_pos": 1087 } ]
[ 0, 330, 456, 576, 746, 922 ]
1504.06045
2
This paper proposes a control theoretic framework to model and analyze the URLanized pattern formation of molecular concentrations in biomolecular communication networks, emerging applications in synthetic biology. In biomolecular communication networks, bio-nanomachines , or biological cells, communicate with each other using a cell-to-cell communication mechanism mediated by a diffusible signaling molecule, thereby the dynamics of molecular concentrations are approximately modeled as a reaction-diffusion system with a single diffuser. We first introduce a feedback model representation of the reaction-diffusion system and provide a systematic local stability/instability analysis tool using the root locus of the feedback system. The instability analysis then allows us to analytically derive the conditions for the URLanized spatial pattern formation, or Turing pattern formation, of the bio-nanomachines . We propose a novel synthetic biocircuit motif called activator-repressor-diffuser system and show that it is one of the minimum biomolecular circuits that admit URLanized patterns over cell population.
This paper proposes a control theoretic framework to model and analyze the URLanized pattern formation of molecular concentrations in biomolecular communication networks, emerging applications in synthetic biology. In biomolecular communication networks, bionanomachines , or biological cells, communicate with each other using a cell-to-cell communication mechanism mediated by a diffusible signaling molecule, thereby the dynamics of molecular concentrations are approximately modeled as a reaction-diffusion system with a single diffuser. We first introduce a feedback model representation of the reaction-diffusion system and provide a systematic local stability/instability analysis tool using the root locus of the feedback system. The instability analysis then allows us to analytically derive the conditions for the URLanized spatial pattern formation, or Turing pattern formation, of the bionanomachines . We propose a novel synthetic biocircuit motif called activator-repressor-diffuser system and show that it is one of the minimum biomolecular circuits that admit URLanized patterns over cell population.
[ { "type": "R", "before": "bio-nanomachines", "after": "bionanomachines", "start_char_pos": 255, "end_char_pos": 271 }, { "type": "R", "before": "bio-nanomachines", "after": "bionanomachines", "start_char_pos": 898, "end_char_pos": 914 } ]
[ 0, 214, 542, 738, 916 ]
1504.06447
1
We present an analytically solvable model for self-assembly of a molecular complex on a filament. The process is driven by a seed molecule that undergoes facilitated diffusion, which is a search strategy that combines diffusion in three-dimensions and one-dimension. Our study is motivated by single molecule level observations revealing the dynamics of transcription factors that bind to the DNA at early stages of transcription. We calculate the probability that a complex made up of a given number of molecules is completely formed, as well as the distribution of completion times, upon the binding of a seed molecule at a target site on the filament . We compare two different mechanisms of assembly where molecules bind in sequential and random order. Our results indicate that while the probability of completion is greater for random binding, the completion time scales exponentially with the size of the complex; whereas it scales as a power-law or slower for sequential binding, asymptotically. Furthermore, we provide model predictions for the dissociation and residence times of the seed molecule, which are observables accessible in single molecule tracking experiments.
We present an analytically solvable model for self-assembly of a molecular complex on a filament. The process is driven by a seed molecule that undergoes facilitated diffusion, which is a search strategy that combines diffusion in three-dimensions and one-dimension. Our study is motivated by single molecule level observations revealing the dynamics of transcription factors that bind to the DNA at early stages of transcription. We calculate the probability that a complex made up of a given number of molecules is completely formed, as well as the distribution of completion times, upon the binding of a seed molecule at a target site on the filament (without explicitly modeling the three-dimensional diffusion that precedes binding) . We compare two different mechanisms of assembly where molecules bind in sequential and random order. Our results indicate that while the probability of completion is greater for random binding, the completion time scales exponentially with the size of the complex; in contrast, it scales as a power-law or slower for sequential binding, asymptotically. Furthermore, we provide model predictions for the dissociation and residence times of the seed molecule, which are observables accessible in single molecule tracking experiments.
[ { "type": "A", "before": null, "after": "(without explicitly modeling the three-dimensional diffusion that precedes binding)", "start_char_pos": 654, "end_char_pos": 654 }, { "type": "R", "before": "whereas", "after": "in contrast,", "start_char_pos": 922, "end_char_pos": 929 } ]
[ 0, 97, 266, 430, 656, 757, 921, 1004 ]
1504.06789
1
In this paper we offer a novel type of network model , which is capable of capturing the precise structure of a financial market based, for example, on empirical findings. With the attached stochastic framework it is further possible to study how an arbitrary network structure and its expected counterparty credit risk are analytically related to each other. This allows us, for the first time, to model and to analytically analyse the precise structure of a financial market . It further enables us to draw implications for the study of systemic risk. We apply the powerful theory of characteristic functions and Hilbert transforms , which have not been used in this combination before . We then characterise Eulerian digraphs as distinguished exposure structures and we show that considering the precise network structures is crucial for the study of systemic risk. The introduced network model is then applied to study the features of an over-the-counter and a centrally cleared market. We also give a more general answer to the question of whether it is more advantageous for the overall counterparty credit risk to clear via a central counterparty or classically bilateral between the two involved counterparties. We then show that the exact market structure is a crucial factor in answering the raised question.
In this paper we offer a novel type of network model which can capture the precise structure of a financial market based, for example, on empirical findings. With the attached stochastic framework it is further possible to study how an arbitrary network structure and its expected counterparty credit risk are analytically related to each other. This allows us, for the first time, to model the precise structure of an arbitrary financial market and to derive the corresponding expected exposure in a closed-form expression . It further enables us to draw implications for the study of systemic risk. We apply the powerful theory of characteristic functions and Hilbert transforms . The latter concept is used to express the characteristic function (c.f.) of the random variable (r.v.) \max(Y, 0) in terms of the c.f. of the r.v. Y. The present paper applies this concept for the first time in mathematical finance . We then characterise Eulerian digraphs as distinguished exposure structures and show that considering the precise network structures is crucial for the study of systemic risk. The introduced network model is then applied to study the features of an over-the-counter and a centrally cleared market. We also give a more general answer to the question of whether it is more advantageous for the overall counterparty credit risk to clear via a central counterparty or classically bilateral between the two involved counterparties. We then show that the exact market structure is a crucial factor in answering the raised question.
[ { "type": "R", "before": ", which is capable of capturing", "after": "which can capture", "start_char_pos": 53, "end_char_pos": 84 }, { "type": "D", "before": "and to analytically analyse", "after": null, "start_char_pos": 405, "end_char_pos": 432 }, { "type": "R", "before": "a financial market", "after": "an arbitrary financial market and to derive the corresponding expected exposure in a closed-form expression", "start_char_pos": 458, "end_char_pos": 476 }, { "type": "R", "before": ", which have not been used in this combination before", "after": ". The latter concept is used to express the characteristic function (c.f.) of the random variable (r.v.) \\max(Y, 0) in terms of the c.f. of the r.v. Y. The present paper applies this concept for the first time in mathematical finance", "start_char_pos": 634, "end_char_pos": 687 }, { "type": "D", "before": "we", "after": null, "start_char_pos": 770, "end_char_pos": 772 } ]
[ 0, 171, 359, 478, 553, 689, 868, 990, 1219 ]
1504.07152
1
Systemic risk in banking systems is a crucial issue that remains to be completely addressed . In our toy model, banks are exposed to two sources of risks, namely, market risk from their investments in assets external to the system and credit risk from their lending in the interbank market. By and large, both risks increase during severe financial turmoil. Under this scenario, the paper shows how both the individual and the systemic default tend to coincide.
Systemic risk in banking systems remains a crucial issue that it has not been completely understood . In our toy model, banks are exposed to two sources of risks, namely, market risk from their investments in assets external to the banking system and credit risk from their lending in the interbank market. By and large, both risks increase during severe financial turmoil. Under this scenario, the paper shows the conditions under which both the individual and the systemic default tend to coincide.
[ { "type": "R", "before": "is", "after": "remains", "start_char_pos": 33, "end_char_pos": 35 }, { "type": "R", "before": "remains to be completely addressed", "after": "it has not been completely understood", "start_char_pos": 57, "end_char_pos": 91 }, { "type": "A", "before": null, "after": "banking", "start_char_pos": 224, "end_char_pos": 224 }, { "type": "R", "before": "how", "after": "the conditions under which", "start_char_pos": 396, "end_char_pos": 399 } ]
[ 0, 93, 291, 358 ]
1505.00507
1
Precisely and accurately locating point objects is a long-standing common thread in science. Super-resolved imaging of single molecules has revolutionized our view of quasi-static nanostructures . A wide-field approach based on localizing individual fluorophores has emerged as a versatile method to surpass the standard resolution limit. In those techniques, the super-resolution is realized by sparse photoactivation and localization together with the statistical analysis based on point spread functions. Nevertheless, the slow temporal resolution of super-resolved imaging severely restricts the utility to the study of live-cell phenomena. Clearly, a major breakthrough to observe fast, nanoscale dynamics needs to be made. Here we present a super-resolved imaging method that achieves the theoretical-limit time resolution . By invoking information theory, we can achieve the robust localization of overlapped light emitters at an order of magnitude faster speed than the conventional super-resolution microscopy . Our method thus provides a general way to uncover hidden structures below the diffraction limit and should have a wide range of applications in all disciplines of science and technology .
We present a method that can simultaneously locate positions of overlapped multi-emitters at the theoretical-limit precision. We derive a set of simple equations whose solution gives the maximum likelihood estimator of multi-emitter positions. We compare the performance of our simultaneous localization analysis with the conventional single-molecule analysis for simulated images and show that our method can improve the time-resolution of superresolution microscopy an order of magnitude. In particular, we derive the information-theoretical bound on time resolution of localization-based superresolution microscopy and demonstrate that the bound can be closely attained by our analysis .
[ { "type": "D", "before": "Precisely and accurately locating point objects is a long-standing common thread in science. Super-resolved imaging of single molecules has revolutionized our view of quasi-static nanostructures", "after": null, "start_char_pos": 0, "end_char_pos": 194 }, { "type": "R", "before": ". A wide-field approach based on localizing individual fluorophores has emerged as a versatile method to surpass the standard resolution limit. In those techniques, the super-resolution is realized by sparse photoactivation and localization together with the statistical analysis based on point spread functions. Nevertheless, the slow temporal resolution of super-resolved imaging severely restricts the utility to the study of live-cell phenomena. Clearly, a major breakthrough to observe fast, nanoscale dynamics needs to be made. Here we present a super-resolved imaging method that achieves the theoretical-limit time resolution . By invoking information theory, we can achieve the robust localization of overlapped light emitters at an order of magnitude faster speed than the conventional super-resolution microscopy . Our method thus provides a general way to uncover hidden structures below the diffraction limit and should have a wide range of applications in all disciplines of science and technology", "after": "We present a method that can simultaneously locate positions of overlapped multi-emitters at the theoretical-limit precision. We derive a set of simple equations whose solution gives the maximum likelihood estimator of multi-emitter positions. We compare the performance of our simultaneous localization analysis with the conventional single-molecule analysis for simulated images and show that our method can improve the time-resolution of superresolution microscopy an order of magnitude. In particular, we derive the information-theoretical bound on time resolution of localization-based superresolution microscopy and demonstrate that the bound can be closely attained by our analysis", "start_char_pos": 195, "end_char_pos": 1206 } ]
[ 0, 92, 196, 338, 507, 644, 728, 830, 1020 ]
1505.01333
1
Sharpe ratios are much used in finance, yet cannot be measured directly because price returnsare non-Gaussian . On the other hand, the number of records of a discrete-time random walk in a given time-interval follows a Gaussian distribution provided that its increment distribution has finite variance. As as consequence, record statistics of uncorrelated, biased, random walks provide an attractive new estimator of Sharpe ratios . First, I derive an approximate expression of the expected number of price records in a given time interval when the increments follow Student's t distribution with tail exponent equal to 4 in the limit of vanishing Sharpe ratios . Remarkably, this expression explicitly links the expected record numbers to Sharpe ratios and and suggests to estimate the average Sharpe ratio from record statistics. Numerically, the asymptotic efficiency of a permutation estimator of Sharpe ratios based on record statistics is several times larger than that of the t-statistics for uncorrelated returnswith a Student's t distribution with tail exponent of 4.
Estimating Sharpe ratios requires the computation of the moments of price returns, which is problematic because the latter are heavy-tailed . On the other hand, the total duration of drawdowns of a time series, or equivalently the number of its upper price records, follows a Gaussian distribution that depends on its true Sharpe ratio. Reversely, this suggests an estimator of Sharpe ratios based the total duration of drawdowns. Its efficiency is several times larger than moment-based estimators for symmetric heavy-tailed price returns. Such type of data also leads mechanically to a larger number of expected price records, hence, to a smaller duration of drawdowns. Thus, for a given number of records, the absolute value of the estimated Sharpe ratios is smaller when price returns are heavy tailed. This means that moment-based estimators are prone to overestimate true Sharpe ratios in difficult market conditions, which implies that using them for investment decisions may amplify large price returns .
[ { "type": "R", "before": "Sharpe ratios are much used in finance, yet cannot be measured directly because price returnsare non-Gaussian", "after": "Estimating Sharpe ratios requires the computation of the moments of price returns, which is problematic because the latter are heavy-tailed", "start_char_pos": 0, "end_char_pos": 109 }, { "type": "R", "before": "number of records of a discrete-time random walk in a given time-interval", "after": "total duration of drawdowns of a time series, or equivalently the number of its upper price records,", "start_char_pos": 135, "end_char_pos": 208 }, { "type": "R", "before": "provided that its increment distribution has finite variance. As as consequence, record statistics of uncorrelated, biased, random walks provide an attractive new", "after": "that depends on its true Sharpe ratio. Reversely, this suggests an", "start_char_pos": 241, "end_char_pos": 403 }, { "type": "A", "before": null, "after": "based the total duration of drawdowns. Its efficiency is several times larger than moment-based estimators for symmetric heavy-tailed price returns. Such type of data also leads mechanically to a larger number of expected price records, hence, to a smaller duration of drawdowns. Thus, for a given number of records, the absolute value of the estimated Sharpe ratios is smaller when price returns are heavy tailed. This means that moment-based estimators are prone to overestimate true Sharpe ratios in difficult market conditions, which implies that using them for investment decisions may amplify large price returns", "start_char_pos": 431, "end_char_pos": 431 }, { "type": "D", "before": "First, I derive an approximate expression of the expected number of price records in a given time interval when the increments follow Student's t distribution with tail exponent equal to 4 in the limit of vanishing Sharpe ratios . Remarkably, this expression explicitly links the expected record numbers to Sharpe ratios and and suggests to estimate the average Sharpe ratio from record statistics. Numerically, the asymptotic efficiency of a permutation estimator of Sharpe ratios based on record statistics is several times larger than that of the t-statistics for uncorrelated returnswith a Student's t distribution with tail exponent of 4.", "after": null, "start_char_pos": 434, "end_char_pos": 1077 } ]
[ 0, 111, 302, 433, 832 ]
1505.01333
2
Estimating Sharpe ratios requires the computation of the moments of price returns, which is problematic because the latter are heavy-tailed. On the other hand, the total duration of drawdowns of a time series, or equivalently the number of its upper price records, follows a Gaussian distribution that depends on its true Sharpe ratio. Reversely, this suggests an estimator of Sharpe ratios based the total duration of drawdowns. Its efficiency is several times larger than moment-based estimators for symmetric heavy-tailed price returns. Such type of data also leads mechanically to a larger number of expected price records, hence, to a smaller duration of drawdowns. Thus, for a given number of records, the absolute value of the estimated Sharpe ratios is smaller when price returnsare heavy tailed. This means that moment-based estimators are prone to overestimate true Sharpe ratios in difficult market conditions , which implies that using them for investment decisions may amplify large price returns .
The total duration of drawdowns is shown to be an efficient and robust estimator of Sharpe ratios , especially for heavy-tailed price returns. Because such type of data mechanically reduces the expected total drawdown duration with respect to Gaussian price returns, moment-based estimators are prone to overestimate true Sharpe ratios in leptokurtic market conditions and may further amplify large price returns when they are used by trend-followers .
[ { "type": "R", "before": "Estimating Sharpe ratios requires the computation of the moments of price returns, which is problematic because the latter are heavy-tailed. On the other hand, the", "after": "The", "start_char_pos": 0, "end_char_pos": 163 }, { "type": "R", "before": "of a time series, or equivalently the number of its upper price records, follows a Gaussian distribution that depends on its true Sharpe ratio. Reversely, this suggests an", "after": "is shown to be an efficient and robust", "start_char_pos": 192, "end_char_pos": 363 }, { "type": "R", "before": "based the total duration of drawdowns. Its efficiency is several times larger than moment-based estimators for symmetric", "after": ", especially for", "start_char_pos": 391, "end_char_pos": 511 }, { "type": "R", "before": "Such", "after": "Because such", "start_char_pos": 540, "end_char_pos": 544 }, { "type": "R", "before": "also leads mechanically to a larger number of expected price records, hence, to a smaller duration of drawdowns. Thus, for a given number of records, the absolute value of the estimated Sharpe ratios is smaller when price returnsare heavy tailed. This means that", "after": "mechanically reduces the expected total drawdown duration with respect to Gaussian price returns,", "start_char_pos": 558, "end_char_pos": 820 }, { "type": "R", "before": "difficult market conditions , which implies that using them for investment decisions may", "after": "leptokurtic market conditions and may further", "start_char_pos": 893, "end_char_pos": 981 }, { "type": "A", "before": null, "after": "when they are used by trend-followers", "start_char_pos": 1010, "end_char_pos": 1010 } ]
[ 0, 140, 335, 429, 539, 670, 804 ]
1505.01333
3
The total duration of drawdowns is shown to be an efficient and robust estimator of Sharpe ratios , especially for heavy-tailed price returns. Because such type of data mechanically reduces the expected total drawdown duration with respect to Gaussian price returns , moment-based estimators are prone to overestimate true Sharpe ratios in leptokurtic market conditions and may further amplify large price returns when they are used by trend-followers .
The total duration of drawdowns is shown to be an efficient and robust estimator of Sharpe ratios . Its properties are distribution-dependent: the expected total drawdown duration is smaller for heavy-tailed returns than for Gaussian ones. As a consequence, in leptokurtic market conditions , the new estimator yields smaller Sharpe ratios than moment-based estimators, which implies that the standard estimator overestimates the information content of prices when the return distribution has heavy tails. Accordingly, using the standard estimator for taking trend-following decisions enhances large price fluctuations .
[ { "type": "R", "before": ", especially for heavy-tailed price returns. Because such type of data mechanically reduces", "after": ". Its properties are distribution-dependent:", "start_char_pos": 98, "end_char_pos": 189 }, { "type": "R", "before": "with respect to Gaussian price returns , moment-based estimators are prone to overestimate true Sharpe ratios", "after": "is smaller for heavy-tailed returns than for Gaussian ones. As a consequence,", "start_char_pos": 227, "end_char_pos": 336 }, { "type": "R", "before": "and may further amplify large price returns when they are used by trend-followers", "after": ", the new estimator yields smaller Sharpe ratios than moment-based estimators, which implies that the standard estimator overestimates the information content of prices when the return distribution has heavy tails. Accordingly, using the standard estimator for taking trend-following decisions enhances large price fluctuations", "start_char_pos": 370, "end_char_pos": 451 } ]
[ 0, 142 ]
1505.01333
4
The total duration of drawdowns is shown to be an efficient and robust estimator of Sharpe ratios . Its properties are distribution-dependent: the expected total drawdown duration is smaller for heavy-tailed returns than for Gaussian ones. As a consequence, in leptokurtic market conditions, the new estimator yields smaller Sharpe ratios than moment-based estimators, which implies that the standard estimator overestimates the information content of prices when the return distribution has heavy tails. Accordingly, using the standard estimator for taking trend-following decisions enhances large price fluctuations .
The total duration of drawdowns is shown to provide an moment-free, unbiased, efficient and robust estimator of Sharpe ratios both for Gaussian and heavy-tailed price returns. We then use this quantity to infer an analytic expression of the the bias of moment-based Sharpe ratio estimators and the tail exponent of the distribution of heavy-tailed price returns. The heterogeneity of tail exponents at any given time among assets implies that our new methods yields significantly different asset rankings than moment-based methods, especially in periods large volatility. This is fully confirmed by using 20 years of historical data on 3449 liquid US equities .
[ { "type": "R", "before": "be an", "after": "provide an moment-free, unbiased,", "start_char_pos": 44, "end_char_pos": 49 }, { "type": "R", "before": ". Its properties are distribution-dependent: the expected total drawdown duration is smaller for", "after": "both for Gaussian and", "start_char_pos": 98, "end_char_pos": 194 }, { "type": "R", "before": "returns than for Gaussian ones. As a consequence, in leptokurtic market conditions, the new estimator yields smaller Sharpe ratios than", "after": "price returns. We then use this quantity to infer an analytic expression of the the bias of", "start_char_pos": 208, "end_char_pos": 343 }, { "type": "R", "before": "estimators, which implies that the standard estimator overestimates the information content of prices when the return distribution has heavy tails. Accordingly, using the standard estimator for taking trend-following decisions enhances large price fluctuations", "after": "Sharpe ratio estimators and the tail exponent of the distribution of heavy-tailed price returns. The heterogeneity of tail exponents at any given time among assets implies that our new methods yields significantly different asset rankings than moment-based methods, especially in periods large volatility. This is fully confirmed by using 20 years of historical data on 3449 liquid US equities", "start_char_pos": 357, "end_char_pos": 617 } ]
[ 0, 99, 239, 504 ]
1505.01333
5
The total duration of drawdowns is shown to provide an moment-free, unbiased, efficient and robust estimator of Sharpe ratios both for Gaussian and heavy-tailed price returns. We then use this quantity to infer an analytic expression of the the bias of moment-based Sharpe ratio estimators and the tail exponent of the distribution of heavy-tailed price returns . The heterogeneity of tail exponents at any given time among assets implies that our new methods yields significantly different asset rankings than moment-based methods, especially in periods large volatility. This is fully confirmed by using 20 years of historical data on 3449 liquid US equities.
The total duration of drawdowns is shown to provide a moment-free, unbiased, efficient and robust estimator of Sharpe ratios both for Gaussian and heavy-tailed price returns. We then use this quantity to infer an analytic expression of the bias of moment-based Sharpe ratio estimators as a function of the return distribution tail exponent . The heterogeneity of tail exponents at any given time among assets implies that our new method yields significantly different asset rankings than those of moment-based methods, especially in periods large volatility. This is fully confirmed by using 20 years of historical data on 3449 liquid US equities.
[ { "type": "R", "before": "an", "after": "a", "start_char_pos": 52, "end_char_pos": 54 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 241, "end_char_pos": 244 }, { "type": "R", "before": "and the tail exponent of the distribution of heavy-tailed price returns", "after": "as a function of the return distribution tail exponent", "start_char_pos": 290, "end_char_pos": 361 }, { "type": "R", "before": "methods", "after": "method", "start_char_pos": 452, "end_char_pos": 459 }, { "type": "A", "before": null, "after": "those of", "start_char_pos": 511, "end_char_pos": 511 } ]
[ 0, 175, 363, 573 ]
1505.02100
1
FPGA technology can offer significantly higher performance at much lower power than is available from CPUs and GPUs in many computational problems. Unfortunately, programming for FPGA (using hardware description languages, HDL) is a difficult and not-trivial task and is not intuitive for C/C++/Java programmers. To bring the gap between programming effectiveness and difficulty the High Level Synthesis (HLS) approach is promoting by many FPGA vendors. Nowadays, time-intensive calculations are mainly performed on GPU/CPU architectures, but can also be successfully performed using HLS approach. In the paper we implement a selected numerical algorithm (bandwidth selection for kernel density estimators, KDE) using HLS and show techniques which were used to optimise the final FPGA implementation .
FPGA technology can offer significantly hi\-gher performance at much lower power consumption than is available from CPUs and GPUs in many computational problems. Unfortunately, programming for FPGA (using ha\-rdware description languages, HDL) is a difficult and not-trivial task and is not intuitive for C/C++/Java programmers. To bring the gap between programming effectiveness and difficulty the High Level Synthesis (HLS) approach is promoting by main FPGA vendors. Nowadays, time-intensive calculations are mainly performed on GPU/CPU architectures, but can also be successfully performed using HLS approach. In the paper we implement a bandwidth selection algorithm for kernel density estimation ( KDE) using HLS and show techniques which were used to optimize the final FPGA implementation . We are also going to show that FPGA speedups, comparing to highly optimized CPU and GPU implementations, are quite substantial. Moreover, power consumption for FPGA devices is usually much less than typical power consumption of the present CPUs and GPUs .
[ { "type": "R", "before": "higher", "after": "hi\\-gher", "start_char_pos": 40, "end_char_pos": 46 }, { "type": "A", "before": null, "after": "consumption", "start_char_pos": 79, "end_char_pos": 79 }, { "type": "R", "before": "hardware", "after": "ha\\-rdware", "start_char_pos": 192, "end_char_pos": 200 }, { "type": "R", "before": "many", "after": "main", "start_char_pos": 436, "end_char_pos": 440 }, { "type": "R", "before": "selected numerical algorithm (bandwidth selection", "after": "bandwidth selection algorithm", "start_char_pos": 627, "end_char_pos": 676 }, { "type": "R", "before": "estimators,", "after": "estimation (", "start_char_pos": 696, "end_char_pos": 707 }, { "type": "R", "before": "optimise", "after": "optimize", "start_char_pos": 762, "end_char_pos": 770 }, { "type": "A", "before": null, "after": ". We are also going to show that FPGA speedups, comparing to highly optimized CPU and GPU implementations, are quite substantial. Moreover, power consumption for FPGA devices is usually much less than typical power consumption of the present CPUs and GPUs", "start_char_pos": 801, "end_char_pos": 801 } ]
[ 0, 148, 313, 454, 598 ]
1505.02281
1
Numerical challenges inherent in algorithms for computing worst Value-at-Risk in homogeneous portfolios are identified and words of warning concerning their implementation are raised . Furthermore, both conceptual and computational improvements to the Rearrangement Algorithm for approximating worst Value-at-Risk for portfolios with arbitrary marginal loss distributions are provided . In particular, a novel Adaptive Rearrangement Algorithm is introduced and investigated. These algorithms are implemented using the R package qrmtools.
Numerical challenges inherent in algorithms for computing worst Value-at-Risk in homogeneous portfolios are identified and solutions as well as words of warning concerning their implementation are provided . Furthermore, both conceptual and computational improvements to the Rearrangement Algorithm for approximating worst Value-at-Risk for portfolios with arbitrary marginal loss distributions are given . In particular, a novel Adaptive Rearrangement Algorithm is introduced and investigated. These algorithms are implemented using the R package qrmtools.
[ { "type": "A", "before": null, "after": "solutions as well as", "start_char_pos": 123, "end_char_pos": 123 }, { "type": "R", "before": "raised", "after": "provided", "start_char_pos": 177, "end_char_pos": 183 }, { "type": "R", "before": "provided", "after": "given", "start_char_pos": 377, "end_char_pos": 385 } ]
[ 0, 185, 387, 475 ]
1505.02348
1
We propose a complexity-theoretic approach to studying biological networks . We use a simple graph representation of biological networks capturing objects (molecules : DNA, RNA, proteins and chemicals) as nodes, and relations between them as directed and signed (promotional (+) or inhibitory (-)) edges. Based on this model, we formally define the problem of network evolution (NE) and subsequently prove it to be fundamentally hard by means of reduction from the Knapsack problem (KP). Second, for empirical validation, various biological networks of experimentally-validated interactions are compared against randomly generated networks with varying degree distributions. An NE instance is created using a given real or random network. After being reverse-reduced to a KP instance, each NE instance is fed to a KP solver and the average achieved knapsack value-to-weight ratio is recorded from multiple rounds of simulated evolutionary pressure. The results show that biological networks (and synthetic networks of similar degree distribution) achieve the highest ratios as evolutionary pressure increases . The more distant (in degree distribution) a synthetic network is from biological networks the lower its achieved ratio. This reveals how computational intractability has shaped the evolution of biological networks into their current topology . We propose a restatement of the principle of "survival of the fittest" into the more concrete "survival of the computationally efficient" .
A complexity-theoretic approach to studying biological networks is proposed. A simple graph representation is used where molecules ( DNA, RNA, proteins and chemicals) are vertices and relations between them are directed and signed (promotional (+) or inhibitory (-)) edges. Based on this model, the problem of network evolution (NE) is defined formally as an optimization problem and subsequently proven to be fundamentally hard (NP-hard) by means of reduction from the Knapsack problem (KP). Second, for empirical validation, various biological networks of experimentally-validated interactions are compared against randomly generated networks with varying degree distributions. An NE instance is created using a given real or synthetic (random) network. After being reverse-reduced to a KP instance, each NE instance is fed to a KP solver and the average achieved knapsack value-to-weight ratio is recorded from multiple rounds of simulated evolutionary pressure. The results show that biological networks (and synthetic networks of similar degree distribution) achieve the highest ratios at maximal evolutionary pressure and minimal error tolerance conditions . The more distant (in degree distribution) a synthetic network is from biological networks the lower its achieved ratio. The results shed light on how computational intractability has shaped the evolution of biological networks into their current topology .
[ { "type": "R", "before": "We propose a", "after": "A", "start_char_pos": 0, "end_char_pos": 12 }, { "type": "R", "before": ". We use a", "after": "is proposed. A", "start_char_pos": 75, "end_char_pos": 85 }, { "type": "R", "before": "of biological networks capturing objects (molecules :", "after": "is used where molecules (", "start_char_pos": 114, "end_char_pos": 167 }, { "type": "R", "before": "as nodes,", "after": "are vertices", "start_char_pos": 202, "end_char_pos": 211 }, { "type": "R", "before": "as", "after": "are", "start_char_pos": 239, "end_char_pos": 241 }, { "type": "D", "before": "we formally define", "after": null, "start_char_pos": 326, "end_char_pos": 344 }, { "type": "R", "before": "and subsequently prove it", "after": "is defined formally as an optimization problem and subsequently proven", "start_char_pos": 383, "end_char_pos": 408 }, { "type": "A", "before": null, "after": "(NP-hard)", "start_char_pos": 434, "end_char_pos": 434 }, { "type": "R", "before": "random", "after": "synthetic (random)", "start_char_pos": 724, "end_char_pos": 730 }, { "type": "R", "before": "as evolutionary pressure increases", "after": "at maximal evolutionary pressure and minimal error tolerance conditions", "start_char_pos": 1075, "end_char_pos": 1109 }, { "type": "R", "before": "This reveals", "after": "The results shed light on", "start_char_pos": 1232, "end_char_pos": 1244 }, { "type": "D", "before": ". We propose a restatement of the principle of \"survival of the fittest\" into the more concrete \"survival of the computationally efficient\"", "after": null, "start_char_pos": 1354, "end_char_pos": 1493 } ]
[ 0, 76, 304, 488, 675, 739, 949, 1111, 1231, 1355 ]
1505.02431
1
In this paper we consider a variation of the Merton's problem with stochastic volatility and finite time horizon. The corresponded optimal control problem may be reduced to a linear parabolic boundary problem under some assumptions on the underlying process and the utility function. The resulting parabolic PDE is often quite difficult to solve, even when it is linear. In several special cases the explicit solutions were obtained. The present paper contributes to the pool of explicit solutions for stochastic optimal control problems. Our main result is the exact solution to the optimal control problem within the framework of the Heston model.
In this paper we consider a variation of the Merton's problem with added stochastic volatility and finite time horizon. It is known that the corresponding optimal control problem may be reduced to a linear parabolic boundary problem under some assumptions on the underlying process and the utility function. The resulting parabolic PDE is often quite difficult to solve, even when it is linear. The present paper contributes to the pool of explicit solutions for stochastic optimal control problems. Our main result is the exact solution for optimal investment in Heston model.
[ { "type": "A", "before": null, "after": "added", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "R", "before": "The corresponded", "after": "It is known that the corresponding", "start_char_pos": 115, "end_char_pos": 131 }, { "type": "D", "before": "In several special cases the explicit solutions were obtained.", "after": null, "start_char_pos": 372, "end_char_pos": 434 }, { "type": "R", "before": "to the optimal control problem within the framework of the", "after": "for optimal investment in", "start_char_pos": 578, "end_char_pos": 636 } ]
[ 0, 114, 284, 371, 434, 539 ]
1505.02521
1
The identification of low-energy conformers for a given molecule is a fundamental problem in computational chemistry and cheminformatics. We assess here a conformer search that employs a genetic algorithm for sampling the conformational space of molecules. The algorithm is designed to work with first-principles methods, facilitated by the incorporation of local optimization and blacklisting conformers that prevents repeated evaluations of very similar solutions. The aim of the search is not only to find the global minimum, but to predict all conformers within an energy window above the global minimum. The performance of the search strategy is evaluated for a reference data set extracted from a database with amino acid dipeptide conformers obtained by an extensive combined force field and first-principles search .
The identification of low-energy conformers for a given molecule is a fundamental problem in computational chemistry and cheminformatics. We assess here a conformer search that employs a genetic algorithm for sampling the low-energy segment of the conformation space of molecules. The algorithm is designed to work with first-principles methods, facilitated by the incorporation of local optimization and blacklisting conformers to prevent repeated evaluations of very similar solutions. The aim of the search is not only to find the global minimum, but to predict all conformers within an energy window above the global minimum. The performance of the search strategy is : (i) evaluated for a reference data set extracted from a database with amino acid dipeptide conformers obtained by an extensive combined force field and first-principles search and (ii) compared to the performance of a systematic search and a random conformer generator for the example of a drug-like ligand with 43 atoms, 8 rotatable bonds and 1 cis/trans bond .
[ { "type": "R", "before": "conformational", "after": "low-energy segment of the conformation", "start_char_pos": 222, "end_char_pos": 236 }, { "type": "R", "before": "that prevents", "after": "to prevent", "start_char_pos": 405, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ": (i)", "start_char_pos": 651, "end_char_pos": 651 }, { "type": "A", "before": null, "after": "and (ii) compared to the performance of a systematic search and a random conformer generator for the example of a drug-like ligand with 43 atoms, 8 rotatable bonds and 1 cis/trans bond", "start_char_pos": 824, "end_char_pos": 824 } ]
[ 0, 137, 256, 466, 608 ]
1505.02627
1
We study the problem of option replication under constant proportional transaction costs in models where stochastic volatility and jumps are combined to capture market's important features. In particular, transaction costs can be approximately compensated applying the Leland adjusting volatility principle and asymptotic property of the hedging error due to discrete readjustments is characterized. We show that jump risk is approximately eliminated and the results established in continuous diffusion models are recovered. The study also confirms that for constant trading cost rate, the results established by Kabanov and Safarian (1997)and Pergamenshchikov (2003) are valid in jump-diffusion models with deterministic volatility using the classical Leland parameter .
We study the problem of option replication under constant proportional transaction costs in models where stochastic volatility and jumps are combined to capture the market's important features. Assuming some mild condition on the jump size distribution we show that transaction costs can be approximately compensated by applying the Leland adjusting volatility principle and the asymptotic property of the hedging error due to discrete readjustments is characterized. In particular, the jump risk can be approximately eliminated and the results established in continuous diffusion models are recovered. The study also confirms that for the case of constant trading cost rate, the approximate results established by Kabanov and Safarian (1997)and by Pergamenschikov (2003) are still valid in jump-diffusion models with deterministic volatility using the classical Leland parameter in Leland (1986) .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 161, "end_char_pos": 161 }, { "type": "R", "before": "In particular,", "after": "Assuming some mild condition on the jump size distribution we show that", "start_char_pos": 191, "end_char_pos": 205 }, { "type": "A", "before": null, "after": "by", "start_char_pos": 257, "end_char_pos": 257 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 313, "end_char_pos": 313 }, { "type": "R", "before": "We show that jump risk is", "after": "In particular, the jump risk can be", "start_char_pos": 403, "end_char_pos": 428 }, { "type": "A", "before": null, "after": "the case of", "start_char_pos": 561, "end_char_pos": 561 }, { "type": "A", "before": null, "after": "approximate", "start_char_pos": 594, "end_char_pos": 594 }, { "type": "R", "before": "Pergamenshchikov", "after": "by Pergamenschikov", "start_char_pos": 649, "end_char_pos": 665 }, { "type": "A", "before": null, "after": "still", "start_char_pos": 677, "end_char_pos": 677 }, { "type": "A", "before": null, "after": "in Leland (1986)", "start_char_pos": 776, "end_char_pos": 776 } ]
[ 0, 190, 402, 527 ]
1505.02644
1
One of the problems faced by a firm that sells certain goods is to determine which is the number of products that must supply to maximize profits . In this article, we give an answer to this problem of economic interest. To solve it we use the theorem "unconscious statistician". The proposed problem is a generalization of the results obtained by Stirzaker and Kupferman where the authors do not present a situation where the sale of a quantity from some goods is constrained by the marketing of another. In addition, the described procedure is simple and can be successfully applied to any number of goods . The obtained results can be easily put into practice.
One of the problems faced by a firm that sells certain commodities is to determine the number of products that it must supply in order to maximize its profit . In this article, the authors give an answer to this problem of economic interest. The proposed problem is a generalization of the results obtained by Stirzaker (Probability and Random Variables: A Beginner's Guide, 1999) and Kupferman (Lecture Notes in Probability, 2009) where the authors do not present a situation where the sale of a quantity from some commodities is constrained by the marketing of another. In addition, the described procedure is simple and can be successfully applied to any number of commodities . The obtained results can be easily put into practice.
[ { "type": "R", "before": "goods", "after": "commodities", "start_char_pos": 55, "end_char_pos": 60 }, { "type": "D", "before": "which is", "after": null, "start_char_pos": 77, "end_char_pos": 85 }, { "type": "R", "before": "must supply to maximize profits", "after": "it must supply in order to maximize its profit", "start_char_pos": 114, "end_char_pos": 145 }, { "type": "R", "before": "we", "after": "the authors", "start_char_pos": 165, "end_char_pos": 167 }, { "type": "D", "before": "To solve it we use the theorem \"unconscious statistician\".", "after": null, "start_char_pos": 221, "end_char_pos": 279 }, { "type": "R", "before": "and Kupferman", "after": "(Probability and Random Variables: A Beginner's Guide, 1999) and Kupferman (Lecture Notes in Probability, 2009)", "start_char_pos": 358, "end_char_pos": 371 }, { "type": "R", "before": "goods", "after": "commodities", "start_char_pos": 456, "end_char_pos": 461 }, { "type": "R", "before": "goods", "after": "commodities", "start_char_pos": 602, "end_char_pos": 607 } ]
[ 0, 147, 220, 279, 505, 609 ]
1505.03374
1
Energy consumption of the software running on a device has become increasingly important as a growing number of devices rely on batteries or other limited sources of power. Of particular interest is constructing a bounded measure of the energy consumption - the maximum energy a program could consume for any input given to it. We explore the effect of different data on the energy consumption of individual instructions, instruction sequences and full programs. The whole program energy consumption of two benchmarks is analysed over random and hand-crafted data , and maximized with genetic algorithms for two embedded processors . We find that the worst case can be predicted from the distribution created by the random data, however, hand-crafted data can often achieve lower energy consumption. A model is constructed that allows the worst case energy for a sequence of instructions to be predicted. This is based on the observation that the transition between instructions is important and thus is not a single energy cost - it is a distribution dependent on the input and output values of the two consecutive instructions. We characterise the transition distributions for several instructions in the AVR instruction set, and show that this gives a useful upper bound on the energy consumption. We explore the effect that the transfer function of the instruction has on the data, and give an example which leads to a bimodal energy distribution. Finally, we conclude that a probabilistic approach is appropriate for estimating the energy consumption of programs .
This paper examines the impact of operand values upon instruction level energy models of embedded processors, to explore whether the requirements for safe worst case energy consumption (WCEC) analysis can be met. WCEC is similar to worst case execution time (WCET) analysis, but seeks to determine whether a task can be completed within an energy budget rather than within a deadline. Existing energy models that underpin such analysis typically use energy measurements from random input data, providing average or otherwise unbounded estimates not necessarily suitable for worst case analysis. We examine energy consumption distributions of two benchmarks under a range of input data on two cache-less embedded architectures, AVR and XS1-L . We find that the worst case can be predicted with a distribution created from random data. We propose a model to obtain energy distributions for instruction sequences that can be composed, enabling WCEC analysis on program basic blocks. Data dependency between instructions is also examined, giving a case where dependencies create a bimodal energy distribution. The worst case energy prediction remains safe. We conclude that worst-case energy models based on a probabilistic approach are suitable for safe WCEC analysis .
[ { "type": "R", "before": "Energy consumption of the software running on a device has become increasingly important as a growing number of devices rely on batteries or other limited sources of power. Of particular interest is constructing a bounded measure of the energy consumption - the maximum energy a program could consume for any input given to it. We explore the effect of different data on the energy consumption of individual instructions, instruction sequences and full programs. The whole program energy consumption", "after": "This paper examines the impact of operand values upon instruction level energy models of embedded processors, to explore whether the requirements for safe worst case energy consumption (WCEC) analysis can be met. WCEC is similar to worst case execution time (WCET) analysis, but seeks to determine whether a task can be completed within an energy budget rather than within a deadline. Existing energy models that underpin such analysis typically use energy measurements from random input data, providing average or otherwise unbounded estimates not necessarily suitable for worst case analysis. We examine energy consumption distributions", "start_char_pos": 0, "end_char_pos": 499 }, { "type": "R", "before": "is analysed over random and hand-crafted data , and maximized with genetic algorithms for two embedded processors", "after": "under a range of input data on two cache-less embedded architectures, AVR and XS1-L", "start_char_pos": 518, "end_char_pos": 631 }, { "type": "R", "before": "from the distribution created by the random data, however, hand-crafted data can often achieve lower energy consumption. A model is constructed that allows the worst case energy for a sequence of instructions to be predicted. This is based on the observation that the transition", "after": "with a distribution created from random data. We propose a model to obtain energy distributions for instruction sequences that can be composed, enabling WCEC analysis on program basic blocks. Data dependency", "start_char_pos": 679, "end_char_pos": 957 }, { "type": "R", "before": "important and thus is not a single energy cost - it is a distribution dependent on the input and output values of the two consecutive instructions. We characterise the transition distributions for several instructions in the AVR instruction set, and show that this gives a useful upper bound on the energy consumption. We explore the effect that the transfer function of the instruction has on the data, and give an example which leads to a bimodal energy distribution. Finally, we conclude that a probabilistic approach is appropriate for estimating the energy consumption of programs", "after": "also examined, giving a case where dependencies create a bimodal energy distribution. The worst case energy prediction remains safe. We conclude that worst-case energy models based on a probabilistic approach are suitable for safe WCEC analysis", "start_char_pos": 982, "end_char_pos": 1567 } ]
[ 0, 172, 327, 462, 633, 799, 904, 1129, 1300, 1451 ]
1505.03374
2
This paper examines the impact of operand values upon instruction level energy models of embedded processors, to explore whether the requirements for safe worst case energy consumption (WCEC) analysis can be met. WCEC is similar to worst case execution time (WCET) analysis, but seeks to determine whether a task can be completed within an energy budget rather than within a deadline. Existing energy models that underpin such analysis typically use energy measurements from random input data, providing average or otherwise unbounded estimates not necessarily suitable for worst case analysis. We examine energy consumption distributions of two benchmarks under a range of input data on two cache-less embedded architectures, AVR and XS1-L. We find that the worst case can be predicted with a distribution created from random data. We propose a model to obtain energy distributions for instruction sequences that can be composed , enabling WCEC analysis on program basic blocks. Data dependency between instructions is also examined, giving a case where dependencies create a bimodal energy distribution. The worst case energy prediction remains safe. We conclude that worst-case energy models based on a probabilistic approach are suitable for safe WCEC analysis .
Safely meeting Worst Case Energy Consumption (WCEC) criteria requires accurate energy modeling of software. We investigate the impact of instruction operand values upon energy consumption in cacheless embedded processors. Existing instruction-level energy models typically use measurements from random input data, providing estimates unsuitable for safe WCEC analysis. We examine probabilistic energy distributions of instructions and propose a model for composing instruction sequences using distributions , enabling WCEC analysis on program basic blocks. The worst case is predicted with statistical analysis. Further, we verify that the energy of embedded benchmarks can be characterised as a distribution, and compare our proposed technique with other methods of estimating energy consumption .
[ { "type": "R", "before": "This paper examines", "after": "Safely meeting Worst Case Energy Consumption (WCEC) criteria requires accurate energy modeling of software. We investigate", "start_char_pos": 0, "end_char_pos": 19 }, { "type": "A", "before": null, "after": "instruction", "start_char_pos": 34, "end_char_pos": 34 }, { "type": "D", "before": "instruction level energy models of embedded processors, to explore whether the requirements for safe worst case energy consumption (WCEC) analysis can be met. WCEC is similar to worst case execution time (WCET) analysis, but seeks to determine whether a task can be completed within an energy budget rather than within a deadline. Existing", "after": null, "start_char_pos": 55, "end_char_pos": 394 }, { "type": "R", "before": "models that underpin such analysis typically use energy", "after": "consumption in cacheless embedded processors. Existing instruction-level energy models typically use", "start_char_pos": 402, "end_char_pos": 457 }, { "type": "R", "before": "average or otherwise unbounded estimates not necessarily suitable for worst case", "after": "estimates unsuitable for safe WCEC", "start_char_pos": 505, "end_char_pos": 585 }, { "type": "R", "before": "energy consumption distributions of two benchmarks under a range of input data on two cache-less embedded architectures, AVR and XS1-L. We find that the worst case can be predicted with a distribution created from random data. We", "after": "probabilistic energy distributions of instructions and", "start_char_pos": 607, "end_char_pos": 836 }, { "type": "R", "before": "to obtain energy distributions for instruction sequences that can be composed", "after": "for composing instruction sequences using distributions", "start_char_pos": 853, "end_char_pos": 930 }, { "type": "D", "before": "Data dependency between instructions is also examined, giving a case where dependencies create a bimodal energy distribution.", "after": null, "start_char_pos": 981, "end_char_pos": 1106 }, { "type": "R", "before": "energy prediction remains safe. We conclude that worst-case energy models based on a probabilistic approach are suitable for safe WCEC analysis", "after": "is predicted with statistical analysis. Further, we verify that the energy of embedded benchmarks can be characterised as a distribution, and compare our proposed technique with other methods of estimating energy consumption", "start_char_pos": 1122, "end_char_pos": 1265 } ]
[ 0, 213, 385, 595, 742, 833, 980, 1106, 1153 ]
1505.03587
1
We consider options that pays the complexity deficiency of a sequence of up and down ticks of a stock upon exercise. We study the price of European and American versions of this option numerically for automatic complexity, and theoretically for Kolmogorov complexity. We also consider the case of run complexity, which is a restricted form of automatic complexity.
We consider options that pay the complexity deficiency of a sequence of up and down ticks of a stock upon exercise. We study the price of European and American versions of this option numerically for automatic complexity, and theoretically for Kolmogorov complexity. We also consider run complexity, which is a restricted form of automatic complexity.
[ { "type": "R", "before": "pays", "after": "pay", "start_char_pos": 25, "end_char_pos": 29 }, { "type": "D", "before": "the case of", "after": null, "start_char_pos": 285, "end_char_pos": 296 } ]
[ 0, 116, 267 ]
1505.04573
1
Convergence of binomial tree method and explicit difference schemes for the variational inequality model of American put options with time dependent coefficients is studied. When volatility is time dependent, it is not reasonable to assume that the dynamics of the underlying asset's price forms a binomial tree if a partition of time interval with equal parts is used. A time interval partition method that allows binomial tree dynamics of the underlying asset's price is provided. Conditions under which the prices of American put option by BTM and explicit difference scheme have the monotonic property on time variable are found. Convergence of BTM and explicit difference schemes for the variational inequality model of American put options to viscosity solution is proved .
Binomial tree methods (BTM) and explicit difference schemes (EDS) for the variational inequality model of American options with time dependent coefficients are studied. When volatility is time dependent, it is not reasonable to assume that the dynamics of the underlying asset's price forms a binomial tree if a partition of time interval with equal parts is used. A time interval partition method that allows binomial tree dynamics of the underlying asset's price is provided. Conditions under which the prices of American option by BTM and EDS have the monotonic property on time variable are found. Using convergence of EDS for variational inequality model of American options to viscosity solution the decreasing property of the price of American put options and increasing property of the optimal exercise boundary on time variable are proved. First, put options are considered. Then the linear homogeneity and call-put symmetry of the price functions in the BTM and the EDS for the variational inequality model of American options with time dependent coefficients are studied and using them call options are studied .
[ { "type": "R", "before": "Convergence of binomial tree method", "after": "Binomial tree methods (BTM)", "start_char_pos": 0, "end_char_pos": 35 }, { "type": "A", "before": null, "after": "(EDS)", "start_char_pos": 68, "end_char_pos": 68 }, { "type": "D", "before": "put", "after": null, "start_char_pos": 118, "end_char_pos": 121 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 163, "end_char_pos": 165 }, { "type": "D", "before": "put", "after": null, "start_char_pos": 530, "end_char_pos": 533 }, { "type": "R", "before": "explicit difference scheme", "after": "EDS", "start_char_pos": 552, "end_char_pos": 578 }, { "type": "R", "before": "Convergence of BTM and explicit difference schemes for the", "after": "Using convergence of EDS for", "start_char_pos": 635, "end_char_pos": 693 }, { "type": "D", "before": "put", "after": null, "start_char_pos": 735, "end_char_pos": 738 }, { "type": "R", "before": "is proved", "after": "the decreasing property of the price of American put options and increasing property of the optimal exercise boundary on time variable are proved. First, put options are considered. Then the linear homogeneity and call-put symmetry of the price functions in the BTM and the EDS for the variational inequality model of American options with time dependent coefficients are studied and using them call options are studied", "start_char_pos": 769, "end_char_pos": 778 } ]
[ 0, 174, 370, 483, 634 ]
1505.04757
1
Using an extended version of the credit risk model CreditRisk^+ , we develop a flexible framework to estimate stochastic life tables and to model credit, life insurance and annuity portfolios , including actuarial reserves. Deaths are driven by common stochastic risk factors which may be interpreted as death causes like neoplasms, circulatory diseases or idiosyncratic components. Our approach provides an efficient, numerically stable algorithm for an exact calculation of the one-period loss distribution where various sources of risk are considered. As required by many regulators, we can then derive risk measures for the one-period loss distribution such as value at risk and expected shortfall. Using publicly available data, we provide estimation procedures for model parameters including classical approaches, as well as Markov chain Monte Carlo methods. We conclude with a real world example using Australian death data . In particular, our model allows stress testing and, therefore, offers insight into how certain health scenarios influence annuity payments of an insurer. Such scenarios may include outbreaks of epidemics, improvement in health treatment, or development of better medication. Further applications of our model include modelling of stochastic life tables with corresponding forecasts of death probabilities and demographic changes .
We introduce an additive stochastic mortality model which allows joint modelling and forecasting of underlying death causes. Parameter families for mortality trends can be chosen freely. As model settings become high dimensional, Markov chain Monte Carlo (MCMC) is used for parameter estimation. We then link our proposed model to an extended version of the credit risk model CreditRisk^+ . This allows exact risk aggregation via an efficient numerically stable Panjer recursion algorithm and provides numerous applications in credit, life insurance and annuity portfolios to derive P\&L distributions. Furthermore, the model allows exact (without Monte Carlo simulation error) calculation of risk measures and their sensitivities with respect to model parameters for P\&L distributions such as value-at-risk and expected shortfall. Numerous examples, including an application to partial internal models under Solvency II, using Austrian and Australian data are shown .
[ { "type": "R", "before": "Using an", "after": "We introduce an additive stochastic mortality model which allows joint modelling and forecasting of underlying death causes. Parameter families for mortality trends can be chosen freely. As model settings become high dimensional, Markov chain Monte Carlo (MCMC) is used for parameter estimation. We then link our proposed model to an", "start_char_pos": 0, "end_char_pos": 8 }, { "type": "R", "before": ", we develop a flexible framework to estimate stochastic life tables and to model", "after": ". This allows exact risk aggregation via an efficient numerically stable Panjer recursion algorithm and provides numerous applications in", "start_char_pos": 64, "end_char_pos": 145 }, { "type": "R", "before": ", including actuarial reserves. Deaths are driven by common stochastic risk factors which may be interpreted as death causes like neoplasms, circulatory diseases or idiosyncratic components. Our approach provides an efficient, numerically stable algorithm for an exact calculation of the one-period loss distribution where various sources of risk are considered. As required by many regulators, we can then derive risk measures for the one-period loss distribution such as value at risk", "after": "to derive P\\&L distributions. Furthermore, the model allows exact (without Monte Carlo simulation error) calculation of risk measures and their sensitivities with respect to model parameters for P\\&L distributions such as value-at-risk", "start_char_pos": 192, "end_char_pos": 678 }, { "type": "R", "before": "Using publicly available data, we provide estimation procedures for model parameters including classical approaches, as well as Markov chain Monte Carlo methods. We conclude with a real world example using Australian death data . In particular, our model allows stress testing and, therefore, offers insight into how certain health scenarios influence annuity payments of an insurer. Such scenarios may include outbreaks of epidemics, improvement in health treatment, or development of better medication. Further applications of our model include modelling of stochastic life tables with corresponding forecasts of death probabilities and demographic changes", "after": "Numerous examples, including an application to partial internal models under Solvency II, using Austrian and Australian data are shown", "start_char_pos": 703, "end_char_pos": 1361 } ]
[ 0, 223, 382, 554, 702, 864, 932, 1086, 1207 ]
1505.04810
1
Motivated by various optimization problems and models in algorithmic trading , this paper analyzes the limiting behavior for order positions and related queues in a limit order book. In addition to the fluid and diffusion limits for the processes, fluctuations of order positions and related queues around their fluid limits are analyzed. As a corollary, explicit analytical expressions for various quantities of interests in a limit order book are derived.
Order positions are key variables in algorithmic trading . This paper studies the limiting behavior of order positions and related queues in a limit order book. In addition to the fluid and diffusion limits for the processes, fluctuations of order positions and related queues around their fluid limits are analyzed. As a corollary, explicit analytical expressions for various quantities of interests in a limit order book are derived.
[ { "type": "R", "before": "Motivated by various optimization problems and models", "after": "Order positions are key variables", "start_char_pos": 0, "end_char_pos": 53 }, { "type": "R", "before": ", this paper analyzes", "after": ". This paper studies", "start_char_pos": 77, "end_char_pos": 98 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 121, "end_char_pos": 124 } ]
[ 0, 182, 338 ]
1505.04996
1
We consider a two-queue polling model with switch-over times and k-limited service (serve at most k_i customers during one visit period to queue i) in each queue. The major benefit of the k-limited service discipline is that it - besides bounding the cycle time - effectuates prioritization by assigning different service limits to different queues. System performance is studied in the heavy-traffic regime, in which one of the queues becomes critically loaded with the other queue remaining stable. By using a singular-perturbation technique, we rigorously prove heavy-traffic limits for the joint queue-length distribution. Moreover, it is observed that an interchange exists among the first two moments in service and switch-over times such that the HT limits remain unchanged. \geq2
We consider a two-queue polling model with switch-over times and k-limited service (serve at most k_i customers during one visit period to queue i) in each queue. The major benefit of the k-limited service discipline is that it - besides bounding the cycle time - effectuates prioritization by assigning different service limits to different queues. System performance is studied in the heavy-traffic regime, in which one of the queues becomes critically loaded with the other queue remaining stable. By using a singular-perturbation technique, we rigorously prove heavy-traffic limits for the joint queue-length distribution. Moreover, it is observed that an interchange exists among the first two moments in service and switch-over times such that the HT limits remain unchanged. Not only do the rigorously proven results readily carry over to N(\geq2) queue polling systems, but one can also easily relax the distributional assumptions. The results and insights of this note prove their worth in the performance analysis of Wireless Personal Area Networks (WPAN) and mobile networks, where different users compete for access to the shared scarce resources.
[ { "type": "A", "before": null, "after": "Not only do the rigorously proven results readily carry over to N(", "start_char_pos": 782, "end_char_pos": 782 }, { "type": "A", "before": null, "after": ") queue polling systems, but one can also easily relax the distributional assumptions. The results and insights of this note prove their worth in the performance analysis of Wireless Personal Area Networks (WPAN) and mobile networks, where different users compete for access to the shared scarce resources.", "start_char_pos": 787, "end_char_pos": 787 } ]
[ 0, 162, 349, 500, 626 ]
1505.05179
1
Each mammalian olfactory sensory neuron stochastically expresses only one out of thousands of olfactory receptor alleles and the molecular mechanism for this selection remains as one of the biggest puzzles in neurobiology. Through constructing and analyzing a mathematical model based on extensive experimental observations , we identified an evolutionarily optimized three-layer regulation mechanism that robustly generates single-allele expression. Zonal separation reduces the number of competing alleles. Bifunctional LSD1 and cooperative histone modification dynamics minimize multiple allele epigenetic activation and alleles trapped in incomplete epigenetic activation states. Subsequent allele competition for a limited number of enhancers through cooperative binding serves as final safeguard for single allele expression. The identified design principles demonstrate the importance of molecular cooperativity in selecting and maintaining monoallelic olfactory receptor expression.
Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; URLanism level the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and proposed an evolutionarily optimized three-layer regulation mechanism , which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a novel enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression. The model has multiple predictions validated by existing experimental results, and particularly underscores cooperativity and synergy as a general design principle for multi-objective optimization in biology
[ { "type": "R", "before": "Each mammalian olfactory", "after": "Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each", "start_char_pos": 0, "end_char_pos": 24 }, { "type": "A", "before": null, "after": "up to", "start_char_pos": 81, "end_char_pos": 81 }, { "type": "R", "before": "alleles and the molecular mechanism for this selection remains as one of the biggest puzzles in neurobiology. Through constructing and analyzing a mathematical model based on extensive experimental observations , we identified", "after": "(OR) gene alleles; URLanism level the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and proposed", "start_char_pos": 114, "end_char_pos": 340 }, { "type": "R", "before": "that robustly generates single-allele expression. Zonal separation reduces the number of competing alleles. Bifunctional LSD1 and cooperative histone modification dynamics minimize multiple allele epigenetic activation and alleles trapped in incomplete epigenetic activation states. Subsequent allele competition for a limited number of enhancers through cooperative binding serves as final safeguard for single allele", "after": ", which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a novel enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR", "start_char_pos": 402, "end_char_pos": 820 }, { "type": "R", "before": "identified design principles demonstrate the importance of molecular cooperativity in selecting and maintaining monoallelic olfactory receptor expression.", "after": "model has multiple predictions validated by existing experimental results, and particularly underscores cooperativity and synergy as a general design principle for multi-objective optimization in biology", "start_char_pos": 837, "end_char_pos": 991 } ]
[ 0, 223, 451, 509, 684, 832 ]
1505.05179
2
Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; URLanism level the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and proposed an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a novel enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression . The model has multiple predictions validated by existing experimental results , and particularly underscores cooperativity and synergy as a general design principle for multi-objective optimization in biology
Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; URLanism level the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and constructed a comprehensive model that has all its components based on physical interactions. Analyzing the model reveals an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a previously unidentified enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression , and has multiple predictions validated by existing experimental results . Through making analogy to a physical system with thermally activated barrier crossing and comparative reverse engineering analyses, the study reveals that the olfactory receptor selection system is optimally designed, and particularly underscores cooperativity and synergy as a general design principle for multi-objective optimization in biology .
[ { "type": "R", "before": "proposed", "after": "constructed a comprehensive model that has all its components based on physical interactions. Analyzing the model reveals", "start_char_pos": 535, "end_char_pos": 543 }, { "type": "R", "before": "novel", "after": "previously unidentified", "start_char_pos": 776, "end_char_pos": 781 }, { "type": "R", "before": ". The model", "after": ", and", "start_char_pos": 970, "end_char_pos": 981 }, { "type": "R", "before": ",", "after": ". Through making analogy to a physical system with thermally activated barrier crossing and comparative reverse engineering analyses, the study reveals that the olfactory receptor selection system is optimally designed,", "start_char_pos": 1050, "end_char_pos": 1051 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 1181, "end_char_pos": 1181 } ]
[ 0, 64, 217, 281, 464, 808, 971 ]
1505.05214
1
Crystallography may be the gold standard of protein structure determination, but obtaining the necessary high-quality crystals is in some ways akin to prospecting for the precious metal. The tools and models developed by soft matter to understand colloidal self-assembly, including their crystallization, offer some insights into the protein crystallization problem . This topical review describes the various analogies between protein crystal and colloidal self-assembly that have been made . We highlight the explanatory power of patchy models, but also the challenges of providing specific guidance . We conclude with a presentation of possible future research directions .
Crystallography may be the gold standard of protein structure determination, but obtaining the necessary high-quality crystals is also in some ways akin to prospecting for the precious metal. The tools and models developed in soft matter physics to understand colloidal assembly offer some insights into the problem of crystallizing proteins . This topical review describes the various analogies that have been made between proteins and colloids in that context . We highlight the explanatory power of patchy particle models, but also the challenges of providing guidance for crystallizing specific proteins . We conclude with a presentation of possible future research directions . This article is intended for soft matter scientists interested in protein crystallization as a self-assembly problem, and as an introduction to the pertinent physics literature for protein scientists more generally .
[ { "type": "A", "before": null, "after": "also", "start_char_pos": 130, "end_char_pos": 130 }, { "type": "R", "before": "by soft matter", "after": "in soft matter physics", "start_char_pos": 219, "end_char_pos": 233 }, { "type": "R", "before": "self-assembly, including their crystallization,", "after": "assembly", "start_char_pos": 258, "end_char_pos": 305 }, { "type": "R", "before": "protein crystallization problem", "after": "problem of crystallizing proteins", "start_char_pos": 335, "end_char_pos": 366 }, { "type": "D", "before": "between protein crystal and colloidal self-assembly", "after": null, "start_char_pos": 421, "end_char_pos": 472 }, { "type": "A", "before": null, "after": "between proteins and colloids in that context", "start_char_pos": 493, "end_char_pos": 493 }, { "type": "A", "before": null, "after": "particle", "start_char_pos": 541, "end_char_pos": 541 }, { "type": "R", "before": "specific guidance", "after": "guidance for crystallizing specific proteins", "start_char_pos": 587, "end_char_pos": 604 }, { "type": "A", "before": null, "after": ". This article is intended for soft matter scientists interested in protein crystallization as a self-assembly problem, and as an introduction to the pertinent physics literature for protein scientists more generally", "start_char_pos": 678, "end_char_pos": 678 } ]
[ 0, 187, 368, 495, 606 ]
1505.05256
1
We consider the class of self-similar Gaussian stochastic volatility models, and compute the small-time (near-maturity) asymptotics for the corresponding asset price density, the call and put pricing functions, and the implied volatilities. Unlike the well-known model-free behavior for extreme-strike asymptotics, small-time behaviors of the above depend heavily on the model, and require a control of the asset price density which is uniform with respect to the asset price variable, in order to translate into results for call prices and implied volatilities. Away from the money, we express the asymptotics explicitly using the volatility process' self-similarity parameter H, its first Karhunen-Lo\`{e eigenvalue at time 1, and the latter's multiplicity. Several model-free estimators for H result. At the money, a separate study is required: the asymptotics for small time depend instead on the integrated variance's moments of orders 1/2 and 3/2, and the estimator for H sees an affine adjustment, while remaining model-free.
We consider the class of self-similar Gaussian stochastic volatility models, and compute the small-time (near-maturity) asymptotics for the corresponding asset price density, the call and put pricing functions, and the implied volatilities. Unlike the well-known model-free behavior for extreme-strike asymptotics, small-time behaviors of the above depend heavily on the model, and require a control of the asset price density which is uniform with respect to the asset price variable, in order to translate into results for call prices and implied volatilities. Away from the money, we express the asymptotics explicitly using the volatility process' self-similarity parameter H, its first Karhunen-Loeve eigenvalue at time 1, and the latter's multiplicity. Several model-free estimators for H result. At the money, a separate study is required: the asymptotics for small time depend instead on the integrated variance's moments of orders 1/2 and 3/2, and the estimator for H sees an affine adjustment, while remaining model-free.
[ { "type": "R", "before": "Karhunen-Lo\\`{e", "after": "Karhunen-Loeve", "start_char_pos": 691, "end_char_pos": 706 } ]
[ 0, 240, 562, 759, 803 ]
1505.05730
1
Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely used in studies of enzymatic catalysis. Up until now , it has usually been cost prohibitive to determine the convergence of these calculations with respect to the size of the QM region. Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made QM/MM calculations of optimized reaction paths with QM regions comprising up to O(10^3) atoms feasible. Here, we leverage these GPU-accelerated quantum chemistry methods to investigate catalytic properties in catechol O-methyltransferase . Using QM regions ranging in size from the reactant only (63 atoms) up to nearly one-third of the entire protein (940 atoms), we show that convergence of properties such as the activation energy of the catalyzed reaction can be quite slow. Convergence to within chemical accuracy for this case requires a quantum mechanical region with approximately 500 atoms . These results call for a more careful determination of QM region sizes in future QM/MM studies of enzymes.
Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely used in studies of enzymatic catalysis. Until recently , it has been cost prohibitive to determine the asymptotic limit of key energetic and structural properties with respect to increasingly large QM regions. Leveraging recent advances in electronic structure efficiency and accuracy, we investigate catalytic properties in catechol O-methyltransferase , a representative example of a methyltransferase critical to human health . Using QM regions ranging in size from reactants-only (64 atoms) to nearly one-third of the entire protein (940 atoms), we show that properties such as the activation energy approach within chemical accuracy of the large-QM asymptotic limits rather slowly, requiring approximately 500-600 atoms if the QM residues are chosen simply by distance from the substrate. This slow approach to asymptotic limit is due to charge transfer from protein residues to the reacting substrates. Our large QM/MM calculations enable identification of charge separation for fragments in the transition state as a key component of enzymatic methyl transfer rate enhancement. We introduce charge shift analysis that reveals the minimum number of protein residues (ca. 11-16 residues or 200-300 atoms for COMT) needed for quantitative agreement with large-QM simulations. The identified residues are not those that would be typically selected using criteria such as chemical intuition or proximity. These results provide a recipe for a more careful determination of QM region sizes in future QM/MM studies of enzymes.
[ { "type": "R", "before": "Up until now", "after": "Until recently", "start_char_pos": 118, "end_char_pos": 130 }, { "type": "D", "before": "usually", "after": null, "start_char_pos": 140, "end_char_pos": 147 }, { "type": "R", "before": "convergence of these calculations", "after": "asymptotic limit of key energetic and structural properties", "start_char_pos": 187, "end_char_pos": 220 }, { "type": "R", "before": "the size of the QM region. Recent advances in reformulating electronic structure algorithms for stream processors such as graphical processing units have made QM/MM calculations of optimized reaction paths with QM regions comprising up to O(10^3) atoms feasible. Here, we leverage these GPU-accelerated quantum chemistry methods to", "after": "increasingly large QM regions. Leveraging recent advances in electronic structure efficiency and accuracy, we", "start_char_pos": 237, "end_char_pos": 568 }, { "type": "A", "before": null, "after": ", a representative example of a methyltransferase critical to human health", "start_char_pos": 634, "end_char_pos": 634 }, { "type": "R", "before": "the reactant only (63 atoms) up", "after": "reactants-only (64 atoms)", "start_char_pos": 675, "end_char_pos": 706 }, { "type": "D", "before": "convergence of", "after": null, "start_char_pos": 775, "end_char_pos": 789 }, { "type": "R", "before": "of the catalyzed reaction can be quite slow. Convergence to within chemical accuracy for this case requires a quantum mechanical region with approximately 500 atoms . These results call", "after": "approach within chemical accuracy of the large-QM asymptotic limits rather slowly, requiring approximately 500-600 atoms if the QM residues are chosen simply by distance from the substrate. This slow approach to asymptotic limit is due to charge transfer from protein residues to the reacting substrates. Our large QM/MM calculations enable identification of charge separation for fragments in the transition state as a key component of enzymatic methyl transfer rate enhancement. We introduce charge shift analysis that reveals the minimum number of protein residues (ca. 11-16 residues or 200-300 atoms for COMT) needed for quantitative agreement with large-QM simulations. The identified residues are not those that would be typically selected using criteria such as chemical intuition or proximity. These results provide a recipe", "start_char_pos": 831, "end_char_pos": 1016 } ]
[ 0, 117, 263, 499, 875, 997 ]
1505.06440
1
Biological systems are driven by intricate interactions among the complex array of molecules that comprise the cell. Many methods have been developed to reconstruct network models that attempt to capture those interactions. These methods often draw on large numbers of measured expression samples to tease out subtle signals and infer connections between genes (or gene products). The result is an aggregate network model representing a single estimate for edge likelihoods . While informative, aggregate models fail to capture the heterogeneity that is often represented in a population. Here we propose a method to reverse engineer sample-specific networks from aggregate network models. We demonstrate the accuracy and applicability of our approach in several datasets , including simulated data, microarray expression data from synchronized yeast cells, and RNA-seq data collected from human subjects . We show that these sample-specific networks can be used to study the evolution of network topology across time and to characterize shifts in gene regulation that may not be apparent in the expression data. We believe the ability to generate sample-specific networks will revolutionize the field of network biology and has the potential to usher in an era of precision network medicine.
Biological systems are driven by intricate interactions among the complex array of molecules that comprise the cell. Many methods have been developed to reconstruct network models of those interactions. These methods often draw on large numbers of samples with measured gene expression profiles to infer connections between genes (or gene products). The result is an aggregate network model representing a single estimate for the likelihood of each interaction, or "edge," in the network . While informative, aggregate models fail to capture the heterogeneity that is represented in any population. Here we propose a method to reverse engineer sample-specific networks from aggregate network models. We demonstrate the accuracy and applicability of our approach in several data sets , including simulated data, microarray expression data from synchronized yeast cells, and RNA-seq data collected from human lymphoblastoid cell lines . We show that these sample-specific networks can be used to study changes in network topology across time and to characterize shifts in gene regulation that may not be apparent in expression data. We believe the ability to generate sample-specific networks will greatly facilitate the application of network methods to the increasingly large, complex, and heterogeneous multi-omic data sets that are currently being generated, and ultimately support the emerging field of precision network medicine.
[ { "type": "R", "before": "that attempt to capture", "after": "of", "start_char_pos": 180, "end_char_pos": 203 }, { "type": "R", "before": "measured expression samples to tease out subtle signals and", "after": "samples with measured gene expression profiles to", "start_char_pos": 269, "end_char_pos": 328 }, { "type": "R", "before": "edge likelihoods", "after": "the likelihood of each interaction, or \"edge,\" in the network", "start_char_pos": 457, "end_char_pos": 473 }, { "type": "R", "before": "often represented in a", "after": "represented in any", "start_char_pos": 554, "end_char_pos": 576 }, { "type": "R", "before": "datasets", "after": "data sets", "start_char_pos": 763, "end_char_pos": 771 }, { "type": "R", "before": "subjects", "after": "lymphoblastoid cell lines", "start_char_pos": 896, "end_char_pos": 904 }, { "type": "R", "before": "the evolution of", "after": "changes in", "start_char_pos": 972, "end_char_pos": 988 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1092, "end_char_pos": 1095 }, { "type": "R", "before": "revolutionize the field of network biology and has the potential to usher in an era", "after": "greatly facilitate the application of network methods to the increasingly large, complex, and heterogeneous multi-omic data sets that are currently being generated, and ultimately support the emerging field", "start_char_pos": 1178, "end_char_pos": 1261 } ]
[ 0, 116, 223, 380, 475, 588, 689, 906, 1112 ]
1505.07224
1
We study existence and uniqueness of continuous-time stochastic Radner equilibria in an incomplete markets model . An assumption of "smallness" type - imposed through the new notion of "closeness to Pareto optimality" - is shown to be sufficient for existence and uniqueness . Central role in our analysis is played by a fully-coupled nonlinear system of quadratic BSDEs.
We study existence and uniqueness of continuous-time stochastic Radner equilibria in an incomplete market model among a group of agents whose preference is characterized by cash invariant time-consistent monetary utilities . An assumption of "smallness" type is shown to be sufficient for existence and uniqueness . In particular, this assumption encapsulates settings with small endowments, small time-horizon, or a large population of weakly heterogeneous agents . Central role in our analysis is played by a fully-coupled nonlinear system of quadratic BSDEs.
[ { "type": "R", "before": "markets model", "after": "market model among a group of agents whose preference is characterized by cash invariant time-consistent monetary utilities", "start_char_pos": 99, "end_char_pos": 112 }, { "type": "D", "before": "- imposed through the new notion of \"closeness to Pareto optimality\" -", "after": null, "start_char_pos": 149, "end_char_pos": 219 }, { "type": "A", "before": null, "after": ". In particular, this assumption encapsulates settings with small endowments, small time-horizon, or a large population of weakly heterogeneous agents", "start_char_pos": 275, "end_char_pos": 275 } ]
[ 0, 114 ]
1505.07335
1
A gene regulatory network is a central concept in Systems Biology. It links the expression levels of a set of genes via regulatory controls that gene products exert on one another. There have been numerous suggestions for models of gene regulatory networks , with varying degrees of expressivity and ease of analysis. Perhaps the simplest model is the Boolean network, introduced by Kauffman several decades ago: expression levels take a Boolean value, and regulation of expression is expressed by Boolean functions. Even for this simple formulation, the problem of fitting a given model to an expression dataset is NP-Complete. In this paper we introduce a novel algorithm for this problem that makes use of sampling in order to handle large datasets. In order to demonstrate its performance we test it on multiple large in-silico datasets with several levels and types of noise. Our results support the notion that network analysis is applicable to large datasets, and that the production of such datasets is desirable for the study of gene regulatory networks .
Gene regulatory networks (GRNs) are increasingly used for explaining biological processes with complex transcriptional regulation. A GRN links the expression levels of a set of genes via regulatory controls that gene products exert on one another. Boolean networks are a common modeling choice since they balance between detail and ease of analysis. However, even for Boolean networks the problem of fitting a given network model to an expression dataset is NP-Complete. Previous methods have addressed this issue heuristically or by focusing on acyclic networks and specific classes of regulation functions. In this paper we introduce a novel algorithm for this problem that makes use of sampling in order to handle large datasets. Our algorithm can handle time series data for any network type and steady state data for acyclic networks. Using in-silico time series data we demonstrate good performance on large datasets with a significant level of noise .
[ { "type": "R", "before": "A gene regulatory network is a central concept in Systems Biology. It", "after": "Gene regulatory networks (GRNs) are increasingly used for explaining biological processes with complex transcriptional regulation. A GRN", "start_char_pos": 0, "end_char_pos": 69 }, { "type": "R", "before": "There have been numerous suggestions for models of gene regulatory networks , with varying degrees of expressivity", "after": "Boolean networks are a common modeling choice since they balance between detail", "start_char_pos": 181, "end_char_pos": 295 }, { "type": "R", "before": "Perhaps the simplest model is the Boolean network, introduced by Kauffman several decades ago: expression levels take a Boolean value, and regulation of expression is expressed by Boolean functions. Even for this simple formulation,", "after": "However, even for Boolean networks", "start_char_pos": 318, "end_char_pos": 550 }, { "type": "A", "before": null, "after": "network", "start_char_pos": 582, "end_char_pos": 582 }, { "type": "A", "before": null, "after": "Previous methods have addressed this issue heuristically or by focusing on acyclic networks and specific classes of regulation functions.", "start_char_pos": 630, "end_char_pos": 630 }, { "type": "R", "before": "In order to demonstrate its performance we test it on multiple large in-silico datasets with several levels and types of noise. Our results support the notion that network analysis is applicable to large datasets, and that the production of such datasets is desirable for the study of gene regulatory networks", "after": "Our algorithm can handle time series data for any network type and steady state data for acyclic networks. Using in-silico time series data we demonstrate good performance on large datasets with a significant level of noise", "start_char_pos": 755, "end_char_pos": 1064 } ]
[ 0, 66, 180, 317, 516, 629, 754, 882 ]
1506.00082
1
In this paper we discuss sufficient conditions for weak convergence of the Euler approximation to general multi-dimensional stochastic differential equations (SDEs) driven by correlated Brownian motions and with discontinuous and path-dependent coefficients. We prove tightness of the approximating processes and the weak solution of the limiting process. Furthermore, motivated by a basket credit default swap (CDS ) pricing problem with counterparty risk and contagion risk, we show weak convergence for some functions of correlated first-passage times of the approximating processes. We also discuss an open question on the probability of hitting a fixed point by correlated Brownian motions in a fixed time-interval .
We investigate the computational aspects of the basket CDS pricing with counterparty risk under a credit contagion model of multinames. This model enables us to capture the systematic volatility increases in the market triggered by a particular bankruptcy. The drawback of this problem is its analytical complication due to its path-dependent functional, which bears a potential failure in its convergence of numerical approximation under standing assumptions. In this paper we find sufficient conditions for the desired convergence by means of the weak convergence method to a class of path-dependent stochastic differential equations .
[ { "type": "R", "before": "In this paper we discuss sufficient conditions for weak convergence of the Euler approximation to general multi-dimensional stochastic differential equations (SDEs) driven by correlated Brownian motions and with discontinuous and path-dependent coefficients. We prove tightness of the approximating processes and the weak solution of the limiting process. Furthermore, motivated by a basket credit default swap (CDS ) pricing problem", "after": "We investigate the computational aspects of the basket CDS pricing", "start_char_pos": 0, "end_char_pos": 433 }, { "type": "R", "before": "and contagion risk, we show weak convergence for some functions of correlated first-passage times of the approximating processes. We also discuss an open question on the probability of hitting a fixed point by correlated Brownian motions in a fixed time-interval", "after": "under a credit contagion model of multinames. This model enables us to capture the systematic volatility increases in the market triggered by a particular bankruptcy. The drawback of this problem is its analytical complication due to its path-dependent functional, which bears a potential failure in its convergence of numerical approximation under standing assumptions. In this paper we find sufficient conditions for the desired convergence by means of the weak convergence method to a class of path-dependent stochastic differential equations", "start_char_pos": 457, "end_char_pos": 719 } ]
[ 0, 258, 355, 586 ]
1506.00082
2
We investigate the computational aspects of the basket CDS pricing with counterparty risk under a credit contagion model of multinames. This model enables us to capture the systematic volatility increases in the market triggered by a particular bankruptcy. The drawback of this problem is its analytical complication due to its path-dependent functional, which bears a potential failure in its convergence of numerical approximation under standing assumptions. In this paper we find sufficient conditions for the desired convergence by means of the weak convergence method to a class of path-dependent stochastic differential equations .
We investigate the computational aspects of the basket CDS pricing with counterparty risk under a credit contagion model of multinames. This model enables us to capture the systematic volatility increases in the market triggered by a particular bankruptcy. The drawback of this problem is its analytical complication due to its path-dependent functional, which bears a potential failure in its convergence of numerical approximation under standing assumptions. In this paper we find sufficient conditions for the desired convergence of the functionals associated with a class of path-dependent stochastic differential equations . The main ingredient is to identify the weak convergence of the approximated solution to the underlying path-dependent stochastic differential equation .
[ { "type": "R", "before": "by means of the weak convergence method to", "after": "of the functionals associated with", "start_char_pos": 533, "end_char_pos": 575 }, { "type": "A", "before": null, "after": ". The main ingredient is to identify the weak convergence of the approximated solution to the underlying path-dependent stochastic differential equation", "start_char_pos": 636, "end_char_pos": 636 } ]
[ 0, 135, 256, 460 ]
1506.00136
1
Recent development of high-resolution mass spectrometry (MS) instruments enables chemical cross-linking (XL) to become a high-throughput method for obtaining structural information about proteins. Restraints derived from XL-MS experiments have been used successfully for structure refinement and protein-protein docking. However, one formidable question is under which circumstances XL-MS data might be sufficient to determine a protein's tertiary structure de novo? Answering this question will not only include understanding the impact of XL-MS data on sampling and scoring within a de novo protein structure prediction algorithm, it must also determine an optimal cross-linker type and length for protein structure determination. While a longer cross-linker will yield more restraints, the value of each restraint for protein structure prediction decreases as the restraint is consistent with a larger conformational space. In this study, the number of cross-links and their discriminative power was systematically analyzed in silico on a set of 2,055 non-redundant protein folds considering Lys-Lys, Lys-Asp, Lys-Glu, Cys-Cys, and Arg-Arg reactive cross-linkers between 1 \r{A and 60 \r{A . Depending on the protein size a heuristic was developed that determines the optimal cross-linker length. Next, simulated restraints of variable length were used to de novo predict the tertiary structure of fifteen proteins using the BCL::Fold algorithm. The results demonstrate that a distinct cross-linker length exists for which information content for de novo protein structure prediction is maximized. The sampling accuracy improves on average by 1.0 \r{A and up to 2.2 \r{A in the most prominent example. XL-MS restraints enable consistently an improved selection of native-like models with an average enrichment of 2.1.
Recent development of high-resolution mass spectrometry (MS) instruments enables chemical cross-linking (XL) to become a high-throughput method for obtaining structural information about proteins. Restraints derived from XL-MS experiments have been used successfully for structure refinement and protein-protein docking. However, one formidable question is under which circumstances XL-MS data might be sufficient to determine a protein's tertiary structure de novo? Answering this question will not only include understanding the impact of XL-MS data on sampling and scoring within a de novo protein structure prediction algorithm, it must also determine an optimal cross-linker type and length for protein structure determination. While a longer cross-linker will yield more restraints, the value of each restraint for protein structure prediction decreases as the restraint is consistent with a larger conformational space. In this study, the number of cross-links and their discriminative power was systematically analyzed in silico on a set of 2,055 non-redundant protein folds considering Lys-Lys, Lys-Asp, Lys-Glu, Cys-Cys, and Arg-Arg reactive cross-linkers between 1 \AA and 60 \AA . Depending on the protein size a heuristic was developed that determines the optimal cross-linker length. Next, simulated restraints of variable length were used to de novo predict the tertiary structure of fifteen proteins using the BCL::Fold algorithm. The results demonstrate that a distinct cross-linker length exists for which information content for de novo protein structure prediction is maximized. The sampling accuracy improves on average by 1.0 \AA and up to 2.2 \AA in the most prominent example. XL-MS restraints enable consistently an improved selection of native-like models with an average enrichment of 2.1.
[ { "type": "R", "before": "\\r{A", "after": "\\AA", "start_char_pos": 1176, "end_char_pos": 1180 }, { "type": "R", "before": "\\r{A", "after": "\\AA", "start_char_pos": 1188, "end_char_pos": 1192 }, { "type": "R", "before": "\\r{A", "after": "\\AA", "start_char_pos": 1650, "end_char_pos": 1654 }, { "type": "R", "before": "\\r{A", "after": "\\AA", "start_char_pos": 1669, "end_char_pos": 1673 } ]
[ 0, 196, 320, 466, 732, 926, 1299, 1448, 1600, 1704 ]
1506.00236
1
The interfirm buyer-seller network is important from both macroeconomic and microeconomic perspectives. From a macroeconomic perspective, this network represents a form of interconnectedness that allows firm-level idiosyncratic shocksto be propagated to other firms. This propagation mechanism interferes with the averaging out process of shocks, having a possible impact on aggregate fluctuation. From a microeconomic perspective, the interfirm buyer-seller network is a result of a firm's strategic link renewal processes. There has been substantial research that models strategic link formation processes, but the economy-wide consequences of such strategic behaviors are not clear. We address these two questions using a unique dataset for the Japanese interfirm buyer-seller network. We take a structural equation modeling, and show that a large proportion of fluctuation in the average log growth rate of firms can be explained by the network and that link renewal by firms decreases the standard deviation of the log growth rate.\\%DIF > of the aggregate fluctuations can be explained by the network effect.
A firm's buyer--seller relationships can be characterized as a longitudinal network in which the connectivity patterns evolve as each firm faces productivity shocks. Despite its importance in characterizing the production network of an economy, little is known about how the network revises its linkage structure in the face of global and individual exogenous shocks. Using a unique data set covering 10 years of interfirm buyer--seller networks and structural equation modeling, we show that the evolution of the interfirm buyer--seller network is a result of a firm's myopic decisions to avoid other firms' negative shocks and share their positive shocks. We show that the current network is often the best network configuration, which improves both the propagation of positive shocks and the avoidance of negative shocks compared with previous networks. Furthermore, we show that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This is explained by the asymmetry in cost between severing a link and link formation. We also investigate the role of the network in aggregate fluctuations and show that at least 37\\%DIF > of the aggregate fluctuations can be explained by the network effect.
[ { "type": "R", "before": "The interfirm buyer-seller network is important from both macroeconomic and microeconomic perspectives. From a macroeconomic perspective, this network represents a form of interconnectedness that allows firm-level idiosyncratic shocksto be propagated to other firms. This propagation mechanism interferes with the averaging out process of shocks, having a possible impact on aggregate fluctuation. From a microeconomic perspective, the interfirm buyer-seller", "after": "A firm's buyer--seller relationships can be characterized as a longitudinal network in which the connectivity patterns evolve as each firm faces productivity shocks. Despite its importance in characterizing the production network of an economy, little is known about how the network revises its linkage structure in the face of global and individual exogenous shocks. Using a unique data set covering 10 years of interfirm buyer--seller networks and structural equation modeling, we show that the evolution of the interfirm buyer--seller", "start_char_pos": 0, "end_char_pos": 458 }, { "type": "R", "before": "strategic link renewal processes. There has been substantial research that models strategic link formation processes, but the economy-wide consequences of such strategic behaviors are not clear. We address these two questions using a unique dataset for the Japanese interfirm buyer-seller network. We take a structural equation modeling, and show that a large proportion of fluctuation in the average log growth rate of firms can be", "after": "myopic decisions to avoid other firms' negative shocks and share their positive shocks. We show that the current network is often the best network configuration, which improves both the propagation of positive shocks and the avoidance of negative shocks compared with previous networks. Furthermore, we show that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This is", "start_char_pos": 491, "end_char_pos": 923 }, { "type": "R", "before": "network and that link renewal by firms decreases the standard deviation of the log growth rate.", "after": "asymmetry in cost between severing a link and link formation. We also investigate the role of the network in aggregate fluctuations and show that at least 37", "start_char_pos": 941, "end_char_pos": 1036 } ]
[ 0, 103, 266, 397, 524, 685, 788, 1036 ]
1506.00236
2
A firm's buyer--seller relationships can be characterized as a longitudinal network in which the connectivity patterns evolve as each firm faces productivity shocks. Despite its importance in characterizing the production network of an economy, little is known about how the network revises its linkage structure in the face of global and individual exogenous shocks. Using a unique data set covering 10 years of interfirm buyer--seller networks and structural equation modeling, we show that the evolution of the interfirm buyer--seller network is a result of a firm's myopic decisions to avoid other firms ' negative shocks and share their positive shocks. We show that the current network is often the best network configuration, which improves both the propagation of positive shocks and the avoidance of negative shocks compared with previous networks. Furthermore, we show that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This is explained by the asymmetry in cost between severing a link and link formation. We also investigate the role of the network in aggregate fluctuations and show that at least 37%DIFDELCMD < \\%%% %DIF < of the aggregate fluctuations can be explained by the network effect.\end{abstract}
Buyer--seller relationships among firms can be regarded as a longitudinal network in which the connectivity pattern evolves as each firm receives productivity shocks. Based on a data set describing the evolution of buyer--seller links among 55,608 firms over a decade and structural equation modeling, we find some evidence that interfirm networks evolve reflecting a firm's local decisions to mitigate adverse effects from neighbor firms through interfirm linkage, while enjoying positive effects from them. As a result, link renewal tends to have a positive impact on the growth rates of firms. We also investigate the role of networks in aggregate fluctuations %DIFDELCMD < \\%%% %DIF < of the aggregate fluctuations can be explained by the network effect.\end{abstract} .
[ { "type": "R", "before": "A firm's buyer--seller relationships can be characterized", "after": "Buyer--seller relationships among firms can be regarded", "start_char_pos": 0, "end_char_pos": 57 }, { "type": "R", "before": "patterns evolve", "after": "pattern evolves", "start_char_pos": 110, "end_char_pos": 125 }, { "type": "R", "before": "faces", "after": "receives", "start_char_pos": 139, "end_char_pos": 144 }, { "type": "R", "before": "Despite its importance in characterizing the production network of an economy, little is known about how the network revises its linkage structure in the face of global and individual exogenous shocks. Using a unique data set covering 10 years of interfirm", "after": "Based on a data set describing the evolution of", "start_char_pos": 166, "end_char_pos": 422 }, { "type": "R", "before": "networks", "after": "links among 55,608 firms over a decade", "start_char_pos": 437, "end_char_pos": 445 }, { "type": "R", "before": "show that the evolution of the interfirm buyer--seller network is a result of a", "after": "find some evidence that interfirm networks evolve reflecting a", "start_char_pos": 483, "end_char_pos": 562 }, { "type": "R", "before": "myopic decisions to avoid other firms ' negative shocks and share their positive shocks. We show that the current network is often the best network configuration, which improves both the propagation of positive shocks and the avoidance of negative shocks compared with previous networks. Furthermore, we show that for positive shocks, the future network is often better than the current network in the sense that it propagates positive shocks better than the current network. This is explained by the asymmetry in cost between severing a link and link formation.", "after": "local decisions to mitigate adverse effects from neighbor firms through interfirm linkage, while enjoying positive effects from them. As a result, link renewal tends to have a positive impact on the growth rates of firms.", "start_char_pos": 570, "end_char_pos": 1132 }, { "type": "R", "before": "the network", "after": "networks", "start_char_pos": 1165, "end_char_pos": 1176 }, { "type": "D", "before": "and show that at least 37", "after": null, "start_char_pos": 1203, "end_char_pos": 1228 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 1338, "end_char_pos": 1338 } ]
[ 0, 165, 367, 658, 857, 1045, 1132, 1323 ]
1506.00535
1
We provide new exact Taylor's series with fixed coefficients . We demonstrate the extreme usefulness of this contribution by using it to obtain very simple solutions to ( nonlinear) PDEs .
We provide new exact Taylor's series with fixed coefficients and without the remainder . We demonstrate the usefulness of this contribution by using it to obtain very simple solutions to ( non-linear) PDEs. We also apply the method to the portfolio model .
[ { "type": "A", "before": null, "after": "and without the remainder", "start_char_pos": 61, "end_char_pos": 61 }, { "type": "D", "before": "extreme", "after": null, "start_char_pos": 83, "end_char_pos": 90 }, { "type": "R", "before": "nonlinear) PDEs", "after": "non-linear) PDEs. We also apply the method to the portfolio model", "start_char_pos": 172, "end_char_pos": 187 } ]
[ 0, 63 ]
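Every record in this dump repeats the six-field layout shown above (doc_id, revision_depth, before_revision, after_revision, edit_actions, sents_char_pos). As a hedged illustration only, the short Python sketch below shows one plausible way the edit_actions offsets map a before_revision onto its after_revision, using a record such as 1506.00535 above; the splice order, the handling of null "after" values, and the whitespace behaviour are assumptions inferred from the rows, not a documented part of the dataset, and sents_char_pos is read here as sentence start offsets into before_revision, which is likewise an inference.

def apply_edit_actions(before, edit_actions):
    # Apply spans right to left so earlier character offsets stay valid.
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # "D" (delete) actions carry after=null
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text

def split_sentences(before, sents_char_pos):
    # sents_char_pos appears to list where each sentence of before_revision begins.
    bounds = list(sents_char_pos) + [len(before)]
    return [before[i:j].strip() for i, j in zip(bounds, bounds[1:])]

# For the 1506.00535 record, the "A" action inserts "and without the remainder",
# the "D" action drops "extreme", and the "R" action rewrites the closing clause.
# The reconstruction may differ from the stored after_revision only in spacing
# around punctuation, since the texts look tokenized (e.g. "coefficients .").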
1506.00806
1
The aim of this work is to build financial crisis indicators based on market data time series . After choosing an optimal size for a rolling window, the market data is seen every trading day as a random matrix from which a covariance and correlation matrix is obtained. Our indicators deal with the spectral properties of these covariance and correlation matrices. Our basic financial intuition is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, when volatility starts to increase, then a crisis event might be around the corner. Our indicators will be mainly of two types . The first one is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix. As reference distributions we will use the theoretical Marchenko Pastur distribution and , mainly, simulated ones using a random matrix of the same size as the empirical rolling matrix and constituted of Gaussian or Student-t coefficients with some simulated correlations. The idea behind this first type of indicators is that when the empirical distribution of the spectrum of the covariance matrix is deviating from the reference in the sense of Hellinger, then a crisis may be forthcoming. The second type of indicators is based on the study of the spectral radius and the trace of the covariance and correlation matrices as a mean to directly study the volatility and correlations inside the market. The idea behind the second type of indicators is the fact that large eigenvalues are a sign of dynamic instability.
The aim of this work is to build financial crisis indicators based on time series of market data . After choosing an optimal size for a rolling window, the historical market data in this window is seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. The indicators that we have built deal with the spectral properties of these covariance and correlation matrices. The simple intuitive idea that we rely upon is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, when volatility starts to increase, then a crisis event might be around the corner. The financial crisis indicators that we have built are of two kinds . The first one is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix. As reference distributions we use the theoretical Marchenko Pastur distribution and numerically computed ones using a random matrix of the same size as the empirical rolling matrix and constituted of Gaussian or Student-t coefficients with some simulated correlations. The idea behind this first type of indicators is that when the empirical distribution of the spectrum of the covariance matrix is deviating from the reference in the sense of Hellinger, then a crisis may be forthcoming. The second type of indicators is based on the study of the spectral radius and the trace of the covariance and correlation matrices as a mean to directly study the volatility and correlations inside the market. The idea behind the second type of indicators is the fact that large eigenvalues are a sign of dynamic instability.
[ { "type": "R", "before": "market data time series", "after": "time series of market data", "start_char_pos": 70, "end_char_pos": 93 }, { "type": "R", "before": "market data", "after": "historical market data in this window", "start_char_pos": 153, "end_char_pos": 164 }, { "type": "R", "before": "correlation matrix is obtained. Our indicators", "after": "a correlation matrix are obtained. The indicators that we have built", "start_char_pos": 238, "end_char_pos": 284 }, { "type": "R", "before": "Our basic financial intuition", "after": "The simple intuitive idea that we rely upon", "start_char_pos": 365, "end_char_pos": 394 }, { "type": "R", "before": "Our indicators will be mainly of two types", "after": "The financial crisis indicators that we have built are of two kinds", "start_char_pos": 640, "end_char_pos": 682 }, { "type": "D", "before": "will", "after": null, "start_char_pos": 927, "end_char_pos": 931 }, { "type": "R", "before": ", mainly, simulated", "after": "numerically computed", "start_char_pos": 986, "end_char_pos": 1005 } ]
[ 0, 269, 364, 639, 684, 896, 1169, 1389, 1600 ]
1506.00806
2
The aim of this work is to build financial crisis indicators based on time series of market data. After choosing an optimal size for a rolling window, the historical market data in this window is seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. The indicators that we have built deal with the spectral properties of these covariance and correlation matrices . The simple intuitive idea that we rely upon is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, when volatility starts to increase, then a crisis event might be around the corner. The financial crisis indicators that we have built are of two kinds. The first one is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix . As reference distributions we use the theoretical Marchenko Pastur distribution and numerically computed ones using a random matrix of the same size as the empirical rolling matrix and constituted of Gaussian or Student-t coefficients with some simulated correlations . The idea behind this first type of indicators is that when the empirical distribution of the spectrum of the covariance matrix is deviating from the reference in the sense of Hellinger, then a crisis may be forthcoming. The second type of indicators is based on the study of the spectral radius and the trace of the covariance and correlation matrices as a mean to directly study the volatility and correlations inside the market. The idea behind the second type of indicators is the fact that large eigenvalues are a sign of dynamic instability .
The aim of this work is to build financial crisis indicators based on spectral properties of the dynamics of market data. After choosing an optimal size for a rolling window, the historical market data in this window is seen every trading day as a random matrix from which a covariance and a correlation matrix are obtained. The financial crisis indicators that we have built deal with the spectral properties of these covariance and correlation matrices and they are of two kinds. The first one is based on the Hellinger distance, computed between the distribution of the eigenvalues of the empirical covariance matrix and the distribution of the eigenvalues of a reference covariance matrix representing either a calm or agitated market . The idea behind this first type of indicators is that when the empirical distribution of the spectrum of the covariance matrix is deviating from the reference in the sense of Hellinger, then a crisis may be forthcoming. The second type of indicators is based on the study of the spectral radius and the trace of the covariance and correlation matrices as a mean to directly study the volatility and correlations inside the market. The idea behind the second type of indicators is the fact that large eigenvalues are a sign of dynamic instability . The predictive power of the financial crisis indicators in this framework is then demonstrated, in particular by using them as decision-making tools in a protective-put strategy .
[ { "type": "R", "before": "time series of", "after": "spectral properties of the dynamics of", "start_char_pos": 70, "end_char_pos": 84 }, { "type": "A", "before": null, "after": "financial crisis", "start_char_pos": 305, "end_char_pos": 305 }, { "type": "R", "before": ". The simple intuitive idea that we rely upon is that correlation and volatility are like the heartbeat of the financial market: when correlations between asset prices increase or develop abnormal patterns, when volatility starts to increase, then a crisis event might be around the corner. The financial crisis indicators that we have built are of", "after": "and they are of", "start_char_pos": 415, "end_char_pos": 763 }, { "type": "R", "before": ". As reference distributions we use the theoretical Marchenko Pastur distribution and numerically computed ones using a random matrix of the same size as the empirical rolling matrix and constituted of Gaussian or Student-t coefficients with some simulated correlations", "after": "representing either a calm or agitated market", "start_char_pos": 986, "end_char_pos": 1255 }, { "type": "A", "before": null, "after": ". The predictive power of the financial crisis indicators in this framework is then demonstrated, in particular by using them as decision-making tools in a protective-put strategy", "start_char_pos": 1804, "end_char_pos": 1804 } ]
[ 0, 97, 300, 416, 705, 774, 987, 1257, 1477, 1688 ]
1506.00937
1
This paper provides a framework for modeling the financial system with multiple illiquid assets during a crisis. This extends the network model of Cifuentes, Shin Ferrucci (2005) that incorporates a single asset with fire sales. We prove sufficient conditions for the existence and uniqueness of equilibrium clearing payments and liquidation prices. We prove sufficient conditions for the existence of an equilibrium liquidation strategy with corresponding clearing payments and liquidation prices. The number of defaults and wealth of the real economy under different investment and liquidation strategies are analyzed in several comparative case studies. Notably we investigate the effects that diversification has on the health of the financial system during a crisis .
This paper provides a framework for modeling the financial system with multiple illiquid assets during a crisis. This work generalizes the paper by Amini, Filipovic and Minca (2016) by allowing for differing liquidation strategies. The main result is a proof of sufficient conditions for the existence of an equilibrium liquidation strategy with corresponding unique clearing payments and liquidation prices. An algorithm for computing the maximal clearing payments and prices is provided .
[ { "type": "D", "before": "extends the network model of Cifuentes, Shin", "after": null, "start_char_pos": 118, "end_char_pos": 162 }, { "type": "R", "before": "Ferrucci (2005) that incorporates a single asset with fire sales. We prove sufficient conditions for the existence and uniqueness of equilibrium clearing payments and liquidation prices. We prove", "after": "work generalizes the paper by Amini, Filipovic and Minca (2016) by allowing for differing liquidation strategies. The main result is a proof of", "start_char_pos": 163, "end_char_pos": 358 }, { "type": "A", "before": null, "after": "unique", "start_char_pos": 457, "end_char_pos": 457 }, { "type": "R", "before": "The number of defaults and wealth of the real economy under different investment and liquidation strategies are analyzed in several comparative case studies. Notably we investigate the effects that diversification has on the health of the financial system during a crisis", "after": "An algorithm for computing the maximal clearing payments and prices is provided", "start_char_pos": 500, "end_char_pos": 771 } ]
[ 0, 112, 228, 349, 499, 657 ]
1506.01089
1
Recent experiments showing scaling of the intrachromosomal contact probability, P(s)\sim s^{-1} with the genomic distance s, are interpreted to mean a self-similar fractal-like URLanization. However, scaling of P(s) varies URLanisms, requiring an explanation. We illustrate that dynamical arrest in a highly confined space as a discriminating marker for URLanization, by modeling chromosome inside a nucleus as a self-avoiding homopolymer confined to a sphere of varying sizes. Brownian dynamics simulations show that the chain dynamics slows down as the polymer volume fraction (\phi) inside the confinement approaches a critical value \phi_c. Using finite size scaling analysis, we determine \phi_c^{\infty}\approx 0.44 for a sufficiently long polymer (N\gg 1) . Our study shows that the onset of glassy dynamics is the reason for the formation of URLanization in human chromosomes (N\approx 3\times 10^9, \phi\gtrsim\phi_c^{\infty}), whereas chromosomes of budding yeast (N\approx 1.2\times 10 ^7 , \phi<\phi_c^{\infty}) are equilibrated with no clear signature of URLanization.
Recent experiments showing scaling of the intrachromosomal contact probability, P(s)\sim s^{-1} with the genomic distance s, are interpreted to mean a self-similar fractal-like URLanization. However, scaling of P(s) varies URLanisms, requiring an explanation. We illustrate dynamical arrest in a highly confined space as a discriminating marker for URLanization, by modeling chromosome inside a nucleus as a homopolymer confined to a sphere of varying sizes. Brownian dynamics simulations show that the chain dynamics slows down as the polymer volume fraction (\phi) inside the confinement approaches a critical value \phi_c. The universal value of \phi_c^{\infty}\approx 0.44 for a sufficiently long polymer (N\gg 1) allows us to discuss genome dynamics using \phi as a single parameter . Our study shows that the onset of glassy dynamics is the reason for the segregated URLanization in human (N\approx 3\times 10^9, \phi\gtrsim\phi_c^{\infty}), whereas chromosomes of budding yeast (N\approx 10 ^8 , \phi<\phi_c^{\infty}) are equilibrated with no clear signature of URLanization.
[ { "type": "D", "before": "that", "after": null, "start_char_pos": 274, "end_char_pos": 278 }, { "type": "D", "before": "self-avoiding", "after": null, "start_char_pos": 413, "end_char_pos": 426 }, { "type": "R", "before": "Using finite size scaling analysis, we determine", "after": "The universal value of", "start_char_pos": 645, "end_char_pos": 693 }, { "type": "A", "before": null, "after": "allows us to discuss genome dynamics using \\phi as a single parameter", "start_char_pos": 763, "end_char_pos": 763 }, { "type": "R", "before": "formation of", "after": "segregated", "start_char_pos": 838, "end_char_pos": 850 }, { "type": "D", "before": "chromosomes", "after": null, "start_char_pos": 873, "end_char_pos": 884 }, { "type": "D", "before": "1.2\\times", "after": null, "start_char_pos": 985, "end_char_pos": 994 }, { "type": "R", "before": "^7", "after": "^8", "start_char_pos": 998, "end_char_pos": 1000 } ]
[ 0, 190, 259, 477, 644, 765 ]
1506.01467
1
This paper includes a proof of well-posedness of an initial-boundary value problem involving a system of non-local parabolic PDE which naturally arises in the study of derivative pricing in a generalized market model which is known as a semi-Markov modulated GBM model . We study the well-posedness of the problem via a Volterra integral equation of second kind. A probabilistic approach, in particular the method of conditioning on stopping times is used for showing uniqueness.
This paper includes a proof of well-posedness of an initial-boundary value problem involving a system of degenerate non-local parabolic PDE which naturally arises in the study of derivative pricing in a generalized market model . In a semi-Markov modulated GBM model the locally risk minimizing price function satisfies a special case of this problem . We study the well-posedness of the problem via a Volterra integral equation of second kind. A probabilistic approach, in particular the method of conditioning on stopping times is used for showing uniqueness.
[ { "type": "A", "before": null, "after": "degenerate", "start_char_pos": 105, "end_char_pos": 105 }, { "type": "R", "before": "which is known as", "after": ". In", "start_char_pos": 218, "end_char_pos": 235 }, { "type": "A", "before": null, "after": "the locally risk minimizing price function satisfies a special case of this problem", "start_char_pos": 270, "end_char_pos": 270 } ]
[ 0, 272, 364 ]
1506.01837
1
With the finite signed Borel measures on the non-negative real time axis representing deterministic cash flows , it is shown that the only arbitrage-free price functional on these cash flows that fulfills some additional mild , economically motivated, requirements is the integral of the unit zero-coupon bond prices with respect to the measures that represent the cash flows . For probability measures, this is a Choquet representation, where the Dirac measures, as unit zero-coupon bonds, are the extreme points. Dropping one of the requirements, the Lebesgue decomposition is used to construct counterexamples, where the Choquet price formula does not hold despite of an arbitrage-free market model. The concept is then extended to deterministic streams of assets and currencies in general, yielding a valuation principle for forward markets. Under mild assumptions, it is shown that a foreign cash flow's worth in local currency is identical to the value of the cash flow in local currency for which the Radon-Nikodym derivative with respect to the foreign cash flow is the forward FX rate . While the derived valuation principles are ubiquitously used in theory and practice, they are usually neither stated in this general way, nor do derivations of them exhibit the here presented mathematical rigor, which is based on the definition of a market as an equivalence relation of exchangeability .
In a market of deterministic cash flows, given as an additive, symmetric relation of exchangeability on the finite signed Borel measures on the non-negative real time axis , it is shown that the only arbitrage-free price functional that fulfills some additional mild requirements is the integral of the unit zero-coupon bond prices with respect to the payment measures . For probability measures, this is a Choquet representation, where the Dirac measures, as unit zero-coupon bonds, are the extreme points. Dropping one of the requirements, the Lebesgue decomposition is used to construct counterexamples, where the Choquet price formula does not hold despite of an arbitrage-free market model. The concept is then extended to deterministic streams of assets and currencies in general, yielding a valuation principle for forward markets. Under mild assumptions, it is shown that a foreign cash flow's worth in local currency is identical to the value of the cash flow in local currency for which the Radon-Nikodym derivative with respect to the foreign cash flow is the forward FX rate .
[ { "type": "R", "before": "With", "after": "In a market of deterministic cash flows, given as an additive, symmetric relation of exchangeability on", "start_char_pos": 0, "end_char_pos": 4 }, { "type": "D", "before": "representing deterministic cash flows", "after": null, "start_char_pos": 73, "end_char_pos": 110 }, { "type": "D", "before": "on these cash flows", "after": null, "start_char_pos": 171, "end_char_pos": 190 }, { "type": "D", "before": ", economically motivated,", "after": null, "start_char_pos": 226, "end_char_pos": 251 }, { "type": "R", "before": "measures that represent the cash flows", "after": "payment measures", "start_char_pos": 337, "end_char_pos": 375 }, { "type": "D", "before": ". While the derived valuation principles are ubiquitously used in theory and practice, they are usually neither stated in this general way, nor do derivations of them exhibit the here presented mathematical rigor, which is based on the definition of a market as an equivalence relation of exchangeability", "after": null, "start_char_pos": 1094, "end_char_pos": 1398 } ]
[ 0, 377, 514, 702, 845, 1095 ]
1506.02484
1
Nonlinear analysis of the phase-locked loop (PLL) circuits is a challenging task . In classic engineering literature simplified mathematical models and simulation are widely used for its study. In this work the limitations of classic engineering analysis are demonstrated , e.g. , hidden oscillations cannot be found by simulation. It is shown that simple simulation in SPICE and MATLAB may lead to wrong conclusions concerning the operability of PLL-based circuits .
Nonlinear analysis of the phase-locked loop (PLL) based circuits is a challenging task , thus in modern engineering literature simplified mathematical models and simulation are widely used for their study. In this work the limitations of numerical approach is discussed and it is shown that , e.g. hidden oscillations may not be found by simulation. Corresponding examples in SPICE and MatLab, which may lead to wrong conclusions concerning the operability of PLL-based circuits , are presented .
[ { "type": "A", "before": null, "after": "based", "start_char_pos": 50, "end_char_pos": 50 }, { "type": "R", "before": ". In classic", "after": ", thus in modern", "start_char_pos": 82, "end_char_pos": 94 }, { "type": "R", "before": "its", "after": "their", "start_char_pos": 184, "end_char_pos": 187 }, { "type": "R", "before": "classic engineering analysis are demonstrated", "after": "numerical approach is discussed and it is shown that", "start_char_pos": 227, "end_char_pos": 272 }, { "type": "R", "before": ", hidden oscillations cannot", "after": "hidden oscillations may not", "start_char_pos": 280, "end_char_pos": 308 }, { "type": "R", "before": "It is shown that simple simulation", "after": "Corresponding examples", "start_char_pos": 333, "end_char_pos": 367 }, { "type": "R", "before": "MATLAB", "after": "MatLab, which", "start_char_pos": 381, "end_char_pos": 387 }, { "type": "A", "before": null, "after": ", are presented", "start_char_pos": 467, "end_char_pos": 467 } ]
[ 0, 83, 194, 332 ]
1506.02802
1
When trading incurs proportional costs, leverage can scale an asset's return only up to a maximum multiple, which is sensitive to its volatility and liquidity. In a model with one safe and one risky asset, with constant investment opportunities and proportional costs, we find strategies that maximize long term return given average volatility. As leverage increases, rising rebalancing costs imply declining Sharpe ratios. Beyond a critical level, even returns decline. Holding the Sharpe ratio constant, higher volatility leads to superior returns through lower costs . For funds replicating benchmark multiples, such as leveraged ETFs, we identify the strategies that optimally trade off alpha against tracking error, and find that they depend on the target multiple and the benchmark's liquidity, but not its volatility .
When trading incurs proportional costs, leverage can scale an asset's return only up to a maximum multiple, which is sensitive to its volatility and liquidity. In a model with one safe and one risky asset, with constant investment opportunities and proportional costs, we find strategies that maximize long term return given average volatility. As leverage increases, rising rebalancing costs imply declining Sharpe ratios. Beyond a critical level, even returns decline. Holding the Sharpe ratio constant, higher asset volatility leads to superior returns through lower costs .
[ { "type": "A", "before": null, "after": "asset", "start_char_pos": 513, "end_char_pos": 513 }, { "type": "D", "before": ". For funds replicating benchmark multiples, such as leveraged ETFs, we identify the strategies that optimally trade off alpha against tracking error, and find that they depend on the target multiple and the benchmark's liquidity, but not its volatility", "after": null, "start_char_pos": 571, "end_char_pos": 824 } ]
[ 0, 159, 344, 423, 470, 572 ]
1506.03172
1
We propose a scheme for computing Maximum Likelihood Estimators for Log-Linear modelsusing reaction networks, and prove its correctness. Our scheme exploits the toric structure of equilibrium points of reaction networks . This allows an efficient encoding of the problem, and reveals how reaction networks are naturally suited to statistical inference tasks. Our scheme is relevant to molecular programming, an emerging discipline that views molecular interactions as computational primitives for the synthesis of sophisticated behaviors. In addition, such a scheme may provide a template to understand how biochemical signaling pathways integrate extensive information about their environment and history.
We propose a novel molecular computing scheme for statistical inference. We focus on the much-studied statistical inference problem of computing maximum likelihood estimators for log-linear models. Our scheme takes log-linear models to reaction systems, and the observed data to initial conditions, so that the corresponding equilibrium of each reaction system encodes the corresponding maximum likelihood estimator. The main idea is to exploit the coincidence between thermodynamic entropy and statistical entropy. We map a Maximum Entropy characterization of the maximum likelihood estimator onto a Maximum Entropy characterization of the equilibrium concentrations for the reaction system . This allows for an efficient encoding of the problem, and reveals that reaction networks are superbly suited to statistical inference tasks. Such a scheme may also provide a template to understanding how in vivo biochemical signaling pathways integrate extensive information about their environment and history.
[ { "type": "R", "before": "scheme for computing Maximum Likelihood Estimators for Log-Linear modelsusing reaction networks, and prove its correctness. Our scheme exploits the toric structure of equilibrium points of reaction networks", "after": "novel molecular computing scheme for statistical inference. We focus on the much-studied statistical inference problem of computing maximum likelihood estimators for log-linear models. Our scheme takes log-linear models to reaction systems, and the observed data to initial conditions, so that the corresponding equilibrium of each reaction system encodes the corresponding maximum likelihood estimator. The main idea is to exploit the coincidence between thermodynamic entropy and statistical entropy. We map a Maximum Entropy characterization of the maximum likelihood estimator onto a Maximum Entropy characterization of the equilibrium concentrations for the reaction system", "start_char_pos": 13, "end_char_pos": 219 }, { "type": "A", "before": null, "after": "for", "start_char_pos": 234, "end_char_pos": 234 }, { "type": "R", "before": "how", "after": "that", "start_char_pos": 285, "end_char_pos": 288 }, { "type": "R", "before": "naturally", "after": "superbly", "start_char_pos": 311, "end_char_pos": 320 }, { "type": "R", "before": "Our scheme is relevant to molecular programming, an emerging discipline that views molecular interactions as computational primitives for the synthesis of sophisticated behaviors. In addition, such", "after": "Such", "start_char_pos": 360, "end_char_pos": 557 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 571, "end_char_pos": 571 }, { "type": "R", "before": "understand how", "after": "understanding how in vivo", "start_char_pos": 594, "end_char_pos": 608 } ]
[ 0, 136, 221, 359, 539 ]
1506.03400
1
For much of the last three decades Monte Carlo-simulation methods have been the standard approach for accurately calculating the cyclization probability, J, or J factor, for DNA models having sequence-dependent bends or inhomogeneous bending flexibility. Within the last ten years, however, approaches based on harmonic analysis of semi-flexible polymer models have been introduced, which offer much greater computational efficiency than Monte Carlo techniques. These methods consider the ensemble of molecular conformations in terms of harmonic fluctuations about a well-defined elastic-energy minimum. In the case of computed values of the J factor, deviations of the harmonic approximation from the exact value of J as a function of DNA length have not been characterized. Using a recent, numerically exact method that accounts for both anharmonic and harmonic contributions to J for wormlike chains of arbitrary size, we report here the apparent error that results from neglecting anharmonic behavior. We find that the error in J arising from the harmonic approximation is generally negligible , amounting to free energies less than the thermal energy k_B T , for wormlike chains having contour lengths less than four times the persistence length . For larger systems, however, the deviations between harmonic and exact values increase approximately linearly with size.
For much of the last three decades Monte Carlo-simulation methods have been the standard approach for accurately calculating the cyclization probability, J, or J factor, for DNA models having sequence-dependent bends or inhomogeneous bending flexibility. Within the last ten years, however, approaches based on harmonic analysis of semi-flexible polymer models have been introduced, which offer much greater computational efficiency than Monte Carlo techniques. These methods consider the ensemble of molecular conformations in terms of harmonic fluctuations about a well-defined elastic-energy minimum. However, the harmonic approximation is only applicable for small systems, because the accessible conformation space of larger systems is increasingly dominated by anharmonic contributions. In the case of computed values of the J factor, deviations of the harmonic approximation from the exact value of J as a function of DNA length have not been characterized. Using a recent, numerically exact method that accounts for both anharmonic and harmonic contributions to J for wormlike chains of arbitrary size, we report here the apparent error that results from neglecting anharmonic behavior. For wormlike chains having contour lengths less than four times the persistence length the error in J arising from the harmonic approximation is generally small , amounting to free energies less than the thermal energy , k_B T . For larger systems, however, the deviations between harmonic and exact J values increase approximately linearly with size.
[ { "type": "A", "before": null, "after": "However, the harmonic approximation is only applicable for small systems, because the accessible conformation space of larger systems is increasingly dominated by anharmonic contributions.", "start_char_pos": 604, "end_char_pos": 604 }, { "type": "R", "before": "We find that the", "after": "For wormlike chains having contour lengths less than four times the persistence length the", "start_char_pos": 1007, "end_char_pos": 1023 }, { "type": "R", "before": "negligible", "after": "small", "start_char_pos": 1088, "end_char_pos": 1098 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1157, "end_char_pos": 1157 }, { "type": "D", "before": ", for wormlike chains having contour lengths less than four times the persistence length", "after": null, "start_char_pos": 1164, "end_char_pos": 1252 }, { "type": "A", "before": null, "after": "J", "start_char_pos": 1326, "end_char_pos": 1326 } ]
[ 0, 254, 461, 603, 776, 1006, 1254 ]
1506.03414
1
The present study asks how cooperation and consequently structure can emerge in many different evolutionary contexts. Cooperation , here, is a persistent behavioural pattern of individual entities pooling and sharing resources. Examples are: individual cells forming multicellular systems whose various parts pool and share nutrients; pack animals pooling and sharing prey; families firms, or modern nation states pooling and sharing financial resources. In these examples, each atomistic decision, at a point in time, of the better-off entity to cooperate poses a puzzle: the better-off entity will book an immediate net loss -- why should it cooperate? For each example, specific explanations have been put forward. Here we point out a very general mechanism -- a sufficient null model -- whereby cooperation can evolve. The mechanism is based the following insight : natural growth processes tend to be multiplicative . In multiplicative growth, ergodicity is broken in such a way that fluctuations have a net-negative effect on the time-average growth rate , although they have no effect on the growth rate of the ensemble average. Pooling and sharing resources reduces fluctuations, which leaves ensemble averages unchanged but -- contrary to common perception -- increases the time-average growth rate for each cooperator .
Cooperation is a persistent behavioral pattern of entities pooling and sharing resources. Its ubiquity in nature poses a conundrum. Whenever two entities cooperate, one must willingly relinquish something of value to the other. Why is this apparent altruism favored in evolution? Classical solutions assume a net fitness gain in a cooperative transaction which, through reciprocity or relatedness, finds its way back from recipient to donor. We seek the source of this fitness gain. Our analysis rests on the insight that evolutionary processes are typically multiplicative and noisy. Fluctuations have a net negative effect on the long-time growth rate of resources but no effect on the growth rate of their expectation value. This is an example of non-ergodicity. By reducing the amplitude of fluctuations, pooling and sharing increases the long-time growth rate for cooperating entities, meaning that cooperators outgrow similar non-cooperators. We identify this increase in growth rate as the net fitness gain, consistent with the concept of geometric mean fitness in the biological literature. This constitutes a fundamental mechanism for the evolution of cooperation. Its minimal assumptions make it a candidate explanation of cooperation in settings too simple for other fitness gains, such as emergent function and specialization, to be probable. One such example is the transition from single cells to early multicellular life .
[ { "type": "R", "before": "The present study asks how cooperation and consequently structure can emerge in many different evolutionary contexts. Cooperation , here,", "after": "Cooperation", "start_char_pos": 0, "end_char_pos": 137 }, { "type": "R", "before": "behavioural pattern of individual", "after": "behavioral pattern of", "start_char_pos": 154, "end_char_pos": 187 }, { "type": "R", "before": "Examples are: individual cells forming multicellular systems whose various parts pool and share nutrients; pack animals pooling and sharing prey; families firms, or modern nation states pooling and sharing financial resources. In these examples, each atomistic decision, at a point in time, of the better-off entity to cooperate poses a puzzle: the better-off entity will book an immediate net loss -- why should it cooperate? For each example, specific explanations have been put forward. Here we point out a very general mechanism -- a sufficient null model -- whereby cooperation can evolve. The mechanism is based the following insight : natural growth processes tend to be multiplicative . In multiplicative growth, ergodicity is broken in such a way that fluctuations have a net-negative", "after": "Its ubiquity in nature poses a conundrum. Whenever two entities cooperate, one must willingly relinquish something of value to the other. Why is this apparent altruism favored in evolution? Classical solutions assume a net fitness gain in a cooperative transaction which, through reciprocity or relatedness, finds its way back from recipient to donor. We seek the source of this fitness gain. Our analysis rests on the insight that evolutionary processes are typically multiplicative and noisy. Fluctuations have a net negative", "start_char_pos": 228, "end_char_pos": 1021 }, { "type": "R", "before": "time-average growth rate , although they have", "after": "long-time growth rate of resources but", "start_char_pos": 1036, "end_char_pos": 1081 }, { "type": "R", "before": "the ensemble average. Pooling and sharing resources reduces fluctuations, which leaves ensemble averages unchanged but -- contrary to common perception -- increases the time-average", "after": "their expectation value. This is an example of non-ergodicity. By reducing the amplitude of fluctuations, pooling and sharing increases the long-time", "start_char_pos": 1114, "end_char_pos": 1295 }, { "type": "R", "before": "each cooperator", "after": "cooperating entities, meaning that cooperators outgrow similar non-cooperators. We identify this increase in growth rate as the net fitness gain, consistent with the concept of geometric mean fitness in the biological literature. This constitutes a fundamental mechanism for the evolution of cooperation. Its minimal assumptions make it a candidate explanation of cooperation in settings too simple for other fitness gains, such as emergent function and specialization, to be probable. One such example is the transition from single cells to early multicellular life", "start_char_pos": 1312, "end_char_pos": 1327 } ]
[ 0, 117, 227, 334, 373, 454, 654, 717, 822, 922, 1135 ]
1506.04663
1
Counterparty risk denotes the risk that a party defaults in a bilateral contract. This risk not only depends on the two parties involved, but also on the risk from various other contracts each of these parties hold . In rather informal markets, such as the OTC (over-the-counter) derivative market, institutions only report their aggregated quarterly risk exposure, but no details about their counterparties. Hence, little is known about the diversification of counterparty risk. In this paper, we reconstruct the weighted and time-dependent network of counterparty risk in the OTC derivative market of the United States between 1998 and 2012. To proxy unknown bilateral exposures, we first study the co-occurrence patterns of institutions based on their quarterly activity and ranking in the official report. The network obtained this way is further analysed by a weighted k-core decomposition, to reveal a core-periphery structure. This allows us to compare the activity-based ranking with a topology-based ranking, to identify the most important institutions and their mutual dependencies. We also analyse correlations in these activities, to show strong similarities in the behavior of the core institutions. Our analysis clearly demonstrates the clustering of counterparty risk in a small set of about a dozen US banks. This not only increases the default risk of the central institutions, but also the default risk of peripheral institutions which have contracts with the central ones. Hence, all institutions indirectly have to bear (part of) the counterparty risk of all others, which needs to be better reflected in the price of OTC derivatives.
Counterparty risk denotes the risk that a party defaults in a bilateral contract. This risk not only depends on the two parties involved, but also on the risk from various other contracts each of these parties holds . In rather informal markets, such as the OTC (over-the-counter) derivative market, institutions only report their aggregated quarterly risk exposure, but no details about their counterparties. Hence, little is known about the diversification of counterparty risk. In this paper, we reconstruct the weighted and time-dependent network of counterparty risk in the OTC derivatives market of the United States between 1998 and 2012. To proxy unknown bilateral exposures, we first study the co-occurrence patterns of institutions based on their quarterly activity and ranking in the official report. The network obtained this way is further analysed by a weighted k-core decomposition, to reveal a core-periphery structure. This allows us to compare the activity-based ranking with a topology-based ranking, to identify the most important institutions and their mutual dependencies. We also analyse correlations in these activities, to show strong similarities in the behavior of the core institutions. Our analysis clearly demonstrates the clustering of counterparty risk in a small set of about a dozen US banks. This not only increases the default risk of the central institutions, but also the default risk of peripheral institutions which have contracts with the central ones. Hence, all institutions indirectly have to bear (part of) the counterparty risk of all others, which needs to be better reflected in the price of OTC derivatives.
[ { "type": "R", "before": "hold", "after": "holds", "start_char_pos": 210, "end_char_pos": 214 }, { "type": "R", "before": "derivative", "after": "derivatives", "start_char_pos": 582, "end_char_pos": 592 } ]
[ 0, 81, 216, 408, 479, 643, 809, 933, 1092, 1212, 1324, 1491 ]
1506.05157
1
With steadily increasing parallelism for high-performance architectures, simulations requiring a good strong scalability are prone to be limited in scalability with standard spatial-decomposition strategies at a certain amount of parallel processors. This can be a show-stopper if the simulation results have to be computed with wallclock time restrictions or as fast as possible . Here, the time-dimension is the only one left for parallelisation and we focus on Parareal as one particular parallelisationin-time method. We present a software approach for making Parareal parallelisation transparent for application developers, hence allowing fast prototyping for Parareal. Further, we introduce a decentralized Parareal which results in autonomous simulation instances which only require communicating with the previous and next simulation instances . This concept is evaluated by solving the rotational shallow water equations parallel-in-time: We provide speedup benchmarks and an in-depth analysis of our results based on state-plots and a performance model. This allows us to show the applicability of the Parareal approach with the rotational shallow water equations and also to evaluate the limitations of Parareal .
With steadily increasing parallelism for high-performance architectures, simulations requiring a good strong scalability are prone to be limited in scalability with standard spatial-decomposition strategies at a certain amount of parallel processors. This can be a show-stopper if the simulation results have to be computed with wallclock time restrictions (e.g.\,for weather forecasts) or as fast as possible (e.g. for urgent computing). Here, the time-dimension is the only one left for parallelization and we focus on Parareal as one particular parallelization-in-time method. We discuss a software approach for making Parareal parallelization transparent for application developers, hence allowing fast prototyping for Parareal. Further, we introduce a decentralized Parareal which results in autonomous simulation instances which only require communicating with the previous and next simulation instances , hence with strong locality for communication . This concept is evaluated by a prototypical solver for the rotational shallow-water equations which we use as a representative black-box solver .
[ { "type": "A", "before": null, "after": "(e.g.\\,for weather forecasts)", "start_char_pos": 357, "end_char_pos": 357 }, { "type": "R", "before": ".", "after": "(e.g. for urgent computing).", "start_char_pos": 381, "end_char_pos": 382 }, { "type": "R", "before": "parallelisation", "after": "parallelization", "start_char_pos": 433, "end_char_pos": 448 }, { "type": "R", "before": "parallelisationin-time", "after": "parallelization-in-time", "start_char_pos": 492, "end_char_pos": 514 }, { "type": "R", "before": "present", "after": "discuss", "start_char_pos": 526, "end_char_pos": 533 }, { "type": "R", "before": "parallelisation", "after": "parallelization", "start_char_pos": 574, "end_char_pos": 589 }, { "type": "A", "before": null, "after": ", hence with strong locality for communication", "start_char_pos": 853, "end_char_pos": 853 }, { "type": "R", "before": "solving the rotational shallow water equations parallel-in-time: We provide speedup benchmarks and an in-depth analysis of our results based on state-plots and a performance model. This allows us to show the applicability of the Parareal approach with the rotational shallow water equations and also to evaluate the limitations of Parareal", "after": "a prototypical solver for the rotational shallow-water equations which we use as a representative black-box solver", "start_char_pos": 885, "end_char_pos": 1224 } ]
[ 0, 250, 382, 522, 675, 855, 1065 ]
1506.05244
1
Epigenetic processes such as DNA methylation are increasingly recognised for their fundamental role in diseases such as cancer. Changes in DNA methylation patterns reflect environmental risk factors, and are amongst the first pre-disease changes in cancer. Hence, DNA methylation appears highly promising as a basis on which to develop minimally invasive, DNA-based measures of disease risk and prognosis. DNA methylation is a gene-regulatory pattern, and hence provides a means by which to assess genomic regulatory interactions. Network models are a natural way to represent and analyse groups of such interactions. The utility of network models also increases as the quantity of data and number of variables increase, as continues to happen in the genome-wide era of biomedical science. We present a DNA methylation-based measure of genomic interaction and association , and we show how to use it to infer prognostic genomic networks. We show how to identify prognostic biomarkers from such networks, which we term `network community oncomarkers'. These findings represent new statistical tools for use in the biomedical sciences .
In this paper we propose network methodology to infer prognostic cancer biomarkers, based on the epigenetic pattern DNA methylation. Epigenetic processes such as DNA methylation reflect environmental risk factors, and are increasingly recognised for their fundamental role in diseases such as cancer. DNA methylation is a gene-regulatory pattern, and hence provides a means by which to assess genomic regulatory interactions. Network models are a natural way to represent and analyse groups of such interactions. The utility of network models also increases as the quantity of data and number of variables increase, making them increasingly relevant to large-scale genomic studies. We propose methodology to infer prognostic genomic networks from a DNA methylation-based measure of genomic interaction and association . We then show how to identify prognostic biomarkers from such networks, which we term `network community oncomarkers'. We illustrate the power of our proposed methodology in the context of a large publicly available breast cancer data-set .
[ { "type": "A", "before": null, "after": "In this paper we propose network methodology to infer prognostic cancer biomarkers, based on the epigenetic pattern DNA methylation.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "reflect environmental risk factors, and", "start_char_pos": 46, "end_char_pos": 46 }, { "type": "R", "before": "Changes in DNA methylation patterns reflect environmental risk factors, and are amongst the first pre-disease changes in cancer. Hence, DNA methylation appears highly promising as a basis on which to develop minimally invasive, DNA-based measures of disease risk and prognosis. DNA methylation", "after": "DNA methylation", "start_char_pos": 130, "end_char_pos": 423 }, { "type": "R", "before": "as continues to happen in the genome-wide era of biomedical science. We present", "after": "making them increasingly relevant to large-scale genomic studies. We propose methodology to infer prognostic genomic networks from", "start_char_pos": 723, "end_char_pos": 802 }, { "type": "R", "before": ", and we show how to use it to infer prognostic genomic networks. We", "after": ". We then", "start_char_pos": 874, "end_char_pos": 942 }, { "type": "R", "before": "These findings represent new statistical tools for use in the biomedical sciences", "after": "We illustrate the power of our proposed methodology in the context of a large publicly available breast cancer data-set", "start_char_pos": 1053, "end_char_pos": 1134 } ]
[ 0, 129, 258, 407, 532, 619, 791, 939, 1052 ]
1506.05352
1
The prevalence of neutral mutations implies that biological systems typically have many more genotypes than phenotypes. But can the way that genotypes are distributed over phenotypes determine evolutionary outcomes? Answering such questions is difficult because the number of genotypes can be hyper-astronomically large. By solving the genotype-phentoype (GP) map for RNA secondary structure for systems up to length L=126 nucleotides (where the set of all possible RNA strands would weigh more than the mass of the visible universe) we show that the GP map strongly constrains the evolution of non-coding RNA (ncRNA). Remarkably, simple random sampling over genotypes accurately predicts the distribution of properties such as the mutational robustness or the number of stems per secondary structure found in naturally occurring ncRNA . Since we ignore natural selection, this close correspondence with the mapping suggests that structures allowing for functionality are easily discovered, despite the enormous size of the genetic spaces. The mapping is extremely biased: the majority of genotypes map to an exponentially small portion of the morphospace of all biophysically possible structures. Such strong constraints provide a non-adaptive explanation for the convergent evolution of structures such as the hammerhead ribozyme. ncRNA presents a particularly clear example of bias in the arrival of variation strongly shaping evolutionary outcomes .
The prevalence of neutral mutations implies that biological systems typically have many more genotypes than phenotypes. But can the way that genotypes are distributed over phenotypes determine evolutionary outcomes? Answering such questions is difficult because the number of genotypes can be hyper-astronomically large. By solving the genotype-phenotype (GP) map for RNA secondary structure for systems up to length L=126 nucleotides (where the set of all possible RNA strands would weigh more than the mass of the visible universe) we show that the GP map strongly constrains the evolution of non-coding RNA (ncRNA). Simple random sampling over genotypes predicts the distribution of properties such as the mutational robustness or the number of stems per secondary structure found in naturally occurring ncRNA with surprising accuracy . Since we ignore natural selection, this strikingly close correspondence with the mapping suggests that structures allowing for functionality are easily discovered, despite the enormous size of the genetic spaces. The mapping is extremely biased: the majority of genotypes map to an exponentially small portion of the morphospace of all biophysically possible structures. Such strong constraints provide a non-adaptive explanation for the convergent evolution of structures such as the hammerhead ribozyme. These results presents a particularly clear example of bias in the arrival of variation strongly shaping evolutionary outcomes and may be relevant to Mayr's distinction between proximate and ultimate causes in evolutionary biology .
[ { "type": "R", "before": "genotype-phentoype", "after": "genotype-phenotype", "start_char_pos": 336, "end_char_pos": 354 }, { "type": "R", "before": "Remarkably, simple", "after": "Simple", "start_char_pos": 619, "end_char_pos": 637 }, { "type": "D", "before": "accurately", "after": null, "start_char_pos": 669, "end_char_pos": 679 }, { "type": "A", "before": null, "after": "with surprising accuracy", "start_char_pos": 836, "end_char_pos": 836 }, { "type": "A", "before": null, "after": "strikingly", "start_char_pos": 879, "end_char_pos": 879 }, { "type": "R", "before": "ncRNA", "after": "These results", "start_char_pos": 1335, "end_char_pos": 1340 }, { "type": "A", "before": null, "after": "and may be relevant to Mayr's distinction between proximate and ultimate causes in evolutionary biology", "start_char_pos": 1454, "end_char_pos": 1454 } ]
[ 0, 119, 215, 320, 618, 1041, 1199 ]
1506.05583
1
We present an analytical treatment of a genetic switch model consisting of two mutually inhibiting genes operating without cooperative binding of the corresponding transcription factors. Previous studies have numerically shown that these systems can exhibit bimodal dynamics without possessing two stable fixed points in the deterministic rate equations . We analytically show that bimodality is induced by the noise and reveal the critical repression strength which controls a transition between the bimodal and non-bimodal regimes. Moreover, we show that the mean switching time between bimodal states scales polynomially in the system size . These results, independent of the model under study, reveal essential differences between these systems and systems with cooperative binding .
We present an analytical treatment of a genetic switch model consisting of two mutually inhibiting genes operating without cooperative binding of the corresponding transcription factors. Previous studies have numerically shown that these systems can exhibit bimodal dynamics without possessing two stable fixed points at the deterministic level . We analytically show that bimodality is induced by the noise and find the critical repression strength that controls a transition between the bimodal and non-bimodal regimes. We also identify characteristic polynomial scaling laws of the mean switching time between bimodal states . These results, independent of the model under study, reveal essential differences between these systems and systems with cooperative binding , where there is no critical threshold for bimodality and the mean switching time scales exponentially with the system size .
[ { "type": "R", "before": "in the deterministic rate equations", "after": "at the deterministic level", "start_char_pos": 318, "end_char_pos": 353 }, { "type": "R", "before": "reveal", "after": "find", "start_char_pos": 421, "end_char_pos": 427 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 461, "end_char_pos": 466 }, { "type": "R", "before": "Moreover, we show that", "after": "We also identify characteristic polynomial scaling laws of", "start_char_pos": 534, "end_char_pos": 556 }, { "type": "D", "before": "scales polynomially in the system size", "after": null, "start_char_pos": 604, "end_char_pos": 642 }, { "type": "A", "before": null, "after": ", where there is no critical threshold for bimodality and the mean switching time scales exponentially with the system size", "start_char_pos": 786, "end_char_pos": 786 } ]
[ 0, 186, 355, 533, 644 ]
1506.05905
1
Comparative analyses of protein-protein interaction networks play important roles in the understanding of biological processes. The growing enormity of available data on the networks becomes a computational challenge for the conventional alignment algorithms. Quantum algorithms generally provide efficiency over their classical counterparts in solving various problems. One of such algorithms is the quantum phase estimation algorithm which generates the principal eigenvector of a stochastic matrix with probability one. Using this property, in this article, we describe a quantum computing approach for the alignment of protein-protein interaction networks by following the classical algorithm IsoRank which uses the principal eigenvector of the stochastic matrix representing the Kronecker product of the normalized adjacency matrices of networks for the pairwise alignment. We also present a measurement scheme to efficiently procure the alignment from the output state of the phase estimation algorithm where the eigenvector is encoded as the amplitudes of this state. Furthermore, since the stochastic matrices are generally not Hermitian, we discuss how to approximate such matrices and generate quantum circuits. Finally we discuss the complexity of the quantum approach and show that it is exponentially more efficient .
Comparative analyses of protein-protein interaction networks play important roles in the understanding of biological processes. However, the growing enormity of available data on the networks becomes a computational challenge for the conventional alignment algorithms. Quantum algorithms generally provide greater efficiency over their classical counterparts in solving various problems. One of such algorithms is the quantum phase estimation algorithm which generates the principal eigenvector of a stochastic matrix with probability one. Using the quantum phase estimation algorithm, we introduce a quantum computing approach for the alignment of protein-protein interaction networks by following the classical algorithm IsoRank which uses the principal eigenvector of the stochastic matrix representing the Kronecker product of the normalized adjacency matrices of networks for the pairwise alignment. We also present a greedy quantum measurement scheme to efficiently procure the alignment from the output state of the phase estimation algorithm where the eigenvector is encoded as the amplitudes of this state. The complexity of the quantum approach outperforms the classical running time .
[ { "type": "R", "before": "The", "after": "However, the", "start_char_pos": 128, "end_char_pos": 131 }, { "type": "A", "before": null, "after": "greater", "start_char_pos": 297, "end_char_pos": 297 }, { "type": "R", "before": "this property, in this article, we describe", "after": "the quantum phase estimation algorithm, we introduce", "start_char_pos": 530, "end_char_pos": 573 }, { "type": "A", "before": null, "after": "greedy quantum", "start_char_pos": 898, "end_char_pos": 898 }, { "type": "R", "before": "Furthermore, since the stochastic matrices are generally not Hermitian, we discuss how to approximate such matrices and generate quantum circuits. Finally we discuss the", "after": "The", "start_char_pos": 1077, "end_char_pos": 1246 }, { "type": "R", "before": "and show that it is exponentially more efficient", "after": "outperforms the classical running time", "start_char_pos": 1282, "end_char_pos": 1330 } ]
[ 0, 127, 259, 371, 523, 879, 1076, 1223 ]
1506.06975
1
We consider the problem of approximate Bayesian parameter inference in nonlinear state space models with intractable likelihoods. Sequential Monte Carlo with approximate Bayesian computations (SMC-ABC) is an approach to approximate the likelihood in this type of models. However, such approximations can be noisy and computationally costly which hinders efficient implementations using standard methods based on optimisation and statistical simulation . We propose a novel method based on the combination of Gaussian process optimisation (GPO) and SMC-ABC to create a Laplace approximation of the intractable posterior. The properties of the resulting GPO-ABC method are studied using stochastic volatility (SV) models with both synthetic and real-world data . We conclude that the algorithm enjoys: good accuracy comparable to particle Markov chain Monte Carlo with a significant reduction in computational cost and better robustness to noise in the estimates compared with a gradient-based optimisation algorithm. Finally, we make use of GPO-ABC to estimate the Value-at-Risk for a portfolio using a copula model with SV models for the margins .
We consider the problem of approximate Bayesian parameter inference in non-linear state-space models with intractable likelihoods. Sequential Monte Carlo with approximate Bayesian computations (SMC-ABC) is one approach to approximate the likelihood in this type of models. However, such approximations can be noisy and computationally costly which hinders efficient implementations using standard methods based on optimisation and Monte Carlo methods . We propose a computationally efficient novel method based on the combination of Gaussian process optimisation and SMC-ABC to create a Laplace approximation of the intractable posterior. We exemplify the proposed algorithm for inference in stochastic volatility models with both synthetic and real-world data as well as for estimating the Value-at-Risk for two portfolios using a copula model . We document speed-ups of between one and two orders of magnitude compared to state-of-the-art algorithms for posterior inference .
[ { "type": "R", "before": "nonlinear state space", "after": "non-linear state-space", "start_char_pos": 71, "end_char_pos": 92 }, { "type": "R", "before": "an", "after": "one", "start_char_pos": 205, "end_char_pos": 207 }, { "type": "R", "before": "statistical simulation", "after": "Monte Carlo methods", "start_char_pos": 429, "end_char_pos": 451 }, { "type": "A", "before": null, "after": "computationally efficient", "start_char_pos": 467, "end_char_pos": 467 }, { "type": "D", "before": "(GPO)", "after": null, "start_char_pos": 539, "end_char_pos": 544 }, { "type": "R", "before": "The properties of the resulting GPO-ABC method are studied using stochastic volatility (SV)", "after": "We exemplify the proposed algorithm for inference in stochastic volatility", "start_char_pos": 621, "end_char_pos": 712 }, { "type": "R", "before": ". We conclude that the algorithm enjoys: good accuracy comparable to particle Markov chain Monte Carlo with a significant reduction in computational cost and better robustness to noise in the estimates compared with a gradient-based optimisation algorithm. Finally, we make use of GPO-ABC to estimate the", "after": "as well as for estimating the", "start_char_pos": 760, "end_char_pos": 1064 }, { "type": "R", "before": "a portfolio", "after": "two portfolios", "start_char_pos": 1083, "end_char_pos": 1094 }, { "type": "R", "before": "with SV models for the margins", "after": ". We document speed-ups of between one and two orders of magnitude compared to state-of-the-art algorithms for posterior inference", "start_char_pos": 1116, "end_char_pos": 1146 } ]
[ 0, 129, 270, 453, 620, 761, 1016 ]
1506.07212
1
Elicitation is the study of statistics or properties which are computable via empirical risk minimization. While several recent papers have approached the general question of which properties are elicitable, we suggest that this is the wrong question---all properties are elicitable by first eliciting the entire distribution or data set, and thus the important questionis how elicitable. Specifically, what is the minimum number of regression parameters needed to compute the property? Building on previous work, we introduce a new notion of elicitation complexity and lay the foundations for a calculus of elicitation. We establish several general results and techniques for proving upper and lower bounds on elicitation complexity. These results provide tight bounds for eliciting the Bayes risk of any loss, a large class of propertieswhich includes spectral risk measuresand several new properties of interest. Finally, we extend our calculus to conditionally elicitable properties, which are elicitable conditioned on knowing the value of another property, giving a necessary condition for the elicitability of both properties together .
A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limits of empirical risk minimization. While several recent papers have asked which properties are elicitable, we instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity , including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, we establish several upper and lower bounds for the broad class of Bayes risks. We apply these results by proving tight complexity bounds, with respect to identifiable properties, for variance, financial risk measures, entropy, norms, and new properties of interest. We then show how some of these bounds can extend to other practical classes of properties, and conclude with a discussion of open directions .
[ { "type": "R", "before": "Elicitation is the study of statistics or properties which are computable via", "after": "A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limits of", "start_char_pos": 0, "end_char_pos": 77 }, { "type": "R", "before": "approached the general question of", "after": "asked", "start_char_pos": 140, "end_char_pos": 174 }, { "type": "R", "before": "suggest that this is the wrong question---all properties are elicitable by first eliciting the entire distribution or data set, and thus the important questionis how elicitable. Specifically, what is", "after": "instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of", "start_char_pos": 211, "end_char_pos": 410 }, { "type": "R", "before": "minimum number of regression parameters needed to compute the property? Building on previous work, we introduce a new notion", "after": "property. We lay the foundation for a general theory", "start_char_pos": 415, "end_char_pos": 539 }, { "type": "R", "before": "and lay the foundations for a calculus of elicitation. We establish several general results and techniques for proving", "after": ", including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, we establish several", "start_char_pos": 566, "end_char_pos": 684 }, { "type": "R", "before": "on elicitation complexity. These results provide tight bounds for eliciting the Bayes risk of any loss, a large class of propertieswhich includes spectral risk measuresand several", "after": "for the broad class of Bayes risks. We apply these results by proving tight complexity bounds, with respect to identifiable properties, for variance, financial risk measures, entropy, norms, and", "start_char_pos": 708, "end_char_pos": 887 }, { "type": "R", "before": "Finally, we extend our calculus to conditionally elicitable properties, which are elicitable conditioned on knowing the value of another property, giving a necessary condition for the elicitability of both properties together", "after": "We then show how some of these bounds can extend to other practical classes of properties, and conclude with a discussion of open directions", "start_char_pos": 916, "end_char_pos": 1141 } ]
[ 0, 106, 388, 486, 620, 734, 915 ]
1506.07212
2
A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limits of empirical risk minimization. While several recent papers have asked which properties are elicitable, we instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity, including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, we establish several upper and lower bounds for the broad class of Bayes risks. We apply these results by proving tight complexity bounds, with respect to identifiable properties , for variance, financial risk measures, entropy, norms, and new properties of interest. We then show how some of these bounds can extend to other practical classes of properties, and conclude with a discussion of open directions.
A property, or statistical functional, is said to be elicitable if it minimizes expected loss for some loss function. The study of which properties are elicitable sheds light on the capabilities and limitations of point estimation and empirical risk minimization. While recent work asks which properties are elicitable, we instead advocate for a more nuanced question: how many dimensions are required to indirectly elicit a given property? This number is called the elicitation complexity of the property. We lay the foundation for a general theory of elicitation complexity, including several basic results about how elicitation complexity behaves, and the complexity of standard properties of interest. Building on this foundation, our main result gives tight complexity bounds for the broad class of Bayes risks. We apply these results to several properties of interest, including variance, entropy, norms, and several classes of financial risk measures. We conclude with discussion and open directions.
[ { "type": "R", "before": "limits of", "after": "limitations of point estimation and", "start_char_pos": 199, "end_char_pos": 208 }, { "type": "R", "before": "several recent papers have asked", "after": "recent work asks", "start_char_pos": 244, "end_char_pos": 276 }, { "type": "R", "before": "we establish several upper and lower", "after": "our main result gives tight complexity", "start_char_pos": 725, "end_char_pos": 761 }, { "type": "R", "before": "by proving tight complexity bounds, with respect to identifiable properties , for variance, financial risk measures,", "after": "to several properties of interest, including variance,", "start_char_pos": 828, "end_char_pos": 944 }, { "type": "R", "before": "new properties of interest. We then show how some of these bounds can extend to other practical classes of properties, and conclude with a discussion of", "after": "several classes of financial risk measures. We conclude with discussion and", "start_char_pos": 965, "end_char_pos": 1117 } ]
[ 0, 117, 237, 430, 496, 695, 804, 992 ]
1506.08127
1
Martingality plays a crucial role in mathematical finance, in particular arbitrage-freeness of a financial model is guaranteed by the local martingale property of discounted price processes. However, in order to compute prices as conditional expectations the discounted price process has to be a true martingale. If this is not the case, the market and the fundamental (computed) prices deviate, which is interpreted as financial bubble. Moreover, if the discounted price process is a true martingale it can be used to define an equivalent change of measure. Based on general conditions in Kallsen and Shiryaev (2002), we derive explicit sufficient conditions for the true martingality of a wide class of exponentials of semimartingales. Suitably for applications, the conditions are expressed in terms of the semimartingale characteristics. We illustrate their use for stochastic volatility asset price models driven by semimartingales. Finally, we prove the well-definedness of semimartingale Libor models given by a backward construction .
We give a collection of explicit sufficient conditions for the true martingale property of a wide class of exponentials of semimartingales. We express the conditions in terms of semimartingale characteristics. This turns out to be very convenient in financial modeling in general. Especially it allows us to carefully discuss the question of well-definedness of semimartingale Libor models , whose construction crucially relies on a sequence of measure changes .
[ { "type": "R", "before": "Martingality plays a crucial role in mathematical finance, in particular arbitrage-freeness of a financial model is guaranteed by the local martingale property of discounted price processes. However, in order to compute prices as conditional expectations the discounted price process has to be a true martingale. If this is not the case, the market and the fundamental (computed) prices deviate, which is interpreted as financial bubble. Moreover, if the discounted price process is a true martingale it can be used to define an equivalent change of measure. Based on general conditions in Kallsen and Shiryaev (2002), we derive", "after": "We give a collection of", "start_char_pos": 0, "end_char_pos": 628 }, { "type": "R", "before": "martingality", "after": "martingale property", "start_char_pos": 673, "end_char_pos": 685 }, { "type": "R", "before": "Suitably for applications, the conditions are expressed", "after": "We express the conditions", "start_char_pos": 738, "end_char_pos": 793 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 806, "end_char_pos": 809 }, { "type": "R", "before": "We illustrate their use for stochastic volatility asset price models driven by semimartingales. Finally, we prove the", "after": "This turns out to be very convenient in financial modeling in general. Especially it allows us to carefully discuss the question of", "start_char_pos": 842, "end_char_pos": 959 }, { "type": "R", "before": "given by a backward construction", "after": ", whose construction crucially relies on a sequence of measure changes", "start_char_pos": 1008, "end_char_pos": 1040 } ]
[ 0, 190, 312, 437, 558, 737, 841, 937 ]
1506.08408
1
This paper considers magnitude, asymptotics and duration of drawdowns for some L\'evy processes. First, we revisit some existing results on the magnitude of drawdowns for spectrally negative L\'evy processes using an approximation approach. For any spectrally negative L\'evy process whose scale functions are well-behaved at 0+, we then study the asymptotics of drawdown quantities when the threshold of drawdown magnitude approaches zero. We also show that such asymptotics is robust to perturbations of additional positive compound Poisson jumps. Finally, thanks to the asymptotic results and some recent works on the running maximum of L\'evy processes, we derive the law of duration of drawdowns for a large class of L\'evy processes (with a general spectrally negative part plus a positive compound Poisson structure). The duration of drawdowns is also known as the "Time to Recover" (TTR) the historical maximum, which is a widely used performance measure in the fund management industry. We find that the law of duration of drawdowns qualitatively depends on the path type of the spectrally negative component of the underlying L\'evy process.
This paper considers magnitude, asymptotics and duration of drawdowns for some L\'{e}vy processes. First, we revisit some existing results on the magnitude of drawdowns for spectrally negative L\'{e}vy processes using an approximation approach. For any spectrally negative L\'{e}vy process whose scale functions are well-behaved at 0+, we then study the asymptotics of drawdown quantities when the threshold of drawdown magnitude approaches zero. We also show that such asymptotics is robust to perturbations of additional positive compound Poisson jumps. Finally, thanks to the asymptotic results and some recent works on the running maximum of L\'{e}vy processes, we derive the law of duration of drawdowns for a large class of L\'{e}vy processes (with a general spectrally negative part plus a positive compound Poisson structure). The duration of drawdowns is also known as the "Time to Recover" (TTR) the historical maximum, which is a widely used performance measure in the fund management industry. We find that the law of duration of drawdowns qualitatively depends on the path type of the spectrally negative component of the underlying L\'{e}vy process.
[ { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 79, "end_char_pos": 85 }, { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 191, "end_char_pos": 197 }, { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 269, "end_char_pos": 275 }, { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 640, "end_char_pos": 646 }, { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 722, "end_char_pos": 728 }, { "type": "R", "before": "L\'evy", "after": "L\'{e}vy", "start_char_pos": 1136, "end_char_pos": 1142 } ]
[ 0, 96, 240, 440, 549, 824, 995 ]
1506.08435
1
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. Till date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational cost, scalability and efficiency; which all will be addressed in this paper . The numerical experiments are conducted on the state-of-the-art HPC systems, and relevant parallel performance metrics are provided to illustrate the efficiency of our methodology . Our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for large and complicated 3D problems.
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries . The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers . Our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
[ { "type": "R", "before": "Till", "after": "To", "start_char_pos": 461, "end_char_pos": 465 }, { "type": "R", "before": "cost, scalability and efficiency; which all will be addressed in this paper", "after": "performance of current algorithms available in these scientific libraries", "start_char_pos": 907, "end_char_pos": 982 }, { "type": "R", "before": "relevant parallel performance metrics are provided to illustrate", "after": "a single-core performance model is used to better characterize", "start_char_pos": 1066, "end_char_pos": 1130 }, { "type": "R", "before": "our methodology", "after": "the solvers", "start_char_pos": 1149, "end_char_pos": 1164 }, { "type": "R", "before": "large and complicated 3D", "after": "real-world large-scale", "start_char_pos": 1310, "end_char_pos": 1334 } ]
[ 0, 277, 460, 547, 700, 803, 940, 984, 1166 ]
1506.08595
1
We develop an XVA analysis of centrally cleared trading, parallel to the one that has been developed in the last years for bilateral transactions. A dynamic framework incorporates the sequence of the cash-flows involved in the waterfall of resources of the CCP . The total cost of the clearance framework for a member of the clearinghouse , called CCVA for central clearing valuation adjustment, is decomposed into a nonstandard CVA corresponding to the cost of the losses on the default fund in case of defaults of other members, an FVA corresponding to the cost of funding its position (including all the margins ) and a KVA corresponding to the cost of regulatory capital (and for completeness we also incorporate a DVA term). This framework can be used by a clearinghouse to assess the right balance between initial margins and default fund in order to minimize the CCVA, hence optimize its costs for a given level of resilience. A clearinghouse can also use it to analyze the benefit for a dealer to trade centrally as a member, rather than on a bilateral basis, or to help clearing members risk manage their CCVA. The potential netting benefit of central clearing and the impact of the credit risk of the members are illustrated numerically .
This paper develops an XVA (costs) analysis of centrally cleared trading, parallel to the one that has been developed in the last years for bilateral transactions. We introduce a dynamic framework that incorporates the sequence of cash-flows involved in the waterfall of resources of a clearing house . The total cost of the clearance framework for a clearing member , called CCVA for central clearing valuation adjustment, is decomposed into a CVA corresponding to the cost of its losses on the default fund in case of defaults of other member, an MVA corresponding to the cost of funding its margins and a KVA corresponding to the cost of the regulatory capital and also of the capital at risk that the member implicitly provides to the CCP through its default fund contribution. In the end the structure of the XVA equations for bilateral and cleared portfolios is similar, but the input data to these equations are not the same, reflecting different financial network structures. The resulting XVA numbers differ, but, interestingly enough, they become comparable after scaling by a suitable netting ratio .
[ { "type": "R", "before": "We develop an XVA", "after": "This paper develops an XVA (costs)", "start_char_pos": 0, "end_char_pos": 17 }, { "type": "R", "before": "A dynamic framework", "after": "We introduce a dynamic framework that", "start_char_pos": 147, "end_char_pos": 166 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 196, "end_char_pos": 199 }, { "type": "R", "before": "the CCP", "after": "a clearing house", "start_char_pos": 253, "end_char_pos": 260 }, { "type": "R", "before": "member of the clearinghouse", "after": "clearing member", "start_char_pos": 311, "end_char_pos": 338 }, { "type": "D", "before": "nonstandard", "after": null, "start_char_pos": 417, "end_char_pos": 428 }, { "type": "R", "before": "the", "after": "its", "start_char_pos": 462, "end_char_pos": 465 }, { "type": "R", "before": "members, an FVA", "after": "member, an MVA", "start_char_pos": 522, "end_char_pos": 537 }, { "type": "R", "before": "position (including all the margins )", "after": "margins", "start_char_pos": 579, "end_char_pos": 616 }, { "type": "R", "before": "regulatory capital (and for completeness we also incorporate a DVA term). This framework can be used by a clearinghouse to assess the right balance between initial margins and default fund in order to minimize the CCVA, hence optimize its costs for a given level of resilience. A clearinghouse can also use it to analyze the benefit for a dealer to trade centrally as a member, rather than on a bilateral basis, or to help clearing members risk manage their CCVA. The potential netting benefit of central clearing and the impact of the credit risk of the members are illustrated numerically", "after": "the regulatory capital and also of the capital at risk that the member implicitly provides to the CCP through its default fund contribution. In the end the structure of the XVA equations for bilateral and cleared portfolios is similar, but the input data to these equations are not the same, reflecting different financial network structures. The resulting XVA numbers differ, but, interestingly enough, they become comparable after scaling by a suitable netting ratio", "start_char_pos": 656, "end_char_pos": 1246 } ]
[ 0, 146, 262, 729, 933, 1119 ]
1507.00058
1
We study the effect of intrinsic noise on the thermodynamic balance of complex chemical networks subtending cellular metabolism and gene regulation. A topological network property called deficiency, known to determine the possibility of complex behavior such as multistability and oscillations, is shown to also characterize the entropic balance. In particular, only when deficiency is zero does the average stochastic dissipation rate equal that of the corresponding deterministic model, where correlations are disregarded. In fact, dissipation can be reduced by the effect of noise, as occurs in a simplified core model of metabolism that we employ to illustrate our findings .
We study the effect of intrinsic noise on the thermodynamic balance of complex chemical networks subtending cellular metabolism and gene regulation. A topological network property called deficiency, known to determine the possibility of complex behavior such as multistability and oscillations, is shown to also characterize the entropic balance. In particular, only when deficiency is zero does the average stochastic dissipation rate equal that of the corresponding deterministic model, where correlations are disregarded. In fact, dissipation can be reduced by the effect of noise, as occurs in a toy model of metabolism that we employ to illustrate our findings . This phenomenon highlights that there is a close interplay between deficiency and the activation of new dissipative pathways at low molecule numbers .
[ { "type": "R", "before": "simplified core", "after": "toy", "start_char_pos": 600, "end_char_pos": 615 }, { "type": "A", "before": null, "after": ". This phenomenon highlights that there is a close interplay between deficiency and the activation of new dissipative pathways at low molecule numbers", "start_char_pos": 678, "end_char_pos": 678 } ]
[ 0, 148, 346, 524 ]
1507.00208
1
We introduce here for the first time the long-term swap rate, characterised as the fair rate of an overnight indexed swap with infinitely many exchanges. Furthermore we analyse the relationship between the long-term swap rate, the long-term yield, see [ 4 ], [ 5 ], and [ 25 ], and the long-term simple rate, considered in [ 8 ] as long-term discounting rate. We finally investigate the existence of these long-term rates in two term structure methodologies, the Flesaker-Hughston model and the linear-rational model .
We introduce here for the first time the long-term swap rate, characterised as the fair rate of an overnight indexed swap with infinitely many exchanges. Furthermore we analyse the relationship between the long-term swap rate, the long-term yield, see Biagini et al. [ 2018 ], Biagini and H\"artel [ 2014 ], and El Karoui et al. [ 1997 ], and the long-term simple rate, considered in Brody and Hughston [ 2016 ] as long-term discounting rate. We finally investigate the existence of these long-term rates in two term structure methodologies, the Flesaker-Hughston model and the linear-rational model . A numerical example illustrates how our results can be used to estimate the non-optional component of a CoCo bond .
[ { "type": "A", "before": null, "after": "Biagini et al.", "start_char_pos": 252, "end_char_pos": 252 }, { "type": "R", "before": "4", "after": "2018", "start_char_pos": 255, "end_char_pos": 256 }, { "type": "A", "before": null, "after": "Biagini and H\\\"artel", "start_char_pos": 260, "end_char_pos": 260 }, { "type": "R", "before": "5", "after": "2014", "start_char_pos": 263, "end_char_pos": 264 }, { "type": "A", "before": null, "after": "El Karoui et al.", "start_char_pos": 272, "end_char_pos": 272 }, { "type": "R", "before": "25", "after": "1997", "start_char_pos": 275, "end_char_pos": 277 }, { "type": "A", "before": null, "after": "Brody and Hughston", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "R", "before": "8", "after": "2016", "start_char_pos": 329, "end_char_pos": 330 }, { "type": "A", "before": null, "after": ". A numerical example illustrates how our results can be used to estimate the non-optional component of a CoCo bond", "start_char_pos": 521, "end_char_pos": 521 } ]
[ 0, 153, 363 ]
1507.00950
1
Our in silico model was built to investigate the development process of the adaptive immune system. For simplicity, we concentrated on humoral immunity and its major components: T cells, B cells, antibodies, interleukins, non-immune self cells, and foreign antigens. Our model is a microscopic one, similar to the interacting particle models of statistical physics . Events are considered random and modelled by a continuous time, finite state Markov process, that is, they are controlled by independent exponential clocks. Our main purpose was to compare different theoretical models of the adaptive immune system and self--nonself discrimination: the ones that are described by well-known textbooks, and a novel one developed by our research group. Our theoretical model emphasizes the hypothesis that the immune system of a fetus can primarily learn what self is but unable to prepare itself for the huge, unknown variety of nonself. The simulation begins after conception, by developing the immune system from scratch and learning the set of self antigens. The simulation ends several months after births when a more-or-less stationary state of the immune system has been established. We investigate how the immune system can recognize and fight against a primary infection. We also investigate that under what conditions can an immune memory be created that results in a more effective immune response to a repeated infection. The MiStImm simulation software package and the simulation results are available at the address URL
Our main purpose is to compare classical nonself-centered, two-signal theoretical models of the adaptive immune system with a novel, self-centered, one-signal model developed by our research group. Our model hypothesizes that the immune system of a fetus is capable learning the limited set of self antigens but unable to prepare itself for the unlimited variety of nonself antigens. We have built a computational model that simulates the development of the adaptive immune system. For simplicity, we concentrated on humoral immunity and its major components: T cells, B cells, antibodies, interleukins, non-immune self cells, and foreign antigens. Our model is a microscopic one, similar to the interacting particle models of statistical physics and agent-based models in immunology. Furthermore, our model is stochastic: events are considered random and modeled by a continuous time, finite state Markov process, that is, they are controlled by finitely many independent exponential clocks. We investigate under what conditions can an immune memory be created that results in a more effective immune response to a repeated infection. The simulations show that our self-centered model is realistic. Moreover, in case of a primary adaptive immune reaction, it can destroy infections more efficiently than a classical nonself-centered model. Predictions of our theoretical model were clinically supported by autoimmune-related adverse events in high-dose immune checkpoint inhibitor immunotherapy trials and also by safe and successful low-dose immune checkpoint inhibitor combination treatment of heavily pretreated stage IV cancer patients who had exhausted all conventional treatments. The MiStImm simulation tool and source codes are available at the address URL
[ { "type": "R", "before": "in silico model was built to investigate the development process", "after": "main purpose is to compare classical nonself-centered, two-signal theoretical models of the adaptive immune system with a novel, self-centered, one-signal model developed by our research group. Our model hypothesizes that the immune system of a fetus is capable learning the limited set of self antigens but unable to prepare itself for the unlimited variety of nonself antigens. We have built a computational model that simulates the development", "start_char_pos": 4, "end_char_pos": 68 }, { "type": "R", "before": ". Events", "after": "and agent-based models in immunology. Furthermore, our model is stochastic: events", "start_char_pos": 365, "end_char_pos": 373 }, { "type": "R", "before": "modelled", "after": "modeled", "start_char_pos": 400, "end_char_pos": 408 }, { "type": "A", "before": null, "after": "finitely many", "start_char_pos": 492, "end_char_pos": 492 }, { "type": "R", "before": "Our main purpose was to compare different theoretical models of the adaptive immune system and self--nonself discrimination: the ones that are described by well-known textbooks, and a novel one developed by our research group. Our theoretical model emphasizes the hypothesis that the immune system of a fetus can primarily learn what self is but unable to prepare itself for the huge, unknown variety of nonself. The simulation begins after conception, by developing the immune system from scratch and learning the set of self antigens. The simulation ends several months after births when a more-or-less stationary state of the immune system has been established. We investigate how the immune system can recognize and fight against a primary infection. We also investigate that", "after": "We investigate", "start_char_pos": 525, "end_char_pos": 1304 }, { "type": "R", "before": "MiStImm simulation software package and the simulation results", "after": "simulations show that our self-centered model is realistic. Moreover, in case of a primary adaptive immune reaction, it can destroy infections more efficiently than a classical nonself-centered model. Predictions of our theoretical model were clinically supported by autoimmune-related adverse events in high-dose immune checkpoint inhibitor immunotherapy trials and also by safe and successful low-dose immune checkpoint inhibitor combination treatment of heavily pretreated stage IV cancer patients who had exhausted all conventional treatments. The MiStImm simulation tool and source codes", "start_char_pos": 1437, "end_char_pos": 1499 } ]
[ 0, 99, 266, 524, 751, 937, 1061, 1189, 1279, 1432 ]
1507.01033
1
When estimating integrated covariationbetween two assets based on high-frequency data , simple assumptions are usually imposed on the relationship between the price processes and the observation times. In this paper, we introduce an endogenous 2-dimensional modeland show that it is more general than the existing endogenous models of the literature. In addition, we establish a central limit theorem for the Hayashi-Yoshida estimator in this general endogenous model in the case where prices follow pure-diffusion processes .
When estimating high-frequency covariance (quadratic covariation) of two arbitrary assets observed asynchronously , simple assumptions are usually imposed on the relationship between the prices process and the observation times. In this paper, we introduce a very general endogenous two dimensional nonparametric model. Because an observation is generated whenever an auxiliary process called observation time process hits one of the two boundary processes, it is called the hitting boundary process with time process (HBT) model. We establish a central limit theorem for the Hayashi-Yoshida estimator under HBT in the case where prices process follows a continuous Ito process: we obtain an asymptotic bias. We provide an estimator of the latter as well as a bias-corrected estimator of the high-frequency covariance. In addition, we give a consistent estimator of the associated standard error .
[ { "type": "D", "before": "integrated covariationbetween two assets based on", "after": null, "start_char_pos": 16, "end_char_pos": 65 }, { "type": "R", "before": "data", "after": "covariance (quadratic covariation) of two arbitrary assets observed asynchronously", "start_char_pos": 81, "end_char_pos": 85 }, { "type": "R", "before": "price processes", "after": "prices process", "start_char_pos": 159, "end_char_pos": 174 }, { "type": "R", "before": "an endogenous 2-dimensional modeland show that it is more general than the existing endogenous models of the literature. In addition, we", "after": "a very general endogenous two dimensional nonparametric model. Because an observation is generated whenever an auxiliary process called observation time process hits one of the two boundary processes, it is called the hitting boundary process with time process (HBT) model. We", "start_char_pos": 230, "end_char_pos": 366 }, { "type": "R", "before": "in this general endogenous model in", "after": "under HBT in", "start_char_pos": 435, "end_char_pos": 470 }, { "type": "R", "before": "follow pure-diffusion processes", "after": "process follows a continuous Ito process: we obtain an asymptotic bias. We provide an estimator of the latter as well as a bias-corrected estimator of the high-frequency covariance. In addition, we give a consistent estimator of the associated standard error", "start_char_pos": 493, "end_char_pos": 524 } ]
[ 0, 201, 350 ]
1507.01033
2
When estimating high-frequency covariance (quadratic covariation) of two arbitrary assets observed asynchronously, simple assumptions are usually imposed on the relationship between the prices process and the observation times. In this paper, we introduce a very general endogenous two dimensional nonparametric model. Because an observation is generated whenever an auxiliary process called observation time process hits one of the two boundary processes, it is called the hitting boundary process with time process (HBT) model. We establish a central limit theorem for the Hayashi-Yoshida estimator under HBT in the case where prices process follows a continuous Ito process : we obtain an asymptotic bias. We provide an estimator of the latter as well as a bias-corrected estimator of the high-frequency covariance. In addition, we give a consistent estimator of the associated standard error.
When estimating high-frequency covariance (quadratic covariation) of two arbitrary assets observed asynchronously, simple assumptions , such as independence, are usually imposed on the relationship between the prices process and the observation times. In this paper, we introduce a general endogenous two-dimensional nonparametric model. Because an observation is generated whenever an auxiliary process called observation time process hits one of the two boundary processes, it is called the hitting boundary process with time process (HBT) model. We establish a central limit theorem for the Hayashi-Yoshida (HY) estimator under HBT in the case where the price process and the observation price process follow a continuous Ito process . We obtain an asymptotic bias. We provide an estimator of the latter as well as a bias-corrected HY estimator of the high-frequency covariance. In addition, we give a consistent estimator of the associated standard error.
[ { "type": "A", "before": null, "after": ", such as independence,", "start_char_pos": 134, "end_char_pos": 134 }, { "type": "R", "before": "very general endogenous two dimensional", "after": "general endogenous two-dimensional", "start_char_pos": 259, "end_char_pos": 298 }, { "type": "A", "before": null, "after": "(HY)", "start_char_pos": 592, "end_char_pos": 592 }, { "type": "R", "before": "prices process follows", "after": "the price process and the observation price process follow", "start_char_pos": 631, "end_char_pos": 653 }, { "type": "R", "before": ": we", "after": ". We", "start_char_pos": 679, "end_char_pos": 683 }, { "type": "A", "before": null, "after": "HY", "start_char_pos": 777, "end_char_pos": 777 } ]
[ 0, 228, 319, 530, 710, 821 ]
1507.01354
1
Circadian clocks exhibit the robustness of period and plasticity of phase against environmental changes such as temperature and nutrient conditions. Thus far, however, it is unclear how both are simultaneously achieved. By investigating distinct models of circadian clocks, we demonstrate reciprocity between robustness and plasticity: higher robustness in the period implies higher plasticity in the phase, where changes in period and in phase follow a linear relationship with a negative coefficient . The robustness of period is achieved by the adaptation on the limit cycle via a concentration change of a buffer molecule, whose temporal change leads to a phase shift following a shift of the limit-cycle orbit in phase space. Universality of reciprocity is confirmed with an analysis of simple models, and biological significance is discussed.
Circadian clocks exhibit the robustness of period and plasticity of phase against environmental changes such as temperature and nutrient conditions. Thus far, however, it is unclear how both are simultaneously achieved. By investigating distinct models of circadian clocks, we demonstrate reciprocity between robustness and plasticity: higher robustness in the period implies higher plasticity in the phase, where changes in period and in phase follow a linear relationship with a negative coefficient . The robustness of period is achieved by the adaptation on the limit cycle via a concentration change of a buffer molecule, whose temporal change leads to a phase shift following a shift of the limit-cycle orbit in phase space. Generality of reciprocity in clocks with the adaptation mechanism is confirmed with theoretical analysis of simple models, while biological significance is discussed.
[ { "type": "R", "before": "reciprocity", "after": "reci- procity", "start_char_pos": 289, "end_char_pos": 300 }, { "type": "R", "before": "coefficient", "after": "coef- ficient", "start_char_pos": 490, "end_char_pos": 501 }, { "type": "R", "before": "Universality of reciprocity", "after": "Generality of reciprocity in clocks with the adaptation mechanism", "start_char_pos": 731, "end_char_pos": 758 }, { "type": "R", "before": "an", "after": "theoretical", "start_char_pos": 777, "end_char_pos": 779 }, { "type": "R", "before": "and", "after": "while", "start_char_pos": 807, "end_char_pos": 810 } ]
[ 0, 148, 219, 503, 730 ]
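The mechanism described in this record, a robust period together with a plastic phase that shifts when the limit-cycle orbit moves, can be illustrated on a generic oscillator. The sketch below uses a sheared Stuart-Landau oscillator, which is an assumption of ours and not one of the clock models studied in the paper: the free-running period 2*pi/omega is unchanged by the parameter mu, while a step in mu moves the orbit radius and leaves behind a lasting phase shift.

```python
import numpy as np

def run_clock(mu_of_t, omega=1.0, c=0.5, r0=1.0, theta0=0.0, dt=1e-3, t_end=200.0):
    """Euler integration of a sheared Stuart-Landau oscillator:
        dr/dt     = r * (mu - r**2)
        dtheta/dt = omega + c * (r**2 - mu)
    On the limit cycle r**2 = mu, so the period 2*pi/omega does not depend on mu
    (robust period); a change in mu moves the orbit radius, and the shear term
    c * (r**2 - mu) converts the transient relaxation into a lasting phase shift.
    """
    r, theta = r0, theta0
    for k in range(int(t_end / dt)):
        mu = mu_of_t(k * dt)
        r, theta = (r + dt * r * (mu - r * r),
                    theta + dt * (omega + c * (r * r - mu)))
    return theta

theta_ref = run_clock(lambda t: 1.0)                        # constant parameter
theta_step = run_clock(lambda t: 1.0 if t < 50.0 else 2.0)  # parameter step at t = 50
print("phase shift accumulated by the perturbed clock:", theta_step - theta_ref)
```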
1507.01444
1
Finite automata working bitwise on the local coordinates of points in the plane are constructed and shown to lead to self-affine surfaces ('patchwork quilts') under general circumstances. We prove that these models give rise to a roughness exponent that shapes the resulting spatial patterns : Larger values of the exponent lead to coarser surfaces . We suggest that finite automata provide the mathematical link between the concept of positional information of modern theoretical biology and the emergence of fractal self-affine surfaces ubiquitously found in nature .
Fractal surfaces ('patchwork quilts') are shown to arise under most general circumstances involving simple bitwise operations between real numbers. A theory is presented for all bitwise operations on a finite alphabet which are not governed by chance. It is shown that these models give rise to a roughness exponent H that shapes the resulting spatial patterns , larger values of the exponent leading to coarser surfaces .
[ { "type": "R", "before": "Finite automata working bitwise on the local coordinates of points in the plane are constructed and shown to lead to self-affine", "after": "Fractal", "start_char_pos": 0, "end_char_pos": 128 }, { "type": "R", "before": "under general circumstances. We prove", "after": "are shown to arise under most general circumstances involving simple bitwise operations between real numbers. A theory is presented for all bitwise operations on a finite alphabet which are not governed by chance. It is shown", "start_char_pos": 159, "end_char_pos": 196 }, { "type": "A", "before": null, "after": "H", "start_char_pos": 249, "end_char_pos": 249 }, { "type": "R", "before": ": Larger", "after": ", larger", "start_char_pos": 293, "end_char_pos": 301 }, { "type": "R", "before": "lead", "after": "leading", "start_char_pos": 325, "end_char_pos": 329 }, { "type": "D", "before": ". We suggest that finite automata provide the mathematical link between the concept of positional information of modern theoretical biology and the emergence of fractal self-affine surfaces ubiquitously found in nature", "after": null, "start_char_pos": 350, "end_char_pos": 568 } ]
[ 0, 187, 351 ]
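The record above builds self-affine surfaces from bitwise operations on the binary expansions of real coordinates. The toy below, a coarse fixed-point version on an integer grid together with an ad hoc increment-scaling fit, is only meant to show the flavor of such constructions and of reading off a roughness exponent; it is not the authors' construction or estimator.

```python
import numpy as np

def bitwise_surface(op, n_bits=10):
    """Height field z(x, y) obtained by applying a bitwise operation to the
    fixed-point (n_bits) binary expansions of x and y in [0, 1)."""
    n = 2 ** n_bits
    xi, yi = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return op(xi, yi).astype(float) / n

def rough_exponent(z):
    """Crude roughness estimate: slope of log mean |height increment| vs log lag."""
    lags = 2 ** np.arange(1, 8)
    incr = [np.mean(np.abs(z[l:, :] - z[:-l, :])) for l in lags]
    return np.polyfit(np.log(lags), np.log(incr), 1)[0]

for name, op in [("AND", np.bitwise_and), ("XOR", np.bitwise_xor)]:
    z = bitwise_surface(op)
    print(name, "surface, crude roughness exponent:", round(rough_exponent(z), 2))
```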
1507.01729
1
We propose a general framework for measuring frequency dynamics of connectedness in economic variables based on spectral representation of variance decompositions . We argue that the frequency dynamics is insightful when studying the connectedness of variables as shocks with heterogeneous frequency responses will create frequency dependent connections of different strength that remain hidden when time domain measures are used. Two applications support the usefulness of the discussion, guide a user to apply the methods in different situations, and contribute to the literature with important findings about sources of connectedness. Giving up the assumption of global stationarity of stock market data and approximating the dynamics locally , we document rich time-frequency dynamics of connectedness in US market risk in the first application. Controlling for common shocks due to common stochastic trends which dominate the connections, we identify connections of global economy at business cycle frequencies of 18 up to 96 months in the second application. In addition, we study the effects of cross-sectional dependence on the connectedness of variables .
Risk management has generally focused on aggregate connectedness, overlooking its cyclical sources . We argue that the frequency dynamics is insightful for studying this connectedness because shocks with heterogeneous frequency responses create linkages with various degrees of persistence. Such connections are important for understanding the possible sources of systemic risk specific to economic cycles but remain hidden when aggregate measures of connectedness are used. To estimate connectedness on short-, medium-, and long-term financial cycles, we propose a general framework based on spectral representation of variance decompositions. In an empirical application , we document the rich dynamics of volatility connectedness in the US financial institutions with short-term connections due to contemporaneous correlations as well as significant weekly, monthly, and longer connections that play a role. Hence, we find that the financial market clears part of the information but that the permanent changes in investors' expectations having longer-term responses are non-negligible .
[ { "type": "R", "before": "We propose a general framework for measuring frequency dynamics of connectedness in economic variables based on spectral representation of variance decompositions", "after": "Risk management has generally focused on aggregate connectedness, overlooking its cyclical sources", "start_char_pos": 0, "end_char_pos": 162 }, { "type": "R", "before": "when studying the connectedness of variables as", "after": "for studying this connectedness because", "start_char_pos": 216, "end_char_pos": 263 }, { "type": "R", "before": "will create frequency dependent connections of different strength that", "after": "create linkages with various degrees of persistence. Such connections are important for understanding the possible sources of systemic risk specific to economic cycles but", "start_char_pos": 310, "end_char_pos": 380 }, { "type": "R", "before": "time domain measures", "after": "aggregate measures of connectedness", "start_char_pos": 400, "end_char_pos": 420 }, { "type": "R", "before": "Two applications support the usefulness of the discussion, guide a user to apply the methods in different situations, and contribute to the literature with important findings about sources of connectedness. Giving up the assumption of global stationarity of stock market data and approximating the dynamics locally", "after": "To estimate connectedness on short-, medium-, and long-term financial cycles, we propose a general framework based on spectral representation of variance decompositions. In an empirical application", "start_char_pos": 431, "end_char_pos": 745 }, { "type": "R", "before": "rich time-frequency dynamics of connectedness in US market risk in the first application. Controlling for common shocks due to common stochastic trends which dominate the connections, we identify connections of global economy at business cycle frequencies of 18 up to 96 months in the second application. In addition, we study the effects of cross-sectional dependence on the connectedness of variables", "after": "the rich dynamics of volatility connectedness in the US financial institutions with short-term connections due to contemporaneous correlations as well as significant weekly, monthly, and longer connections that play a role. Hence, we find that the financial market clears part of the information but that the permanent changes in investors' expectations having longer-term responses are non-negligible", "start_char_pos": 760, "end_char_pos": 1162 } ]
[ 0, 164, 430, 637, 849, 1064 ]
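The connectedness measures in this record come from a spectral representation of VAR variance decompositions, which is too long to reproduce faithfully here. As a hedged stand-in, the sketch below estimates segment-averaged cross-spectra of two simulated return series and compares squared coherence over low- and high-frequency bands; it only conveys the basic point that comovement can be concentrated at particular frequencies, and the simulated series, window length, and band cutoffs are illustrative assumptions.

```python
import numpy as np

def band_coherence(x, y, seg=256):
    """Welch-style averaged spectra (Hann window, non-overlapping segments) and
    the squared coherence |Sxy|^2 / (Sxx * Syy) of two equally long series."""
    w = np.hanning(seg)
    nseg = len(x) // seg
    Sxx = Syy = 0.0
    Sxy = 0.0 + 0.0j
    for k in range(nseg):
        xs = np.fft.rfft(w * x[k * seg:(k + 1) * seg])
        ys = np.fft.rfft(w * y[k * seg:(k + 1) * seg])
        Sxx = Sxx + np.abs(xs) ** 2
        Syy = Syy + np.abs(ys) ** 2
        Sxy = Sxy + np.conj(xs) * ys
    freqs = np.fft.rfftfreq(seg, d=1.0)   # cycles per observation
    return freqs, np.abs(Sxy) ** 2 / (Sxx * Syy)

# simulated returns sharing a slow (low-frequency) common factor
rng = np.random.default_rng(1)
n = 4096
slow = np.convolve(rng.normal(size=n), np.ones(20) / 20.0, mode="same")
x = slow + 0.5 * rng.normal(size=n)
y = slow + 0.5 * rng.normal(size=n)
freqs, coh2 = band_coherence(x, y)
print("mean squared coherence, low band :", coh2[(freqs > 0) & (freqs < 0.05)].mean())
print("mean squared coherence, high band:", coh2[freqs > 0.25].mean())
```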
1507.01729
2
Risk management has generally focused on aggregate connectedness, overlooking its cyclical sources. We argue that the frequency dynamics is insightful for studying this connectedness because shocks with heterogeneous frequency responses create linkages with various degrees of persistence. Such connections are important for understanding the possible sources of systemic risk specific to economic cycles but remain hidden when aggregate measures of connectedness are used . To estimate connectedness on short-, medium-, and long-term financial cycles, we propose a general framework based on spectral representation of variance decompositions. In an empirical application, we document the rich dynamics of volatility connectedness in the US financial institutions with short-term connections due to contemporaneous correlations as well as significant weekly, monthly, and longer connections that play a role. Hence, we find that the financial market clears part of the information but that the permanent changes in investors' expectations having longer-term responses are non-negligible .
We propose a new framework for measuring connectedness among financial variables that arises due to heterogeneous frequency responses to shocks . To estimate connectedness in short-, medium-, and long-term financial cycles, we introduce a framework based on the spectral representation of variance decompositions. In an empirical application, we document the rich time-frequency dynamics of volatility connectedness in US financial institutions . Economically, periods in which connectedness is created at high frequencies are periods when stock markets seem to process information rapidly and calmly, and a shock to one asset in the system will have an impact mainly in the short term. When the connectedness is created at lower frequencies, it suggests that shocks are persistent and are being transmitted for longer periods .
[ { "type": "R", "before": "Risk management has generally focused on aggregate connectedness, overlooking its cyclical sources. We argue that the frequency dynamics is insightful for studying this connectedness because shocks with", "after": "We propose a new framework for measuring connectedness among financial variables that arises due to", "start_char_pos": 0, "end_char_pos": 202 }, { "type": "R", "before": "create linkages with various degrees of persistence. Such connections are important for understanding the possible sources of systemic risk specific to economic cycles but remain hidden when aggregate measures of connectedness are used", "after": "to shocks", "start_char_pos": 237, "end_char_pos": 472 }, { "type": "R", "before": "on", "after": "in", "start_char_pos": 501, "end_char_pos": 503 }, { "type": "R", "before": "propose a general", "after": "introduce a", "start_char_pos": 556, "end_char_pos": 573 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 593, "end_char_pos": 593 }, { "type": "A", "before": null, "after": "time-frequency", "start_char_pos": 696, "end_char_pos": 696 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 737, "end_char_pos": 740 }, { "type": "R", "before": "with short-term connections due to contemporaneous correlations as well as significant weekly, monthly, and longer connections that play a role. Hence, we find that the financial market clears part of the information but that the permanent changes in investors' expectations having longer-term responses are non-negligible", "after": ". Economically, periods in which connectedness is created at high frequencies are periods when stock markets seem to process information rapidly and calmly, and a shock to one asset in the system will have an impact mainly in the short term. When the connectedness is created at lower frequencies, it suggests that shocks are persistent and are being transmitted for longer periods", "start_char_pos": 767, "end_char_pos": 1089 } ]
[ 0, 99, 289, 474, 645, 911 ]
1507.03004
1
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. (2015) , respectively.
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. (Quant. Finance 16(6), 887-904, 2016) , respectively.
[ { "type": "R", "before": "(2015)", "after": "Quant. Finance 16(6), 887-904, 2016", "start_char_pos": 1119, "end_char_pos": 1125 } ]
[ 0, 176, 256, 390, 561, 840 ]
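The hybrid scheme of this record treats the singular part of the kernel near zero by exact Wiener integrals of a power function, which requires the joint covariances of those integrals and is omitted here. The sketch below implements only the ordinary forward Riemann-sum discretization of a Brownian semistationary process with a gamma kernel, i.e. the baseline scheme the paper improves on; the kernel parameters and truncation length are illustrative assumptions.

```python
import numpy as np

def bss_riemann_sum(alpha=-0.2, lam=1.0, n_obs=500, k_trunc=2000, dt=0.002, seed=0):
    """Forward Riemann-sum simulation of a Brownian semistationary process
        X_t = int_{-inf}^{t} g(t - s) dW_s,   g(x) = x**alpha * exp(-lam * x),
    with the kernel truncated after k_trunc * dt of history. This is the plain
    baseline scheme; the hybrid scheme instead integrates the power part of the
    kernel exactly over the steps closest to t, where g is nearly singular."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(scale=np.sqrt(dt), size=n_obs + k_trunc)   # Brownian increments
    lags = np.arange(1, k_trunc + 1) * dt
    g = lags ** alpha * np.exp(-lam * lags)
    # X at observation i uses the k_trunc most recent increments, newest first
    X = np.array([np.dot(g, dW[i:i + k_trunc][::-1]) for i in range(n_obs)])
    return X

X = bss_riemann_sum()
print("sample variance of the simulated path:", round(float(X.var()), 3))
```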
1507.03141
1
The goal of this research was applying a nonlinear approach to the detection of market regime transitions: mean reversion to momentum regimes and vice versa. It has been shown that the transition process has nonlinear scenarios: slow and fast bifurcations. Slow bifurcation assumes that control parameter is changing slowly in relation to the system characteristic time. Gradual absorption of information provides stability loss delay effect. Fast bifurcation has a discrete non equilibrium nature. Each transition from one attracting cycle to another one is preceded by passing through fixed point state: an effect of precatastrophic stabilization exists. Two analytical methods have been developed for recognition of slow and fast bifurcation : R analysis and D analysis correspondingly. Combined R/D tool has been incorporated for analysis of world financial crisis of 2008. It turned out that R analysis is more convenient for long term investment while D analysis suggests middle- and short-term approach. R/D analysis has been applied as a filter for currency positional trading system. Slow and fast bifurcation patterns have been applied for the filtering of breakdown signals. Incorporation of a filter allowed to reduce twice the number of trades and to increase system efficiency, Calmar ratio, by seven times. R/D filter allowed decreasing sensitivity to volatility: duration of equity stagnation has fallen down to two months in relation to one year for the original breakdown system. It has been shown that R and D patterns may improve the long term efficiency and stability of a momentum quantitative trading model .
In this paper mechanisms of reversion - momentum transition are considered. Two basic nonlinear mechanisms are highlighted: a slow and fast bifurcation. A slow bifurcation leads to the equilibrium evolution, preceded by stability loss delay of a control parameter. A single order parameter is introduced by Markovian chain diffusion, which plays a role of a precursor. A fast bifurcation is formed by a singular fusion of unstable and stable equilibrium states. The effect of a precatastrophic range compression is observed before the discrete change of a system. A diffusion time scaling is presented as a precursor of the fast bifurcation. The efficiency of both precursors in a currency market was illustrated by simulation of a prototype of a trading system .
[ { "type": "R", "before": "The goal of this research was applying a nonlinear approach to the detection of market regime transitions: mean reversion to momentum regimes and vice versa.It has been shown that the transition process has nonlinear scenarios:", "after": "In this paper mechanisms of reversion - momentum transition are considered. Two basic nonlinear mechanisms are highlighted: a", "start_char_pos": 0, "end_char_pos": 227 }, { "type": "R", "before": "bifurcations. Slow bifurcation assumes that control parameter is changing slowly in relation to the system characteristic time. Gradual absorption of information provides", "after": "bifurcation. A slow bifurcation leads to the equilibrium evolution, preceded by", "start_char_pos": 242, "end_char_pos": 412 }, { "type": "R", "before": "effect.Fast bifurcation has a discrete non equilibrium nature. Each transition from one attracting cycle to another one is preceded by passing through fixed point state: an effect of precatastophic stabilization exists. Two analytical methods have been developed for recognition of slow and fast bifurcation : R analysis and D analysis correspondingly.Combined R/D tool has been incorporated for analysis of world financial crisis of 2008.It turned out that R analysis is more convenient for long term investment while D analysis suggests middle- and short-term approach.R/D analysis has been applied as a filter for currency positional trading system. Slow and fast bifurcation patterns have been applied for the filtering of breakdown signals. Incorporation of a filter allowed to reduce twice the number of trades and to increase systemefficiency, Calmar ratio, by seven times. R/D filter allowed decreasing sensitivity to volatility: duration of equity stagnation has fallen down to two months in relation to one year for the original breakdown system. It has been shown that R and D patterns may improve the long term efficiency and stability of a momentum quantitative trading model", "after": "of a control parameter. A single order parameter is introduced by Markovian chain diffusion, which plays a role of a precursor. A fast bifurcation is formed by a singular fusion of unstable and stable equilibrium states. The effect of a precatastrophic range compression is observed before the discrete change of a system. A diffusion time scaling is presented as a precursor of the fast bifurcation. The efficiency of both precursors in a currency market was illustrated by simulation of a prototype of a trading system", "start_char_pos": 434, "end_char_pos": 1622 } ]
[ 0, 157, 255, 369, 441, 496, 653, 786, 873, 1005, 1086, 1179, 1314, 1490 ]
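The "slow bifurcation" precursor in this record rests on the stability-loss delay effect: when a control parameter drifts slowly through a bifurcation point, the state keeps tracking the old attractor well past the critical value. The toy below is not the authors' R/D trading filter; it simply integrates the normal form dx/dt = r(t)*x - x^3 with a slowly ramped r and reports how far beyond the static threshold r = 0 the escape actually happens.

```python
def escape_parameter(r_start=-1.0, ramp=1e-3, x0=1e-6, dt=0.01, threshold=0.5):
    """Integrate dx/dt = r(t)*x - x**3 with a slowly ramped control parameter
    r(t) = r_start + ramp * t. The static bifurcation sits at r = 0, but the
    trajectory hugs x = 0 until roughly r = -r_start before escaping: the
    stability-loss delay of a slow (dynamic) bifurcation."""
    x, t = x0, 0.0
    while abs(x) < threshold:
        r = r_start + ramp * t
        x += dt * (r * x - x ** 3)
        t += dt
    return r_start + ramp * t

print("escape observed at r ~", round(escape_parameter(), 3), "(static threshold: r = 0)")
```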
1507.03378
1
In this paper we have analyzed scaling properties and cyclical behavior of the three types of stock market indexes (SMI) time series: data belonging to stock markets of developed economies, emerging economies, and of the underdeveloped or transitional economies. We have used two techniques of data analysis to obtain and verify our findings: the wavelet spectral analysis to study SMI returns data, and the Hurst exponent formalism to study local behavior around market cycles and trends. We have found cyclical behavior in all SMI data sets that we have analyzed. Moreover, the positions and the boundaries of cyclical intervals that we have found seem to be common for all markets in our dataset. We list and illustrate the presence of nine such periods in our SMI data. We also report on the possibilities to differentiate between the level of growth of the analyzed markets by way of statistical analysis of the properties of wavelet spectra that characterize particular peak behaviors. Our results show that measures like the relative WT energy content and the relative WT amplitude for the peaks in the small scales region could be used for partial differentiation between market economies. Finally, we propose a way to quantify the level of development of a stock market , based on the Hurst scaling exponent approach. From the local scaling exponents calculated for our nine peak regions we have defined what we named the Development Index (H_{DI}), which proved, at least in the case of our dataset, to be suitable to rank the SMI series that we have analyzed in three distinct groups . Further verification of this method remains open for future research .
In this paper we have analyzed scaling properties and cyclical behavior of the three types of stock market indexes (SMI) time series: data belonging to stock markets of developed economies, emerging economies, and of the underdeveloped or transitional economies. We have used two techniques of data analysis to obtain and verify our findings: the wavelet spectral analysis to study SMI returns data, and the Hurst exponent formalism to study local behavior around market cycles and trends. We have found cyclical behavior in all SMI data sets that we have analyzed. Moreover, the positions and the boundaries of cyclical intervals that we have found seem to be common for all markets in our dataset. We list and illustrate the presence of nine such periods in our SMI data. We also report on the possibilities to differentiate between the level of growth of the analyzed markets by way of statistical analysis of the properties of wavelet spectra that characterize particular peak behaviors. Our results show that measures like the relative WT energy content and the relative WT amplitude for the peaks in the small scales region could be used for partial differentiation between market economies. Finally, we propose a way to quantify the level of development of a stock market based on the Hurst scaling exponent approach. From the local scaling exponents calculated for our nine peak regions we have defined what we named the Development Index , which proved, at least in the case of our dataset, to be suitable to rank the SMI series that we have analyzed in three distinct groups .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 1279, "end_char_pos": 1280 }, { "type": "D", "before": "(H_{DI", "after": null, "start_char_pos": 1449, "end_char_pos": 1455 }, { "type": "D", "before": ". Further verification of this method remains open for future research", "after": null, "start_char_pos": 1594, "end_char_pos": 1664 } ]
[ 0, 262, 489, 565, 699, 773, 991, 1197, 1326, 1595 ]
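The Development Index in this record is built from local Hurst scaling exponents. The sketch below shows one common way to estimate a single Hurst exponent from a return series, the rescaled-range (R/S) statistic; it is a generic estimator, not the wavelet-based pipeline or the index definition used by the authors, and the chunk sizes are arbitrary choices.

```python
import numpy as np

def hurst_rs(returns, min_chunk=16):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a return series:
    split the series into chunks of increasing size, average the R/S statistic
    per size, and take the slope of log(R/S) against log(size)."""
    n = len(returns)
    sizes, avg_rs = [], []
    size = min_chunk
    while size <= n // 4:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = returns[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            s = chunk.std(ddof=1)
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)
        sizes.append(size)
        avg_rs.append(np.mean(rs))
        size *= 2
    return np.polyfit(np.log(sizes), np.log(avg_rs), 1)[0]

rng = np.random.default_rng(2)
# i.i.d. returns: the true H is 0.5 (small samples bias the R/S estimate upward)
print("estimated H for i.i.d. returns:", round(hurst_rs(rng.normal(size=8192)), 2))
```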
1507.03703
1
A binding potential of mean force (BPMF) is a free energy of noncovalent association in which one binding partner is flexible and the other is rigid . I have developed a method to calculate BPMFs for protein-ligand systems. The method is based on replica exchange sampling from multiple thermodynamic states at different temperatures and protein-ligand interaction strengths. Protein-ligand interactions are represented by interpolating precomputed electrostatic and van der Waals grids. Using a simple estimator for thermodynamic length, thermodynamic states are initialized at approximately equal intervals. The method is demonstrated on the Astex diverse set, a database of 85 protein-ligand complexes relevant to pharmacy or agriculture. Fifteen independent simulations of each complex were started using poses from crystallography, docking, or the lowest-energy pose observed in the other simulations. Benchmark simulations completed within three days on a single processor. Overall, protocols initialized using the thermodynamic length estimator were system-specific, robust, and led to approximately even replica exchange acceptance probabilities between neighboring states. In most systems, the standard deviation of the BPMF converges to within 5 kT . Even with low variance, however, the mean BPMF was sometimes dependent on starting conditions, implying inadequate sampling. Within the thermodynamic cycle, free energies estimated based on multiple intermediate states were more precise, and those estimated by single-step perturbation were less precise. The results demonstrate that the method is promising, but that ligand pose sampling and phase space overlap can sometimes prevent precise BPMF estimation. The software used to perform these calculations, Alchemical Grid Dock (AlGDock), is available under the open-source MIT license at URL
Alchemical Grid Dock (AlGDock) is open-source software designed to compute the binding potential of mean force (BPMF) - the binding free energy between a flexible ligand and a rigid receptor - for an organic ligand and a biological macromolecule. Multiple BPMFs can be used to rigorously compute binding affinities between flexible partners. AlGDock uses replica exchange between thermodynamic states at different temperatures and receptor-ligand interaction strengths. Receptor-ligand interaction energies are represented by interpolating precomputed grids. Thermodynamic states are adaptively initialized and adjusted on-the-fly to maintain replica exchange rates. In demonstrative calculations, when the bound ligand is treated as fully solvated, AlGDock estimates BPMFs with a precision within 4 kT in 65\% and within 8 kT for 91\% of systems. It correctly identifies the native binding pose in 83\% of simulations. Performance is sometimes limited by subtle differences in the important configuration space of sampled and targeted thermodynamic states.
[ { "type": "R", "before": "A", "after": "Alchemical Grid Dock (AlGDock) is open-source software designed to compute the", "start_char_pos": 0, "end_char_pos": 1 }, { "type": "R", "before": "is a free energy of noncovalent association in which one binding partner is flexible and the other is rigid . I have developed a method to calculate BPMFs for protein-ligand systems. The method is based on replica exchange sampling from multiple", "after": "- the binding free energy between a flexible ligand and a rigid receptor - for a URLanic ligand and a biological macromolecule. Multiple BPMFs can be used to rigorously compute binding affinities between flexible partners. AlGDock uses replica exchange between", "start_char_pos": 41, "end_char_pos": 286 }, { "type": "R", "before": "protein-ligand", "after": "receptor-ligand", "start_char_pos": 338, "end_char_pos": 352 }, { "type": "R", "before": "Protein-ligand interactions", "after": "Receptor-ligand interaction energies", "start_char_pos": 376, "end_char_pos": 403 }, { "type": "R", "before": "electrostatic and van der Waals grids. Using a simple estimator for thermodynamic length, thermodynamic states are initialized at approximately equal intervals. The method is demonstrated on the Astex diverse set, a database of 85 protein-ligand complexes relevant to pharmacy or agriculture. Fifteen independent simulations of each complex were started using poses from crystallography, docking, or the lowest-energy pose observed in the other simulations. Benchmark simulations completed within three days on a single processor. Overall, protocols initialized using the thermodynamic length estimator were system-specific, robust, and led to approximately even replica exchange acceptance probabilities between neighboring states. In most systems, the standard deviation of the BPMF converges to within 5 kT . Even with low variance, however, the mean BPMF was sometimes dependent on starting conditions, implying inadequate sampling. Within the thermodynamic cycle, free energies estimated based on multiple intermediate states were more precise, and those estimated by single-step perturbation were less precise. The results demonstrate that the method is promising, but that ligand pose sampling and phase space overlap can sometimes prevent precise BPMF estimation. The software used to perform these calculations, Alchemical Grid Dock (AlGDock), is available under the open-source MIT license at URL", "after": "grids. Thermodynamic states are adaptively initialized and adjusted on-the-fly to maintain replica exchange rates. In demonstrative calculations, when the bound ligand is treated as fully solvated, AlGDock estimates BPMFs with a precision within 4 kT in 65\\% and within 8 kT for 91\\% of systems. It correctly identifies the native binding pose in 83\\% of simulations. Performance is sometimes limited by subtle differences in the important configuration space of sampled and targeted thermodynamic states.", "start_char_pos": 449, "end_char_pos": 1855 } ]
[ 0, 150, 223, 375, 487, 609, 741, 906, 979, 1181, 1260, 1385, 1565, 1720 ]
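AlGDock's sampling, as described above, runs replica exchange across thermodynamic states that differ in temperature and in receptor-ligand interaction strength. The sketch below is a deliberately tiny analogue on a one-dimensional toy energy with a hand-picked (beta, lambda) ladder; the energy function and ladder are assumptions, but the within-state Metropolis moves and the Hamiltonian-exchange acceptance rule are the standard ones.

```python
import numpy as np

rng = np.random.default_rng(3)

def energy(x, lam):
    """Toy total energy: a fixed 'intramolecular' term plus an 'interaction'
    term scaled by lam in [0, 1] (lam = 0 switches the interaction off)."""
    return 0.5 * x ** 2 + 4.0 * lam * (x ** 2 - 1.0) ** 2

# ladder of thermodynamic states: inverse temperatures and interaction strengths
betas = np.array([1.0, 0.8, 0.6, 0.4])
lams = np.array([1.0, 0.7, 0.4, 0.0])
x = np.zeros(len(betas))                 # one configuration per replica
attempts = np.zeros(len(betas) - 1)
accepted = np.zeros(len(betas) - 1)

for sweep in range(20000):
    # within-state Metropolis moves (vectorised over the replicas)
    prop = x + rng.normal(scale=0.3, size=len(x))
    log_acc = np.minimum(0.0, -betas * (energy(prop, lams) - energy(x, lams)))
    x = np.where(rng.random(len(x)) < np.exp(log_acc), prop, x)
    # one neighbour-swap attempt per sweep (Hamiltonian replica exchange rule)
    i = rng.integers(len(betas) - 1)
    j = i + 1
    delta = (betas[i] * (energy(x[i], lams[i]) - energy(x[j], lams[i]))
             + betas[j] * (energy(x[j], lams[j]) - energy(x[i], lams[j])))
    attempts[i] += 1
    if delta >= 0 or rng.random() < np.exp(delta):
        x[i], x[j] = x[j], x[i]
        accepted[i] += 1

print("neighbour swap acceptance rates:", np.round(accepted / attempts, 2))
```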
1507.03877
1
In his celebrated book "What is Life?" Schrodinger proposed using the properties of living systems to constrain unknown features of life. Here we propose an inverse approach and suggest using biology as a means to constrain unknown physics. We focus on information and causation, as their widespread use in biology is the most problematic aspect of life from the perspective of fundamental physics. Our proposal is cast as a methodology for identifying potentially distinctive features of the informational architecture of biological systems, as compared to other classes of physical system. To illustrate our approach, we use as a case study a Boolean network model for the cell cycle regulation of the single-celled fission yeast (Schizosaccharomyces Pombe) and compare its informational properties to two classes of null model that share commonalities in their causal structure . We report patterns in local information processing and storage that do indeed distinguish biological from random . Conversely, we find that integrated information, which serves as a measure of "emergent" information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell cycle network and for illuminating any distinctive physics operative in life.
We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces Pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures as applied to two different classes of random networks: random and scale-free . We report patterns in local information processing and storage that do indeed distinguish biological from random , associated with control nodes that regulate the function of the fission yeast cell cycle network . Conversely, we find that integrated information, which serves as a global measure of "emergent" information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life.
[ { "type": "R", "before": "In his celebrated book \"What is Life?\" Schrodinger proposed using the properties of living systems to constrain unknown features of life. Here we propose an inverse approach and suggest using biology as a means to constrain unknown physics. We focus on information and causation, as their widespread use in biology is the most problematic aspect of life from the perspective of fundamental physics. Our proposal is cast as a methodology for identifying potentially distinctive features of the", "after": "We compare the", "start_char_pos": 0, "end_char_pos": 492 }, { "type": "R", "before": "systems, as compared to other classes of physical system. To illustrate our approach, we use as a case study a", "after": "and random networks to identify informational features that may distinguish biological networks from random. The study presented here focuses on the", "start_char_pos": 534, "end_char_pos": 644 }, { "type": "A", "before": null, "after": "regulation of", "start_char_pos": 671, "end_char_pos": 671 }, { "type": "D", "before": "regulation of the single-celled fission yeast (Schizosaccharomyces Pombe) and compare its informational properties to two classes", "after": null, "start_char_pos": 687, "end_char_pos": 816 }, { "type": "R", "before": "null model that share commonalities in their causal structure", "after": "the fission yeast Schizosaccharomyces Pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle to the same measures as applied to two different classes of random networks: random and scale-free", "start_char_pos": 820, "end_char_pos": 881 }, { "type": "A", "before": null, "after": ", associated with control nodes that regulate the function of the fission yeast cell cycle network", "start_char_pos": 997, "end_char_pos": 997 }, { "type": "A", "before": null, "after": "global", "start_char_pos": 1067, "end_char_pos": 1067 }, { "type": "R", "before": "and", "after": "in particular, and more generally", "start_char_pos": 1286, "end_char_pos": 1289 }, { "type": "A", "before": null, "after": "that may be", "start_char_pos": 1331, "end_char_pos": 1331 } ]
[ 0, 38, 137, 240, 398, 591, 883, 1165 ]
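The case study in this record is a threshold Boolean network model of cell cycle regulation. The sketch below shows only the generic machinery: synchronous threshold updates and a brute-force search for the attractor reached from a given initial state, on a small made-up 4-node wiring matrix that is explicitly not the published fission yeast network.

```python
import numpy as np

# Hypothetical 4-node threshold network (illustrative wiring only, NOT the
# published fission yeast cell-cycle model). W[i, j] is the weight of the
# edge from node j to node i; theta holds the activation thresholds.
W = np.array([[ 0, -1,  0,  1],
              [ 1,  0, -1,  0],
              [ 0,  1,  0, -1],
              [-1,  0,  1,  0]])
theta = np.zeros(4)

def step(state):
    """Synchronous threshold update: node i switches on if its weighted input
    exceeds theta[i], switches off if below, and keeps its value on a tie."""
    inp = W @ state
    new = state.copy()
    new[inp > theta] = 1
    new[inp < theta] = 0
    return new

def attractor(initial, max_steps=64):
    """Iterate until a state repeats and return the attractor (cycle) reached."""
    seen = [np.array(initial)]
    for _ in range(max_steps):
        nxt = step(seen[-1])
        for k, old in enumerate(seen):
            if (old == nxt).all():
                return [s.tolist() for s in seen[k:]]
        seen.append(nxt)
    return []

print(attractor([1, 0, 0, 0]))   # here the trajectory falls onto a 4-state cycle
```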