Dataset schema (each record below spans six lines, in this order):
doc_id: string, lengths 2 to 10
revision_depth: string, 5 distinct values
before_revision: string, lengths 3 to 309k
after_revision: string, lengths 5 to 309k
edit_actions: list
sents_char_pos: list
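Each record encodes its revision as a list of span edits. An edit_actions entry carries a type ("R" for replace, "A" for add, "D" for delete), the affected "before" and "after" text (null where not applicable), and character offsets start_char_pos and end_char_pos into before_revision; sents_char_pos appears to hold sentence-start character offsets into that same string. The following is a minimal sketch, under those assumptions, of how after_revision could be reconstructed from a record; it is illustrative rather than the dataset's own tooling, and boundary whitespace around inserted spans may need adjustment.

def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Rebuild the revised text by applying span edits to `before`."""
    out = before
    # Apply right to left so earlier character offsets remain valid.
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        # Type "R": replace [start, end) with `after`; "D": delete ("after"
        # is null); "A": insert at a zero-width span (start == end).
        replacement = act["after"] or ""
        out = out[:start] + replacement + out[end:]
    return out

# Usage on one record (fields in the row order listed above):
#   reconstructed = apply_edit_actions(before_revision, edit_actions)
#   assert reconstructed == after_revision  # up to whitespace normalization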
1312.4196
2
Detailed balance in reversible chemical reaction networks (CRNs) is a property possessed by certain chemical reaction networks (CRNs) when modeled as a deterministic dynamical system taken with mass-action kinetics whose reaction rate parameters are appropriately constrained, the constraints being imposed by the network structure of the CRN. We will refer to this property as reaction network detailed balance (RNDB) . Markov chains (whether arising as models of CRNs or otherwise) have their own notion of detailed balance, imposed by the network structure of the graph of the transition matrix of the Markov chain. When considering Markov chains arising from chemical reaction networks with mass-action kinetics, we will refer to this property as Markov chain detailed balance (MCDB). Finally, we refer to the stochastic analog of RNDB as Whittle stochastic detailed balance (WSDB). It is known that RNDB and WSDB are equivalent , in the sense that they require an identical set of conditions on the rate constants . We prove that WSDB and MCDB are also intimately related but are not equivalent , although they are sometimes confused for each other . While RNDB implies MCDB, the converse is not true. The conditions on rate constants that result in networks with MCDB but without RNDB are stringent, and thus examples of this phenomenon are rare, a notable exception is a network whose Markov chain is a birth and death process. Using the fact that RNDB implies MCDB, we give a new algorithm to find conditions on the rate constants that are required for MCDB and we obtain an explicit formula for the stationary distribution of networks with RNDB .
Certain chemical reaction networks (CRNs) when modeled as a deterministic dynamical system taken with mass-action kinetics have the property of reaction network detailed balance (RNDB) which is achieved by imposing network-related constraints on the reaction rate constants . Markov chains (whether arising as models of CRNs or otherwise) have their own notion of detailed balance, imposed by the network structure of the graph of the transition matrix of the Markov chain. When considering Markov chains arising from chemical reaction networks with mass-action kinetics, we will refer to this property as Markov chain detailed balance (MCDB). Finally, we refer to the stochastic analog of RNDB as Whittle stochastic detailed balance (WSDB). It is known that RNDB and WSDB are equivalent . We prove that WSDB and MCDB are also intimately related but are not equivalent . While RNDB implies MCDB, the converse is not true. The conditions on rate constants that result in networks with MCDB but without RNDB are stringent, and thus examples of this phenomenon are rare, a notable exception is a network whose Markov chain is a birth and death process. We give a new algorithm to find conditions on the rate constants that are required for MCDB .
[ { "type": "R", "before": "Detailed balance in reversible", "after": "Certain", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "D", "before": "is a property possessed by certain chemical reaction networks (CRNs)", "after": null, "start_char_pos": 65, "end_char_pos": 133 }, { "type": "R", "before": "whose reaction rate parameters are appropriately constrained, the constraints being imposed by the network structure of the CRN. We will refer to this property as", "after": "have the property of", "start_char_pos": 215, "end_char_pos": 377 }, { "type": "A", "before": null, "after": "which is achieved by imposing network-related constraints on the reaction rate constants", "start_char_pos": 419, "end_char_pos": 419 }, { "type": "D", "before": ", in the sense that they require an identical set of conditions on the rate constants", "after": null, "start_char_pos": 934, "end_char_pos": 1019 }, { "type": "D", "before": ", although they are sometimes confused for each other", "after": null, "start_char_pos": 1101, "end_char_pos": 1154 }, { "type": "R", "before": "Using the fact that RNDB implies MCDB, we", "after": "We", "start_char_pos": 1436, "end_char_pos": 1477 }, { "type": "D", "before": "and we obtain an explicit formula for the stationary distribution of networks with RNDB", "after": null, "start_char_pos": 1567, "end_char_pos": 1654 } ]
[ 0, 343, 619, 789, 887, 1021, 1156, 1207, 1435 ]
1312.4227
1
Draft. We combine static hedging and real options valuation ideas to build a capital budgeting technique . Here one applies the market information of derivative prices on a traded 'quasi twin security' to benchmark a single-step stochastic cash stream. We provide a transparent, more or less closed-form solution for valuing these streams . The fundamental properties of this valuation rule are then studied. The derivation of the pricing rule is developed in such a way as to generalize intuitive real option considerations to continuous state step-by-step. We also discuss some required mathematical finance machinery as wellas results of Breeden-Litzenberger type .
We investigate a statistical-static hedging technique for pricing assets considered as single-step stochastic cash flows. The valuation is based on constructing in a canonical way a European style derivative on a benchmark security such that the physical payoff distribution coincides with the (corrected) physical asset price distribution. It turns out that this pricing technique is economically viable under some natural cases . The fundamental properties of the pricing rule arising in this way are investigated here. This gives rise to a novel way of estimating state price density. Our approach has some tangible benefits: its principle is transparent, and it is easy to implement numerically while avoiding many issues typically involved in such an estimation. As an application, it is shown how this method can be used in performing kurtosis corrections to the standard Black-Scholes-Merton model by a mixture of several types of distributions. In fact, the technique is non-parametric in nature, and it can handle in principle any physical distribution, e.g., a multimodal one. Some other interesting applications are discussed as well .
[ { "type": "R", "before": "Draft. We combine static hedging and real options valuation ideas to build a capital budgeting technique . Here one applies the market information of derivative prices on a traded 'quasi twin security' to benchmark a", "after": "We investigate a statistical-static hedging technique for pricing assets considered as", "start_char_pos": 0, "end_char_pos": 216 }, { "type": "R", "before": "stochastic cash stream. We provide a transparent, more or less closed-form solution for valuing these streams", "after": "stochastic cash flows. The valuation is based on constructing in a canonical way a European style derivative on a benchmark security such that the physical payoff distribution coincides with the (corrected) physical asset price distribution. It turns out that this pricing technique is economically viable under some natural cases", "start_char_pos": 229, "end_char_pos": 338 }, { "type": "D", "before": "this valuation rule are then studied. The derivation of", "after": null, "start_char_pos": 371, "end_char_pos": 426 }, { "type": "R", "before": "is developed in such a way as to generalize intuitive real option considerations to continuous state step-by-step. We also discuss some required mathematical finance machinery as wellas results of Breeden-Litzenberger type", "after": "arising in this way are investigated here. This gives rise to a novel way of estimating state price density. Our approach has some tangible benefits: its principle is transparent, and it is easy to implement numerically while avoiding many issues typically involved in such an estimation. As an application, it is shown how this method can be used in performing kurtosis corrections to the standard Black-Scholes-Merton model by a mixture of several types of distributions. In fact, the technique is non-parametric in nature, and it can handle in principle any physical distribution, e.g., a multimodal one. Some other interesting applications are discussed as well", "start_char_pos": 444, "end_char_pos": 666 } ]
[ 0, 106, 252, 340, 408, 558 ]
1312.4385
1
In this paper we investigate the local risk-minimization approach for a financial market where there are restrictions on the available information to agents who can observe at least the asset prices. We characterize the optimal strategy in terms of the predictable covariation of the optimal value process and the stock price with respect to a given filtration representing the information level, even in presence of jumps. Finally, we discuss some practical examples in a Markovian framework and show that the computation of the optimal strategy leads to solve filtering problems under the real-world probability measure and under the minimal martingale measure.
In this paper we investigate the local risk-minimization approach for a semimartingale financial market where there are restrictions on the available information to agents who can observe at least the asset prices. We characterize the optimal strategy in terms of suitable decompositions of a given contingent claim, with respect to a filtration representing the information level, even in presence of jumps. Finally, we discuss some practical examples in a Markovian framework and show that the computation of the optimal strategy leads to filtering problems under the real-world probability measure and under the minimalmartingale measure.
[ { "type": "A", "before": null, "after": "semimartingale", "start_char_pos": 72, "end_char_pos": 72 }, { "type": "R", "before": "the predictable covariation of the optimal value process and the stock price", "after": "suitable decompositions of a given contingent claim,", "start_char_pos": 250, "end_char_pos": 326 }, { "type": "D", "before": "given", "after": null, "start_char_pos": 345, "end_char_pos": 350 }, { "type": "D", "before": "solve", "after": null, "start_char_pos": 557, "end_char_pos": 562 }, { "type": "R", "before": "minimal martingale", "after": "minimalmartingale", "start_char_pos": 637, "end_char_pos": 655 } ]
[ 0, 200, 424 ]
1312.4496
1
The collective behaviour of proteins on biomembranes is usually studied within the spontaneous curvature model. Here we consider an alternative approach, which accounts consistently for the liquid-crystalline order of proteins together with the morphology of biomembrane. We show analytically that the anchoring forces exerted on the membrane by a layer of proteins can lead to the membrane bending. Within a broad range of parameters the calculated equilibrium shapes are similar to the ones observed in experiments while budding of vesicles and division of cells . The predicted instabilities can advance our conceptual understanding of the collective phenomena in biological systems .
The collective behavior of proteins on biomembranes is usually studied within the spontaneous curvature model. Here we consider a novel approach, which accounts consistently for the liquid-crystalline order of proteins together with the morphology of biomembrane. We show analytically that the anchoring forces exerted on the membrane by a layer of proteins can lead to the membrane bending. We find equilibrium shapes similar to the ones observed during the budding of vesicles and cell division . The predicted instabilities can advance our conceptual understanding of the collective phenomena in biological systems , in particular those with inherent anisotropy .
[ { "type": "R", "before": "behaviour", "after": "behavior", "start_char_pos": 15, "end_char_pos": 24 }, { "type": "R", "before": "an alternative", "after": "a novel", "start_char_pos": 129, "end_char_pos": 143 }, { "type": "R", "before": "Within a broad range of parameters the calculated equilibrium shapes are", "after": "We find equilibrium shapes", "start_char_pos": 400, "end_char_pos": 472 }, { "type": "R", "before": "in experiments while", "after": "during the", "start_char_pos": 502, "end_char_pos": 522 }, { "type": "R", "before": "division of cells", "after": "cell division", "start_char_pos": 547, "end_char_pos": 564 }, { "type": "A", "before": null, "after": ", in particular those with inherent anisotropy", "start_char_pos": 686, "end_char_pos": 686 } ]
[ 0, 111, 271, 399, 566 ]
1312.4496
2
The collective behavior of proteins on biomembranes is usually studied within the spontaneous curvature model. Here we consider a novel approach, which accounts consistently for the liquid-crystalline order of proteins together with the morphology of biomembrane. We show analytically that the anchoring forces exerted on the membrane by a layer of proteins can lead to the membrane bending. We find equilibrium shapes similar to the ones observed during the budding of vesicles and cell division . The predicted instabilities can advance our conceptual understanding of the collective phenomena in biological systems, in particular those with inherent anisotropy.
Collective behavior of proteins on biomembranes is usually studied within the spontaneous curvature model. Here we consider an alternative phenomenological approach, which accounts consistently for partial ordering of proteins as well as the anchoring forces exerted on a membrane by layer of proteins . We show analytically that such anisotropic interactions can drive membrane bending, resulting in non-trivial equilibrium morphologies . The predicted instabilities can advance our conceptual understanding of physical mechanisms behind collective phenomena in biological systems, in particular those with inherent anisotropy.
[ { "type": "R", "before": "The collective", "after": "Collective", "start_char_pos": 0, "end_char_pos": 14 }, { "type": "R", "before": "a novel", "after": "an alternative phenomenological", "start_char_pos": 128, "end_char_pos": 135 }, { "type": "R", "before": "the liquid-crystalline order of proteins together with the morphology of biomembrane. We show analytically that the", "after": "partial ordering of proteins as well as the", "start_char_pos": 178, "end_char_pos": 293 }, { "type": "R", "before": "the membrane by a", "after": "a membrane by", "start_char_pos": 322, "end_char_pos": 339 }, { "type": "R", "before": "can lead to the membrane bending. We find equilibrium shapes similar to the ones observed during the budding of vesicles and cell division", "after": ". We show analytically that such anisotropic interactions can drive membrane bending, resulting in non-trivial equilibrium morphologies", "start_char_pos": 358, "end_char_pos": 496 }, { "type": "R", "before": "the", "after": "physical mechanisms behind", "start_char_pos": 571, "end_char_pos": 574 } ]
[ 0, 110, 263, 391, 498 ]
1312.4603
1
Many studies have shown that the icosahedral symmetry adopted by most spherical viruses is the result of free energy minimization of a generic interaction between virus proteins. Remarkably, we find that icosahedral and other highly symmetric structures observed both in vitro and equilibrium studies of viral shells can readily grow from identical subunits under non equilibrium conditions. Our minimal model of virus assembly shows that structures of small shells are basically determined by the spontaneous curvature almost independently of the mechanical properties of the protein subunits .
Highly symmetric nano-shells are found in many biological systems, such as clathrin cages and viral shells. Several studies have shown that symmetric shells appear in nature as a result of the free energy minimization of a generic interaction between their constituent subunits. We examine the physical basis for the formation of symmetric shells, and using a minimal model we demonstrate that these structures can readily grow from identical subunits under non equilibrium conditions. Our model of nano-shell assembly shows that the spontaneous curvature regulates the size of the shell while the mechanical properties of the subunit determines the symmetry of the assembled structure. Understanding the minimum requirements for the formation of closed nano-shells is a necessary step towards engineering of nano-containers, which will have far reaching impact in both material science and medicine .
[ { "type": "R", "before": "Many", "after": "Highly symmetric nano-shells are found in many biological systems, such as clathrin cages and viral shells. Several", "start_char_pos": 0, "end_char_pos": 4 }, { "type": "R", "before": "the icosahedral symmetry adopted by most spherical viruses is the result of", "after": "symmetric shells appear in nature as a result of the", "start_char_pos": 29, "end_char_pos": 104 }, { "type": "R", "before": "virus proteins. Remarkably, we find that icosahedral and other highly symmetric structures observed both in vitro and equilibrium studies of viral shells", "after": "their constituent subunits. We examine the physical basis for the formation of symmetric shells, and using a minimal model we demonstrate that these structures", "start_char_pos": 163, "end_char_pos": 316 }, { "type": "R", "before": "minimal model of virus", "after": "model of nano-shell", "start_char_pos": 396, "end_char_pos": 418 }, { "type": "D", "before": "structures of small shells are basically determined by", "after": null, "start_char_pos": 439, "end_char_pos": 493 }, { "type": "R", "before": "almost independently of the", "after": "regulates the size of the shell while the", "start_char_pos": 520, "end_char_pos": 547 }, { "type": "R", "before": "protein subunits", "after": "subunit determines the symmetry of the assembled structure. Understanding the minimum requirements for the formation of closed nano-shells is a necessary step towards engineering of nano-containers, which will have far reaching impact in both material science and medicine", "start_char_pos": 577, "end_char_pos": 593 } ]
[ 0, 178, 391 ]
1312.4774
1
Multisite phosphorylation networks are encountered in many intracellular processes like signal transduction, cell-cycle control or nuclear signal integration. In {\em Wang and Sontag, 2008 ,the authors study the number of steady states in general n-site sequential distributive phosphorylation and show that there are at most 2n-1 steady states. They furthermore conjecture that , for odd n, there are at most n and that, for even n, there are at most n+1 steady states. Building on earlier work in {\em Holstein et.al., 2013}, we present a scalar determining equation for multistationarity which will lead to 5 steady states for a 3-site and to 7 steady states for a 4-site phosphorylation system and hence to counterexamples to the conjecture of Wang and Sontag. We conclude with a brief biological interpretation of the inherent geometric properties of multistationarity .
Multisite protein phosphorylation plays a prominent role in intracellular processes like signal transduction, cell-cycle control and nuclear signal integration. Many proteins are phosphorylated in a sequential and distributive way at more than one phosphorylation site. Mathematical models of n-site sequential distributive phosphorylation are therefore studied frequently. In particular, in {\em Wang and Sontag, 2008 , it is shown that models of n-site sequential distributive phosphorylation admit at most 2n-1 steady states. Wang and Sontag furthermore conjecture that for odd n, there are at most n and that, for even n, there are at most n+1 steady states. This, however, is not true: building on earlier work in {\em Holstein et.al., 2013}, we present a scalar determining equation for multistationarity which will lead to parameter values where a 3-site system has 5 steady states and parameter values where a 4-site system has 7 steady states. Our results therefore are counterexamples to the conjecture of Wang and Sontag. We furthermore study the inherent geometric properties of multistationarity in n-site sequential distributive phosphorylation: the complete vector of steady state ratios is determined by the steady state ratios of free enzymes and unphosphorylated protein and there exists a linear relationship between steady state ratios of phosphorylated protein .
[ { "type": "R", "before": "phosphorylation networks are encountered in many", "after": "protein phosphorylation plays a prominent role in", "start_char_pos": 10, "end_char_pos": 58 }, { "type": "R", "before": "or", "after": "and", "start_char_pos": 128, "end_char_pos": 130 }, { "type": "R", "before": "In", "after": "Many proteins are phosphorylated in a sequential and distributive way at more than one phosphorylation site. Mathematical models of n-site sequential distributive phosphorylation are therefore studied frequently. In particular, in", "start_char_pos": 159, "end_char_pos": 161 }, { "type": "R", "before": ",the authors study the number of steady states in general", "after": ",", "start_char_pos": 189, "end_char_pos": 246 }, { "type": "A", "before": null, "after": "it is shown that models of", "start_char_pos": 247, "end_char_pos": 247 }, { "type": "R", "before": "and show that there are", "after": "admit", "start_char_pos": 295, "end_char_pos": 318 }, { "type": "R", "before": "They", "after": "Wang and Sontag", "start_char_pos": 347, "end_char_pos": 351 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 380, "end_char_pos": 381 }, { "type": "R", "before": "Building", "after": "This, however, is not true: building", "start_char_pos": 472, "end_char_pos": 480 }, { "type": "R", "before": "5 steady states for", "after": "parameter values where", "start_char_pos": 611, "end_char_pos": 630 }, { "type": "R", "before": "and to 7 steady states for", "after": "system has 5 steady states and parameter values where", "start_char_pos": 640, "end_char_pos": 666 }, { "type": "R", "before": "phosphorylation system and hence to", "after": "system has 7 steady states. Our results therefore are", "start_char_pos": 676, "end_char_pos": 711 }, { "type": "R", "before": "conclude with a brief biological interpretation of", "after": "furthermore study", "start_char_pos": 769, "end_char_pos": 819 }, { "type": "A", "before": null, "after": "in n-site sequential distributive phosphorylation: the complete vector of steady state ratios is determined by the steady state ratios of free enzymes and unphosphorylated protein and there exists a linear relationship between steady state ratios of phosphorylated protein", "start_char_pos": 875, "end_char_pos": 875 } ]
[ 0, 158, 346, 471, 765 ]
1312.4803
1
The agent-based computational economical model for the emergence of money from the initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while manageable, is sufficiently complex, however. It already is able to reveal phenomena that can be interpreted as emergence and collapse of money as well as the related competition effects. In particular, it is shown that - as an extra emerging effect - the money lifetimes near the critical threshold value develop multiscaling, which allow one to set parallels to critical phenomena and, thus, to the real financial markets.
An agent-based computational economical toy model for the emergence of money from the initial barter trading, inspired by Menger's postulate that money can spontaneously emerge in a commodity exchange economy, is extensively studied. The model considered, while manageable, is significantly complex, however. It is already able to reveal phenomena that can be interpreted as emergence and collapse of money as well as the related competition effects. In particular, it is shown that - as an extra emerging effect - the money lifetimes near the critical threshold value develop multiscaling, which allow one to set parallels to critical phenomena and, thus, to the real financial markets.
[ { "type": "R", "before": "The", "after": "An", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "A", "before": null, "after": "toy", "start_char_pos": 41, "end_char_pos": 41 }, { "type": "R", "before": "sufficiently", "after": "significantly", "start_char_pos": 275, "end_char_pos": 287 }, { "type": "R", "before": "already is", "after": "is already", "start_char_pos": 309, "end_char_pos": 319 } ]
[ 0, 231, 305, 447 ]
1312.5116
1
A general market model with memory is considered . The formulation is given in terms of stochastic functional differential equations , which allow for flexibility in the modeling of market memory and delays. We focus on the sensitivity analysis of the dependence of option prices on the memory. This implies a generalization of the concept of delta . Our techniques use Malliavin calculus and Fr\'echet derivation. When it comes to option prices, we consider both the risk-neutral and the benchmark approaches and we compute the delta in both cases. Some examples are provided .
A general market model with memory is considered in terms of stochastic functional differential equations . We aim at representation formulae for the sensitivity analysis of the dependence of option prices on the memory. This implies a generalization of the concept of delta .
[ { "type": "D", "before": ". The formulation is given", "after": null, "start_char_pos": 49, "end_char_pos": 75 }, { "type": "R", "before": ", which allow for flexibility in the modeling of market memory and delays. We focus on", "after": ". We aim at representation formulae for", "start_char_pos": 133, "end_char_pos": 219 }, { "type": "D", "before": ". Our techniques use Malliavin calculus and Fr\\'echet derivation. When it comes to option prices, we consider both the risk-neutral and the benchmark approaches and we compute the delta in both cases. Some examples are provided", "after": null, "start_char_pos": 349, "end_char_pos": 576 } ]
[ 0, 50, 207, 294, 350, 414, 549 ]
1312.5204
1
Dynamical robustness and modularity in cellular processes are considered as special characters in biological regulatory network. Here we construct a simplified cell-cycle model in budding yeast to investigate the underlying mechanism that ensures the robustness in the multi-step process . First, we establish a three-variable model and select a parameter set that can qualitatively describe the yeast cell-cycle process. Then, through nonlinear dynamical analysis we demonstrate that the yeast cell-cycle process is an excited system driven by a sequence of saddle-node bifurcations with ghost effects , and the yeast cell-cycle trajectory is globally attractive with modularity in both state and parameter space, while the convergent manifold provides a suitable control state for cell-cycle checkpoints. These results highlight the dynamical regulatory mechanism for complex biological processes to execute successive events and multi-task .
Yeast cells produce daughter cells through a DNA replication and mitosis cycle associated with checkpoints and governed by the cell cycle regulatory network. To ensure genome stability and genetic information inheritance, this regulatory network must be dynamically robust against various fluctuations. Here we construct a simplified cell cycle model for a budding yeast to investigate the underlying mechanism that ensures robustness in this process containing sequential tasks (DNA replication and mitosis). We first establish a three-variable model and select a parameter set that qualitatively describes the yeast cell cycle process. Then, through nonlinear dynamic analysis, we demonstrate that the yeast cell cycle process is an excitable system driven by a sequence of saddle-node bifurcations with ghost effects . We further show that the yeast cell cycle trajectory is globally attractive with modularity in both state and parameter space, while the convergent manifold provides a suitable control state for cell cycle checkpoints. These results not only highlight a regulatory mechanism for executing successive cell cycle processes, but also provide a possible strategy for the synthetic network design of sequential-task processes .
[ { "type": "R", "before": "Dynamical robustness and modularity in cellular processes are considered as special characters in biological", "after": "Yeast cells produce daughter cells through a DNA replication and mitosis cycle associated with checkpoints and governed by the cell cycle", "start_char_pos": 0, "end_char_pos": 108 }, { "type": "A", "before": null, "after": "To ensure genome stability and genetic information inheritance, this regulatory network must be dynamically robust against various fluctuations.", "start_char_pos": 129, "end_char_pos": 129 }, { "type": "R", "before": "cell-cycle model in", "after": "cell cycle model for a", "start_char_pos": 161, "end_char_pos": 180 }, { "type": "R", "before": "the robustness in the multi-step process . First, we", "after": "robustness in this process containing sequential tasks (DNA replication and mitosis). We first", "start_char_pos": 248, "end_char_pos": 300 }, { "type": "R", "before": "can qualitatively describe the yeast cell-cycle", "after": "qualitatively describes the yeast cell cycle", "start_char_pos": 366, "end_char_pos": 413 }, { "type": "R", "before": "dynamical analysis", "after": "dynamic analysis,", "start_char_pos": 447, "end_char_pos": 465 }, { "type": "R", "before": "cell-cycle", "after": "cell cycle", "start_char_pos": 496, "end_char_pos": 506 }, { "type": "R", "before": "excited", "after": "excitable", "start_char_pos": 521, "end_char_pos": 528 }, { "type": "R", "before": ", and the yeast cell-cycle", "after": ". We further show that the yeast cell cycle", "start_char_pos": 604, "end_char_pos": 630 }, { "type": "R", "before": "cell-cycle", "after": "cell cycle", "start_char_pos": 784, "end_char_pos": 794 }, { "type": "R", "before": "highlight the dynamical", "after": "not only highlight a", "start_char_pos": 822, "end_char_pos": 845 }, { "type": "R", "before": "complex biological processes to execute successive events and multi-task", "after": "executing successive cell cycle processes, but also provide a possible strategy for the synthetic network design of sequential-task processes", "start_char_pos": 871, "end_char_pos": 943 } ]
[ 0, 128, 290, 422, 807 ]
1312.5228
1
The uniform sampling of convex polytopes is an interesting computational problem with many applications , in particular in the field of metabolic network analysis , but the performances of sampling algorithms can be affected by high condition numbers in real instances . In this work we define a procedure in order to reduce the condition number based on building an ellipsoid that closely matches the sampling space . This defines an affine transformation that renders the space homogeneous and suited to an efficient sampling by means of an Hit-and-Run Montecarlo markov chain . In this way the uniformity of the sampling is rigorously guaranteed at odds with procedures based on non-markovian dynamics. We propose two methods in order to build the ellipsoid: one based on principal component analysis and the other on linear programming . We show its performances on highly heterogeneous hyper-rectangles and apply it to a model of the metabolism of the bacterium E.Coli .
The uniform sampling of convex polytopes is an interesting computational problem with many applications in inference from linear constraints , but the performances of sampling algorithms can be affected by ill-conditioning. This is the case of inferring the feasible steady states in models of metabolic networks, since they can show heterogeneous timescales . In this work we focus on rounding procedures based on building an ellipsoid that closely matches the sampling space , that can be used to define an efficient Hit-and-Run markov chain Monte Carlo . In this way the uniformity of the sampling is rigorously guaranteed at odds with procedures based on non-markovian dynamics. We analyze and compare three rounding methods in order to sample the feasible steady states of a model of the metabolism of the bacterium E.Coli. The first is based on principal component analysis (PCA), the second on linear programming (LP) and finally we employ the Lovazs ellipsoid method (LEM). Our results show that a rounding procedure is mandatory for such inference problems and suggest that a combination of LEM or LP with a subsequent PCA seems to perform the best .
[ { "type": "R", "before": ", in particular in the field of metabolic network analysis", "after": "in inference from linear constraints", "start_char_pos": 104, "end_char_pos": 162 }, { "type": "R", "before": "high condition numbers in real instances", "after": "ill-conditioning. This is the case of inferring the feasible steady states in models of metabolic networks, since they can show heterogeneous timescales", "start_char_pos": 228, "end_char_pos": 268 }, { "type": "R", "before": "define a procedure in order to reduce the condition number", "after": "focus on rounding procedures", "start_char_pos": 287, "end_char_pos": 345 }, { "type": "R", "before": ". This defines an affine transformation that renders the space homogeneous and suited to an efficient sampling by means of an", "after": ", that can be used to define an efficient", "start_char_pos": 417, "end_char_pos": 542 }, { "type": "R", "before": "Montecarlo markov chain", "after": "markov chain Monte Carlo", "start_char_pos": 555, "end_char_pos": 578 }, { "type": "R", "before": "propose two", "after": "analyze and compare three rounding", "start_char_pos": 709, "end_char_pos": 720 }, { "type": "R", "before": "build the ellipsoid: one", "after": "sample the feasible steady states of a model of the metabolism of the bacterium E.Coli. The first is", "start_char_pos": 741, "end_char_pos": 765 }, { "type": "R", "before": "and the other", "after": "(PCA), the second", "start_char_pos": 804, "end_char_pos": 817 }, { "type": "R", "before": ". We show its performances on highly heterogeneous hyper-rectangles and apply it to a model of the metabolism of the bacterium E.Coli", "after": "(LP) and finally we employ the Lovazs ellipsoid method (LEM). Our results show that a rounding procedure is mandatory for such inference problems and suggest that a combination of LEM or LP with a subsequent PCA seems to perform the best", "start_char_pos": 840, "end_char_pos": 973 } ]
[ 0, 270, 418, 580, 705, 841 ]
1312.5228
2
The uniform sampling of convex polytopes is an interesting computational problem with many applications in inference from linear constraints, but the performances of sampling algorithms can be affected by ill-conditioning. This is the case of inferring the feasible steady states in models of metabolic networks, since they can show heterogeneous timescales . In this work we focus on rounding procedures based on building an ellipsoid that closely matches the sampling space, that can be used to define an efficient Hit-and-Run markov chain Monte Carlo. In this way the uniformity of the sampling is rigorously guaranteed at odds with procedures based on non-markovian dynamics . We analyze and compare three rounding methods in order to sample the feasible steady states of a model of the metabolism of the bacterium E. Coli. The first is based on principal component analysis (PCA), the second on linear programming (LP) and finally we employ the Lovazs ellipsoid method (LEM). Our results show that a rounding procedure is mandatory for such inference problems and suggest that a combination of LEM or LP with a subsequent PCA seems to perform the best .
The uniform sampling of convex polytopes is an interesting computational problem with many applications in inference from linear constraints, but the performances of sampling algorithms can be affected by ill-conditioning. This is the case of inferring the feasible steady states in models of metabolic networks, since they can show heterogeneous time scales . In this work we focus on rounding procedures based on building an ellipsoid that closely matches the sampling space, that can be used to define an efficient hit-and-run (HR) Markov Chain Monte Carlo. In this way the uniformity of the sampling of the convex space of interest is rigorously guaranteed , at odds with non markovian methods . We analyze and compare three rounding methods in order to sample the feasible steady states of metabolic networks of three models of growing size up to genomic scale. The first is based on principal component analysis (PCA), the second on linear programming (LP) and finally we employ the lovasz ellipsoid method (LEM). Our results show that a rounding procedure is mandatory for the application of the HR in these inference problem and suggest that a combination of LEM or LP with a subsequent PCA perform the best . We finally compare the distributions of the HR with that of two heuristics based on the Artificially Centered hit-and-run (ACHR), gpSampler and optGpSampler. They show a good agreement with the results of the HR for the small network, while on genome scale models present inconsistencies .
[ { "type": "R", "before": "timescales", "after": "time scales", "start_char_pos": 347, "end_char_pos": 357 }, { "type": "R", "before": "Hit-and-Run markov chain", "after": "hit-and-run (HR) Markov Chain", "start_char_pos": 517, "end_char_pos": 541 }, { "type": "A", "before": null, "after": "of the convex space of interest", "start_char_pos": 598, "end_char_pos": 598 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 624, "end_char_pos": 624 }, { "type": "R", "before": "procedures based on non-markovian dynamics", "after": "non markovian methods", "start_char_pos": 638, "end_char_pos": 680 }, { "type": "R", "before": "a model of the metabolism of the bacterium E. Coli.", "after": "metabolic networks of three models of growing size up to genomic scale.", "start_char_pos": 778, "end_char_pos": 829 }, { "type": "R", "before": "Lovazs", "after": "lovasz", "start_char_pos": 952, "end_char_pos": 958 }, { "type": "R", "before": "such inference problems", "after": "the application of the HR in these inference problem", "start_char_pos": 1043, "end_char_pos": 1066 }, { "type": "D", "before": "seems to", "after": null, "start_char_pos": 1133, "end_char_pos": 1141 }, { "type": "A", "before": null, "after": ". We finally compare the distributions of the HR with that of two heuristics based on the Artificially Centered hit-and-run (ACHR), gpSampler and optGpSampler. They show a good agreement with the results of the HR for the small network, while on genome scale models present inconsistencies", "start_char_pos": 1159, "end_char_pos": 1159 } ]
[ 0, 222, 359, 554, 682, 829, 982 ]
1312.5492
1
A dendritic spine is a very small structure } of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons; the size of spines is optimal for probability codingof Ca2+ increases, which makes robust and sensitive to input timing information . We created a stochastic simulation model of input timing-dependent Ca2+ increases in a cerebellar Purkinje cells spine. Spines used probability coding of Ca2+ increases rather than amplitude coding for input timing detection via stochastic facilitation by utilizing the small number of molecules in a spine volume, which appeared optimalfor probability coding . Probability coding of Ca2+ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca2+ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
A dendritic spine is a very small structure (~0.1 \mu}m^3) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons; the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca^{2+ increases, which makes robust and sensitive information coding possible . We created a stochastic simulation model of input timing-dependent Ca^{2+ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca^{2+ increases rather than amplitude coding for input timing detection via stochastic facilitation by utilizing the small number of molecules in a spine volume, where information per volume appeared optimal . Probability coding of Ca^{2+ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca^{2+ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
[ { "type": "A", "before": null, "after": "(~0.1", "start_char_pos": 44, "end_char_pos": 44 }, { "type": "A", "before": null, "after": "\\mu", "start_char_pos": 45, "end_char_pos": 45 }, { "type": "A", "before": null, "after": "m^3)", "start_char_pos": 46, "end_char_pos": 46 }, { "type": "R", "before": "probability codingof Ca2+", "after": "information coding. Spines code input timing information by the probability of Ca^{2+", "start_char_pos": 196, "end_char_pos": 221 }, { "type": "R", "before": "to input timing information", "after": "information coding possible", "start_char_pos": 266, "end_char_pos": 293 }, { "type": "R", "before": "Ca2+", "after": "Ca^{2+", "start_char_pos": 363, "end_char_pos": 367 }, { "type": "R", "before": "cells", "after": "cell's", "start_char_pos": 403, "end_char_pos": 408 }, { "type": "R", "before": "Ca2+", "after": "Ca^{2+", "start_char_pos": 450, "end_char_pos": 454 }, { "type": "R", "before": "which appeared optimalfor probability coding", "after": "where information per volume appeared optimal", "start_char_pos": 611, "end_char_pos": 655 }, { "type": "R", "before": "Ca2+", "after": "Ca^{2+", "start_char_pos": 680, "end_char_pos": 684 }, { "type": "R", "before": "Ca2+", "after": "Ca^{2+", "start_char_pos": 816, "end_char_pos": 820 } ]
[ 0, 99, 124, 161, 295, 415, 848 ]
1312.5911
1
In quantitative finance, we often wish to recover the volatility of asset prices given by a noisy It\=o semimartingale. Existing estimates, however, lose accuracy when the jumps are of infinite variation, as is suggested by empirical evidence . In this paper, we show that when the efficient prices are given by an unknown time-changed L\'evy process, the rate of time change, which plays the role of the volatility, can be estimated well under arbitrary jump activity. We further show that our estimate remains valid for the volatility in the general semimartingale model, obtaining convergence rates as good as any previously implied in the literature.
In quantitative finance, we often wish to model the behaviour of asset prices given by a noisy Ito semimartingale; unfortunately, this model is too complex to identify from price data . In this paper, we therefore consider efficient prices given by a time-changed Levy process; this model is both identifiable, and replicates salient features of price data. We give a new estimate of the rate process in this model, which governs its volatility. Our estimate obtains minimax convergence rates, and is unaffected by arbitrary jump activity. Furthermore, it remains valid for the volatility in the general semimartingale model, obtaining convergence rates as good as any previously implied in the literature.
[ { "type": "R", "before": "recover the volatility", "after": "model the behaviour", "start_char_pos": 42, "end_char_pos": 64 }, { "type": "R", "before": "It\\=o semimartingale. Existing estimates, however, lose accuracy when the jumps are of infinite variation, as is suggested by empirical evidence", "after": "Ito semimartingale; unfortunately, this model is too complex to identify from price data", "start_char_pos": 98, "end_char_pos": 242 }, { "type": "R", "before": "show that when the efficient prices are given by an unknown", "after": "therefore consider efficient prices given by a", "start_char_pos": 263, "end_char_pos": 322 }, { "type": "R", "before": "L\\'evy process, the rate of time change, which plays the role of the volatility, can be estimated well under", "after": "Levy process; this model is both identifiable, and replicates salient features of price data. We give a new estimate of the rate process in this model, which governs its volatility. Our estimate obtains minimax convergence rates, and is unaffected by", "start_char_pos": 336, "end_char_pos": 444 }, { "type": "R", "before": "We further show that our estimate", "after": "Furthermore, it", "start_char_pos": 470, "end_char_pos": 503 } ]
[ 0, 119, 244, 469 ]
1312.5911
2
In quantitative finance, we often wish to model the behaviour of asset prices given by a noisy Ito semimartingale ; unfortunately, this model is too complex to identify from price data. In this paper, we therefore consider efficient prices given by a time-changed Levy process ; this model is both identifiable, and replicates salient features of price data . We give a new estimate of the rate process in this model, which governs its volatility. Our estimate obtains minimax convergence rates, and is unaffected by arbitrary jump activity. Furthermore, it remains valid for the volatility in the general semimartingale model , obtaining convergence rates as good as any previously implied in the literature.
In quantitative finance, we often model asset prices as a noisy Ito semimartingale . As this model is not identifiable, approximating by a time-changed Levy process can be useful for generative modelling . We give a new estimate of the normalised volatility or time change in this model, which obtains minimax convergence rates, and is unaffected by infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility , obtaining convergence rates as good as any previously implied in the literature.
[ { "type": "R", "before": "wish to model the behaviour of asset prices given by", "after": "model asset prices as", "start_char_pos": 34, "end_char_pos": 86 }, { "type": "R", "before": "; unfortunately,", "after": ". As", "start_char_pos": 114, "end_char_pos": 130 }, { "type": "R", "before": "too complex to identify from price data. In this paper, we therefore consider efficient prices given", "after": "not identifiable, approximating", "start_char_pos": 145, "end_char_pos": 245 }, { "type": "R", "before": "; this model is both identifiable, and replicates salient features of price data", "after": "can be useful for generative modelling", "start_char_pos": 277, "end_char_pos": 357 }, { "type": "R", "before": "rate process", "after": "normalised volatility or time change", "start_char_pos": 390, "end_char_pos": 402 }, { "type": "D", "before": "governs its volatility. Our estimate", "after": null, "start_char_pos": 424, "end_char_pos": 460 }, { "type": "R", "before": "arbitrary jump activity. Furthermore, it remains valid for the volatility in the general semimartingale model", "after": "infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility", "start_char_pos": 517, "end_char_pos": 626 } ]
[ 0, 115, 185, 278, 359, 447, 541 ]
1312.5911
3
In quantitative finance, we often model asset prices as a noisy Ito semimartingale. As this model is not identifiable, approximating by a time-changed Levy process can be useful for generative modelling. We give a new estimate of the normalised volatility or time change in this model, which obtains minimax convergence rates, and is unaffected by infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility, obtaining convergence rates as good as any previously implied in the literature.
In quantitative finance, we often model asset prices as a noisy It\^o semimartingale. As this model is not identifiable, approximating by a time-changed L\'evy process can be useful for generative modelling. We give a new estimate of the normalised volatility or time change in this model, which obtains minimax convergence rates, and is unaffected by infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility, obtaining convergence rates as good as any previously implied in the literature.
[ { "type": "R", "before": "Ito", "after": "It\\^o", "start_char_pos": 64, "end_char_pos": 67 }, { "type": "R", "before": "Levy", "after": "L\\'evy", "start_char_pos": 151, "end_char_pos": 155 } ]
[ 0, 83, 203, 373 ]
1312.5911
4
In quantitative finance, we often model asset prices as a noisy It\^o semimartingale. As this model is not identifiable, approximating by a time-changed L\'evy process can be useful for generative modelling. We give a new estimate of the normalised volatility or time change in this model, which obtains minimax convergence rates, and is unaffected by infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility, obtaining convergence rates as good as any previously implied in the literature.
In quantitative finance, we often model asset prices as a noisy Ito semimartingale. As this model is not identifiable, approximating by a time-changed Levy process can be useful for generative modelling. We give a new estimate of the normalised volatility or time change in this model, which obtains minimax convergence rates, and is unaffected by infinite-variation jumps. In the semimartingale model, our estimate remains accurate for the normalised volatility, obtaining convergence rates as good as any previously implied in the literature.
[ { "type": "R", "before": "It\\^o", "after": "Ito", "start_char_pos": 64, "end_char_pos": 69 }, { "type": "R", "before": "L\\'evy", "after": "Levy", "start_char_pos": 153, "end_char_pos": 159 } ]
[ 0, 85, 207, 377 ]
1312.6209
1
Multistable gene regulatory systems sustain different levels of gene expression under identical external conditions. Such multistability is used to encode phenotypic states in processes ranging from nutrient uptake and persistence in bacteria to cell cycle control and development. Stochastic switching between different phenotypes can occur as the result of random fluctuations in molecular copy numbers of mRNA and proteins arising in transcription, translation, transport, and binding. However, which component of a pathway triggers such a transition is generally not known. By linking single-cell experiments on the lactose-uptake pathway in E. coli to molecular simulations, we devise a general method to pinpoint the particular fluctuation driving phenotype switching and apply it to the transition between the uninduced and induced states of the lac-genes . We find that the transition to the induced state is not caused only by the single event of lac-repressor unbinding, but depends crucially on the time period over which the repressor remains unbound from the lac-operon. We confirm this notion in strains with a high expression level of the repressor lacI (leading to shorter periods over which the lac-operon remains unbound), which show a reduced transition rate. Our techniques apply to multistable gene regulatory systems in general and allow to identify the molecular mechanisms behind stochastic transitions in gene regulatory circuits.
Multistable gene regulatory systems sustain different levels of gene expression under identical external conditions. Such multistability is used to encode phenotypic states in processes including nutrient uptake and persistence in bacteria , fate selection in viral infection, cell cycle control , and development. Stochastic switching between different phenotypes can occur as the result of random fluctuations in molecular copy numbers of mRNA and proteins arising in transcription, translation, transport, and binding. However, which component of a pathway triggers such a transition is generally not known. By linking single-cell experiments on the lactose-uptake pathway in E. coli to molecular simulations, we devise a general method to pinpoint the particular fluctuation driving phenotype switching and apply this method to the transition between the uninduced and induced states of the lac genes . We find that the transition to the induced state is not caused only by the single event of lac-repressor unbinding, but depends crucially on the time period over which the repressor remains unbound from the lac-operon. We confirm this notion in strains with a high expression level of the repressor (leading to shorter periods over which the lac-operon remains unbound), which show a reduced switching rate. Our techniques apply to multi-stable gene regulatory systems in general and allow to identify the molecular mechanisms behind stochastic transitions in gene regulatory circuits.
[ { "type": "R", "before": "ranging from", "after": "including", "start_char_pos": 186, "end_char_pos": 198 }, { "type": "R", "before": "to", "after": ", fate selection in viral infection,", "start_char_pos": 243, "end_char_pos": 245 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 265, "end_char_pos": 265 }, { "type": "R", "before": "it", "after": "this method", "start_char_pos": 785, "end_char_pos": 787 }, { "type": "R", "before": "lac-genes", "after": "lac genes", "start_char_pos": 854, "end_char_pos": 863 }, { "type": "D", "before": "lacI", "after": null, "start_char_pos": 1165, "end_char_pos": 1169 }, { "type": "R", "before": "transition", "after": "switching", "start_char_pos": 1263, "end_char_pos": 1273 }, { "type": "R", "before": "multistable", "after": "multi-stable", "start_char_pos": 1304, "end_char_pos": 1315 } ]
[ 0, 116, 282, 489, 578, 865, 1084, 1279 ]
1312.6776
1
Cells with identical genomes often exhibit biochemically distinct phenotypic states ; stochastic switchings among them are one of the functional roles of "noise" in a genetic circuitry. To quantify the stabilities of these phenotypic states and, more importantly, the transition rates among them, we study a minimal model of gene regulation that incorporates positive feedbackwith multiple gene states. We first show that starting from a full Delbr\"uck-Gillespie-process description of the gene regulation network, two much simpler stochastic biochemical kinetic systems can be deduced, respectively, in the limit of either the stochastic gene-state switching or transcription factor copy-number fluctuation being dominant among other sources of noise. We then propose two saddle-crossing rate formulas for the simplified dynamics. They are associated with the barriers of different nonequilibrium landscape functions, whichexhibit different dependence on the switching rates of gene states. Keeping exactly the same mean-field deterministic dynamics, incorporating noise can possibly yield opposite predictions on the relative stability of the coexisted phenotypic states when the single-molecule switching between different gene states, compared to the protein copy-number fluctuation, is relatively slow or sufficiently rapid. This quantitative theory emphasizes noises from different origins with distinct characteristics as an additional complexity within gene regulation .
Multiple phenotypic states often arise in a single cell with different gene-expression states that undergo transcription regulation with positive feedback. Recent experiments have shown that at least in E. coli, the gene state switching can be neither extremely slow nor exceedingly rapid as many previous theoretical treatments assumed. Rather it is in the intermediate region which is difficult to handle mathematically.Under this condition, from a full chemical-master-equation description we derive a model in which the protein copy-number, for a given gene state, follow a deterministic mean-field description while the protein synthesis rates fluctuate due to stochastic gene-state switching . The simplified kinetics yields a nonequilibrium landscape function, which, similar to the energy function for equilibrium fluctuation, provides the leading orders of fluctuations around each phenotypic state, as well as the transition rates between the two phenotypic states. This rate formula is analogous to Kramers theory for chemical reactions. The resulting behaviors are significantly different from the two limiting cases studied previously .
[ { "type": "R", "before": "Cells with identical genomes often exhibit biochemically distinct phenotypic states ; stochastic switchings among them are one of the functional roles of \"noise\" in a genetic circuitry. To quantify the stabilities of these phenotypic states and, more importantly, the transition rates among them, we study a minimal model of gene regulation that incorporates positive feedbackwith multiple gene states. We first show that starting", "after": "Multiple phenotypic states often arise in a single cell with different gene-expression states that undergo transcription regulation with positive feedback. Recent experiments have shown that at least in E. coli, the gene state switching can be neither extremely slow nor exceedingly rapid as many previous theoretical treatments assumed. Rather it is in the intermediate region which is difficult to handle mathematically.Under this condition,", "start_char_pos": 0, "end_char_pos": 430 }, { "type": "R", "before": "Delbr\\\"uck-Gillespie-process description of the gene regulation network, two much simpler stochastic biochemical kinetic systems can be deduced, respectively, in the limit of either the", "after": "chemical-master-equation description we derive a model in which the protein copy-number, for a given gene state, follow a deterministic mean-field description while the protein synthesis rates fluctuate due to", "start_char_pos": 443, "end_char_pos": 628 }, { "type": "R", "before": "or transcription factor copy-number fluctuation being dominant among other sources of noise. We then propose two saddle-crossing rate formulas for the simplified dynamics. They are associated with the barriers of different nonequilibrium landscape functions, whichexhibit different dependence on the switching rates of gene states. Keeping exactly the same mean-field deterministic dynamics, incorporating noise can possibly yield opposite predictions on the relative stability of the coexisted phenotypic states when the single-molecule switching between different gene states, compared to the protein copy-number fluctuation, is relatively slow or sufficiently rapid. This quantitative theory emphasizes noises from different origins with distinct characteristics as an additional complexity within gene regulation", "after": ". The simplified kinetics yields a nonequilibrium landscape function, which, similar to the energy function for equilibrium fluctuation, provides the leading orders of fluctuations around each phenotypic state, as well as the transition rates between the two phenotypic states. This rate formula is analogous to Kramers theory for chemical reactions. The resulting behaviors are significantly different from the two limiting cases studied previously", "start_char_pos": 661, "end_char_pos": 1477 } ]
[ 0, 85, 185, 402, 753, 832, 992, 1330 ]
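A minimal sketch (with entirely hypothetical rates) of the kind of hybrid dynamics the abstract above describes: the gene hops between two states at random exponential times, while the protein copy-number follows a deterministic mean-field ODE whose synthesis rate depends on the current gene state. For simplicity the positive feedback is omitted; making k_on depend on x would restore it. This illustrates the model class only, not the authors' code.

```python
import random

def simulate(t_end=200.0, dt=0.01, k_on=0.05, k_off=0.05,
             k_syn=(2.0, 20.0), gamma=0.1, x0=10.0, seed=1):
    rng = random.Random(seed)
    state, x, t = 0, x0, 0.0
    # exponential waiting time before leaving the current gene state
    t_switch = rng.expovariate(k_on if state == 0 else k_off)
    traj = []
    while t < t_end:
        if t >= t_switch:                      # stochastic gene-state flip
            state = 1 - state
            t_switch = t + rng.expovariate(k_on if state == 0 else k_off)
        x += dt * (k_syn[state] - gamma * x)   # mean-field protein kinetics
        t += dt
        traj.append((t, state, x))
    return traj

print("final protein level: %.2f" % simulate()[-1][2])
```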
1312.6804
1
I show the equivalence between a model of financial contagion and the widely-used threshold model of global cascades proposed by Watts (2002). The model financial network comprises banks that hold risky external assets as well as interbank assets. It turns out that there is no need to construct the balance sheets of banks if the shadow threshold of default is appropriately defined in accordance with the stochastic fluctuations in external assets.
I show the equivalence between a model of financial contagion and the threshold model of global cascades proposed by Watts (2002). The model financial network comprises banks that hold risky external assets as well as interbank assets. It is shown that a simple threshold model can replicate the size and the frequency of financial contagion without using information about individual balance sheets. Keywords: financial network, cascades, financial contagion, systemic risk.
[ { "type": "D", "before": "widely-used", "after": null, "start_char_pos": 70, "end_char_pos": 81 }, { "type": "R", "before": "turns out that there is no need to construct the balance sheets of banks if the shadow threshold of default is appropriately defined in accordance with the stochastic fluctuations in external assets", "after": "is shown that a simple threshold model can replicate the size and the frequency of financial contagion without using information about individual balance sheets. Keywords: financial network, cascades, financial contagion, systemic risk", "start_char_pos": 251, "end_char_pos": 449 } ]
[ 0, 142, 247 ]
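A minimal sketch of a Watts-style threshold cascade on an Erdős-Rényi graph, the model class the entry above refers to: a node activates once the fraction of its active neighbours reaches its threshold. The size, mean degree, threshold and seeds below are hypothetical, and the bank balance-sheet layer of the contagion model is deliberately left out.

```python
import random

def threshold_cascade(n=1000, avg_degree=4.0, threshold=0.18, seed=0):
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    nbrs = [set() for _ in range(n)]
    for i in range(n):                       # Erdos-Renyi random graph
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j); nbrs[j].add(i)
    active = {rng.randrange(n)}              # single seed node
    frontier = set(active)
    while frontier:
        candidates = set().union(*(nbrs[v] for v in frontier)) - active
        frontier = set()
        for v in candidates:
            if nbrs[v] and len(nbrs[v] & active) / len(nbrs[v]) >= threshold:
                frontier.add(v)
        active |= frontier
    return len(active) / n                   # cascade size as a fraction

# cascade sizes are typically bimodal: most seeds die out, some go global
print(["%.3f" % threshold_cascade(seed=s) for s in range(5)])
```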
1312.7111
2
Determining the full complement of protein-coding genes is a key goal of genome annotation. The most powerful approach for confirming protein coding potential is the detection of cellular protein expression through peptide mass spectrometry experiments. Here we map the peptides detected in 7 large-scale proteomics studies to almost 60\% of the protein coding genes in the GENCODE annotation of the human genome. We find that the age of the gene family and its conservation across vertebrate species are key indicators of whether a peptide will be detected in proteomics experiments. We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is little or no evidence of protein expression for novel genes, those that have appeared since primates, or genes that do not have any protein-like features or cross-species conservation. We identify 19 non-protein-like features such as weak conservation, no protein-like features or ambiguous annotations in the major databases that are indicators of low peptide detection rates. We use these features to describe a set of 2,001 genes that are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3\% of these genes. We suggest that many of these 2,001 genes do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue.
Determining the full complement of protein-coding genes is a key goal of genome annotation. The most powerful approach for confirming protein coding potential is the detection of cellular protein expression through peptide mass spectrometry experiments. Here we map the peptides detected in 7 large-scale proteomics studies to almost 60\% of the protein coding genes in the GENCODE annotation of the human genome. We find that conservation across vertebrate species and the age of the gene family are key indicators of whether a peptide will be detected in proteomics experiments. We find peptides for most highly conserved genes and for practically all genes that evolved before bilateria. At the same time there is almost no evidence of protein expression for genes that have appeared since primates, or for genes that do not have any protein-like features or cross-species conservation. We identify 19 non-protein-like features such as weak conservation, no protein features or ambiguous annotations in major databases that are indicators of low peptide detection rates. We use these features to describe a set of 2,001 genes that are potentially non-coding, and show that many of these genes behave more like non-coding genes than protein-coding genes. We detect peptides for just 3\% of these genes. We suggest that many of these 2,001 genes do not code for proteins under normal circumstances and that they should not be included in the human protein coding gene catalogue. These potential non-coding genes will be revised as part of the ongoing human genome annotation effort.
[ { "type": "A", "before": null, "after": "that conservation across vertebrate species and", "start_char_pos": 419, "end_char_pos": 419 }, { "type": "D", "before": "and its conservation across vertebrate species", "after": null, "start_char_pos": 447, "end_char_pos": 493 }, { "type": "R", "before": "little or", "after": "almost", "start_char_pos": 714, "end_char_pos": 723 }, { "type": "R", "before": "novel genes , those", "after": "genes", "start_char_pos": 762, "end_char_pos": 781 }, { "type": "A", "before": null, "after": "for", "start_char_pos": 820, "end_char_pos": 820 }, { "type": "R", "before": "non-protein like", "after": "non-protein-like", "start_char_pos": 916, "end_char_pos": 932 }, { "type": "R", "before": "protein-like", "after": "protein", "start_char_pos": 972, "end_char_pos": 984 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1022, "end_char_pos": 1025 }, { "type": "A", "before": null, "after": ". These potential non-coding genes will be revised as part of the ongoing human genome annotation effort", "start_char_pos": 1499, "end_char_pos": 1499 } ]
[ 0, 91, 253, 410, 577, 687, 900, 1093, 1276, 1324 ]
1312.7149
1
Autophagy is a conserved biological stress response in mammalian cells that is responsible for clearing damaged proteins and organelles from the cytoplasm and recycling their contents via the lysosomal pathway. In cases where the stress is not too severe, autophagy acts as a survival mechanism. In cases of severe stress, it may lead to programmed cell death. There is also a third alternative, autophagic cell death, which can occur when the apoptotic pathway is blocked. Autophagy is abnormally regulated in a wide range of diseases, including cancer. To integrate the existing knowledge about this decision process into a rigorous, analytical framework, we built a mathematical model of cell fate decision mediated by autophagy. The model treats autophagy as a gradual response to stress that delays the initiation of apoptosis to give the cell an opportunity to survive. We show that our dynamical model is consistent with existing quantitative measurements of time courses of autophagic responses to cisplatin treatment.
Autophagy is a conserved biological stress response in mammalian cells that is responsible for clearing damaged proteins and organelles from the cytoplasm and recycling their contents via the lysosomal pathway. In cases of mild stress, autophagy acts as a survival mechanism, while in cases of severe stress cells may switch to programmed cell death. Understanding the decision process that moves a cell from autophagy to apoptosis is important since abnormal regulation of autophagy occurs in many diseases, including cancer. To integrate existing knowledge about this decision process into a rigorous, analytical framework, we built a mathematical model of cell fate decisions mediated by autophagy. Our dynamical model is consistent with existing quantitative measurements of autophagy and apoptosis in rat kidney proximal tubular cells responding to cisplatin-induced stress.
[ { "type": "R", "before": "where the stressis not too severe", "after": "of mild stress", "start_char_pos": 216, "end_char_pos": 249 }, { "type": "R", "before": ". In", "after": ", while in", "start_char_pos": 291, "end_char_pos": 295 }, { "type": "R", "before": ", may lead", "after": "cells may switch", "start_char_pos": 319, "end_char_pos": 329 }, { "type": "R", "before": "There is also a third alternative, autophagic cell death, which can occur when the apoptotic pathway is blocked. Autophagy is abnormally regulated in a wide-range of", "after": "Understanding the decision process that moves a cell from autophagy to apoptosis is important since abnormal regulation of autophagy occurs in many", "start_char_pos": 356, "end_char_pos": 521 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 563, "end_char_pos": 566 }, { "type": "R", "before": "decision", "after": "decisions", "start_char_pos": 696, "end_char_pos": 704 }, { "type": "R", "before": "The model treats autophagy as a gradual response to stress that delays the initiation of apoptosis to give the cell an opportunity to survive. We show that our", "after": "Our", "start_char_pos": 728, "end_char_pos": 887 }, { "type": "R", "before": "time courses of autophagic responses to cisplatin treatment", "after": "autophagy and apoptosis in rat kidney proximal tubular cells responding to cisplatin-induced stress", "start_char_pos": 961, "end_char_pos": 1020 } ]
[ 0, 206, 292, 355, 468, 549, 727, 870 ]
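The entry above summarizes an ODE model of the autophagy/apoptosis decision, but the paper's equations are not reproduced in this record, so the following is only a toy two-variable sketch of the qualitative logic: a gradual autophagic response A buffers the stress S, and an apoptosis trigger C accumulates only from the unbuffered stress, so autophagy delays commitment to death. Every parameter and functional form here is a hypothetical stand-in.

```python
def simulate(S, t_end=100.0, dt=0.01, ka=0.2, da=0.1,
             kc=1.0, dc=0.02, death=2.0):
    A = C = t = 0.0
    while t < t_end:
        A += dt * (ka * S - da * A)            # gradual autophagic response
        unbuffered = max(0.0, S - A)           # stress not absorbed by autophagy
        C += dt * (kc * unbuffered - dc * C)   # apoptosis trigger accumulates
        if C > death:
            return t                           # time of apoptotic commitment
        t += dt
    return None                                # cell survives the episode

for S in (0.5, 1.5, 3.0):
    print("stress = %.1f -> death time:" % S, simulate(S))
```

With these made-up parameters, mild stress (S = 0.5) is fully buffered and the cell survives, while stronger stress commits the cell to death, and does so earlier the stronger the stress.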
1312.7545
1
This paper introduces the notion of public collaborative processes (PCPs) to the field of Business Process Management (BPM). PCPs involve organizations with a common objective, where a number of organizations cooperating under various unstructured forms take a collaborative approach to reaching the final goal. This paper exemplifies the analysis of PCPs with the case of safeguarding financial stability, particularly macroprudential oversight in Europe. Following the literature on BPM, this paper models the macroprudential oversight process as an event-driven process chain, including activities, responsible entities, as well as inputs and outputs. We view the process from two directions. As an initial step, we document a high-level representation of the process (i.e., without specifying responsible entities) as depicted in the literature. Moreover, we show the increase in complexity when moving to a low-level description of the process (i.e., specifying responsible entities). Along these lines, we put forward a matrix view consisting of five necessary steps to managing PCPs. Thus, in terms of managing the macroprudential oversight process, this motivates further work in collaboration with experts to document, analyze and improve the macroprudential oversight process at a more detailed level.
The 2007--2008 financial crisis has paved the way for the use of macroprudential policies in supervising the financial system as a whole. This paper views macroprudential oversight in Europe as a process, a sequence of activities with the ultimate aim of safeguarding financial stability. To conceptualize a process in this context, we introduce the notion of a public collaborative process (PCP). PCPs involve organizations with a common objective, where a number of organizations cooperate under various unstructured forms and take a collaborative approach to reaching the final goal. We argue that PCPs can and should essentially be managed using the tools and practices common for business processes. To this end, we conduct an assessment of process readiness for macroprudential oversight in Europe. Based upon interviews with key European policymakers and supervisors, we provide an analysis model to assess the maturity of five process enablers for macroprudential oversight. With the results of our analysis, we give clear recommendations on the areas that need further attention when macroprudential oversight is being developed, in addition to providing a general purpose framework for monitoring the impact of improvement efforts.
[ { "type": "R", "before": "This paper introduces the notion of public collaborative processes (PCPs)to the field of Business Process Management (BPM)", "after": "The 2007--2008 financial crisis has paved the way for the use of macroprudential policies in supervising the financial system as a whole. This paper views macroprudential oversight in Europe as a process, a sequence of activities with the ultimate aim of safeguarding financial stability. To conceptualize a process in this context, we introduce the notion of a public collaborative process (PCP)", "start_char_pos": 0, "end_char_pos": 122 }, { "type": "R", "before": "cooperating", "after": "cooperate", "start_char_pos": 209, "end_char_pos": 220 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 254, "end_char_pos": 254 }, { "type": "R", "before": "This paper exemplifies the analysis of PCPs with the case of safeguarding financial stability, particularly macroprudential oversight in Europe. Following the literature on BPM, this paper models the macroprudential oversight process as an event-driven process chain, including activities, responsible entities, as well as inputs and outputs. We view the process from two directions. As an initial step, we document a high-level representation of the process (i.e., without specifying responsible entities) as depicted in the literature. Moreover, we show the increase in complexity when moving to a low-level description of the process (i.e., specifying responsible entities). Along these lines, we put forward a matrix view consisting of five necessary steps to managing PCPs. Thus, in terms of managing the macroprudential oversight process, this motivates further work in collaboration with experts to document, analyze and improve the macroprudential oversight process at a more detailed level", "after": "We argue that PCPs can and should essentially be managed using the tools and practices common for business processes. To this end, we conduct an assessment of process readiness for macroprudential oversight in Europe. Based upon interviews with key European policymakers and supervisors, we provide an analysis model to assess the maturity of five process enablers for macroprudential oversight. With the results of our analysis, we give clear recommendations on the areas that need further attention when macroprudential oversight is being developed, in addition to providing a general purpose framework for monitoring the impact of improvement efforts", "start_char_pos": 313, "end_char_pos": 1311 } ]
[ 0, 312, 457, 655, 696, 850, 990, 1091 ]
1401.0124
1
By analysing the financial data of firms across Japan, a non-trivial power law with exponent 1.3 is observed between the number of business partners (i.e. the degree of the inter-firm trading network) and sales. In this paper, we clarify the relationship between this non-trivial scaling and the structure of the network by applying mean-field approximation of diffusion in a complex network to a money-transport model, which has been numerically shown to reproduce this empirical scaling. By theoretical analysis, we obtain the mean-field solution of money-transport models and find that the scaling exponent can be determined from the average degree of the nearest neighbours, which is one of the cardinal features of a network.
By analysing the financial data of firms across Japan, a nonlinear power law with an exponent of 1.3 was observed between the number of business partners (i.e. the degree of the inter-firm trading network) and sales. In a previous study using numerical simulations, we found that this scaling can be explained by both the money-transport model, where a firm (i.e. customer) distributes money to its out-edges (suppliers) in proportion to the in-degree of destinations, and by the correlations among the Japanese inter-firm trading network. However, in this previous study, we could not specifically identify what types of structure properties (or correlations) of the network determine the 1.3 exponent. In the present study, we more clearly elucidate the relationship between this nonlinear scaling and the network structure by applying mean-field approximation of the diffusion in a complex network to this money-transport model. Using theoretical analysis, we obtained the mean-field solution of the model and found that, in the case of the Japanese firms, the scaling exponent of 1.3 can be determined from the power law of the average degree of the nearest neighbours of the network with an exponent of -0.7.
[ { "type": "R", "before": "non-trivial", "after": "nonlinear", "start_char_pos": 57, "end_char_pos": 68 }, { "type": "R", "before": "exponent", "after": "an exponent of", "start_char_pos": 84, "end_char_pos": 92 }, { "type": "R", "before": "is", "after": "was", "start_char_pos": 97, "end_char_pos": 99 }, { "type": "R", "before": "this paper, we clarify the", "after": "a previous study using numerical simulations, we found that this scaling can be explained by both the money-transport model, where a firm (i.e. customer) distributes money to its out-edges (suppliers) in proportion to the in-degree of destinations, and by the correlations among the Japanese inter-firm trading network. However, in this previous study, we could not specifically identify what types of structure properties (or correlations) of the network determine the 1.3 exponent. In the present study, we more clearly elucidate the", "start_char_pos": 215, "end_char_pos": 241 }, { "type": "R", "before": "non-trivial", "after": "nonlinear", "start_char_pos": 268, "end_char_pos": 279 }, { "type": "R", "before": "structure of the network", "after": "network structure", "start_char_pos": 296, "end_char_pos": 320 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "a", "after": "this", "start_char_pos": 396, "end_char_pos": 397 }, { "type": "R", "before": ", which has been numerically shown to reproduce this empirical scaling. By", "after": ". Using", "start_char_pos": 420, "end_char_pos": 494 }, { "type": "R", "before": "obtain", "after": "obtained", "start_char_pos": 520, "end_char_pos": 526 }, { "type": "R", "before": "money-transport models and find thatthe scaling exponent", "after": "the model and found that, in the case of the Japanese firms, the scaling exponent of 1.3", "start_char_pos": 554, "end_char_pos": 610 }, { "type": "A", "before": null, "after": "power law of the", "start_char_pos": 638, "end_char_pos": 638 }, { "type": "R", "before": ", which is one of the cardinal features of a network", "after": "of the network with an exponent of -0.7", "start_char_pos": 680, "end_char_pos": 732 } ]
[ 0, 211, 491 ]
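A minimal sketch of the transport rule quoted in the revised abstract above: each firm splits its outgoing money among its suppliers in proportion to the suppliers' in-degrees, and "sales" are read off as the stationary money inflow. The random directed network, its size and the iteration count are hypothetical stand-ins for the Japanese inter-firm data.

```python
import random

def money_transport(n=2000, m=8000, iters=200, seed=3):
    rng = random.Random(seed)
    out_edges = [[] for _ in range(n)]
    in_deg = [0] * n
    for _ in range(m):                       # random directed multigraph
        i, j = rng.randrange(n), rng.randrange(n)
        out_edges[i].append(j)
        in_deg[j] += 1
    money = [1.0] * n
    for _ in range(iters):
        new = [0.0] * n
        for i in range(n):
            targets = out_edges[i]
            if not targets:
                new[i] += money[i]           # firm with no suppliers keeps its money
                continue
            w = sum(in_deg[j] for j in targets)
            for j in targets:                # split in proportion to in-degree
                new[j] += money[i] * in_deg[j] / w
        money = new
    return in_deg, money                     # compare log(degree) vs log(sales)

deg, sales = money_transport()
```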
1401.0562
1
Two major financial market frictions are transaction costs and uncertain volatility, and we analyze their joint impact on the problem of portfolio optimization. When volatility is constant, the transaction costs optimal investment problem has a long history, especially in the use of asymptotic approximations when the cost is small. Under stochastic volatility, but with no transaction costs, the Merton problem under general utility functions can also be analyzed with asymptotic methods. Here, we look at the long-run growth rate problem when both frictions are present, using separation of time scales approximations. This leads to perturbation analysis of an eigenvalue problem. We find the first term in the asymptotic expansion in the time scale parameter, of the optimal long-term growth rate, and of the optimal strategy, for fixed small transaction costs.
Two major financial market complexities are transaction costs and uncertain volatility, and we analyze their joint impact on the problem of portfolio optimization. When volatility is constant, the transaction costs optimal investment problem has a long history, especially in the use of asymptotic approximations when the cost is small. Under stochastic volatility, but with no transaction costs, the Merton problem under general utility functions can also be analyzed with asymptotic methods. Here, we look at the long-run growth rate problem when both complexities are present, using separation of time scales approximations. This leads to perturbation analysis of an eigenvalue problem. We find the first term in the asymptotic expansion in the time scale parameter, of the optimal long-term growth rate, and of the optimal strategy, for fixed small transaction costs.
[ { "type": "R", "before": "frictions", "after": "complexities", "start_char_pos": 27, "end_char_pos": 36 }, { "type": "R", "before": "frictions", "after": "complexities", "start_char_pos": 551, "end_char_pos": 560 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 865, "end_char_pos": 865 } ]
[ 0, 160, 333, 490, 621, 683 ]
1401.0903
1
We show that the jumps correlation matrix of a multivariate Hawkes process is related to the Hawkes kernel matrix by a system of Wiener-Hopf integral equations. A Wiener-Hopf argument allows one to prove that this system (in which the kernel matrix is the unknown) possesses a unique causal solution and consequently that the second-order properties fully characterize Hawkes processes. The numerical inversion of the system of integral equations allows us to propose a fast and efficient method to perform a non-parametric estimation of the Hawkes kernel matrix. We discuss the estimation error and provide some numerical examples. Applications to high frequency trading events in financial markets and to earthquakes occurrence dynamics are considered.
We show that the jumps correlation matrix of a multivariate Hawkes process is related to the Hawkes kernel matrix through a system of Wiener-Hopf integral equations. A Wiener-Hopf argument allows one to prove that this system (in which the kernel matrix is the unknown) possesses a unique causal solution and consequently that the second-order properties fully characterize a Hawkes process. The numerical inversion of this system of integral equations allows us to propose a fast and efficient method, whose main principles were initially sketched in [Bacry and Muzy, 2013], to perform a non-parametric estimation of the Hawkes kernel matrix. In this paper, we perform a systematic study of this non-parametric estimation procedure in the general framework of marked Hawkes processes. We describe precisely this procedure step by step. We discuss the estimation error and explain how the values for the main parameters should be chosen. Various numerical examples are given in order to illustrate the broad possibilities of this estimation procedure, ranging from 1-dimensional (power-law or non-positive kernels) up to 3-dimensional (circular dependence) processes. A comparison to other non-parametric estimation procedures is made. Applications to high frequency trading events in financial markets and to earthquakes occurrence dynamics are finally considered.
[ { "type": "R", "before": "by", "after": "through", "start_char_pos": 114, "end_char_pos": 116 }, { "type": "R", "before": "Hawkes processes", "after": "a Hawkes process", "start_char_pos": 369, "end_char_pos": 385 }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 415, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ", which main principles were initially sketched in", "start_char_pos": 497, "end_char_pos": 497 }, { "type": "A", "before": null, "after": "Bacry and Muzy, 2013", "start_char_pos": 498, "end_char_pos": 498 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 499, "end_char_pos": 499 }, { "type": "R", "before": "We", "after": "In this paper, we perform a systematic study of this non-parametric estimation procedure in the general framework of marked Hawkes processes. We describe precisely this procedure step by step. We", "start_char_pos": 568, "end_char_pos": 570 }, { "type": "R", "before": "provide some numerical examples .", "after": "explain how the values for the main parameters should be chosen. Various numerical examples are given in order to illustrate the broad possibilities of this estimation procedure ranging from 1-dimensional (power-law or non positive kernels) up to 3-dimensional (circular dependence) processes. A comparison to other non-parametric estimation procedures is made.", "start_char_pos": 604, "end_char_pos": 637 }, { "type": "A", "before": null, "after": "finally", "start_char_pos": 748, "end_char_pos": 748 } ]
[ 0, 160, 387, 567 ]
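A minimal 1-dimensional numerical sketch of the kind of Wiener-Hopf inversion the abstract describes: given a causal function g on a grid satisfying g = phi + phi*g, the kernel phi is recovered by discretizing the convolution (rectangle rule) and solving the resulting linear system. Here g is synthesized from a known exponential kernel purely to check the recovery; in the actual estimation procedure g would come from measured second-order statistics, and the full method handles the matrix-valued, marked case.

```python
import numpy as np

T, n = 10.0, 400
dt = T / n
t = np.arange(1, n + 1) * dt
phi_true = 0.5 * np.exp(-t)        # alpha*beta*exp(-beta*t), alpha=0.5, beta=1

def conv_matrix(f):
    # L[i, j] = f[i - j], so (dt * L @ g)[i] approximates (f * g)(t_i)
    L = np.zeros((n, n))
    for i in range(n):
        L[i, : i + 1] = f[i::-1]
    return L

# forward problem: g = phi + phi*g  =>  (I - dt*L[phi]) g = phi
g = np.linalg.solve(np.eye(n) - dt * conv_matrix(phi_true), phi_true)

# inversion: phi = g - g*phi  =>  (I + dt*L[g]) phi = g
phi_est = np.linalg.solve(np.eye(n) + dt * conv_matrix(g), g)
print("max recovery error:", np.abs(phi_est - phi_true).max())
```

The linear systems are well conditioned here because the kernel's branching ratio (0.5) is below one; the lower-triangular structure reflects causality.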
1401.1294
1
Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while ensuring predetermined quality of service levels for the primary users. In this paper, modeling, performance analysis, and optimization of a distributed secondary network with random sensing order policy are studied. Specifically, the secondary users create a random order of the available channels upon primary users' return, and then find optimal transmission and handoff opportunities in a distributed manner. By a Markov chain analysis, the average throughputs of the secondary users and average interference level among the secondary and primary users are evaluated. A maximization of the secondary network performance in terms of throughput while keeping under control the average interference is proposed. It is shown that, despite the traditional view, non-zero false alarm in the channel sensing can increase channel utilization. Then, two simple and practical adaptive algorithms are established to optimize the network. The second algorithm follows the variations of the wireless channels in non-stationary conditions and outperforms even static brute force optimization, while demanding few computations. Finally, numerical results validate the analytical derivations and demonstrate the efficiency of the proposed schemes. It is concluded that fully distributed algorithms can achieve substantial performance improvements in cognitive radio networks without the need of centralized management or message passing among the users.
Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while ensuring predetermined quality of service levels for the primary users. In this paper, modeling, performance analysis, and optimization of a distributed secondary network with random sensing order policy are studied. Specifically, the secondary users create a random order of the available channels upon primary users' return, and then find optimal transmission and handoff opportunities in a distributed manner. By a Markov chain analysis, the average throughputs of the secondary users and average interference level among the secondary and primary users are investigated. A maximization of the secondary network performance in terms of throughput while keeping under control the average interference is proposed. It is shown that, despite the traditional view, non-zero false alarm in the channel sensing can increase channel utilization, especially in a dense secondary network where the contention is too high. Then, two simple and practical adaptive algorithms are established to optimize the network. The second algorithm follows the variations of the wireless channels in non-stationary conditions and outperforms even static brute force optimization, while demanding few computations. The convergence of the distributed algorithms is theoretically investigated based on the analytical performance indicators established by the Markov chain analysis. Finally, numerical results validate the analytical derivations and demonstrate the efficiency of the proposed schemes. It is concluded that fully distributed sensing order algorithms can achieve substantial performance improvements in cognitive radio networks without the need of centralized management or message passing among the users.
[ { "type": "R", "before": "evaluated", "after": "investigated", "start_char_pos": 684, "end_char_pos": 693 }, { "type": "A", "before": null, "after": ", especially in a dense secondary network where the contention is too high", "start_char_pos": 960, "end_char_pos": 960 }, { "type": "A", "before": null, "after": "The convergence of the distributed algorithms are theoretically investigated based on the analytical performance indicators established by the Markov chain analysis.", "start_char_pos": 1241, "end_char_pos": 1241 }, { "type": "A", "before": null, "after": "sensing order", "start_char_pos": 1400, "end_char_pos": 1400 } ]
[ 0, 195, 340, 535, 695, 836, 962, 1054, 1240, 1360 ]
1401.1294
2
Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while ensuring predetermined quality of service levels for the primary users. In this paper, modeling, performance analysis, and optimization of a distributed secondary network with random sensing order policy are studied. Specifically, the secondary users create a random order of the available channels upon primary users' return, and then find optimal transmission and handoff opportunities in a distributed manner. By a Markov chain analysis, the average throughputs of the secondary users and average interference level among the secondary and primary users are investigated. A maximization of the secondary network performance in terms of throughput while keeping under control the average interference is proposed. It is shown that, despite the traditional view, non-zero false alarm in the channel sensing can increase channel utilization, especially in a dense secondary network where the contention is too high. Then, two simple and practical adaptive algorithms are established to optimize the network. The second algorithm follows the variations of the wireless channels in non-stationary conditions and outperforms even static brute force optimization, while demanding few computations. The convergence of the distributed algorithms is theoretically investigated based on the analytical performance indicators established by the Markov chain analysis. Finally, numerical results validate the analytical derivations and demonstrate the efficiency of the proposed schemes. It is concluded that fully distributed sensing order algorithms can achieve substantial performance improvements in cognitive radio networks without the need of centralized management or message passing among the users.
Developing an efficient spectrum access policy enables cognitive radios to dramatically increase spectrum utilization while ensuring predetermined quality of service levels for primary users. In this paper, modeling, performance analysis, and optimization of a distributed secondary network with random sensing order policy are studied. Specifically, the secondary users create a random order of available channels upon primary users' return, and then find optimal transmission and handoff opportunities in a distributed manner. By a Markov chain analysis, the average throughputs of the secondary users and average interference level among the secondary and primary users are investigated. A maximization of the secondary network performance in terms of the throughput while keeping under control the average interference is proposed. It is shown that, despite the traditional view, non-zero false alarm in the channel sensing can increase channel utilization, especially in a dense secondary network where the contention is too high. Then, two simple and practical adaptive algorithms are established to optimize the network. The second algorithm follows the variations of the wireless channels in non-stationary conditions and outperforms even static brute force optimization, while demanding few computations. The convergence of the distributed algorithms is theoretically investigated based on the analytical performance indicators established by the Markov chain analysis. Finally, numerical results validate the analytical derivations and demonstrate the efficiency of the proposed schemes. It is concluded that fully distributed sensing order algorithms can lead to substantial performance improvements in cognitive radio networks without the need of centralized management or message passing among the users.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 177, "end_char_pos": 180 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 400, "end_char_pos": 403 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 762, "end_char_pos": 762 }, { "type": "R", "before": "achieve", "after": "lead to", "start_char_pos": 1669, "end_char_pos": 1676 } ]
[ 0, 195, 340, 535, 697, 839, 1037, 1129, 1315, 1481, 1600 ]
1401.1892
1
In Part III of this paper, we apply the price dynamical model with big buyers and big sellers developed in Part I of this paper to the daily closing data of the top 20 stocks in the Hang Seng Index in the Hong Kong Stock Exchange. The basic idea is to estimate the strength parameters of the big buyers and the big sellers in the model and make buy/sell decisions based on these parameter estimates. We develop two trading strategies: (i) Follow-the-Big-Buyer, which buys when a big buyer begins to appear and there is no sign of big sellers, holds the stock as long as the big buyer is still there, and sells all holdings of this stock once the big buyer disappears; and (ii) Ride-the-Mood, which buys as soon as the big buyer strength begins to surpass the big seller strength and sells all holdings of the stock once the big seller strength is larger than the big buyer strength. Based on the testing over 198 two-year intervals uniformly distributed across the six-year period from 03-July-2007 to 28-June-2013, which includes a variety of scenarios, the net profits would increase 47\% or 64\% on average if an investor switched from the benchmark Buy-and-Hold strategy to the Follow-the-Big-Buyer or Ride-the-Mood strategies during this period, respectively.
In Part III of this study, we apply the price dynamical model with big buyers and big sellers developed in Part I of this paper to the daily closing prices of the top 20 banking and real estate stocks listed in the Hong Kong Stock Exchange. The basic idea is to estimate the strength parameters of the big buyers and the big sellers in the model and make buy/sell decisions based on these parameter estimates. We propose two trading strategies: (i) Follow-the-Big-Buyer, which buys when a big buyer begins to appear and there is no sign of big sellers, holds the stock as long as the big buyer is still there, and sells the stock once the big buyer disappears; and (ii) Ride-the-Mood, which buys as soon as the big buyer strength begins to surpass the big seller strength, and sells the stock once the opposite happens. Based on the testing over 245 two-year intervals uniformly distributed across the seven years from 03-July-2007 to 02-July-2014, which includes a variety of scenarios, the net profits would increase 67\% or 120\% on average if an investor switched from the benchmark Buy-and-Hold strategy to the Follow-the-Big-Buyer or Ride-the-Mood strategies during this period, respectively.
[ { "type": "R", "before": "paper", "after": "study", "start_char_pos": 20, "end_char_pos": 25 }, { "type": "R", "before": "data", "after": "prices", "start_char_pos": 150, "end_char_pos": 154 }, { "type": "R", "before": "stocks in Hang Seng Index in", "after": "banking and real estate stocks listed in the", "start_char_pos": 169, "end_char_pos": 197 }, { "type": "R", "before": "develop", "after": "propose", "start_char_pos": 396, "end_char_pos": 403 }, { "type": "R", "before": "all holdings of this", "after": "the", "start_char_pos": 600, "end_char_pos": 620 }, { "type": "R", "before": "and sells all holdings of", "after": ", and sells", "start_char_pos": 768, "end_char_pos": 793 }, { "type": "R", "before": "big seller strength is larger than the big buyer strength", "after": "opposite happens", "start_char_pos": 813, "end_char_pos": 870 }, { "type": "R", "before": "198", "after": "245", "start_char_pos": 899, "end_char_pos": 902 }, { "type": "R", "before": "six-year period", "after": "seven years", "start_char_pos": 955, "end_char_pos": 970 }, { "type": "R", "before": "28-June-2013", "after": "02-July-2014", "start_char_pos": 992, "end_char_pos": 1004 }, { "type": "R", "before": "47\\% or 64", "after": "67\\% or 120", "start_char_pos": 1075, "end_char_pos": 1085 } ]
[ 0, 223, 392, 657, 1043 ]
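A minimal sketch of the two decision rules named above, taking as given the time series of estimated big-buyer and big-seller strengths. The estimation step from the price dynamical model is not reproduced, and eps is a hypothetical detection threshold.

```python
def follow_the_big_buyer(buyer, seller, eps=0.0):
    pos, signals = 0, []
    for b, s in zip(buyer, seller):
        if pos == 0 and b > eps and s <= eps:   # big buyer present, no big seller
            pos = 1; signals.append("buy")
        elif pos == 1 and b <= eps:             # big buyer has disappeared
            pos = 0; signals.append("sell")
        else:
            signals.append("hold")
    return signals

def ride_the_mood(buyer, seller):
    pos, signals = 0, []
    for b, s in zip(buyer, seller):
        if pos == 0 and b > s:                  # buyer strength overtakes seller
            pos = 1; signals.append("buy")
        elif pos == 1 and s > b:                # the opposite happens
            pos = 0; signals.append("sell")
        else:
            signals.append("hold")
    return signals

print(ride_the_mood([0.1, 0.5, 0.6, 0.2], [0.3, 0.2, 0.1, 0.4]))
```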
1401.2770
1
We propose a "sugar" coarse-grained (CG) DNA model capable of simulating both biologically significant B- and A-DNA forms. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the sugar CG DNA model are: (1) simulation of sugar repuckering between C2'-endo and C3'-endo by the use of one non-harmonic potential and one three-particle potential, (2) explicit representation of sodium counterions and (3) implicit solvent approach. Effects of solvation and of partial charge screening at small distances are taken into account through the shape of potentials of interactions between charged particles. We obtain parameters of the sugar CG DNA model from the all-atom AMBER model. The suggested model allows adequate simulation of the transitions between A- and B-DNA forms, as well as of large deformations of long DNA molecules, for example, in binding with proteins. Small modifications of the model can provide the possibility of introducing sequence dependence, as well as of modeling base-pair openings. One can also study ions other than sodium, as well as different solvents, and model the corresponding electrostatic effects.
We propose a "sugar" coarse-grained (CG) DNA model capable of simulating both biologically significant B- and A-DNA. The model also demonstrates both the A to B and the B to A transitions. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the model are (1) simulation of sugar repuckering between C2'-endo and C3'-endo by the use of one nonharmonic potential and one three-particle potential, (2) explicit representation of ions in solution around the DNA, (3) implicit solvent approach and (4) sequence dependence. We obtain parameters of the model from the all-atom AMBER force field. The model can be used to study large local deformations of long DNA molecules (for example, in binding with proteins). A small modification of the model can provide the possibility of modeling base-pair openings in melting, transcription and replication. One can also simulate the interactions of the DNA molecule with different types of ions in different kinds of solutions.
[ { "type": "R", "before": "forms. The", "after": ". The model also demonstrates both the A to B and the B to A transitions. The", "start_char_pos": 116, "end_char_pos": 126 }, { "type": "R", "before": "sugar CG DNA model are :", "after": "model are", "start_char_pos": 290, "end_char_pos": 314 }, { "type": "R", "before": "non-harmonic", "after": "nonharmonic", "start_char_pos": 399, "end_char_pos": 411 }, { "type": "R", "before": "sodium counterions and", "after": "ions in solution around the DNA,", "start_char_pos": 487, "end_char_pos": 509 }, { "type": "R", "before": ". Effects of solvation and of partial charge screening at small distances are taken into account through the shape of potentials of interactions between charged particles", "after": "and (4) sequence dependence", "start_char_pos": 540, "end_char_pos": 710 }, { "type": "D", "before": "sugar CG DNA", "after": null, "start_char_pos": 741, "end_char_pos": 753 }, { "type": "R", "before": "all-atom AMBER model. The suggested model allows adequate simulation of the transitions between A- and B-DNA forms, as well as of large", "after": "all atom AMBER force field. The model can be used to study large local", "start_char_pos": 769, "end_char_pos": 904 }, { "type": "R", "before": ",", "after": "(", "start_char_pos": 940, "end_char_pos": 941 }, { "type": "R", "before": ". Small modifications", "after": "). Small modification", "start_char_pos": 980, "end_char_pos": 1001 }, { "type": "D", "before": "introducing sequence dependence, as well as of", "after": null, "start_char_pos": 1046, "end_char_pos": 1092 }, { "type": "R", "before": ". One can also study other, than sodium, ions as well as different solvents, and model the corresponding electrostatic effects", "after": "in melting, transcription and replication. And one can also simulate the interactions of the DNA molecule with different types of ions in different kinds of solutions", "start_char_pos": 1122, "end_char_pos": 1248 } ]
[ 0, 122, 196, 265, 712, 790, 1123 ]
1401.2770
2
We propose a "sugar" coarse-grained (CG) DNA model capable of simulating both biologically significant B- and A-DNA. The model also demonstrates both the A to B and the B to A transitions. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the model are (1) simulation of sugar repuckering between C2'-endo and C3'-endo by the use of one nonharmonic potential and one three-particle potential, (2) explicit representation of ions in solution around the DNA, (3) implicit solvent approach and (4) sequence dependence. We obtain parameters of the model from the all-atom AMBER force field. The model can be used to study large local deformations of long DNA molecules (for example, in binding with proteins). A small modification of the model can provide the possibility of modeling base-pair openings in melting, transcription and replication. One can also simulate the interactions of the DNA molecule with different types of ions in different kinds of solutions.
More than twenty coarse-grained (CG) DNA models have been developed for simulating the behavior of this molecule under various conditions, including those required for nanotechnology. However, none of these models reproduces the DNA polymorphism associated with conformational changes in the ribose rings of the DNA backbone. These changes make an essential contribution to the DNA local deformability and provide the possibility of the transition of the DNA double helix from the B-form to the A-form during interactions with biological molecules. We propose a CG representation of the ribose conformational flexibility. We substantiate the choice of the CG sites (6 per nucleotide) needed for the "sugar" CG DNA model, and obtain the potentials of the CG interactions between the sites by the "bottom-up" approach using the all-atom AMBER force field. We show that the representation of the ribose flexibility requires one non-harmonic and one three-particle potential, the forms of both potentials being different from the ones generally used. The model also includes (i) explicit representation of ions (in an implicit solvent) and (ii) sequence dependence. With these features, the sugar CG DNA model reproduces (with the same parameters) both the stable B- and A-forms under corresponding conditions and demonstrates both the A to B and the B to A phase transitions.
[ { "type": "R", "before": "We propose a \"sugar\"", "after": "More than twenty", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "R", "before": "model capable of simulating both biologically significant B- and A-DNA. The model also demonstrates both the A to B and", "after": "models have been developed for simulating the behavior of this molecule under various conditions, including those required for nanotechnology. However, none of these models reproduces the DNA polymorphism associated with conformational changes in the ribose rings of the DNA backbone. These changes make an essential contribution to the DNA local deformability and provide the possibility of the transition of the DNA double helix from the B-form to", "start_char_pos": 45, "end_char_pos": 164 }, { "type": "R", "before": "B to A transitions. The number of degrees of freedom is reduced to six grains per nucleotide. We show that this is the minimal number sufficient for this purpose. The key features of the model are (1) simulation of sugarrepuckering between C2'-endo and C3'-endo by the use of one nonharmonic potential", "after": "A-form during interactions with biological molecules. We propose a CG representation of the ribose conformational flexibility. We substantiate the choice of the CG sites (6 per nucleotide) needed for the \"sugar\" GC DNA model, and obtain the potentials of the CG interactions between the sites by the \"bottom-up\" approach using the all-atom AMBER force field. We show that the representation of the ribose flexibility requires one non-harmonic", "start_char_pos": 169, "end_char_pos": 470 }, { "type": "R", "before": "(2", "after": "the forms of both the potentials being different from the ones generally used. The model also includes (i", "start_char_pos": 505, "end_char_pos": 507 }, { "type": "R", "before": "in solution around the DNA, (3) implicit solventapproach and (4", "after": "(in an implicit solvent) and (ii", "start_char_pos": 542, "end_char_pos": 605 }, { "type": "R", "before": "We obtain parameters of the model from the all atom AMBER force field. The model can be used to study large local deformations of long DNA molecules (for example, in binding with proteins) . Small modification of the model can provide the possibility of modeling base pairs openings in melting, transcription and replication. And one can also simulate the interactions of the DNA molecule with different types of ions in different kinds of solutions", "after": "With these features, the sugar CG DNA model reproduces (with the same parameters) both the B- and A- stable forms under corresponding conditions and demonstrates both the A to B and the B to A phase transitions", "start_char_pos": 629, "end_char_pos": 1078 } ]
[ 0, 116, 188, 262, 331, 628, 699, 954 ]
1401.2954
1
In this paper we consider an information theoretic approach for the accounting classification process. We propose a matrix formalism and an algorithm for calculations of information theoretic measures associated with accounting classification. The formalism may be useful for further generalizations, and computer based implementation. Information theoretic measures, mutual information and symmetric uncertainty, were evaluated for daily transactions recorded in the chart of accounts of a small company during two years. Variation in the information measures due to the aggregation of data in the process of accounting classification is observed. In particular, the symmetric uncertainty seems to be a useful parameter for comparing companies over time or in different sectors; or different accounting choices and standards.
In this paper we consider an information theoretic approach for the accounting classification process. We propose a matrix formalism and an algorithm for calculations of information theoretic measures associated with accounting classification. The formalism may be useful for further generalizations and computer-based implementation. Information theoretic measures, mutual information and symmetric uncertainty, were evaluated for daily transactions recorded in the chart of accounts of a small company during two years. Variation in the information measures due to the aggregation of data in the process of accounting classification is observed. In particular, the symmetric uncertainty seems to be a useful parameter for comparing companies over time, in different sectors, or under different accounting choices and standards.
[ { "type": "R", "before": ", and computer based", "after": "and computer-based", "start_char_pos": 298, "end_char_pos": 318 }, { "type": "D", "before": ";", "after": null, "start_char_pos": 775, "end_char_pos": 776 } ]
[ 0, 102, 241, 334, 521, 644, 776 ]
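A minimal sketch of the information measures the abstract relies on: Shannon entropy, mutual information, and the symmetric uncertainty U = 2*I(X;Y)/(H(X)+H(Y)), computed here from a small made-up contingency table of transaction types versus accounts.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def symmetric_uncertainty(joint_counts):
    pxy = joint_counts / joint_counts.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginals
    hx, hy = entropy(px), entropy(py)
    mi = hx + hy - entropy(pxy.ravel())          # I(X;Y) = H(X)+H(Y)-H(X,Y)
    return 2.0 * mi / (hx + hy)

# rows: transaction types, columns: accounts (hypothetical counts)
counts = np.array([[30.0, 5.0, 1.0],
                   [4.0, 25.0, 2.0],
                   [1.0, 3.0, 29.0]])
print("U = %.3f" % symmetric_uncertainty(counts))
```

U lies in [0, 1], which is what makes it convenient for comparisons across companies or periods with different numbers of accounts.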
1401.3133
1
The theory of acceptance sets and their associated risk measures plays a key role in the design of capital adequacy tests. The objective of this paper is to investigate, in the context of bounded financial positions, the class of surplus-invariant acceptance sets. These are characterized by the fact that acceptability does not depend on the positive part, or surplus, of a capital position. We argue that surplus invariance is a reasonable requirement from a regulatory perspective, because it focuses on the interests of liability holders of a financial institution. We provide a dual characterization of surplus-invariant, convex acceptance sets, and show that the combination of surplus invariance and coherence leads to a narrow range of capital adequacy tests, essentially limited to scenario-based tests. Finally, we analyze the relationship between surplus-invariant acceptance sets and loss-based and excess-invariant risk measures, which have been recently studied by Cont, Deguest, and He, and by Staum.
The theory of acceptance sets and their associated risk measures plays a key role in the design of capital adequacy tests. The objective of this paper is to investigate, in the context of bounded financial positions, the class of surplus-invariant acceptance sets. These are characterized by the fact that acceptability does not depend on the positive part, or surplus, of a capital position. We argue that surplus invariance is a reasonable requirement from a regulatory perspective, because it focuses on the interests of liability holders of a financial institution. We provide a dual characterization of surplus-invariant, convex acceptance sets, and show that the combination of surplus invariance and coherence leads to a narrow range of capital adequacy tests, essentially limited to scenario-based tests. Finally, we emphasize the advantages of dealing with surplus-invariant acceptance sets as the primary object rather than directly with risk measures, such as loss-based and excess-invariant risk measures, which have been recently studied by Cont, Deguest, and He (2013) and by Staum (2013), respectively.
[ { "type": "R", "before": "analyze the relationship between", "after": "emphasize the advantages of dealing with", "start_char_pos": 825, "end_char_pos": 857 }, { "type": "R", "before": "and", "after": "as the primary object rather than directly with risk measures, such as", "start_char_pos": 892, "end_char_pos": 895 }, { "type": "R", "before": ",", "after": "(2013)", "start_char_pos": 1001, "end_char_pos": 1002 }, { "type": "A", "before": null, "after": "(2013), respectively", "start_char_pos": 1016, "end_char_pos": 1016 } ]
[ 0, 122, 264, 392, 569, 812 ]
1401.3145
1
The analysis of markets with indivisible goods and fixed exogenous prices has played an important role in economic models, especially in relation to wage rigidity and unemployment. This paper provides a novel mathematical programming based approach to study pure exchange economies where discrete amounts of commodities are exchanged at fixed prices. Barter processes, consisting in sequences of elementary reallocations of couples of commodities among couples of agents, are formalized as local searches converging to equilibrium allocations. A direct application of the analyzed processes in the context of computational economics is provided, along with a Java implementation of the approaches described in this paper: URL
The analysis of markets with indivisible goods and fixed exogenous prices has played an important role in economic models, especially in relation to wage rigidity and unemployment. This research report provides the mathematical and computational details associated with the mathematical programming based approaches proposed by Nasini et al. (accepted 2014) to study pure exchange economies where discrete amounts of commodities are exchanged at fixed prices. Barter processes, consisting in sequences of elementary reallocations of couples of commodities among couples of agents, are formalized as local searches converging to equilibrium allocations. A direct application of the analyzed processes in the context of computational economics is provided, along with a Java implementation of the approaches described in this research report.
[ { "type": "R", "before": "paper provides a novel mathematical programming based approach", "after": "research report provides a mathematical and computational details associated to the mathematical programming based approaches proposed by Nasini et al. (accepted 2014)", "start_char_pos": 186, "end_char_pos": 248 }, { "type": "R", "before": "paper: URL", "after": "research report.", "start_char_pos": 714, "end_char_pos": 724 } ]
[ 0, 180, 350, 542 ]
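A minimal sketch of a barter process of the kind described above: at fixed integer prices, random pairs of agents exchange the smallest value-balanced integer bundles of two commodities, and a trade is kept only if neither agent's utility drops (a local search over elementary reallocations). The prices, endowments and Cobb-Douglas-style utilities are hypothetical, and this plain-Python sketch stands in for the paper's Java implementation.

```python
import random
from math import gcd

def utility(bundle, weights):
    u = 1.0
    for x, w in zip(bundle, weights):
        u *= (x + 1) ** w                         # +1 keeps zero holdings feasible
    return u

def barter(prices, alloc, weights, steps=20000, seed=0):
    rng = random.Random(seed)
    n, m = len(alloc), len(prices)
    for _ in range(steps):
        a, b = rng.sample(range(n), 2)            # random couple of agents
        g, h = rng.sample(range(m), 2)            # random couple of commodities
        d = gcd(prices[g], prices[h])
        qg, qh = prices[h] // d, prices[g] // d   # qg*p_g == qh*p_h (value balance)
        if alloc[a][g] < qg or alloc[b][h] < qh:
            continue
        ua0, ub0 = utility(alloc[a], weights[a]), utility(alloc[b], weights[b])
        alloc[a][g] -= qg; alloc[b][g] += qg      # a sends g to b
        alloc[b][h] -= qh; alloc[a][h] += qh      # b sends h back
        if utility(alloc[a], weights[a]) < ua0 or utility(alloc[b], weights[b]) < ub0:
            alloc[a][g] += qg; alloc[b][g] -= qg  # revert trades that hurt either agent
            alloc[b][h] += qh; alloc[a][h] -= qh
    return alloc

prices = [2, 3, 5]
alloc = [[9, 0, 1], [0, 6, 2], [3, 3, 3]]
weights = [[0.1, 0.6, 0.3], [0.5, 0.1, 0.4], [0.3, 0.3, 0.4]]
print(barter(prices, alloc, weights))
```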
1401.3261
1
We study the utility indifference price of a European option in the context of small transaction costs. Considering the general setup allowing consumption and a general utility function at final time T, we obtain an asymptotic expansion of the utility indifference price as a function of the asymptotic expansions of the utility maximization problems with and without the European contingent claim. We use the tools developed in [51] and [45] based on homogenization and viscosity solutions to characterize these expansions. Finally we provide two examples, in particular recovering under weaker assumptions the results of [6].
We study the utility indifference price of a European option in the context of small transaction costs. Considering the general setup allowing consumption and a general utility function at final time T, we obtain an asymptotic expansion of the utility indifference price as a function of the asymptotic expansions of the utility maximization problems with and without the European contingent claim. We use the tools developed in [54] and [48] based on homogenization and viscosity solutions to characterize these expansions. Finally we study more precisely the example of exponential utilities, in particular recovering under weaker assumptions the results of [6].
[ { "type": "R", "before": "51", "after": "54", "start_char_pos": 431, "end_char_pos": 433 }, { "type": "R", "before": "45", "after": "48", "start_char_pos": 442, "end_char_pos": 444 }, { "type": "R", "before": "provide two examples", "after": "study more precisely the example of exponential utilities", "start_char_pos": 540, "end_char_pos": 560 } ]
[ 0, 103, 398, 528 ]
1401.3316
1
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin-width in histograms generated from underlying probability distributions of interest. The presented method uses techniques of Renyi's entropy and the mean square error analysis to discuss the conditions under which the error in Renyi's entropy estimation is minimal. We illustrate the utility of our method by focusing on a scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the optimality of the bin-width we compare the \delta-spectrum for various bin-widths. Implications for the multifractal \delta-spectrum as a function of Renyi's q parameter are also discussed and graphically represented.
In the framework of Multifractal Diffusion Entropy Analysis we propose a method for choosing an optimal bin-width in histograms generated from underlying probability distributions of interest. The method presented uses techniques of R\'{e}nyi's entropy and the mean squared error analysis to discuss the conditions under which the error in the multifractal spectrum estimation is minimal. We illustrate the utility of our approach by focusing on a scaling behavior of financial time series. In particular, we analyze the S&P500 stock index as sampled at a daily rate in the time period 1950-2013. In order to demonstrate a strength of the method proposed we compare the multifractal \delta-spectrum for various bin-widths and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. Connection between the \delta-spectrum and R\'{e}nyi's q parameter is also discussed and elucidated on a simple example of multiscale time series.
[ { "type": "R", "before": "This presented method", "after": "The method presented", "start_char_pos": 193, "end_char_pos": 214 }, { "type": "R", "before": "Renyi", "after": "R\\'{e", "start_char_pos": 234, "end_char_pos": 239 }, { "type": "R", "before": "square", "after": "squared", "start_char_pos": 264, "end_char_pos": 270 }, { "type": "R", "before": "Renyi's entropy", "after": "the multifractal spectrum", "start_char_pos": 337, "end_char_pos": 352 }, { "type": "R", "before": "method", "after": "approach", "start_char_pos": 409, "end_char_pos": 415 }, { "type": "R", "before": "optimality of the bin-width", "after": "method proposed", "start_char_pos": 624, "end_char_pos": 651 }, { "type": "A", "before": null, "after": "multifractal", "start_char_pos": 667, "end_char_pos": 667 }, { "type": "R", "before": ". Implications for the multifractal", "after": "and show the robustness of the method, especially for large values of q. For such values, other methods in use, e.g., those based on moment estimation, tend to fail for heavy-tailed data or data with long correlations. Connection between the", "start_char_pos": 707, "end_char_pos": 742 }, { "type": "R", "before": "as a function of Renyi", "after": "and R\\'{e", "start_char_pos": 759, "end_char_pos": 781 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 797, "end_char_pos": 800 }, { "type": "R", "before": "graphically represented", "after": "elucidated on a simple example of multiscale time series", "start_char_pos": 820, "end_char_pos": 843 } ]
[ 0, 192, 375, 475, 581, 708 ]
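For readers who want to experiment with the bin-width dependence described in the record above, the following Python sketch computes a plug-in Renyi entropy of a fixed-bin-width histogram as the width varies. It is a minimal illustration under assumed Student-t data, not the paper's bias-corrected estimator or its mean-squared-error criterion; all names and parameters are hypothetical.

import numpy as np

def renyi_entropy(samples, bin_width, q=2.0):
    # Plug-in Renyi entropy of a fixed-bin-width histogram:
    # S_q = log(sum_i p_i^q) / (1 - q), with the Shannon limit at q -> 1.
    edges = np.arange(samples.min(), samples.max() + bin_width, bin_width)
    counts, _ = np.histogram(samples, bins=edges)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=10_000)  # heavy-tailed stand-in for daily returns

for h in (0.05, 0.1, 0.2, 0.5):
    print(f"bin width {h:.2f}: S_2 = {renyi_entropy(returns, h):.3f}")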
1401.3604
1
Network theory is today a central topic in computational systems biology as a framework to understand and reconstruct relations among biological components. For example, constructing networks from a gene expression dataset provides a set of possible hypotheses explaining connections among genes, vital knowledge to advancing our understanding of organisms as systems. Here we briefly survey aspects at the intersection of information theory and network biology. We show that Shannon's information entropy, Kolmogorov complexity and algorithmic probability quantify different aspects of biological networks at the interplay of local and global pattern detection . We provide approximations to the algorithmic probability and Kolmogorov complexity of motifs connected to the asymptotic topological properties of networks .
We introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, Kolmogorov complexity and algorithmic probability quantify different aspects of synthetic and biological networks at the intersection of local (e.g. graph motifs) and global pattern detection , including, the detection of the connectivity phase transition leading to the emergence of giant components in Erd\"os-R\'enyi random graphs . We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph .
[ { "type": "R", "before": "Network theory is today a central topic in computational systems biology as a framework to understand and reconstruct relations among biological components. For example, constructing networks from a gene expression dataset provides a set of possible hypotheses explaining connections among genes, vital knowledge to advancing our understanding of URLanisms as systems. Here we briefly survey aspects", "after": "We introduce concepts and tools", "start_char_pos": 0, "end_char_pos": 399 }, { "type": "A", "before": null, "after": "synthetic and", "start_char_pos": 587, "end_char_pos": 587 }, { "type": "R", "before": "interplay of local", "after": "intersection of local (e.g. graph motifs)", "start_char_pos": 615, "end_char_pos": 633 }, { "type": "A", "before": null, "after": ", including, the detection of the connectivity phase transition leading to the emergence of giant components in Erd\\\"os-R\\'enyi random graphs", "start_char_pos": 663, "end_char_pos": 663 }, { "type": "R", "before": "approximations to the", "after": "exact theoretical calculations, numerical approximations and error estimations of entropy,", "start_char_pos": 677, "end_char_pos": 698 }, { "type": "R", "before": "of motifs connected to the asymptotic topological propertiesof networks", "after": "for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph", "start_char_pos": 749, "end_char_pos": 820 } ]
[ 0, 156, 368, 462, 665 ]
1401.3604
2
We introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, Kolmogorov complexity and algorithmic probability quantify different aspects of synthetic and biological networks at the intersection of local (e.g. graph motifs) and global pattern detection, including, the detection of the connectivity phase transition leading to the emergence of giant components in Erd\"os-R\'enyi random graphs . We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph.
We introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic probability quantify different aspects of synthetic and biological networks at the intersection of local and global pattern detection, including, the detection of the connectivity phase transition leading to the emergence of giant components in Erd\"os-R\'enyi random graphs , and the recovery of topological properties from numerical kinetic properties simulating gene expression arrays . We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph.
[ { "type": "R", "before": "Kolmogorov complexity", "after": "compressibility", "start_char_pos": 139, "end_char_pos": 160 }, { "type": "D", "before": "(e.g. graph motifs)", "after": null, "start_char_pos": 282, "end_char_pos": 301 }, { "type": "A", "before": null, "after": ", and the recovery of topological properties from numerical kinetic properties simulating gene expression arrays", "start_char_pos": 472, "end_char_pos": 472 } ]
[ 0, 94, 474, 711 ]
1401.3604
3
We introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic probability quantify different aspects of synthetic and biological networks at the intersection of local and global pattern detection , including, the detection of the connectivity phase transition leading to the emergence of giant components in Erd\"os-R\'enyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression arrays . We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph .
We survey and introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different aspects of synthetic and biological networks at the intersection of local and global pattern detection . This includes, for example , the detection of the connectivity phase transition leading to the emergence of giant components in Erdos-Renyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data . We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph and thus a robust definition of graph complexity .
[ { "type": "A", "before": null, "after": "survey and", "start_char_pos": 3, "end_char_pos": 3 }, { "type": "R", "before": "probability", "after": "complexity", "start_char_pos": 172, "end_char_pos": 183 }, { "type": "A", "before": null, "after": ". This includes, for example", "start_char_pos": 306, "end_char_pos": 306 }, { "type": "D", "before": "including,", "after": null, "start_char_pos": 309, "end_char_pos": 319 }, { "type": "R", "before": "Erd\\\"os-R\\'enyi", "after": "Erdos-Renyi", "start_char_pos": 419, "end_char_pos": 434 }, { "type": "R", "before": "arrays", "after": "data", "start_char_pos": 554, "end_char_pos": 560 }, { "type": "A", "before": null, "after": "and thus a robust definition of graph complexity", "start_char_pos": 1024, "end_char_pos": 1024 } ]
[ 0, 95, 562, 799 ]
1401.3604
4
We survey and introduce concepts and tools at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different aspects of synthetic and biological networks at the intersection of local and global pattern detection. This includes, for example, the detection of the connectivity phase transition leading to the emergence of giant components in Erdos-Renyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labelled and unlabelled graphs and prove that the Kolmogorov complexity of a labelled graph is a good approximation of the Kolmogorov complexity of the unlabelled graph and thus a robust definition of graph complexity.
We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdos-Renyi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs , characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity.
[ { "type": "A", "before": null, "after": "located", "start_char_pos": 43, "end_char_pos": 43 }, { "type": "A", "before": null, "after": "local and global", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": "networks at the intersection of local and global pattern detection. This includes, for example, the detection of the connectivity phase transition leading to the", "after": "data. We show examples such as the", "start_char_pos": 250, "end_char_pos": 411 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 764, "end_char_pos": 764 }, { "type": "R", "before": "labelled and unlabelled", "after": "labeled and unlabeled", "start_char_pos": 875, "end_char_pos": 898 }, { "type": "R", "before": "labelled", "after": "labeled", "start_char_pos": 952, "end_char_pos": 960 }, { "type": "R", "before": "the Kolmogorov complexity of the unlabelled graph", "after": "its unlabeled Kolmogorov complexity", "start_char_pos": 994, "end_char_pos": 1043 } ]
[ 0, 106, 317, 581, 819 ]
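The entropy-versus-compressibility contrast running through the four revisions above can be made concrete with a small sketch. The Python toy below compares the per-edge Shannon entropy of an Erdos-Renyi G(n, p) graph with the zlib-compressed size of its adjacency bitstring; zlib is only a crude, computable stand-in for the incomputable Kolmogorov complexity that the abstracts approximate by other means, and all sizes and probabilities are illustrative.

import zlib
import numpy as np

def er_adjacency(n, p, seed=0):
    # Symmetric adjacency matrix of an Erdos-Renyi G(n, p) graph.
    rng = np.random.default_rng(seed)
    upper = np.triu((rng.random((n, n)) < p).astype(np.uint8), k=1)
    return upper + upper.T

def edge_entropy(p):
    # Shannon entropy in bits per potential edge of G(n, p).
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def compressed_size(adj):
    # zlib length of the upper-triangle bitstring: a crude upper-bound
    # proxy for the incomputable Kolmogorov complexity.
    bits = np.packbits(adj[np.triu_indices_from(adj, k=1)])
    return len(zlib.compress(bits.tobytes(), 9))

for p in (0.01, 0.1, 0.5):
    adj = er_adjacency(200, p)
    print(f"p = {p:4.2f}: {edge_entropy(p):.3f} bits/edge, zlib size {compressed_size(adj)} bytes")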
1401.3911
1
This paper investigates the dynamic behaviour of jumps in financial prices and volatility. The proposed model is based on a standard jump diffusion process for price and volatility augmented by a bivariate Hawkes process for the two jump components. The latter process specifies a joint dynamic structure for the price and volatility jump intensities, with the intensity of a volatility jump also directly affected by a jump in the price. The impact of certain aspects of the model on the higher-order conditional moments for returns is investigated. In particular, the differential effects of the jump intensities and the random process for latent volatility itself, are measured and documented. A state space representation of the model is constructed using both financial returns and non-parametric measures of integrated volatility and price jumps as the observable quantities. Bayesian inference , based on a Markov chain Monte Carlo algorithm , is used to obtain a posterior distribution for the relevant model parameters and latent variables, and to analyze various hypotheses about the dynamics in, and the relationship between, the jump intensities . An extensive empirical investigation using data based on the S&P500 market index over a period ending in early-2013 is conducted. Substantial empirical support for dynamic jump intensities is documented, with predictive accuracy enhanced by the inclusion of this type of specification. In addition, movements in the intensity parameter for volatility jumps are found to track key market events closely over this period .
Dynamic jumps in the price and volatility of an asset are modelled using a joint Hawkes process in conjunction with a bivariate jump diffusion. A state space representation is used to link observed returns, plus nonparametric measures of integrated volatility and price jumps , to the specified model components; with Bayesian inference conducted using a Markov chain Monte Carlo algorithm . The calculation of marginal likelihoods for the proposed and related models is discussed . An extensive empirical investigation is undertaken using the S&P500 market index , with substantial support for dynamic jump intensities - including in terms of predictive accuracy - documented .
[ { "type": "R", "before": "This paper investigates the dynamic behaviour of jumps in financial prices and volatility. The proposed model is based on a standard jump diffusion process for", "after": "Dynamic jumps in the", "start_char_pos": 0, "end_char_pos": 159 }, { "type": "R", "before": "augmented by a bivariate Hawkes process for the two jump components. The latter process specifies a joint dynamic structure for the price and volatility jump intensities, with the intensity of a volatility jump also directly affected by a jump in the price. The impact of certain aspects of the model on the higher-order conditional moments for returns is investigated. In particular, the differential effects of the jump intensities and the random process for latent volatility itself, are measured and documented.", "after": "of an asset are modelled using a joint Hawkes process in conjunction with a bivariate jump diffusion.", "start_char_pos": 181, "end_char_pos": 696 }, { "type": "R", "before": "of the model is constructed using both financial returnsand non-parametric", "after": "is used to link observed returns, plus nonparametric", "start_char_pos": 726, "end_char_pos": 800 }, { "type": "R", "before": "as the observable quantities. Bayesian inference , based on", "after": ", to the specified model components; with Bayesian inference conducted using", "start_char_pos": 851, "end_char_pos": 910 }, { "type": "R", "before": ", is used to obtain a posterior distribution for the relevant model parameters and latent variables, and to analyze various hypotheses about the dynamics in, and the relationship between, the jump intensities", "after": ". The calculation of marginal likelihoods for the proposed and related models is discussed", "start_char_pos": 948, "end_char_pos": 1156 }, { "type": "R", "before": "using data based on", "after": "is undertaken using", "start_char_pos": 1196, "end_char_pos": 1215 }, { "type": "R", "before": "over a period ending in early-2013 is conducted. Substantial empirical", "after": ", with substantial", "start_char_pos": 1240, "end_char_pos": 1310 }, { "type": "R", "before": "is documented, with predictive accuracy enhanced by the inclusion of this type of specification. In addition, movements in the intensity parameter for volatility jumps are found to track key market events closely over this period", "after": "- including in terms of predictive accuracy - documented", "start_char_pos": 1348, "end_char_pos": 1577 } ]
[ 0, 90, 249, 438, 550, 696, 880, 1158, 1288, 1444 ]
1401.3911
2
Dynamic jumps in the price and volatility of an asset are modelled using a joint Hawkes process in conjunction with a bivariate jump diffusion. A state space representation is used to link observed returns, plus nonparametric measures of integrated volatility and price jumps, to the specified model components; with Bayesian inference conducted using a Markov chain Monte Carlo algorithm. The calculation of marginal likelihoods for the proposed and related models is discussed . An extensive empirical investigation is undertaken using the S&P500 market index , with substantial support for dynamic jump intensities - including in terms of predictive accuracy - documented.
Dynamic jumps in the price and volatility of an asset are modelled using a joint Hawkes process in conjunction with a bivariate jump diffusion. A state space representation is used to link observed returns, plus nonparametric measures of integrated volatility and price jumps, to the specified model components; with Bayesian inference conducted using a Markov chain Monte Carlo algorithm. An evaluation of marginal likelihoods for the proposed model relative to a large number of alternative models, including some that have featured in the literature, is provided . An extensive empirical investigation is undertaken using data on the S&P500 market index over the 1996 to 2014 period , with substantial support for dynamic jump intensities - including in terms of predictive accuracy - documented.
[ { "type": "R", "before": "The calculation", "after": "An evaluation", "start_char_pos": 390, "end_char_pos": 405 }, { "type": "R", "before": "and related modelsis discussed", "after": "model relative to a large number of alternative models, including some that have featured in the literature, is provided", "start_char_pos": 447, "end_char_pos": 477 }, { "type": "A", "before": null, "after": "data on", "start_char_pos": 537, "end_char_pos": 537 }, { "type": "A", "before": null, "after": "over the 1996 to 2014 period", "start_char_pos": 562, "end_char_pos": 562 } ]
[ 0, 143, 311, 389, 479 ]
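The self-exciting jump intensity at the heart of the two revisions above can be sketched in a few lines. This is a univariate Ogata-thinning simulation under illustrative parameters; the paper's model is bivariate (price and volatility jumps) with mutual excitation, which this toy omits.

import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    # Ogata thinning for a univariate Hawkes process with intensity
    # lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        past = np.array(events)
        lam_bar = mu + alpha * np.exp(-beta * (t - past)).sum()  # bounds future intensity
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        lam_new = mu + alpha * np.exp(-beta * (t - past)).sum()
        if rng.random() * lam_bar <= lam_new:                    # accept with prob lam_new/lam_bar
            events.append(t)
    return np.array(events)

jumps = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, horizon=200.0)
print(f"{len(jumps)} jumps, empirical rate {len(jumps) / 200.0:.2f} "
      f"(theory mu / (1 - alpha / beta) = {0.5 / (1 - 0.8 / 1.5):.2f})")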
1401.4387
1
In this work, we consider Corporate Governance ties among companies from a multiple network perspective. Such a structurenaturally arises from the close interrelation between the Shareholding Network and the Board of Directors network . Inorder to capture the simultaneous effects on both networks on Corporate Governance , we propose to model the Corporate Governance multiple network structure via tensor analysis. In particular, we consider the TOPHITS model, based on the PARAFAC tensor decomposition, to show that tensor techniques can be successfully applied in this context. After providing some empirical results from the Italian financial market in the univariate case, we will show that a tensor-based multiple network approach can reveal important information.
In this work, we consider Corporate Governance (CG) ties among companies from a multiple network perspective. Such a structure naturally arises from the close interrelation between the Shareholding Network (SH) and the Board of Directors network (BD). In order to capture the simultaneous effects of both networks on CG , we propose to model the CG multiple network structure via tensor analysis. In particular, we consider the TOPHITS model, based on the PARAFAC tensor decomposition, to show that tensor techniques can be successfully applied in this context. By providing some empirical results from the Italian financial market in the univariate case, we then show that a tensor--based multiple network approach can reveal important information.
[ { "type": "A", "before": null, "after": "(CG)", "start_char_pos": 47, "end_char_pos": 47 }, { "type": "R", "before": "structurenaturally", "after": "structure naturally", "start_char_pos": 113, "end_char_pos": 131 }, { "type": "A", "before": null, "after": "(SH)", "start_char_pos": 201, "end_char_pos": 201 }, { "type": "R", "before": ". Inorder", "after": "(BD). In order", "start_char_pos": 237, "end_char_pos": 246 }, { "type": "R", "before": "on", "after": "of", "start_char_pos": 283, "end_char_pos": 285 }, { "type": "R", "before": "Corporate Governance", "after": "CG", "start_char_pos": 303, "end_char_pos": 323 }, { "type": "R", "before": "Corporate Governance", "after": "CG", "start_char_pos": 350, "end_char_pos": 370 }, { "type": "R", "before": "After", "after": "By", "start_char_pos": 584, "end_char_pos": 589 }, { "type": "R", "before": "will", "after": "then", "start_char_pos": 684, "end_char_pos": 688 }, { "type": "R", "before": "tensor-based", "after": "tensor--based", "start_char_pos": 701, "end_char_pos": 713 } ]
[ 0, 105, 238, 418, 583 ]
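As a rough illustration of the tensor view of a multiple network in the record above, the sketch below extracts a rank-1 CP (PARAFAC) factor from a toy 3-way tensor whose two frontal slices play the role of the shareholding and board layers. It uses the higher-order power iteration, the rank-1 special case of ALS, rather than the actual TOPHITS implementation; all matrices and scores are hypothetical.

import numpy as np

def rank1_cp(T, iters=200):
    # Rank-1 CP/PARAFAC factor of a 3-way tensor via higher-order power
    # iteration (the rank-1 special case of alternating least squares).
    _, J, K = T.shape
    rng = np.random.default_rng(0)
    b, c = rng.random(J), rng.random(K)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c)
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        c /= np.linalg.norm(c)
    weight = np.einsum('ijk,i,j,k->', T, a, b, c)
    return weight, a, b, c

# Toy multiplex network on 4 companies: slice 0 mimics shareholding ties,
# slice 1 mimics shared-director ties (numbers are purely illustrative).
sh = np.array([[0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1], [0, 0, 0, 0]], float)
bd = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]], float)
T = np.stack([sh, bd], axis=2)  # shape (4, 4, 2): company x company x layer

w, hub, authority, layer = rank1_cp(T)
print("hub scores      :", np.round(hub, 3))
print("authority scores:", np.round(authority, 3))
print("layer weights   :", np.round(layer, 3), " lambda =", round(float(w), 3))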
1401.4707
1
In this paper the problem of surface charge of the lipid membrane is considered. It is shown that the membrane surface is negatively charged. Negative ions are in potential wells formed by the dipole heads of membrane phospholipids. The binding energy of the ion with the membrane surface is much greater than its thermal energy. A self-consistent model of the potential in solution is developed, and a stationary charge density on the membrane surface is found. The estimates given in the paper show that the potential difference across the membrane of the unexcited axon (resting potential) can be explained by the difference in surface densities of the bound charges on the inner and outer surfaces of the membrane .
In this paper the problem of surface charge of the lipid membrane immersed in the physiological solution is considered. It is shown that both side of the bilayer phospholipid membrane surface are negatively charged. A self-consistent model of the potential in solution is developed, and a stationary charge density on the membrane surface is found. It is shown that the ions of the surface charge are in a relatively deep (as compared to kBT) potential wells, which are localized near the dipole heads of phospholipid membrane. It makes impossible for ions to slip along the membrane surface. Simple experiments for verifying the correctness of the considered model are proposed. A developed approach can be used for estimations of the surface charges on the outer and inner membrane of the cell .
[ { "type": "A", "before": null, "after": "immersed in the physiological solution", "start_char_pos": 66, "end_char_pos": 66 }, { "type": "R", "before": "the membrane surface is", "after": "both side of the bilayer phospholipid membrane surface are", "start_char_pos": 99, "end_char_pos": 122 }, { "type": "D", "before": "Negative ions are in potential wells formed by the dipole heads of membrane phospholipids. The binding energy of the ion with the membrane surface is much greater than its thermal energy.", "after": null, "start_char_pos": 143, "end_char_pos": 330 }, { "type": "R", "before": "The estimates given in the paper show that the potential difference across the membrane of the unexcited axon (resting potential) can be explained by the difference in surface densities of the bound", "after": "It is shown that the ions of the surface charge are in a relatively deep (as compared to kBT) potential wells, which are localized near the dipole heads of phospholipid membrane. It makes impossible for ions to slip along the membrane surface. Simple experiments for verifying the correctness of the considered model are proposed. A developed approach can be used for estimations of the surface", "start_char_pos": 464, "end_char_pos": 662 }, { "type": "R", "before": "inner and outer surfaces of the membrane", "after": "outer and inner membrane of the cell", "start_char_pos": 678, "end_char_pos": 718 } ]
[ 0, 81, 142, 233, 330, 463 ]
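To put rough numbers on the surface-charge reasoning in the record above, the sketch below evaluates the textbook Gouy-Chapman/Grahame relations for a charged plane in physiological 1:1 saline. This is a standard stand-in, not the authors' self-consistent membrane model, and the chosen surface potentials are illustrative.

import numpy as np

# Textbook Gouy-Chapman/Grahame relations for a uniformly charged plane in
# a symmetric 1:1 electrolyte -- a stand-in, not the paper's model.
e = 1.602e-19         # elementary charge, C
kB = 1.381e-23        # Boltzmann constant, J/K
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 78.5          # relative permittivity of water
T = 310.0             # body temperature, K
NA = 6.022e23
n0 = 0.15 * 1e3 * NA  # 150 mM saline as an ion-pair number density, 1/m^3

debye = np.sqrt(eps_r * eps0 * kB * T / (2.0 * n0 * e ** 2))
print(f"Debye screening length: {debye * 1e9:.2f} nm")

def grahame_sigma(psi0):
    # Surface charge density (C/m^2) sustaining surface potential psi0 (V).
    return np.sqrt(8.0 * n0 * eps_r * eps0 * kB * T) * np.sinh(e * psi0 / (2.0 * kB * T))

for psi_mV in (-25.0, -50.0, -100.0):
    sigma = grahame_sigma(psi_mV * 1e-3)
    print(f"psi0 = {psi_mV:6.1f} mV -> sigma = {sigma * 100:+.3f} uC/cm^2")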
1401.4787
1
This paper attempts to provide a decision theoretical foundation for the measurement of economic tail risk, which is not only closely related to utility theory but also relevant to statistical model uncertainty. The main result of the paper is that the only tail risk measure that satisfies both a set of economic axioms proposed by Schmeidler (1989, Econometrica) and the statistical property of elicitability (i.e. there exists an objective function such that minimizing the expected objective function yields the risk measure; see Gneiting , 2011, J. Amer. Stat. Assoc.) is median shortfall, which is the median of the tail loss distribution. As an application, we argue that median shortfall is a better alternative than expected shortfall as a risk measure for setting capital requirements in Basel Accords.
This paper attempts to provide a decision-theoretic foundation for the measurement of economic tail risk, which is not only closely related to utility theory but also relevant to statistical model uncertainty. The main result is that the only tail risk measure that satisfies a set of economic axioms proposed by Schmeidler (1989, Econometrica) and the statistical property of elicitability (i.e. there exists an objective function such that minimizing the expected objective function yields the risk measure; see Gneiting ( 2011, J. Amer. Stat. Assoc.) ) is median shortfall, which is the median of tail loss distribution. Elicitability is important for backtesting. Median shortfall has a desirable property of distributional robustness with respect to model misspecification. We also extend the result to address model uncertainty by incorporating multiple scenarios. As an application, we argue that median shortfall is a better alternative than expected shortfall for setting capital requirements in Basel Accords.
[ { "type": "R", "before": "decision theoretical", "after": "decision-theoretic", "start_char_pos": 33, "end_char_pos": 53 }, { "type": "D", "before": "of the paper", "after": null, "start_char_pos": 228, "end_char_pos": 240 }, { "type": "D", "before": "both", "after": null, "start_char_pos": 291, "end_char_pos": 295 }, { "type": "R", "before": ",", "after": "(", "start_char_pos": 543, "end_char_pos": 544 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 574, "end_char_pos": 574 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 619, "end_char_pos": 622 }, { "type": "A", "before": null, "after": "Elicitability is important for backtesting. Median shortfall has a desirable property of distributional robustness with respect to model misspecification. We also extend the result to address model uncertainty by incorporating multiple scenarios.", "start_char_pos": 647, "end_char_pos": 647 }, { "type": "D", "before": "as a risk measure", "after": null, "start_char_pos": 746, "end_char_pos": 763 } ]
[ 0, 211, 529, 559, 646 ]
1401.4787
2
This paper attempts to provide a decision-theoretic foundation for the measurement of economic tail risk, which is not only closely related to utility theory but also relevant to statistical model uncertainty. The main result is that the only tail risk measure that satisfies a set of economic axioms proposed by Schmeidler (1989, Econometrica) and the statistical property of elicitability (i.e. there exists an objective function such that minimizing the expected objective function yields the risk measure ; see Gneiting (2011, J. Amer. Stat. Assoc.) ) is median shortfall, which is the median of tail loss distribution. Elicitability is important for backtesting . Median shortfall has a desirable property of distributional robustness with respect to model misspecification . We also extend the result to address model uncertainty by incorporating multiple scenarios. As an application, we argue that median shortfall is a better alternative than expected shortfall for setting capital requirements in Basel Accords.
This paper attempts to provide a decision-theoretic foundation for the measurement of economic tail risk, which is not only closely related to utility theory but also relevant to statistical model uncertainty. The main result is that the only risk measures that satisfy a set of economic axioms for the Choquet expected utility and the statistical property of elicitability (i.e. there exists an objective function such that minimizing the expected objective function yields the risk measure ) are the mean functional and the median shortfall, which is the median of tail loss distribution. Elicitability is important for backtesting . We also extend the result to address model uncertainty by incorporating multiple scenarios. As an application, we argue that median shortfall is a better alternative than expected shortfall for setting capital requirements in Basel Accords.
[ { "type": "R", "before": "tail risk measure that satisfies", "after": "risk measures that satisfy", "start_char_pos": 243, "end_char_pos": 275 }, { "type": "R", "before": "proposed by Schmeidler (1989, Econometrica)", "after": "for the Choquet expected utility", "start_char_pos": 301, "end_char_pos": 344 }, { "type": "R", "before": "; see Gneiting (2011, J. Amer. Stat. Assoc.) ) is", "after": ") are the mean functional and the", "start_char_pos": 509, "end_char_pos": 558 }, { "type": "D", "before": ". Median shortfall has a desirable property of distributional robustness with respect to model misspecification", "after": null, "start_char_pos": 667, "end_char_pos": 778 } ]
[ 0, 209, 510, 539, 623, 668, 780, 872 ]
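The median shortfall discussed in both revisions above is simply the median of the tail loss distribution, equivalently VaR at level (1+alpha)/2. The sketch below estimates it from a sample next to expected shortfall and illustrates the distributional-robustness point with one corrupted tail observation; the data and numbers are synthetic.

import numpy as np

def tail_losses(losses, alpha=0.99):
    # Losses at or beyond the alpha-quantile (VaR) of the sample.
    return losses[losses >= np.quantile(losses, alpha)]

def expected_shortfall(losses, alpha=0.99):
    return tail_losses(losses, alpha).mean()

def median_shortfall(losses, alpha=0.99):
    # Median of the tail loss distribution: equals VaR at level (1+alpha)/2.
    return np.median(tail_losses(losses, alpha))

rng = np.random.default_rng(1)
losses = rng.standard_t(df=3, size=100_000)
print(f"ES = {expected_shortfall(losses):.3f}, MS = {median_shortfall(losses):.3f}")

# One absurd observation in the far tail moves ES substantially but MS barely:
losses[0] = 1e6
print(f"ES = {expected_shortfall(losses):.3f}, MS = {median_shortfall(losses):.3f}")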
1401.7528
1
Data centers have evolved from a passive element of compute infrastructure to become an active and core part of any ICT solution. Modular data centers are a promising design approach to improve resiliency of data centers, and they can play a key role in deploying ICT infrastructure in remote and inhospitable environments with low temperatures and hydro- and wind-electric capabilities. Modular data centers can also survive even with lack of continuous physical maintenance and support. Generally, the most critical part of a data center is its network fabric that could impede the whole system even if all other components are fully functional . In this work, a complete failure analysis of modular data centers using failure models of various components including servers, switches, and links is performed using a proposed Monte-Carlo approach. This approach allows us to calculate the performance of a design along its lifespan even at the terminal stages. A class of modified Tanh-Log cumulative distribution function of failure is proposed for aforementioned components in order to achieve a better fit on the real data. In this study, the real experimental data from the lanl2005 database is used to calculate the fitting parameters of the failure cumulative distributions. For the network connectivity, various topologies, such as FatTree, BCube, MDCube, and their modified topologies are considered. The performance and also the lifespan of each topology in presence of failures in various components are studied against the topology parameters using the proposed approach. Furthermore, these topologies are compared against each other in a consistent settings in order to determine what topology could deliver a higher performance and resiliency subject to the scalability and agility requirements of a target data center design .
Data centers have been evolved from a passive element of compute infrastructure to become an active , core part of any ICT solution. In particular, modular data centers (MDCs), which are a promising design approach to improve resiliency of data centers, can play a key role in deploying ICT infrastructure in remote and inhospitable environments in order to take advantage of low temperatures and hydro- and wind-electric capabilities. This is because of capability of the modular data centers to survive even in lack of continuous on-site maintenance and support. The most critical part of a data center is its network fabric that could impede the whole system even if all other components are fully functional , assuming that other analyses has been already performed to ensure the reliability of the underlying infrastructure and support systems . In this work, a complete failure analysis of modular data centers using failure models of various components including servers, switches, and links is performed using a proposed Monte-Carlo approach. The proposed Monte-Carlo approach, which is based on the concept of snapshots, allows us to effectively calculate the performance of a design along its lifespan even up to the terminal stages. To show the capabilities of the proposed approach, various network topologies, such as FatTree, BCube, MDCube, and their modifications are considered. The performance and also the lifespan of each topology design in presence of failures of their components are studied against the topology parameters .
[ { "type": "A", "before": null, "after": "been", "start_char_pos": 18, "end_char_pos": 18 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 96, "end_char_pos": 99 }, { "type": "R", "before": "Modular data centers", "after": "In particular, modular data centers (MDCs), which", "start_char_pos": 131, "end_char_pos": 151 }, { "type": "D", "before": "and they", "after": null, "start_char_pos": 223, "end_char_pos": 231 }, { "type": "R", "before": "with", "after": "in order to take advantage of", "start_char_pos": 324, "end_char_pos": 328 }, { "type": "R", "before": "Modular data centers can also survive even with", "after": "This is because of capability of the modular data centers to survive even in", "start_char_pos": 389, "end_char_pos": 436 }, { "type": "R", "before": "physical", "after": "on-site", "start_char_pos": 456, "end_char_pos": 464 }, { "type": "R", "before": "Generally, the", "after": "The", "start_char_pos": 490, "end_char_pos": 504 }, { "type": "A", "before": null, "after": ", assuming that other analyses has been already performed to ensure the reliability of the underlying infrastructure and support systems", "start_char_pos": 648, "end_char_pos": 648 }, { "type": "R", "before": "This approach", "after": "The proposed Monte-Carlo approach, which is based on the concept of snapshots,", "start_char_pos": 851, "end_char_pos": 864 }, { "type": "A", "before": null, "after": "effectively", "start_char_pos": 878, "end_char_pos": 878 }, { "type": "R", "before": "at", "after": "up to", "start_char_pos": 941, "end_char_pos": 943 }, { "type": "R", "before": "A class of modified Tanh-Log cumulative distribution function of failure is proposed for aforementioned components in order to achieve a better fit on the real data. In this study, the real experimental data from the lanl2005 database is used to calculate the fitting parameters of the failure cumulative distributions. For the network connectivity, various", "after": "To show the capabilities of the proposed approach, various network", "start_char_pos": 965, "end_char_pos": 1322 }, { "type": "R", "before": "modified topologies", "after": "modifications", "start_char_pos": 1377, "end_char_pos": 1396 }, { "type": "A", "before": null, "after": "design", "start_char_pos": 1468, "end_char_pos": 1468 }, { "type": "R", "before": "in various", "after": "of their", "start_char_pos": 1493, "end_char_pos": 1503 }, { "type": "D", "before": "using the proposed approach. Furthermore, these topologies are compared against each other in a consistent settings in order to determine what topology could deliver a higher performance and resiliency subject to the scalability and agility requirements of a target data center design", "after": null, "start_char_pos": 1559, "end_char_pos": 1843 } ]
[ 0, 130, 388, 489, 650, 850, 964, 1130, 1284, 1412, 1587 ]
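A stripped-down version of the snapshot-based Monte-Carlo idea in the record above: sample component lifetimes, then count at each snapshot time how many servers still reach the core. The sketch uses Weibull lifetimes and a toy two-tier tree purely for illustration, in place of the tanh-log failure fits and the FatTree/BCube/MDCube topologies studied in the paper.

import numpy as np

rng = np.random.default_rng(2)

def alive(scale, shape, size, t):
    # True where a Weibull(shape) lifetime, scaled to `scale` years,
    # exceeds the snapshot time t (a stand-in for the paper's fits).
    return scale * rng.weibull(shape, size) > t

def reachable_servers(t, n_sw=8, per_sw=16):
    # Toy two-tier tree (core - switches - servers): a server counts only
    # if it, its link, its switch and the switch uplink are all alive.
    sw = alive(8.0, 1.5, n_sw, t)
    uplink = alive(10.0, 1.5, n_sw, t)
    srv = alive(5.0, 1.2, (n_sw, per_sw), t)
    link = alive(12.0, 1.3, (n_sw, per_sw), t)
    return (srv & link & (sw & uplink)[:, None]).sum()

trials, total = 2000, 8 * 16
for t in (1.0, 3.0, 5.0, 7.0):
    frac = np.mean([reachable_servers(t) for _ in range(trials)]) / total
    print(f"snapshot t = {t:.0f} y: mean reachable fraction = {frac:.3f}")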
1401.7913
1
We introduce a multi-factor stochastic volatility model based on the CIR/Heston stochastic volatility process. In order to capture the Samuelson effect displayed by commodity futures contracts, we add expiry-dependent exponential damping factors to their volatility coefficients. The pricing of single underlying European options on futures contracts is straightforward and can incorporate the volatility smile or skew observed in the market. We calculate the joint characteristic function of two futures contracts in the model and use the two-dimensional FFT method of Hurd and Zhou (SIFIN 2010 ) to price calendar spread options. The model leads to stochastic correlation between the returns of two futures contracts. We illustrate the distribution of this correlation in an example .
We introduce a multi-factor stochastic volatility model based on the CIR/Heston stochastic volatility process. In order to capture the Samuelson effect displayed by commodity futures contracts, we add expiry-dependent exponential damping factors to their volatility coefficients. The pricing of single underlying European options on futures contracts is straightforward and can incorporate the volatility smile or skew observed in the market. We calculate the joint characteristic function of two futures contracts in the model in analytic form and use the one-dimensional Fourier inversion method of Caldana and Fusai (JBF 2013 ) to price calendar spread options. The model leads to stochastic correlation between the returns of two futures contracts. We illustrate the distribution of this correlation in an example . We then propose analytical expressions to obtain the copula and copula density directly from the joint characteristic function of a pair of futures. These expressions are convenient to analyze the term-structure of dependence between the two futures produced by the model. In an empirical application, we calibrate the proposed model to volatility surfaces of vanilla options on WTI and provide evidence that the model is able to produce the desired stylized facts in terms of volatility and dependence. In the two appendices, we give proofs of our main results and guidance for the implementation of the proposed model and the Fourier inversion results by means of one- and two-dimensional FFT methods .
[ { "type": "A", "before": null, "after": "in analytic form", "start_char_pos": 528, "end_char_pos": 528 }, { "type": "R", "before": "two-dimensional FFT method of Hurd and Zhou (SIFIN 2010", "after": "one-dimensional Fourier inversion method of Caldana and Fusai (JBF 2013", "start_char_pos": 541, "end_char_pos": 596 }, { "type": "A", "before": null, "after": ". We then propose analytical expressions to obtain the copula and copula density directly from the joint characteristic function of a pair of futures. These expressions are convenient to analyze the term-structure of dependence between the two futures produced by the model. In an empirical application, we calibrate the proposed model to volatility surfaces of vanilla options on WTI and provide evidence that the model is able to produce the desired stylized facts in terms of volatility and dependence. In the two appendices, we give proofs of our main results and guidance for the implementation of the proposed model and the Fourier inversion results by means of one- and two-dimensional FFT methods", "start_char_pos": 786, "end_char_pos": 786 } ]
[ 0, 110, 279, 442, 632, 720 ]
1401.7913
2
We introduce a multi-factor stochastic volatility model based on the CIR/Heston stochastic volatility process. In order to capture the Samuelson effect displayed by commodity futures contracts, we add expiry-dependent exponential damping factors to their volatility coefficients. The pricing of single underlying European options on futures contracts is straightforward and can incorporate the volatility smile or skew observed in the market. We calculate the joint characteristic function of two futures contracts in the model in analytic form and use the one-dimensional Fourier inversion method of Caldana and Fusai (JBF 2013) to price calendar spread options. The model leads to stochastic correlation between the returns of two futures contracts. We illustrate the distribution of this correlation in an example. We then propose analytical expressions to obtain the copula and copula density directly from the joint characteristic function of a pair of futures. These expressions are convenient to analyze the term-structure of dependence between the two futures produced by the model. In an empirical application , we calibrate the proposed model to volatility surfaces of vanilla options on WTI and provide evidence that the model is able to produce the desired stylized facts in terms of volatility and dependence. In the two appendices , we give proofs of our main results and guidance for the implementation of the proposed model and the Fourier inversion results by means of one- and two-dimensional FFT methods.
We introduce a multi-factor stochastic volatility model based on the CIR/Heston stochastic volatility process. In order to capture the Samuelson effect displayed by commodity futures contracts, we add expiry-dependent exponential damping factors to their volatility coefficients. The pricing of single underlying European options on futures contracts is straightforward and can incorporate the volatility smile or skew observed in the market. We calculate the joint characteristic function of two futures contracts in the model in analytic form and use the one-dimensional Fourier inversion method of Caldana and Fusai (JBF 2013) to price calendar spread options. The model leads to stochastic correlation between the returns of two futures contracts. We illustrate the distribution of this correlation in an example. We then propose analytical expressions to obtain the copula and copula density directly from the joint characteristic function of a pair of futures. These expressions are convenient to analyze the term-structure of dependence between the two futures produced by the model. In an empirical application we calibrate the proposed model to volatility surfaces of vanilla options on WTI . In this application we provide evidence that the model is able to produce the desired stylized facts in terms of volatility and dependence. In a separate appendix , we give guidance for the implementation of the proposed model and the Fourier inversion results by means of one and two-dimensional FFT methods.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 1119, "end_char_pos": 1120 }, { "type": "R", "before": "and", "after": ". In this application we", "start_char_pos": 1202, "end_char_pos": 1205 }, { "type": "R", "before": "the two appendices", "after": "a separate appendix", "start_char_pos": 1326, "end_char_pos": 1344 }, { "type": "D", "before": "proofs of our main results and", "after": null, "start_char_pos": 1355, "end_char_pos": 1385 }, { "type": "R", "before": "one-", "after": "one", "start_char_pos": 1486, "end_char_pos": 1490 } ]
[ 0, 110, 279, 442, 663, 751, 817, 966, 1090, 1322 ]
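For intuition about the calendar spread options priced in the two revisions above, the sketch below prices a zero-strike spread between two lognormal futures with the closed-form Margrabe formula and checks it by Monte Carlo. This is the constant-volatility, constant-correlation limit; it does not reproduce the paper's stochastic-volatility model or its Fourier inversion method, and all inputs are illustrative.

import numpy as np
from scipy.stats import norm

def margrabe(F1, F2, s1, s2, rho, T):
    # Zero-strike spread (exchange) option on two lognormal futures:
    # E[max(F1(T) - F2(T), 0)], undiscounted.
    sig = np.sqrt(s1**2 + s2**2 - 2.0 * rho * s1 * s2)
    d1 = (np.log(F1 / F2) + 0.5 * sig**2 * T) / (sig * np.sqrt(T))
    d2 = d1 - sig * np.sqrt(T)
    return F1 * norm.cdf(d1) - F2 * norm.cdf(d2)

F1, F2, s1, s2, rho, T = 101.0, 100.0, 0.30, 0.25, 0.8, 0.5

rng = np.random.default_rng(3)
z1 = rng.standard_normal(1_000_000)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(1_000_000)
f1 = F1 * np.exp(-0.5 * s1**2 * T + s1 * np.sqrt(T) * z1)  # driftless futures
f2 = F2 * np.exp(-0.5 * s2**2 * T + s2 * np.sqrt(T) * z2)
mc = np.maximum(f1 - f2, 0.0).mean()

print(f"Margrabe {margrabe(F1, F2, s1, s2, rho, T):.4f} vs Monte Carlo {mc:.4f}")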
1401.8026
1
Financial markets are exposed to systemic risk (SR), the risk that a major fraction of the system ceases to function and collapses. Since recently it is possible to quantify SR in terms of underlying financial networks where nodes represent financial institutions, and links capture the size and maturity of assets (loans), liabilities, and other obligations such as derivatives. In particular it is possible to quantify the share of SR that individual nodes contribute to the overall SR in the financial system. We extend the notion of node-specific SR to individual liabilities in a financial network (liability-specific SR) . We use historical, empirical data of interbank liabilities to show that a few liabilities in a nation-wide interbank network contribute to the major fraction of the overall SR. We propose a tax on individual transactions that is proportional to their contribution to overall SR. If a transaction does not increase SR it is tax free. We use a macroeconomic agent based model (CRISIS macro-financial model) with a financial economy to demonstrate that the proposed Systemic Risk Tax (SRT) leads to a self-organized re-structuring of financial networks , that are practically free of SR. This is because risk-increasing transactions will be systematically avoided when a SRT is in place. Systemic stability under a SRT emerges due to a de facto elimination of system-wide cascading failure. ABM predictions agree remarkably well with the empirical data and can be used to understand the relation of credit risk and systemic risk .
Financial markets are exposed to systemic risk (SR), the risk that a major fraction of the system ceases to function and collapses. Since recently it is possible to quantify SR in terms of underlying financial networks where nodes represent financial institutions, and links capture the size and maturity of assets (loans), liabilities, and other obligations such as derivatives. We show that it is possible to quantify the share of SR that individual liabilities in a financial network contribute to the overall SR . We use empirical data of nation-wide interbank liabilities to show that a few liabilities carry the major fraction of the overall SR. We propose a tax on individual transactions that is proportional to their contribution to overall SR. If a transaction does not increase SR it is tax free. With an agent based model (CRISIS macro-financial model) we demonstrate that the proposed Systemic Risk Tax (SRT) leads to a self-organized re-structuring of financial networks that are practically free of SR. ABM predictions agree remarkably well with the empirical data and can be used to understand the relation of credit risk and SR .
[ { "type": "R", "before": "In particular", "after": "We show that", "start_char_pos": 380, "end_char_pos": 393 }, { "type": "D", "before": "nodes contribute to the overall SR in the financial system. We extend the notion of node-specific SR to individual", "after": null, "start_char_pos": 453, "end_char_pos": 567 }, { "type": "R", "before": "(liability-specific SR)", "after": "contribute to the overall SR", "start_char_pos": 603, "end_char_pos": 626 }, { "type": "D", "before": "historical,", "after": null, "start_char_pos": 636, "end_char_pos": 647 }, { "type": "A", "before": null, "after": "nation-wide", "start_char_pos": 666, "end_char_pos": 666 }, { "type": "R", "before": "in a nation-wide interbank network contribute to", "after": "carry", "start_char_pos": 720, "end_char_pos": 768 }, { "type": "R", "before": "We use a macroeconomic", "after": "With an", "start_char_pos": 963, "end_char_pos": 985 }, { "type": "R", "before": "with a financial economy to", "after": "we", "start_char_pos": 1035, "end_char_pos": 1062 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1175, "end_char_pos": 1176 }, { "type": "D", "before": "This is because risk-increasing transactions will be systematically avoided when a SRT is in place. Systemic stability under a SRT emerges due to a de facto elimination of system-wide cascading failure.", "after": null, "start_char_pos": 1210, "end_char_pos": 1412 }, { "type": "R", "before": "systemic risk", "after": "SR", "start_char_pos": 1537, "end_char_pos": 1550 } ]
[ 0, 131, 379, 512, 806, 908, 962, 1209, 1309, 1412 ]
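The liability-level risk attribution described above can be mimicked on a toy scale: take a small interbank liability matrix, define systemic risk as the expected equity loss from a Furfine-style default cascade, and score each liability by how much removing it lowers that loss. The cascade stand-in and all balance-sheet numbers below are hypothetical; the paper works with empirical networks and a different SR measure.

import numpy as np

# L[i, j]: amount bank i owes bank j; equities are illustrative.
L = np.array([[0., 10., 5., 0.],
              [0., 0., 8., 4.],
              [6., 0., 0., 2.],
              [0., 3., 0., 0.]])
equity = np.array([8.0, 12.0, 6.0, 5.0])

def cascade_loss(liab, eq0, first):
    # Equity destroyed by a Furfine-style cascade started by the default
    # of bank `first`, with zero recovery on interbank claims.
    eq = eq0.copy()
    defaulted, frontier = {first}, {first}
    while frontier:
        nxt = set()
        for i in frontier:
            for j in range(len(eq)):
                if j not in defaulted and liab[i, j] > 0:
                    eq[j] -= liab[i, j]  # creditor j absorbs the loss
                    if eq[j] <= 0:
                        nxt.add(j)
        defaulted |= nxt
        frontier = nxt
    survivors = [j for j in range(len(eq)) if j not in defaulted]
    return eq0.sum() - eq[survivors].sum()

def systemic_risk(liab):
    # Expected cascade loss over a uniformly chosen initial default.
    return np.mean([cascade_loss(liab, equity, k) for k in range(len(equity))])

base = systemic_risk(L)
print(f"baseline SR = {base:.2f}")
for i, j in zip(*np.nonzero(L)):
    trimmed = L.copy()
    trimmed[i, j] = 0.0
    marginal = base - systemic_risk(trimmed)
    tag = "taxed" if marginal > 1e-9 else "tax-free"
    print(f"liability {i}->{j} ({L[i, j]:4.1f}): SR contribution {marginal:+.2f} [{tag}]")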
1401.8106
1
We study historical correlation and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market representative portfolio) in the US stock market. We calculate corresponding cross-correlation functions averaged over all stocks for 71 historical stock prices from the Standard & Poor's 500 index for 1992--2013. The provided analysis suggests that cross-correlations maximum value increases near periods of crisis and remains close to 1 since the US housing bubble in 2007. Our analysis is based on the linear response theory approximation and uses asymmetries of cross-correlation function with respect to zero lag. Characteristic regimes , when changes of individual stock risks on average follow changes of the total market risk and vice versa , are observed near market crashes. Corresponding historical dynamics suggests a particular pattern: Shortly before a crash individual stock risks start to influence market risk while after the crash the situation is reversed .
We study historical correlations and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over all stocks , using 71 stock prices from the Standard & Poor's 500 index for 1994--2013. We focus on the behavior of the cross-correlations at the times of financial crises with significant jumps of market volatility. The observed historical dynamics shows that the dependence between the risks becomes linear near such events and the maximum value of this averaged cross-correlation function is often shifted with respect to zero lag. We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility and vice versa .
[ { "type": "R", "before": "correlation", "after": "correlations", "start_char_pos": 20, "end_char_pos": 31 }, { "type": "R", "before": "market representative", "after": "market-representative", "start_char_pos": 175, "end_char_pos": 196 }, { "type": "R", "before": "calculate corresponding", "after": "consider the", "start_char_pos": 235, "end_char_pos": 258 }, { "type": "R", "before": "for", "after": ", using", "start_char_pos": 312, "end_char_pos": 315 }, { "type": "D", "before": "historical", "after": null, "start_char_pos": 319, "end_char_pos": 329 }, { "type": "R", "before": "1992--2013. The provided analysis suggests that", "after": "1994--2013. We focus on the behavior of the", "start_char_pos": 384, "end_char_pos": 431 }, { "type": "R", "before": "maximum value increases near periods of crisis and remains close to 1 since the US housing bubble in 2007. Our analysis is based on the linear response theory approximation and uses asymmetries of", "after": "at the times of financial crises with significant jumps of market volatility. The observed historical dynamics shows that the dependence between the risks becomes linear near such events and the maximum value of this averaged", "start_char_pos": 451, "end_char_pos": 647 }, { "type": "A", "before": null, "after": "is often shifted", "start_char_pos": 675, "end_char_pos": 675 }, { "type": "R", "before": "Characteristic regimes , when changes of individual stock risks on average follow changes of the total market risk", "after": "We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility", "start_char_pos": 702, "end_char_pos": 816 }, { "type": "D", "before": ", are observed near market crashes. Corresponding historical dynamics suggests a particular pattern: Shortly before a crash individual stock risks start to influence market risk while after the crash the situation is reversed", "after": null, "start_char_pos": 832, "end_char_pos": 1057 } ]
[ 0, 231, 395, 557, 701, 867 ]
1401.8106
2
We study historical correlations and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over all stocks, using 71 stock prices from the Standard Poor's 500 index for 1994--2013. We focus on the behavior of the cross-correlations at the times of financial crises with significant jumps of market volatility. The observed historical dynamics shows that the dependence between the risks becomes linear near such events and the maximum value of this averaged cross-correlation function is often shifted with respect to zero lag . We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility and vice versa.
We study historical correlations and lead-lag relationships between individual stock risk (volatility of daily stock returns) and market risk (volatility of daily returns of a market-representative portfolio) in the US stock market. We consider the cross-correlation functions averaged over all stocks, using 71 stock prices from the Standard \& Poor's 500 index for 1994--2013. We focus on the behavior of the cross-correlations at the times of financial crises with significant jumps of market volatility. The observed historical dynamics showed that the dependence between the risks was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining on that level until 2013. Moreover, the averaged cross-correlation function often had an asymmetric shape with respect to zero lag in the periods of high correlation . We develop the analysis by the application of the linear response formalism to study underlying causal relations. The calculated response functions suggest the presence of characteristic regimes near financial crashes, when the volatility of an individual stock follows the market volatility and vice versa.
[ { "type": "A", "before": null, "after": "\\&", "start_char_pos": 343, "end_char_pos": 343 }, { "type": "R", "before": "shows", "after": "showed", "start_char_pos": 539, "end_char_pos": 544 }, { "type": "R", "before": "becomes linear near such events and the maximum value of this", "after": "was almost linear during the US stock market downturn of 2002 and after the US housing bubble in 2007, remaining on that level until 2013. Moreover, the", "start_char_pos": 583, "end_char_pos": 644 }, { "type": "R", "before": "is often shifted", "after": "often had an asymmetric shape", "start_char_pos": 681, "end_char_pos": 697 }, { "type": "A", "before": null, "after": "in the periods of high correlation", "start_char_pos": 723, "end_char_pos": 723 } ]
[ 0, 232, 376, 505, 725, 839 ]
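The lagged cross-correlation between individual and market risk used in both revisions above reduces to a short computation. The sketch below builds synthetic risk series in which market volatility leads each stock by two steps and recovers the asymmetric correlation peak; the lag structure and noise levels are invented for illustration.

import numpy as np

def xcorr(x, y, max_lag):
    # Normalized lagged cross-correlation c(tau) = corr(x_t, y_{t+tau}).
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return {tau: float(np.mean(x[max(0, -tau):n - max(0, tau)] *
                               y[max(0, tau):n - max(0, -tau)]))
            for tau in range(-max_lag, max_lag + 1)}

rng = np.random.default_rng(4)
n, true_lag, n_stocks = 4000, 2, 20
m = np.abs(rng.standard_normal(n + true_lag))  # market risk proxy
market = m[true_lag:]                          # market[t] = m[t + 2]
avg = {tau: 0.0 for tau in range(-5, 6)}
for _ in range(n_stocks):
    stock = m[:-true_lag] + 0.5 * np.abs(rng.standard_normal(n))  # follows market
    for tau, c in xcorr(stock, market, 5).items():
        avg[tau] += c / n_stocks
for tau in sorted(avg):
    marker = " <-- market leads" if tau == -true_lag else ""
    print(f"tau = {tau:+d}: c = {avg[tau]:+.3f}{marker}")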
1402.0894
1
The bacterial transcription factor LacI loops DNA by binding to two separate locations on the DNA simultaneously. Despite being one of the best-studied model systems for transcriptional regulation, the number and conformations of loop structures accessible to LacI remain unclear, though the importance of multiple co-existing loops has been implicated in interactions between LacI and other cellular regulators of gene expression. To probe this issue, we have developed a new analysis method for tethered particle motion (TPM) , a versatile and commonly-used in vitro single-molecule technique. Our method, vbTPM, is based on a variational Bayes treatment of hidden Markov models. It learns the number of distinct states (i.e., DNA-protein conformations) directly from TPM data with better resolution than existing methods, while easily correcting for common experimental artifacts. Studying short (roughly 100 bp) LacI-mediated loops, we are able to resolve three distinct loop structures, more than previously reported at the single molecule level . Moreover, our results confirm that changes in LacI conformation and DNA binding topology both contribute to the repertoire of LacI-mediated loops formed in vitro, and provide qualitatively new input for models of looping and transcriptional regulation. We expect vbTPM to be broadly useful for probing complex protein-nucleic acid interactions.
The bacterial transcription factor LacI loops DNA by binding to two separate locations on the DNA simultaneously. Despite being one of the best-studied model systems for transcriptional regulation, the number and conformations of loop structures accessible to LacI remain unclear, though the importance of multiple co-existing loops has been implicated in interactions between LacI and other cellular regulators of gene expression. To probe this issue, we have developed a new analysis method for tethered particle motion , a versatile and commonly-used in vitro single-molecule technique. Our method, vbTPM, performs variational Bayesian inference in hidden Markov models. It learns the number of distinct states (i.e., DNA-protein conformations) directly from tethered particle motion data with better resolution than existing methods, while easily correcting for common experimental artifacts. Studying short (roughly 100 bp) LacI-mediated loops, we provide evidence for three distinct loop structures, more than previously reported in single-molecule studies . Moreover, our results confirm that changes in LacI conformation and DNA binding topology both contribute to the repertoire of LacI-mediated loops formed in vitro, and provide qualitatively new input for models of looping and transcriptional regulation. We expect vbTPM to be broadly useful for probing complex protein-nucleic acid interactions.
[ { "type": "D", "before": "(TPM)", "after": null, "start_char_pos": 522, "end_char_pos": 527 }, { "type": "R", "before": "is based on a variational Bayes treatment of", "after": "performs variational Bayesian inference in", "start_char_pos": 615, "end_char_pos": 659 }, { "type": "R", "before": "TPM", "after": "tethered particle motion", "start_char_pos": 770, "end_char_pos": 773 }, { "type": "R", "before": "are able to resolve", "after": "provide evidence for", "start_char_pos": 940, "end_char_pos": 959 }, { "type": "R", "before": "at the single molecule level", "after": "in single-molecule studies", "start_char_pos": 1022, "end_char_pos": 1050 } ]
[ 0, 113, 431, 595, 681, 883, 1052, 1305 ]
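The integer list that closes each record (such as the line above) always starts at 0 and its later values line up with sentence boundaries in the corresponding before_revision, so sents_char_pos is plausibly the list of sentence-start offsets. Under that assumption, a short hypothetical helper recovers the segmentation:

```python
def split_sentences(before: str, sents_char_pos: list) -> list:
    """Slice `before` at the recorded offsets (assumed sentence starts)."""
    bounds = list(sents_char_pos) + [len(before)]
    return [before[a:b].strip() for a, b in zip(bounds, bounds[1:])]
```

For the LacI record above, offset 113 falls immediately after the closing period of the first sentence, which is consistent with this reading.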
1402.1268
1
Circadian rhythms are acquired through evolution to increase the chances for survival by synchronizing to the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. Since both properties have been tuned through natural selection, their adaptation can be formalized in the framework of mathematical optimization. By using a succinct model , we found that simultaneous optimization of regularity and entrainability entails inherent features of the circadian mechanism irrespective of model details . At the behavioral level we discovered the existence of a dead zone, a time during which light pulses neither advance nor delay the clock. At the molecular level we demonstrate the role-sharing of two light inputs, phase advance and delay, as is well observed in mammals. We also reproduce the results of phase-controlling experiments and predict molecular elements responsible for the clockwork . Our results indicate that circadian clocks function optimally and that a simple mathematical model can illuminate many complex phenomena observed in nature .
Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We found by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism . At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses . Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution .
[ { "type": "R", "before": "by synchronizing to", "after": "through synchronizing with", "start_char_pos": 86, "end_char_pos": 105 }, { "type": "R", "before": "Since both properties have been tuned through natural selection, their adaptation can be formalized in the framework of mathematical optimization. By using a succinct model , we found that simultaneous optimization", "after": "We found by using a phase model with multiple inputs that achieving the maximal limit", "start_char_pos": 299, "end_char_pos": 513 }, { "type": "A", "before": null, "after": "many", "start_char_pos": 555, "end_char_pos": 555 }, { "type": "D", "before": "irrespective of model details", "after": null, "start_char_pos": 601, "end_char_pos": 630 }, { "type": "R", "before": "behavioral level we discovered the existence of a dead zone, a time during which light pulses neither advance nor delay the clock. At the molecular level", "after": "molecular level,", "start_char_pos": 640, "end_char_pos": 793 }, { "type": "R", "before": "role-sharing", "after": "role sharing", "start_char_pos": 813, "end_char_pos": 825 }, { "type": "R", "before": "We also", "after": "At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We", "start_char_pos": 904, "end_char_pos": 911 }, { "type": "R", "before": "and predict molecular elements responsible for the clockwork", "after": "entrained by two types of periodic light pulses", "start_char_pos": 967, "end_char_pos": 1027 }, { "type": "R", "before": "function optimally and that a simple mathematical model can illuminate many complex phenomena observed in nature", "after": "are designed optimally for reliable clockwork through evolution", "start_char_pos": 1073, "end_char_pos": 1185 } ]
[ 0, 125, 298, 445, 632, 770, 903, 1029 ]
1402.1445
1
The continuous research of new diagnostical methods , early, low invasive and much efficient is orienting the technological research toward the use of bio-integrated devices and in particular sensors, able to use the excellent ability of proteins to selectively react to a specific stimulus, in a fast, reproducible and reversible way. To explore these specific features, a theoretical/computational model called INPA (impedance network protein analogue) is used . The specific characteristic of this approach is to give in a glance a description of the protein not lingering on the complex details of its biochemialc nature but instead privileging its activity .
The need of new diagnostic methods satisfying, as an early detection, a low invasive procedure and a cost-efficient value, is orienting the technological research toward the use of bio-integrated devices , in particular bio-sensors. The set of know-why necessary to achieve this goal is wide, from biochemistry to electronics and is summarized in an emerging branch of electronics, calledproteotronics. Proteotronics is here here applied to state a comparative analysis of the electrical responses coming from type-1 and type-2 opsins. In particular, the procedure is used as an early investigation of a recently discovered family of opsins, the proteorhodopsins activated by blue light, BPRs. The results reveal some interesting and unexpected similarities between proteins of the two families, suggesting the global electrical response are not strictly linked to the class identity .
[ { "type": "R", "before": "continuous research of new diagnostical methods , early, low invasive and much efficient", "after": "need of new diagnostic methods satisfying, as an early detection, a low invasive procedure and a cost-efficient value,", "start_char_pos": 4, "end_char_pos": 92 }, { "type": "R", "before": "and in particular sensors, able to use the excellent ability of proteins to selectively react to a specific stimulus, in a fast, reproducible and reversible way. To explore these specific features, a theoretical/computational model called INPA (impedance network protein analogue) is used . The specific characteristic of this approach is to give in a glance a description of the protein not lingering on the complex details of its biochemialc nature but instead privileging its activity", "after": ", in particular bio-sensors. The set of know-why necessary to achieve this goal is wide, from biochemistry to electronics and is summarized in an emerging branch of electronics, called", "start_char_pos": 174, "end_char_pos": 661 }, { "type": "A", "before": null, "after": "proteotronics", "start_char_pos": 661, "end_char_pos": 661 }, { "type": "A", "before": null, "after": ". Proteotronics is here here applied to state a comparative analysis of the electrical responses coming from type-1 and type-2 opsins. In particular, the procedure is used as an early investigation of a recently discovered family of opsins, the proteorhodopsins activated by blue light, BPRs. The results reveal some interesting and unexpected similarities between proteins of the two families, suggesting the global electrical response are not strictly linked to the class identity", "start_char_pos": 661, "end_char_pos": 661 } ]
[ 0, 335, 464 ]
1402.2599
1
In a discrete-time market, we study the problem of model-independent superhedging of exotic options under portfolio constraints. The superhedging portfolio consists of static positions in liquidly traded vanilla options, and a dynamic trading strategy , subject to certain constraints, on the risky asset. By the theory of Monge-Kantorovich optimal transport, we establish a superhedging duality, which admits a natural connection to convex risk measures. With the aid of this duality, we derive a model-independent version of the fundamental theorem of asset pricing under portfolio constraints . It is worth noting that our method covers a large class of Delta constraints as well as Gamma constraint.
In a discrete-time market, we study model-independent superhedging , while the semi-static superhedging portfolio consists of three parts: static positions in liquidly traded vanilla calls, static positions in other tradable, yet possibly less liquid, exotic options, and a dynamic trading strategy in risky assets under certain constraints. By considering the limit order book of each tradable exotic option and employing the Monge-Kantorovich theory of optimal transport, we establish a general superhedging duality, which admits a natural connection to convex risk measures. With the aid of this duality, we derive a model-independent version of the fundamental theorem of asset pricing . The notion "finite optimal arbitrage profit", weaker than no-arbitrage, is also introduced . It is worth noting that our method covers a large class of Delta constraints as well as Gamma constraint.
[ { "type": "D", "before": "the problem of", "after": null, "start_char_pos": 36, "end_char_pos": 50 }, { "type": "R", "before": "of exotic options under portfolio constraints. The", "after": ", while the semi-static", "start_char_pos": 82, "end_char_pos": 132 }, { "type": "A", "before": null, "after": "three", "start_char_pos": 173, "end_char_pos": 173 }, { "type": "A", "before": null, "after": "parts:", "start_char_pos": 174, "end_char_pos": 174 }, { "type": "A", "before": null, "after": "calls, static positions in other tradable, yet possibly less liquid, exotic", "start_char_pos": 219, "end_char_pos": 219 }, { "type": "R", "before": ", subject to certain constraints, on the risky asset. By the theory of", "after": "in risky assets under certain constraints. By considering the limit order book of each tradable exotic option and employing the", "start_char_pos": 260, "end_char_pos": 330 }, { "type": "A", "before": null, "after": "theory of", "start_char_pos": 349, "end_char_pos": 349 }, { "type": "A", "before": null, "after": "general", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "R", "before": "under portfolio constraints", "after": ". The notion \"finite optimal arbitrage profit\", weaker than no-arbitrage, is also introduced", "start_char_pos": 578, "end_char_pos": 605 } ]
[ 0, 128, 313, 465, 607 ]
1402.3030
1
In the past 20 years, momentum or trend following strategies have become an established part of the investor toolbox. We introduce a new way of analyzing momentum strategies by looking at the information ratio (IR, average return divided by standard deviation). We calculate the theoretical IR of a momentum strategy and show that if momentum is mainly due to the positive autocorrelation in returns, IR as a function of the portfolio formation period (look-back) is very different from momentum due to the drift (average return). The IR shows that for look-back periods of a few months, the investor is more likely to tap into autocorrelation. However, for look-back periods closer to 1 year, the investor is more likely to tap into the drift. We compare the historical data to the theoretical IR by carefully constructing stationary periods. The empirical study finds that there are periods/regimes where the autocorrelation is more important than the drift in explaining the IR (particularly pre-1975) . We conclude our study by applying our momentum strategy to the entire data set in order to contrast the difference between the stationary and the non-stationary data. Empirically, for the non-stationary data, we find damped oscillations for very long look-back periods which we model as a reversal to the mean growth rate.
In the past 20 years, momentum or trend following strategies have become an established part of the investor toolbox. We introduce a new way of analyzing momentum strategies by looking at the information ratio (IR, average return divided by standard deviation). We calculate the theoretical IR of a momentum strategy , and show that if momentum is mainly due to the positive autocorrelation in returns, IR as a function of the portfolio formation period (look-back) is very different from momentum due to the drift (average return). The IR shows that for look-back periods of a few months, the investor is more likely to tap into autocorrelation. However, for look-back periods closer to 1 year, the investor is more likely to tap into the drift. We compare the historical data to the theoretical IR by constructing stationary periods. The empirical study finds that there are periods/regimes where the autocorrelation is more important than the drift in explaining the IR (particularly pre-1975) and others where the drift is more important (mostly after 1975) . We conclude our study by applying our momentum strategy to 100 plus years of the Dow-Jones Industrial Average. We report damped oscillations on the IR for look-back periods of several years and model such oscilations as a reversal to the mean growth rate.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 317, "end_char_pos": 317 }, { "type": "D", "before": "carefully", "after": null, "start_char_pos": 802, "end_char_pos": 811 }, { "type": "A", "before": null, "after": "and others where the drift is more important (mostly after 1975)", "start_char_pos": 1006, "end_char_pos": 1006 }, { "type": "R", "before": "the entire data set in order to contrast the difference between the stationary and the non-stationary data. Empirically, for the non-stationary data, we find damped oscillations for very long", "after": "100 plus years of the Dow-Jones Industrial Average. We report damped oscillations on the IR for", "start_char_pos": 1068, "end_char_pos": 1259 }, { "type": "R", "before": "which we model", "after": "of several years and model such oscilations", "start_char_pos": 1278, "end_char_pos": 1292 } ]
[ 0, 117, 261, 531, 645, 745, 844, 1008, 1175 ]
1402.3562
1
We generalize Merton's framework by incorporating an insurable loss . Motivated by new insurance products, we allow not only the financial market but also the insurable loss to depend on the regime of the economy. An investor wants to select an optimal consumption, investment, and insurance policy that maximizes his expected total discounted utility of consumption over an infinite time horizon. For the case of hyperbolic absolute risk aversion (HARA) utility functions, we obtain the first explicit solutions for optimal consumption, investment, and insurance problem when there is regime switching. We determine that the optimal insurance contract is either no-insurance or deductible insurance, and calculate when it is optimal to buy insurance. The optimal policy depends strongly on the regime of the economy. Through an economic analysis, we calculate the advantage of buying insurance. We also observe that as long as optimal insurance is nonzero in one regime, investors gain benefits in all regimes from insurance.
We consider an investor who wants to select her/his optimal consumption, investment and insurance policies . Motivated by new insurance products, we allow not only the financial marke but also the insurable loss to depend on the regime of the economy. The objective of the investor is to maximize her/ his expected total discounted utility of consumption over an infinite time horizon. For the case of hyperbolic absolute risk aversion (HARA) utility functions, we obtain the first explicit solutions for simultaneous optimal consumption, investment, and insurance problems when there is regime switching. We determine that the optimal insurance contract is either no-insurance or deductible insurance, and calculate when it is optimal to buy insurance. The optimal policy depends strongly on the regime of the economy. Through an economic analysis, we calculate the advantage of buying insurance.
[ { "type": "R", "before": "generalize Merton's framework by incorporating an insurable loss", "after": "consider an investor who wants to select her/his optimal consumption, investment and insurance policies", "start_char_pos": 3, "end_char_pos": 67 }, { "type": "R", "before": "market", "after": "marke", "start_char_pos": 139, "end_char_pos": 145 }, { "type": "R", "before": "An investor wants to select an optimal consumption, investment, and insurance policy that maximizes", "after": "The objective of the investor is to maximize her/", "start_char_pos": 214, "end_char_pos": 313 }, { "type": "A", "before": null, "after": "simultaneous", "start_char_pos": 517, "end_char_pos": 517 }, { "type": "R", "before": "problem", "after": "problems", "start_char_pos": 565, "end_char_pos": 572 }, { "type": "D", "before": "insurance. We also observe that as long as optimal insurance is nonzero in one regime, investors gain benefits in all regimes from", "after": null, "start_char_pos": 886, "end_char_pos": 1016 } ]
[ 0, 69, 213, 397, 604, 752, 818, 896 ]
1402.3720
1
Consider an equity market with n stocks. The vector of proportions of the total market capitalization that belongs to each stock is called the market weights. Consider two portfolios, one is a passive buy-and-hold portfolio representing the entire market, and the other assigns a portfolio vector for each possible value of the market weights and requires trading to maintain this assignment. The evolution of stocks is taken to be any jump process in discrete time, or a path of any continuous semimartingale in continuous time. We do not make any stochastic modeling assumptions on the evolutions. We provide necessary and sufficient conditions on the assignment on portfolios to guarantee that, for all such evolutions in a sufficiently volatile market, the actively traded portfolio outperforms the buy-and-hold portfolio in the long run. This class of `relative arbitrage' portfolios were discovered by Fernholz and are called functionally generated portfolios. We show that, in an appropriate sense, a slight generalization of these are the only possible ones. Remarkably, such portfolios can be constructed using solutions of Monge-Kantorovich optimal transport problems on the unit simplex with a special choice of the cost function. We provide conditions under which these portfolios lead to statistical arbitrage in high-frequency trading. Our primary tool is a property of multidimensional functions that we call multiplicative cyclical monotonicity.
Consider an equity market with n stocks. The vector of proportions of the total market capitalization that belongs to each stock is called the market weights. Consider two portfolios, one is a passive buy-and-hold portfolio representing the entire market, and the other assigns a portfolio vector for each possible value of the market weights and requires trading to maintain this assignment. The evolution of stocks is taken to be any jump process in discrete time, or a path of any continuous semimartingale in continuous time. We do not make any stochastic modeling assumptions on the evolutions. We provide necessary and sufficient conditions on the assignment on portfolios to guarantee that, for all such evolutions in a diverse and sufficiently volatile market, the actively traded portfolio outperforms the buy-and-hold portfolio in the long run. This class of `relative arbitrage' portfolios were discovered by Fernholz and are called functionally generated portfolios. We prove that, in an appropriate sense, a slight generalization of these are the only possible ones. Remarkably, such portfolios correspond to solutions of Monge-Kantorovich optimal transport problems on the unit simplex with a cost function that can be described as log of the partition function. We also show how the presence of microstructure noise in stock price data leads to statistical arbitrage in high-frequency trading. Our primary tool is a property of multidimensional functions that we call multiplicative cyclical monotonicity.
[ { "type": "A", "before": null, "after": "diverse and", "start_char_pos": 727, "end_char_pos": 727 }, { "type": "R", "before": "show", "after": "prove", "start_char_pos": 971, "end_char_pos": 975 }, { "type": "R", "before": "can be constructed using", "after": "correspond to", "start_char_pos": 1096, "end_char_pos": 1120 }, { "type": "R", "before": "special choice of the cost", "after": "cost function that can be described as log of the partition", "start_char_pos": 1206, "end_char_pos": 1232 }, { "type": "R", "before": "provide conditions under which these portfolios lead", "after": "also show how the presence of microstructure noise in stock price data leads", "start_char_pos": 1246, "end_char_pos": 1298 } ]
[ 0, 40, 158, 392, 529, 599, 843, 967, 1067, 1242, 1350 ]
1402.3720
2
Consider an equity market with n stocks. The vector of proportions of the total market capitalization that belongs to each stock is called the market weights. Consider two portfolios, one is a passive buy-and-hold portfolio representing the entire market , and the other assigns a portfolio vector for each possible value of the market weights and requires trading to maintain this assignment. The evolution of stocks is taken to be any jump process in discrete time, or a path of any continuous semimartingale in continuous time. We do not make any stochastic modeling assumptions on the evolutions. We provide necessary and sufficient conditions on the assignment on portfolios to guarantee that, for all such evolutions in a diverse and sufficiently volatile market, the actively traded portfolio outperforms the buy-and-hold portfolio in the long run . This class of `relative arbitrage' portfolios were discovered by Fernholz and are called functionally generated portfolios. We prove that , in an appropriate sense , a slight generalization of these are the only possible ones. Remarkably, such portfolios correspond to solutions of Monge-Kantorovich optimal transport problems on the unit simplex with a cost function that can be described as log of the partition function. We also show how the presence of microstructure noise in stock price data leads to statistical arbitrage in high-frequency trading . Our primary tool is a property of multidimensional functions that we call multiplicative cyclical monotonicity .
Consider an equity market with n stocks. The vector of proportions of the total market capitalizations that belongs to each stock is called the market weight. The market weight defines a buy-and-hold portfolio called the market portfolio whose value represents the performance of the entire stock market. Consider a function that assigns a portfolio vector for each possible value of the market weight. Suppose we perform self-financing trading using this portfolio function. We study the problem of characterizing functions such that the resulting portfolio will outperform the market portfolio in the long run under the conditions of diversity and sufficient volatility. No other assumption on the future behavior of stock prices is made. We prove that the only solutions are functionally generated portfolios in the sense of Fernholz. A second characterization is given as the optimal maps of a remarkable optimal transport problem. Both these characterizations follow from a novel property of portfolios called multiplicative cyclical monotonicity. Using this framework we show how the presence of microstructure noise in stock price data leads to statistical arbitrage in high-frequency trading .
[ { "type": "R", "before": "capitalization", "after": "capitalizations", "start_char_pos": 87, "end_char_pos": 101 }, { "type": "R", "before": "weights. Consider two portfolios, one is a passive", "after": "weight. The market weight defines a", "start_char_pos": 150, "end_char_pos": 200 }, { "type": "R", "before": "representing the entire market , and the other", "after": "called the market portfolio whose value represents the performance of the entire stock market. Consider a function that", "start_char_pos": 224, "end_char_pos": 270 }, { "type": "R", "before": "weights and requires trading to maintain this assignment. The evolution of stocks is taken to be any jump process in discrete time, or a path of any continuous semimartingale in continuous time. We do not make any stochastic modeling assumptions on the evolutions. We provide necessary and sufficient conditions on the assignment on portfolios to guarantee that, for all such evolutions in a diverse and sufficiently volatile market, the actively traded portfolio outperforms the buy-and-hold", "after": "weight. Suppose we perform self-financing trading using this portfolio function. We study the problem of characterizing functions such that the resulting portfolio will outperform the market", "start_char_pos": 336, "end_char_pos": 828 }, { "type": "R", "before": ". This class of `relative arbitrage' portfolios were discovered by Fernholz and are called functionally generated portfolios.", "after": "under the conditions of diversity and sufficient volatility. No other assumption on the future behavior of stock prices is made.", "start_char_pos": 855, "end_char_pos": 980 }, { "type": "R", "before": ", in an appropriate sense , a slight generalization of these are the only possible ones. Remarkably, such portfolios correspond to solutions of Monge-Kantorovich optimal transport problems on the unit simplex with a cost function that can be described as log of the partition function. We also", "after": "the only solutions are functionally generated portfolios in the sense of Fernholz. A second characterization is given as the optimal maps of a remarkable optimal transport problem. Both these characterizations follow from a novel property of portfolios called multiplicative cyclical monotonicity. Using this framework we", "start_char_pos": 995, "end_char_pos": 1288 }, { "type": "D", "before": ". Our primary tool is a property of multidimensional functions that we call multiplicative cyclical monotonicity", "after": null, "start_char_pos": 1412, "end_char_pos": 1524 } ]
[ 0, 40, 158, 393, 530, 600, 856, 980, 1083, 1280, 1413 ]
1402.3720
3
Consider an equity market with n stocks. The vector of proportions of the total market capitalizations that belongs to each stock is called the market weight. The market weight defines a buy-and-hold portfolio called the market portfolio whose value represents the performance of the entire stock market. Consider a function that assigns a portfolio vector for each possible value of the market weight . Suppose we perform self-financing trading using this portfolio function. We study the problem of characterizing functions such that the resulting portfolio will outperform the market portfolio in the long run under the conditions of diversity and sufficient volatility. No other assumption on the future behavior of stock prices is made. We prove that the only solutions are functionally generated portfolios in the sense of Fernholz. A second characterization is given as the optimal maps of a remarkable optimal transport problem. Both these characterizations follow from a novel property of portfolios called multiplicative cyclical monotonicity . Using this framework we show how the presence of microstructure noise in stock price data leads to statistical arbitrage in high-frequency trading .
Consider an equity market with n stocks. The vector of proportions of the total market capitalizations that belong to each stock is called the market weight. The market weight defines the market portfolio which is a buy-and-hold portfolio representing the performance of the entire stock market. Consider a function that assigns a portfolio vector to each possible value of the market weight , and we perform self-financing trading using this portfolio function. We study the problem of characterizing functions such that the resulting portfolio will outperform the market portfolio in the long run under the conditions of diversity and sufficient volatility. No other assumption on the future behavior of stock prices is made. We prove that the only solutions are functionally generated portfolios in the sense of Fernholz. A second characterization is given as the optimal maps of a remarkable optimal transport problem. Both characterizations follow from a novel property of portfolios called multiplicative cyclical monotonicity .
[ { "type": "R", "before": "belongs", "after": "belong", "start_char_pos": 108, "end_char_pos": 115 }, { "type": "A", "before": null, "after": "the market portfolio which is", "start_char_pos": 185, "end_char_pos": 185 }, { "type": "R", "before": "called the market portfolio whose value represents the", "after": "representing the", "start_char_pos": 211, "end_char_pos": 265 }, { "type": "R", "before": "for", "after": "to", "start_char_pos": 358, "end_char_pos": 361 }, { "type": "R", "before": ". Suppose", "after": ", and", "start_char_pos": 403, "end_char_pos": 412 }, { "type": "D", "before": "these", "after": null, "start_char_pos": 943, "end_char_pos": 948 }, { "type": "D", "before": ". Using this framework we show how the presence of microstructure noise in stock price data leads to statistical arbitrage in high-frequency trading", "after": null, "start_char_pos": 1054, "end_char_pos": 1202 } ]
[ 0, 40, 158, 305, 477, 674, 742, 839, 937, 1055 ]
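Doc 1402.3720 appears three times, at revision_depth 1, 2 and 3, and its after_revision at each depth matches the before_revision at the next depth, so records sharing a doc_id appear to chain. Assuming that holds in general, the replay sketch from earlier composes across depths; replay_revisions is again a hypothetical name:

```python
def replay_revisions(first_before: str, actions_by_depth: list) -> str:
    """Chain apply_edit_actions (defined in the earlier sketch) across
    successive revision depths of one doc_id.

    Assumes the depth-k output feeds depth k+1, as observed for
    1402.3720; using the stored before_revision at each depth instead
    is safer, since offsets were recorded against those exact strings.
    """
    text = first_before
    for actions in actions_by_depth:
        text = apply_edit_actions(text, actions)
    return text
```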
1402.3725
1
The issue of constructing a risk minimizing hedge with additional constraints on the shortfall risk is examined. Several classical risk minimizing problems have been adapted to the new setting and solved. The existence and specific forms of optimal strategies in a general semimartingale market model with the use of conditional statistical tests have been proven. The quantile hedging method applied in [FL1] and [FL2] as well as the classical Neyman-Pearson lemma have been generalized. Optimal hedging strategies with shortfall constraints in the Black-Scholes and exponential Poisson model have been explicitly determined.
The issue of constructing a risk minimizing hedge under an additional almost-surely type constraint on the shortfall profile is examined. Several classical risk minimizing problems are adapted to the new setting and solved. In particular, the bankruptcy threat of optimal strategies appearing in the classical risk minimizing setting is ruled out. The existence and concrete forms of optimal strategies in a general semimartingale market model with the use of conditional statistical tests are proven. The well known quantile hedging method [FL2] as well as the classical Neyman-Pearson lemma are generalized. Optimal hedging strategies with shortfall constraints in the Black-Scholes and exponential Poisson model are explicitly determined.
[ { "type": "R", "before": "with additional constraints", "after": "under an additional almost-surely type constraint", "start_char_pos": 50, "end_char_pos": 77 }, { "type": "R", "before": "risk", "after": "profile", "start_char_pos": 95, "end_char_pos": 99 }, { "type": "R", "before": "have been", "after": "are", "start_char_pos": 156, "end_char_pos": 165 }, { "type": "A", "before": null, "after": "In particular, the bankruptcy threat of optimal strategies appearing in the classical risk minimizing setting is ruled out.", "start_char_pos": 205, "end_char_pos": 205 }, { "type": "R", "before": "specific", "after": "concrete", "start_char_pos": 224, "end_char_pos": 232 }, { "type": "R", "before": "have been", "after": "are", "start_char_pos": 348, "end_char_pos": 357 }, { "type": "A", "before": null, "after": "well known", "start_char_pos": 370, "end_char_pos": 370 }, { "type": "D", "before": "applied in \\mbox{%DIFAUXCMD FL1", "after": null, "start_char_pos": 395, "end_char_pos": 426 }, { "type": "R", "before": "have been", "after": "are", "start_char_pos": 528, "end_char_pos": 537 }, { "type": "R", "before": "have been", "after": "are", "start_char_pos": 656, "end_char_pos": 665 } ]
[ 0, 112, 204, 365, 550 ]
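The offsets and quoted "before" snippets in each edit_actions block are tied to the exact before_revision string, including extraction residue such as the latexdiff %DIFAUXCMD markup cleaned out of the 1402.3725 record above, so any text cleanup leaves the recorded spans stale. A small consistency check (a hypothetical helper, not part of the dataset) flags mismatched actions before replaying them:

```python
def stale_actions(before: str, actions: list) -> list:
    """Return the actions whose recorded span no longer matches `before`.

    For "R" and "D" actions the slice before[start:end] should equal the
    stored "before" snippet; "A" actions carry no source span to check.
    A non-empty result means the offsets are stale (e.g. after cleanup).
    """
    return [act for act in actions
            if act.get("before") is not None
            and before[act["start_char_pos"]:act["end_char_pos"]] != act["before"]]
```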
1402.4021
1
In small volumes, the kinetics of filamentous protein self-assembly is expected to show significant variability, arising from intrinsic molecular noise. This is not accounted for in existing deterministic models. We introduce a simple stochastic model including nucleation and autocatalytic growth via elongation and fragmentation, which allows us to predict the effects of molecular noise on the kinetics of autocatalytic self-assembly. We derive an analytic expression for the lag-time distribution, which agrees well with existing experimental results for the aggregation of bovine insulin into amyloid fibrils. Our solution provides a way to interpret small-volume experiments on fibril formation, providing insight into the mechanisms at play in early-stage aggregation .
In small volumes, the kinetics of filamentous protein self-assembly is expected to show significant variability, arising from intrinsic molecular noise. This is not accounted for in existing deterministic models. We introduce a simple stochastic model including nucleation and autocatalytic growth via elongation and fragmentation, which allows us to predict the effects of molecular noise on the kinetics of autocatalytic self-assembly. We derive an analytic expression for the lag-time distribution, which agrees well with experimental results for the fibrillation of bovine insulin . Our expression decomposes the lag time variability into contributions from primary nucleation and autocatalytic growth and reveals how each of these scales with the key kinetic parameters. Our analysis shows that significant lag-time variability can arise from both primary nucleation and from autocatalytic growth and should provide a way to extract mechanistic information on early-stage aggregation from small-volume experiments .
[ { "type": "D", "before": "existing", "after": null, "start_char_pos": 525, "end_char_pos": 533 }, { "type": "R", "before": "aggregation", "after": "fibrillation", "start_char_pos": 563, "end_char_pos": 574 }, { "type": "R", "before": "into amyloid fibrils. Our solution provides", "after": ". Our expression decomposes the lag time variability into contributions from primary nucleation and autocatalytic growth and reveals how each of these scales with the key kinetic parameters. Our analysis shows that significant lag-time variability can arise from both primary nucleation and from autocatalytic growth and should provide", "start_char_pos": 593, "end_char_pos": 636 }, { "type": "R", "before": "interpret small-volume experiments on fibril formation, providing insight into the mechanisms at play in", "after": "extract mechanistic information on", "start_char_pos": 646, "end_char_pos": 750 }, { "type": "A", "before": null, "after": "from small-volume experiments", "start_char_pos": 775, "end_char_pos": 775 } ]
[ 0, 152, 212, 437, 614 ]
1402.4171
1
In economic and financial networks, the strength (total value of the connections) of a given node has always an important economic meaning, such as the size of supply and demand, import and export, or financial exposure. Constructing null models of networks matching the observed strengths of all nodes is crucial in order to either detect interesting deviations of an empirical network from economically meaningful benchmarks or reconstruct the most likely structure of an economic network when the latter is unknown. However, several studies have proved that real economic networks are topologically very different from networks inferred only from node strengths. Here we provide a detailed analysis for the World Trade Web (WTW) by comparing it to an enhanced null model that simultaneously reproduces the strength and the number of connections of each node. We study several temporal snapshots and different aggregation levels (commodity classes) of the WTW and systematically find that the observed properties are extremely well reproduced by our model. This allows us to introduce the concept of extensive and intensive bias, defined as a measurable tendency of the network to prefer either the formation of new links or the reinforcement of existing ones. We discuss the possible economic interpretation in terms of trade margins.
In economic and financial networks, the strength (total value of the connections) of a given node has always an important economic meaning, such as the size of supply and demand, import and export, or financial exposure. Constructing null models of networks matching the observed strengths of all nodes is crucial in order to either detect interesting deviations of an empirical network from economically meaningful benchmarks or reconstruct the most likely structure of an economic network when the latter is unknown. However, several studies have proved that real economic networks and multiplexes are topologically very different from configurations inferred only from node strengths. Here we provide a detailed analysis for the World Trade Multiplex by comparing it to an enhanced null model that we recently introduced in order to simultaneously reproduce the strength and the degree of each node. We study several temporal snapshots and different layers (commodity classes) of the multiplex and systematically find that the observed properties are extremely well reproduced by our model. This allows us to introduce the concept of extensive and intensive bias, defined as a measurable tendency of the network to prefer either the formation of new links or the reinforcement of existing ones. We discuss the possible economic interpretation in terms of trade margins.
[ { "type": "A", "before": null, "after": "and multiplexes", "start_char_pos": 584, "end_char_pos": 584 }, { "type": "R", "before": "networks", "after": "configurations", "start_char_pos": 623, "end_char_pos": 631 }, { "type": "R", "before": "Web (WTW)", "after": "Multiplex", "start_char_pos": 723, "end_char_pos": 732 }, { "type": "R", "before": "simultaneously reproduces", "after": "we recently introduced in order to simultaneously reproduce", "start_char_pos": 780, "end_char_pos": 805 }, { "type": "R", "before": "number of connections of", "after": "degree of", "start_char_pos": 827, "end_char_pos": 851 }, { "type": "R", "before": "aggregation levels", "after": "layers", "start_char_pos": 913, "end_char_pos": 931 }, { "type": "R", "before": "WTW", "after": "multiplex", "start_char_pos": 959, "end_char_pos": 962 } ]
[ 0, 220, 518, 666, 862, 1059, 1263 ]
1402.4171
2
In economic and financial networks, the strength (total value of the connections) of a given node has always an important economic meaning, such as the size of supply and demand, import and export, or financial exposure. Constructing null models of networks matching the observed strengths of all nodes is crucial in order to either detect interesting deviations of an empirical network from economically meaningful benchmarks or reconstruct the most likely structure of an economic network when the latter is unknown. However, several studies have proved that real economic networks and multiplexes are topologically very different from configurations inferred only from node strengths. Here we provide a detailed analysis for the World Trade Multiplex by comparing it to an enhanced null model that we recently introduced in order to simultaneously reproduce the strength and the degree of each node. We study several temporal snapshots and different layers (commodity classes) of the multiplex and systematically find that the observed properties are extremely well reproduced by our model. This allows us to introduce the concept of extensive and intensive bias, defined as a measurable tendency of the network to prefer either the formation of new links or the reinforcement of existing ones. We discuss the possible economic interpretation in terms of trade margins .
In economic and financial networks, the strength of each node has always an important economic meaning, such as the size of supply and demand, import and export, or financial exposure. Constructing null models of networks matching the observed strengths of all nodes is crucial in order to either detect interesting deviations of an empirical network from economically meaningful benchmarks or reconstruct the most likely structure of an economic network when the latter is unknown. However, several studies have proved that real economic networks and multiplexes are topologically very different from configurations inferred only from node strengths. Here we provide a detailed analysis of the World Trade Multiplex by comparing it to an enhanced null model that simultaneously reproduces the strength and the degree of each node. We study several temporal snapshots and almost one hundred layers (commodity classes) of the multiplex and find that the observed properties are systematically well reproduced by our model. Our formalism allows us to introduce the (static) concept of extensive and intensive bias, defined as a measurable tendency of the network to prefer either the formation of extra links or the reinforcement of link weights, with respect to a reference case where only strengths are enforced. Our findings complement the existing economic literature on (dynamic) intensive and extensive trade margins. More in general, they show that real-world multiplexes can be strongly shaped by layer-specific local constraints .
[ { "type": "R", "before": "(total value of the connections) of a given", "after": "of each", "start_char_pos": 49, "end_char_pos": 92 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 724, "end_char_pos": 727 }, { "type": "R", "before": "we recently introduced in order to simultaneously reproduce", "after": "simultaneously reproduces", "start_char_pos": 801, "end_char_pos": 860 }, { "type": "R", "before": "different", "after": "almost one hundred", "start_char_pos": 943, "end_char_pos": 952 }, { "type": "D", "before": "systematically", "after": null, "start_char_pos": 1001, "end_char_pos": 1015 }, { "type": "R", "before": "extremely", "after": "systematically", "start_char_pos": 1054, "end_char_pos": 1063 }, { "type": "R", "before": "This", "after": "Our formalism", "start_char_pos": 1094, "end_char_pos": 1098 }, { "type": "A", "before": null, "after": "(static)", "start_char_pos": 1126, "end_char_pos": 1126 }, { "type": "R", "before": "new", "after": "extra", "start_char_pos": 1250, "end_char_pos": 1253 }, { "type": "R", "before": "existing ones. We discuss the possible economic interpretation in terms of trade margins", "after": "link weights, with respect to a reference case where only strengths are enforced. Our findings complement the existing economic literature on (dynamic) intensive and extensive trade margins. More in general, they show that real-world multiplexes can be strongly shaped by layer-specific local constraints", "start_char_pos": 1284, "end_char_pos": 1372 } ]
[ 0, 220, 518, 687, 902, 1093, 1298 ]
1402.4547
1
Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular and environmental perturbations. Disentangling direct from indirect effects while adjusting for unwanted variability should help us moving from current parts list of molecular components to understanding how these components work together . In this paper we approach this challenge with mixed graphical Markov models and higher-order conditional independences . To unlock this methodological framework we derive the parameters for an exact likelihood ratio test and demonstrate its fundamental relevance for higher-order conditioning on continuous expression and discrete genotypes . These models show that additive genetic effects propagate through the network as function of gene-gene correlations. The estimation of the eQTL network underlying a well-studied yeast data set using our methodology leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. More importantly, it reveals that the larger genetic effects are trans-acting on genes located in a different chromosome and with a high number of connections to other genes in the network .
Expression quantitative trait loci (eQTL) mapping constitutes a challenging problem due to, among other reasons, the high-dimensional multivariate nature of gene expression traits. Next to the expression heterogeneity produced by confounding factors and other sources of unwanted variation, indirect effects spread throughout genes as a result of genetic, molecular and environmental perturbations. From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype . In this paper we approach this challenge with mixed graphical Markov models , higher-order conditional independences and q-order correlation graphs . These models show that additive genetic effects propagate through the network as function of gene-gene correlations. Our estimation of the eQTL network underlying a well-studied yeast data set leads to a sparse structure with more direct genetic and regulatory associations that enable a straightforward comparison of the genetic control of gene expression across chromosomes. Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes .
[ { "type": "R", "before": "Disentangling direct from indirect effects while adjusting for unwanted variability should help us moving from current parts list of molecular components to understanding how these components work together", "after": "From a multivariate perspective one would like to adjust for the effect of all of these factors to end up with a network of direct associations connecting the path from genotype to phenotype", "start_char_pos": 399, "end_char_pos": 604 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 683, "end_char_pos": 686 }, { "type": "R", "before": ". To unlock this methodological framework we derive the parameters for an exact likelihood ratio test and demonstrate its fundamental relevance for higher-order conditioning on continuous expression and discrete genotypes", "after": "and q-order correlation graphs", "start_char_pos": 726, "end_char_pos": 947 }, { "type": "R", "before": "The", "after": "Our", "start_char_pos": 1067, "end_char_pos": 1070 }, { "type": "D", "before": "using our methodology", "after": null, "start_char_pos": 1143, "end_char_pos": 1164 }, { "type": "R", "before": "More importantly, it reveals that the larger genetic effects are trans-acting on genes located in a different chromosome and with a high number of connections to other genes in the network", "after": "Interestingly, it also reveals that eQTLs explain most of the expression variability of network hub genes", "start_char_pos": 1349, "end_char_pos": 1537 } ]
[ 0, 180, 398, 606, 727, 949, 1066, 1348 ]
1402.4683
1
We study the left tail behavior of the logarithm of the distribution function of a sum of dependent positive random variables . Asymptotics are computed under the assumption that the marginal distribution functions decay slowly at zero, meaning that the their logarithms are slowly varying functions. This includes parametric families such as log-normal, gamma, Weibull and many distributions from the financial mathematics literature. We show that the logarithmic asymptotics of the sum in question depend on a characteristic of the copula of the random variables which we term weak lower tail dependence function , and which is computed explicitly for several families of copulas in this paper. In applications, our results may be used to quantify the diversification of long-only portfolios of financial assets with respect to extreme losses . As an illustration, we compute the left tail asymptotics for a portfolio of options in the multidimensional Black-Scholes model.
We study the left tail behavior of the distribution function of a sum of dependent positive random variables , with a special focus on the setting of asymptotic independence. Asymptotics at the logarithmic scale are computed under the assumption that the marginal distribution functions decay slowly at zero, meaning that their logarithms are slowly varying functions. This includes parametric families such as log-normal, gamma, Weibull and many distributions from the financial mathematics literature. We show that the asymptotics of the sum depend on a characteristic of the copula of the random variables which we term weak lower tail dependence function . We then compute this function explicitly for several families of copulas , such as the Gaussian copula, the copulas of Gaussian mean-variance mixtures and a class of Archimedean copulas . As an illustration, we compute the left tail asymptotics for a portfolio of call options in the multidimensional Black-Scholes model.
[ { "type": "D", "before": "logarithm of the", "after": null, "start_char_pos": 39, "end_char_pos": 55 }, { "type": "R", "before": ". Asymptotics", "after": ", with a special focus on the setting of asymptotic independence. Asymptotics at the logarithmic scale", "start_char_pos": 126, "end_char_pos": 139 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 250, "end_char_pos": 253 }, { "type": "D", "before": "logarithmic", "after": null, "start_char_pos": 453, "end_char_pos": 464 }, { "type": "D", "before": "in question", "after": null, "start_char_pos": 488, "end_char_pos": 499 }, { "type": "R", "before": ", and which is computed", "after": ". We then compute this function", "start_char_pos": 615, "end_char_pos": 638 }, { "type": "R", "before": "in this paper. In applications, our results may be used to quantify the diversification of long-only portfolios of financial assets with respect to extreme losses", "after": ", such as the Gaussian copula, the copulas of Gaussian mean-variance mixtures and a class of Archimedean copulas", "start_char_pos": 682, "end_char_pos": 844 }, { "type": "A", "before": null, "after": "call", "start_char_pos": 923, "end_char_pos": 923 } ]
[ 0, 300, 435, 696, 846 ]
1402.4683
2
We study the left tail behavior of the distribution function of a sum of dependent positive random variables, with a special focus on the setting of asymptotic independence. Asymptotics at the logarithmic scale are computed under the assumption that the marginal distribution functions decay slowly at zero, meaning that their logarithms are slowly varying functions. This includes parametric families such as log-normal, gamma, Weibull and many distributions from the financial mathematics literature. We show that the asymptotics of the sum depend on a characteristic of the copula of the random variables which we term weak lower tail dependence function. We then compute this function explicitly for several families of copulas, such as the Gaussian copula, the copulas of Gaussian mean-variance mixtures and a class of Archimedean copulas. As an illustration, we compute the left tail asymptotics for a portfolio of call options in the multidimensional Black-Scholes model .
We introduce a new functional measure of tail dependence for weakly dependent (asymptotically independent) random vectors, termed weak tail dependence function. The new measure is defined at the level of copulas and we compute it for several copula families such as the Gaussian copula, copulas of a class of Gaussian mixture models, certain Archimedean copulas and extreme value copulas. The new measure allows to quantify the tail behavior of certain functionals of weakly dependent random vectors at the log scale .
[ { "type": "R", "before": "study the left tail behavior of the distribution function of a sum of dependent positive random variables, with a special focus on the setting of asymptotic independence. Asymptotics at the logarithmic scale are computed under the assumption that the marginal distribution functions decay slowly at zero, meaning that their logarithms are slowly varying functions. This includes parametric families such as log-normal, gamma, Weibull and many distributions from the financial mathematics literature. We show that the asymptotics of the sum depend on a characteristic of the copula of the random variables which we term weak lower", "after": "introduce a new functional measure of tail dependence for weakly dependent (asymptotically independent) random vectors, termed weak", "start_char_pos": 3, "end_char_pos": 632 }, { "type": "R", "before": "We then compute this function explicitly for several families of copulas,", "after": "The new measure is defined at the level of copulas and we compute it for several copula families", "start_char_pos": 659, "end_char_pos": 732 }, { "type": "R", "before": "the copulas of Gaussian mean-variance mixtures and", "after": "copulas of", "start_char_pos": 762, "end_char_pos": 812 }, { "type": "R", "before": "Archimedean copulas. As an illustration, we compute the left tail asymptotics for a portfolio of call options in the multidimensional Black-Scholes model", "after": "Gaussian mixture models, certain Archimedean copulas and extreme value copulas. The new measure allows to quantify the tail behavior of certain functionals of weakly dependent random vectors at the log scale", "start_char_pos": 824, "end_char_pos": 977 } ]
[ 0, 173, 367, 502, 658, 844 ]
1402.4783
1
The recent financial crisis illustrated the need for a thorough, functional understanding of systemic risk in strongly interconnected financial structures. Dynamic processes on complex networks being intrinsically difficult, most recent studies of this problem have relied on numerical simulations. In this paper, we report analytical results in a network model of interbank lending based on directly relevant financial parameters such as interest rates and leverage ratios. Using a mean-field approach, we obtain a closed-form formula for the "critical degree", viz. the number of creditors per bank below which an individual shock can cascade throughout the network. We relate the failures distribution (probability that a single shock induces F failures) to the degree distribution (probability that a bank has k creditors), showing in particular that the former is fat-tailed whenever the latter is. Remarkably, our criterion for the onset of contagion turns out to be isomorphic to a simple rule for the evolution of cooperation on graphs and social networks, supporting recent calls for a methodological rapprochement between finance and ecology.
The 2008 financial crisis illustrated the need for a thorough, functional understanding of systemic risk in strongly interconnected financial structures. Dynamic processes on complex networks being intrinsically difficult, most recent studies of this problem have relied on numerical simulations. Here we report analytical results in a network model of interbank lending based on directly relevant financial parameters , such as interest rates and leverage ratios. Using a mean-field approach, we obtain a closed-form formula for the "critical degree", viz. the number of creditors per bank below which an individual shock can propagate throughout the network. We relate the failures distribution (probability that a single shock induces F failures) to the degree distribution (probability that a bank has k creditors), showing in particular that the former is fat-tailed whenever the latter is. Our criterion for the onset of contagion turns out to be isomorphic to the condition for cooperation to evolve on graphs and social networks, as recently formulated in evolutionary game theory. This remarkable connection supports recent calls for a methodological rapprochement between finance and ecology.
[ { "type": "R", "before": "recent", "after": "2008", "start_char_pos": 4, "end_char_pos": 10 }, { "type": "R", "before": "In this paper,", "after": "Here", "start_char_pos": 299, "end_char_pos": 313 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 431, "end_char_pos": 431 }, { "type": "R", "before": "cascade", "after": "propagate", "start_char_pos": 638, "end_char_pos": 645 }, { "type": "R", "before": "Remarkably, our", "after": "Our", "start_char_pos": 905, "end_char_pos": 920 }, { "type": "R", "before": "a simple rule for the evolution of cooperation", "after": "the condition for cooperation to evolve", "start_char_pos": 988, "end_char_pos": 1034 }, { "type": "R", "before": "supporting", "after": "as recently formulated in evolutionary game theory. This remarkable connection supports", "start_char_pos": 1066, "end_char_pos": 1076 } ]
[ 0, 155, 298, 475, 669, 904 ]
1402.5062
1
Motivation In many key applications in the field of metabolomics, such as toxicology or nutrigenomics, it is of interest to profile and detect physiological changes in metabolic pathways . For this purpose, it is useful to build a comprehensive graphical representation which shows metabolic processes occurring in organism using networks. To model these systems it is possible to describe both reactions and relations among enzymes and metobolites . In this way, analysis of possible changes or perturbations impact throughout the network are easier to understand, detect and predict. To address this problem, we develop a library to build metabolic networks starting from a list of compounds. Results We release the MetaboX library, an open source PHP framework for developing metabolic networks from a set of compounds. This library provides easy access to the Kyoto Encyclopedia for Genes and Genomes (KEGG) database using its RESTful Application Programming Interfaces (APIs), and methods to enhance manipulation of the information returned from KEGG webservice. MetaboX includes methods to extract information about a resource of interest (e.g. metabolite, reaction and enzyme) for further processing and storing purposes. MetaboX is modular, thus developers can contribute with alternative implementations or extensions. Each component of the library is designed with minimum dependency on other components. Supplementary information available in Table 1. Availability The MetaboX library is available under the AGPL license on gitHub repository URL
In many key applications of metabolomics, such as toxicology or nutrigenomics, it is of interest to profile and detect changes in metabolic processes, usually represented in the form of pathways. As an alternative, a broader point of view would enable investigators to better understand the relations between entities that exist in different processes. Therefore, relating a possible perturbation to several known processes represents a new approach to this field of study. We propose to use a network representation of metabolism in terms of reactants, enzyme and metabolite . To model these systems it is possible to describe both reactions and relations among enzymes and metabolites . In this way, analysis of the impact of changes in some metabolites or enzymes on different processes are easier to understand, detect and predict. Results: We release the MetaboX library, an open source PHP framework for developing metabolic networks from a set of compounds. This library uses data stored in Kyoto Encyclopedia for Genes and Genomes (KEGG) database using its RESTful Application Programming Interfaces (APIs), and methods to enhance manipulation of the information returned from KEGG webservice. The MetaboX library includes methods to extract information about a resource of interest (e.g. metabolite, reaction and enzyme) and to build reactants networks, bipartite enzyme-metabolite and unipartite enzyme networks. These networks can be exported in different formats for data visualization with standard tools. As a case study, the networks built from a subset of the Glycolysis pathway are described and discussed. Conclusions: The advantages of using such a library imply the ability to model complex systems with few starting information represented by a collection of metabolites KEGG IDs.
[ { "type": "D", "before": "Motivation", "after": null, "start_char_pos": 0, "end_char_pos": 10 }, { "type": "D", "before": "in the field", "after": null, "start_char_pos": 36, "end_char_pos": 48 }, { "type": "D", "before": "physiological", "after": null, "start_char_pos": 143, "end_char_pos": 156 }, { "type": "R", "before": "pathways", "after": "processes, usually represented in the form of pathways. As an alternative, a broader point of view would enable investigators to better understand the relations between entities that exist in different processes. Therefore, relating a possible perturbation to several known processes represents a new approach to this field of study. We propose to use a network representation of metabolism in terms of reactants, enzyme and metabolite", "start_char_pos": 178, "end_char_pos": 186 }, { "type": "D", "before": "For this purpose, it is useful to build a comprehensive graphical representation which shows metabolic processes occurring in URLanism using networks.", "after": null, "start_char_pos": 189, "end_char_pos": 339 }, { "type": "R", "before": "metobolites", "after": "metabolites", "start_char_pos": 437, "end_char_pos": 448 }, { "type": "R", "before": "possible changes or perturbations impact throughout the network", "after": "the impact of changes in some metabolites or enzymes on different processes", "start_char_pos": 476, "end_char_pos": 539 }, { "type": "R", "before": "To address this problem, we develop a library to build metabolic networks starting from a list of compounds. Results", "after": "Results:", "start_char_pos": 586, "end_char_pos": 702 }, { "type": "R", "before": "provides easy access to the", "after": "uses data stored in", "start_char_pos": 836, "end_char_pos": 863 }, { "type": "R", "before": "MetaboX", "after": "The MetaboX library", "start_char_pos": 1068, "end_char_pos": 1075 }, { "type": "R", "before": "for further processing and storing purposes. MetaboX is modular, thus developers can contribute with alternative implementations or extensions. Each component of the library is designed with minimum dependency on other components. Supplementary information available in Table 1. Availability The MetaboX library is available under the AGPL license on gitHub repository URL", "after": "and to build reactants networks, bipartite enzyme-metabolite and unipartite enzyme networks. These networks can be exported in different formats for data visualization with standard tools. As a case study, the networks built from a subset of the Glycolysis pathway are described and discussed. Conclusions: The advantages of using such a library imply the ability to model complex systems with few starting information represented by a collection of metabolites KEGG IDs.", "start_char_pos": 1184, "end_char_pos": 1556 } ]
[ 0, 188, 339, 450, 585, 694, 822, 1067, 1228, 1327, 1414, 1462 ]
1402.5214
1
Although the phrase "ontogeny recapitulates phylogeny'' turned out to be incorrect, the search for possible relationships between development and evolution still gathers much attention. Recently, dynamical-systems analysis has proven to be relevant to both development and evolution, and it may therefore provide a link between the two. Using extensive simulations to evolve gene regulation networks that shape morphogenesis, we observed remarkable congruence between development and evolution: Both consisted of the same successive epochs to shape stripes, and good agreement was observed for the ordering as well as the topology of branching of stripes between the two. This congruence is explained by the agreement of bifurcations in dynamical-systems theory between evolution and development, where slowly varying gene-expression levels work as emergent control parameters. In terms of the gene regulation networks, this congruence is understood as the successive addition of downstream modules, either as feedforward or feedback , while the upstream feedforward network shapes the boundary condition for the downstream dynamics, based on the maternal morphogen gradient. Acquisition of a novel developmental mode was due to mutational change in the upstream network to alter the boundary condition . Our results provide a fresh perspective on evolution-development relationship, as well as on the acquisition of developmental novelty .
Search for possible relationships between phylogeny and ontogeny is one of the most important issues in the field of evolutionary developmental biology. By representing developmental dynamics of spatially located cells with gene expression dynamics with cell-to-cell interaction under external morphogen gradient, evolved are gene regulation networks under mutation and selection with the fitness to approach a prescribed spatial pattern of expressed genes. For most of thousands of numerical evolution experiments, evolution of pattern over generations and development of pattern by an evolved network exhibit remarkable congruence. Here, both the pattern dynamics consist of several epochs to form successive stripe formations between quasi-stationary regimes. In evolution, the regimes are generations needed to hit relevant mutations, while in development, they are due to the emergence of slowly varying expression that controls the pattern change. Successive pattern changes are thus generated, which are regulated by successive combinations of feedback or feedforward regulations under the upstream feedforward network that reads the morphogen gradient. By using a pattern generated by the upstream feedforward network as a boundary condition, downstream networks form later stripe patterns. These epochal changes in development and evolution are represented as same bifurcations in dynamical-systems theory, and this agreement of bifurcations lead to the evolution-development congruences. Violation of the evolution-development congruence, observed exceptionally, is shown to be originated in alteration of the boundary due to mutation at the upstream feedforward network . Our results provide a new look on developmental stages, punctuated equilibrium, developmental bottlenecks, and evolutionary acquisition of novelty in morphogenesis .
[ { "type": "R", "before": "Although the phrase \"ontogeny recapitulates phylogeny'' turned out to be incorrect, the search", "after": "Search", "start_char_pos": 0, "end_char_pos": 94 }, { "type": "R", "before": "development and evolution still gathers much attention. Recently, dynamical-systems analysis has proven to be relevant to both development and evolution, and it may therefore provide a link between the two. Using extensive simulations to evolve gene regulation networks that shape morphogenesis, we observed remarkable congruence between development and evolution: Both consisted of the same successive epochs to shape stripes, and good agreement was observed for the ordering as well as the topology of branching of stripes between the two. This congruence is explained by", "after": "phylogeny and ontogeny is one of the most important issues in the field of evolutionary developmental biology. By representing developmental dynamics of spatially located cells with gene expression dynamics with cell-to-cell interaction under external morphogen gradient, evolved are gene regulation networks under mutation and selection with the fitness to approach a prescribed spatial pattern of expressed genes. For most of thousands of numerical evolution experiments, evolution of pattern over generations and development of pattern by an evolved network exhibit remarkable congruence. Here, both the pattern dynamics consist of several epochs to form successive stripe formations between quasi-stationary regimes. In evolution, the regimes are generations needed to hit relevant mutations, while in development, they are due to the emergence of slowly varying expression that controls the pattern change. Successive pattern changes are thus generated, which are regulated by successive combinations of feedback or feedforward regulations under", "start_char_pos": 130, "end_char_pos": 703 }, { "type": "D", "before": "agreement of bifurcations in dynamical-systems theory between evolution and development, where slowly varying gene-expression levels work as emergent control parameters. In terms of the gene regulation networks, this congruence is understood as the successive addition of downstream modules, either as feedforward or feedback , while the", "after": null, "start_char_pos": 708, "end_char_pos": 1045 }, { "type": "R", "before": "shapes the boundary condition for the downstream dynamics, based on the maternal morphogen gradient. Acquisition of a novel developmental mode was due to mutational change in", "after": "that reads the morphogen gradient. By using a pattern generated by the upstream feedforward network as a boundary condition, downstream networks form later stripe patterns. These epochal changes in development and evolution are represented as same bifurcations in dynamical-systems theory, and this agreement of bifurcations lead to the evolution-development congruences. 
Violation of", "start_char_pos": 1075, "end_char_pos": 1249 }, { "type": "R", "before": "upstream network to alter the boundary condition", "after": "evolution-development congruence, observed exceptionally, is shown to be originated in alteration of the boundary due to mutation at the upstream feedforward network", "start_char_pos": 1254, "end_char_pos": 1302 }, { "type": "R", "before": "fresh perspective on evolution-development relationship, as well as on the acquisition of developmental novelty", "after": "new look on developmental stages, punctuated equilibrium, developmental bottlenecks, and evolutionary acquisition of novelty in morphogenesis", "start_char_pos": 1327, "end_char_pos": 1438 } ]
[ 0, 185, 336, 671, 877, 1175, 1304 ]
1402.7027
1
The raising importance of renewable energy, especially solar and wind power, led to new impacts on the formation of electricity prices. Hence, this paper introduces an econometric model for the hourly time series of electricity prices of the EEX which incorporates specific features like renewable energy. The model consists of several sophisticated and established approaches and can be regarded as a periodic VAR-TARCH with wind power, solar power and load as influencing time series. It is able to map the distinct and well-known features of electricity prices in Germany. An efficient iteratively reweighted lasso approach is used for estimation. Moreover, it is shown that several existing models are outperformed by using the procedure developed within this paper.
The increasing importance of renewable energy, especially solar and wind power, has led to new forces in the formation of electricity prices. Hence, this paper introduces an econometric model for the hourly time series of electricity prices of the European Energy Exchange (EEX) which incorporates specific features like renewable energy. The model consists of several sophisticated and established approaches and can be regarded as a periodic VAR-TARCH with wind power, solar power , and load as influences on the time series. It is able to map the distinct and well-known features of electricity prices in Germany. An efficient iteratively reweighted lasso approach is used for the estimation. Moreover, it is shown that several existing models are outperformed by the procedure developed in this paper.
[ { "type": "R", "before": "raising", "after": "increasing", "start_char_pos": 4, "end_char_pos": 11 }, { "type": "A", "before": null, "after": "has", "start_char_pos": 77, "end_char_pos": 77 }, { "type": "R", "before": "impacts on", "after": "forces in", "start_char_pos": 89, "end_char_pos": 99 }, { "type": "R", "before": "EEX", "after": "European Energy Exchange (EEX)", "start_char_pos": 243, "end_char_pos": 246 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "R", "before": "influencing", "after": "influences on the", "start_char_pos": 464, "end_char_pos": 475 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 641, "end_char_pos": 641 }, { "type": "D", "before": "using", "after": null, "start_char_pos": 725, "end_char_pos": 730 }, { "type": "R", "before": "within", "after": "in", "start_char_pos": 755, "end_char_pos": 761 } ]
[ 0, 136, 306, 488, 577, 653 ]
1403.0333
1
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding the phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the martingale theory to derive a martingale measure, namely risk-subjective measure, for pricing and hedging financial derivatives. The risk-subjective measure provides an internal consistency in pricing and hedging contingent claims and explains the phenomena.
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding the phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the martingale theory to derive a probability measure, namely risk-subjective measure, for pricing and hedging financial derivatives. The risk-subjective measure provides an internal consistency in pricing and hedging contingent claims and importantly explains the phenomena.
[ { "type": "R", "before": "martingale", "after": "probability", "start_char_pos": 399, "end_char_pos": 409 }, { "type": "A", "before": null, "after": "importantly", "start_char_pos": 604, "end_char_pos": 604 } ]
[ 0, 121, 281, 497 ]
1403.0333
2
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding the phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the martingale theory to derive a probability measure, namely risk-subjective measure, for pricing and hedging financial derivatives. The risk-subjective measure provides an internal consistency in pricing and hedging contingent claims and importantly explains the phenomena.
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding the phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the theory of change of measures to derive a probability measure, namely risk-subjective measure, for pricing and hedging financial derivatives. The risk-subjective measure and the measure of intrinsic risk provide an internal consistency in pricing and hedging contingent claims and importantly explains the phenomena.
[ { "type": "R", "before": "martingale theory", "after": "theory of change of measures", "start_char_pos": 369, "end_char_pos": 386 }, { "type": "R", "before": "provides", "after": "and the measure of intrinsic risk provide", "start_char_pos": 527, "end_char_pos": 535 } ]
[ 0, 121, 281, 498 ]
1403.0333
3
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding the phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the theory of change of measures to derive a probability measure, namely risk-subjective measure, for pricing and hedging financial derivatives. The risk-subjective measure and the measure of intrinsic risk provide an internal consistency in pricing and hedging contingent claims and importantly explains the phenomena .
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel derivative markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding these phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the theory of change of measures to derive a probability measure, namely risk-subjective measure, for evaluating contingent claims. This paper is a modest attempt to prove that measure of intrinsic risk is a crucial ingredient for explaining these phenomena, and in consequence proposes a new approach to pricing and hedging financial derivatives. We show that our approach is consistent and robust, compared with the standard risk-neutral approach .
[ { "type": "A", "before": null, "after": "derivative", "start_char_pos": 113, "end_char_pos": 113 }, { "type": "R", "before": "the", "after": "these", "start_char_pos": 268, "end_char_pos": 271 }, { "type": "R", "before": "pricing and hedging financial derivatives. The risk-subjective measure and the measure", "after": "evaluating contingent claims. This paper is a modest attempt to prove that measure", "start_char_pos": 468, "end_char_pos": 554 }, { "type": "R", "before": "provide an internal consistency in", "after": "is a crucial ingredient for explaining these phenomena, and in consequence proposes a new approach to", "start_char_pos": 573, "end_char_pos": 607 }, { "type": "R", "before": "contingent claims and importantly explains the phenomena", "after": "financial derivatives. We show that our approach is consistent and robust, compared with the standard risk-neutral approach", "start_char_pos": 628, "end_char_pos": 684 } ]
[ 0, 122, 282, 510 ]
1403.0333
4
We examine the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel derivative markets. We propose that the market is incomplete and postulate the existence of an intrinsic risk in every contingent claim as a basis for understanding these phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the theory of change of measures to derive a probability measure, namely risk-subjective measure, for evaluating contingent claims. This paper is a modest attempt to prove that measure of intrinsic risk is a crucial ingredient for explaining these phenomena, and in consequence proposes a new approach to pricing and hedging financial derivatives. We show that our approach is consistent and robust, compared with the standard risk-neutral approach.
We review the nature of some well-known phenomena such as volatility smiles, convexity adjustments and parallel derivative markets. We propose that the market is incomplete and postulate the existence of intrinsic risks in every contingent claim as a basis for understanding these phenomena. In a continuous time framework, we bring together the notion of intrinsic risk and the theory of change of measures to derive a probability measure, namely risk-subjective measure, for evaluating contingent claims. This paper is a modest attempt to prove that measure of intrinsic risk is a crucial ingredient for explaining these phenomena, and in consequence proposes a new approach to pricing and hedging financial derivatives. By adapting theoretical knowledge to practical applications, we show that our approach is consistent and robust, compared with the standard risk-neutral approach.
[ { "type": "R", "before": "examine", "after": "review", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "R", "before": "an intrinsic risk", "after": "intrinsic risks", "start_char_pos": 205, "end_char_pos": 222 }, { "type": "R", "before": "We", "after": "By adapting theoretical knowledge to practical applications, we", "start_char_pos": 726, "end_char_pos": 728 } ]
[ 0, 132, 294, 509, 725 ]
1403.0527
1
We study asymptotic properties of some parameter estimators for subcritical Heston models based on discrete time observations derived from conditional least squares estimators of some modified parameters.
We study asymptotic properties of some (essentially conditional least squares) parameter estimators for the subcritical Heston model based on discrete time observations derived from conditional least squares estimators of some modified parameters.
[ { "type": "A", "before": null, "after": "(essentially conditional least squares)", "start_char_pos": 39, "end_char_pos": 39 }, { "type": "R", "before": "subcritical Heston models", "after": "the subcritical Heston model", "start_char_pos": 65, "end_char_pos": 90 } ]
[ 0 ]
1403.0842
1
In financial markets, the order flow, defined as the process assuming value one for buy market order and minus one for sell market orders, displays very slowly decaying autocorrelation function. Since orders impact prices, reconciling the persistence of the order flow with market efficiency is a subtle issue whose possible solution is provided by asymmetric liquidity which states that the impact of a buy or sell order is inversely related to the probability of its occurrence. We empirically find that when the order flow predictability increases in one direction, the liquidity in the opposite side decreases, but the probability that a trade moves the price decreases significantly. While the last mechanism is able to counterbalance the persistence of order flow and restore efficiency and diffusivity, the first acts in opposite direction. We introduce a statistical order book model where the persistence of the order flow is mitigated by adjusting the market order volume to the predictability of the order flow. The model reproduces the diffusive behaviour of prices at all time scales without fine-tuning the values of parameters, as well as the behaviour of most order book quantities as a function of the local predictability of order flow.
In financial markets, the order flow, defined as the process assuming value one for buy market orders and minus one for sell market orders, displays a very slowly decaying autocorrelation function. Since orders impact prices, reconciling the persistence of the order flow with market efficiency is a subtle issue . A possible solution is provided by asymmetric liquidity , which states that the impact of a buy or sell order is inversely related to the probability of its occurrence. We empirically find that when the order flow predictability increases in one direction, the liquidity in the opposite side decreases, but the probability that a trade moves the price decreases significantly. While the last mechanism is able to counterbalance the persistence of order flow and restore efficiency and diffusivity, the first acts in opposite direction. We introduce a statistical order book model where the persistence of the order flow is mitigated by adjusting the market order volume to the predictability of the order flow. The model reproduces the diffusive behaviour of prices at all time scales without fine-tuning the values of parameters, as well as the behaviour of most order book quantities as a function of the local predictability of order flow.
[ { "type": "R", "before": "order", "after": "orders", "start_char_pos": 95, "end_char_pos": 100 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 148, "end_char_pos": 148 }, { "type": "R", "before": "whose", "after": ". A", "start_char_pos": 311, "end_char_pos": 316 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 371, "end_char_pos": 371 } ]
[ 0, 195, 482, 690, 849, 1024 ]
1403.1236
1
The classic model of eukaryotic gene expression requires direct spatial contact between a distal enhancer and a proximal promoter. However, recent chromosome conformation capture studies (e.g. Hi-C) show that enhancer and promoters are embedded in a complex network of cell-type specific looping interactions. Here we investigate whether, and to what extent, looping interactions between elements in the vicinity of an enhancer- promoter pair can influence the frequencyof enhancer-promoter contacts. Our polymer simulations show that a chromatin loop formed by elements flanking either an enhancer or a promoter suppresses enhancer-promoter interactions, working as a topological insulator. A loop formed by elements located in the region between an enhancer and a promoter, on the contrary, facilitates their interactions. We find that these two consequences of chromatin loops have different genomic extents, with facilitation being a local effect and insulation persisting over a large range of genomic distances. Overall, our results show that looping interactions which do not directly involve an enhancer-promoter contact can nevertheless significantly modulate their interactions. This illustrates the intricate effects that local organization can have on gene expression .
The classic model of eukaryotic gene expression requires direct spatial contact between a distal enhancer and a proximal promoter. Recent Chromosome Conformation Capture (3C) studies show that enhancers and promoters are embedded in a complex network of looping interactions. Here we use a polymer model of chromatin fiber to investigate whether, and to what extent, looping interactions between elements in the vicinity of an enhancer-promoter pair can influence their contact frequency. Our equilibrium polymer simulations show that a chromatin loop , formed by elements flanking either an enhancer or a promoter , suppresses enhancer-promoter interactions, working as an insulator. A loop formed by elements located in the region between an enhancer and a promoter, on the contrary, facilitates their interactions. We find that different mechanisms underlie insulation and facilitation; insulation occurs due to steric exclusion by the loop, and is a global effect, while facilitation occurs due to an effective shortening of the enhancer-promoter genomic distance, and is a local effect. Consistently, we find that these effects manifest quite differently for in silico 3C and microscopy. Our results show that looping interactions that do not directly involve an enhancer-promoter pair can nevertheless significantly modulate their interactions. This phenomenon is analogous to allosteric regulation in proteins, where a conformational change triggered by binding of a regulatory molecule to one site affects the state of another site .
[ { "type": "R", "before": "However, recent chromosome conformation capture studies (e.g. Hi-C) show that enhancer", "after": "Recent Chromosome Conformation Capture (3C) studies show that enhancers", "start_char_pos": 131, "end_char_pos": 217 }, { "type": "D", "before": "cell-type specific", "after": null, "start_char_pos": 269, "end_char_pos": 287 }, { "type": "A", "before": null, "after": "use a polymer model of chromatin fiber to", "start_char_pos": 318, "end_char_pos": 318 }, { "type": "R", "before": "enhancer- promoter", "after": "enhancer-promoter", "start_char_pos": 420, "end_char_pos": 438 }, { "type": "R", "before": "the frequencyof enhancer-promoter contacts. Our", "after": "their contact frequency. Our equilibrium", "start_char_pos": 458, "end_char_pos": 505 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 553, "end_char_pos": 553 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 615, "end_char_pos": 615 }, { "type": "R", "before": "a topological", "after": "an", "start_char_pos": 670, "end_char_pos": 683 }, { "type": "R", "before": "these two consequences of chromatin loops have different genomic extents, with facilitation being a local effect and insulation persisting over a large range of genomic distances. Overall, our", "after": "different mechanisms underlie insulation and facilitation; insulation occurs due to steric exclusion by the loop, and is a global effect, while facilitation occurs due to an effective shortening of the enhancer-promoter genomic distance, and is a local effect. Consistently, we find that these effects manifest quite differently for in silico 3C and microscopy. Our", "start_char_pos": 841, "end_char_pos": 1033 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 1073, "end_char_pos": 1078 }, { "type": "R", "before": "contact", "after": "pair", "start_char_pos": 1124, "end_char_pos": 1131 }, { "type": "R", "before": "illustrates the intricate effects that local URLanization can have on gene expression", "after": "phenomenon is analogous to allosteric regulation in proteins, where a conformational change triggered by binding of a regulatory molecule to one site affects the state of another site", "start_char_pos": 1197, "end_char_pos": 1282 } ]
[ 0, 130, 309, 501, 694, 827, 1020, 1191 ]
1403.1574
1
We consider a three state agent based herding model of the financial markets. From this agent based model we derive a set of stochastic differential equations, which describes underlying macroscopic dynamics of the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
We are looking for the agent-based treatment of the financial markets considering necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding as one of the behavior peculiarities considered in the behavioral finance is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: exogenous one, related to the information flow, and endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes underlying macroscopic dynamics of agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as a source of the short term fluctuations. The resulting model of the return in the financial markets with the same set of parameters reproduces empirical probability and spectral densities of absolute return observed in New York, Warsaw and NASDAQ OMX Vilnius Stock Exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
[ { "type": "R", "before": "consider a three state agent based", "after": "are looking for the agent-based treatment of the financial markets considering necessity to build bridges between microscopic, agent based, and macroscopic, phenomenological modeling. The acknowledgment that agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well defined analytically tractable agent systems. Herding as one of the behavior peculiarities considered in the behavioral finance is the main property of the agent interactions we deal with in this contribution. Looking for the consentaneous agent-based and macroscopic approach we combine two origins of the noise: exogenous one, related to the information flow, and endogenous one, arising form the complex stochastic dynamics of agents. As a result we propose a three state agent-based", "start_char_pos": 3, "end_char_pos": 37 }, { "type": "R", "before": "agent based", "after": "agent-based", "start_char_pos": 88, "end_char_pos": 99 }, { "type": "A", "before": null, "after": "agent population and log price in", "start_char_pos": 211, "end_char_pos": 211 } ]
[ 0, 77, 234, 346, 433, 668 ]
1403.1822
1
We study a resource utilization scenario characterized by intrinsic attractiveness , in a system of many restaurants where customers compete to get the best services out of many choices . Results for the case with uniform attractiveness are reported. When attractiveness is uniformly distributed, it gives rise to a Zipf-Pareto law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of attractiveness. A variant of the model is also introduced where the attractiveness can be treated as a fitness to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The fitness distribution is characterized by a power law, but the power law distribution in number of customers still holds , implying the robustness of the model. Our model serves as a paradigm for city size distribution and the emergence of Zipf law.
We study a resource utilization scenario characterized by intrinsic attractiveness . We consider a system of many restaurants where customers compete , as in a game, to get the best services out of many choices using iterative learning . Results for the case with uniform attractiveness are reported. When attractiveness is uniformly distributed, it gives rise to a Zipf-Pareto law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of attractiveness. A variant of the model is also introduced where the attractiveness can be treated as a fitness to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, but the distribution in number of customers is still given by power law , implying the robustness of the model. Our model serves as a paradigm for city size distribution and the emergence of Zipf law.
[ { "type": "R", "before": ", in", "after": ". We consider", "start_char_pos": 83, "end_char_pos": 87 }, { "type": "A", "before": null, "after": ", as in a game,", "start_char_pos": 141, "end_char_pos": 141 }, { "type": "A", "before": null, "after": "using iterative learning", "start_char_pos": 187, "end_char_pos": 187 }, { "type": "A", "before": null, "after": "steady state", "start_char_pos": 694, "end_char_pos": 694 }, { "type": "D", "before": "power law", "after": null, "start_char_pos": 757, "end_char_pos": 766 }, { "type": "R", "before": "still holds", "after": "is still given by power law", "start_char_pos": 803, "end_char_pos": 814 } ]
[ 0, 252, 362, 489, 609, 689, 854 ]
1403.1822
2
We study a resource utilization scenario characterized by intrinsic attractiveness. We consider a system of many restaurants where customers compete, as in a game, to get the best services out of many choices using iterative learning . Results for the case with uniform attractiveness are reported. When attractiveness is uniformly distributed, it gives rise to a Zipf-Pareto law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of attractiveness . A variant of the model is also introduced where the attractiveness can be treated as a fitness to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, but the distribution in number of customers is still given by power law, implying the robustness of the model. Our model serves as a paradigm for city size distribution and the emergence of Zipf law .
We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process . Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness . A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution .
[ { "type": "R", "before": "attractiveness. We consider a system of many restaurants where customers", "after": "fitness. To describe the growth URLanization of different cities, we consider a model for resource utilization where many restaurants", "start_char_pos": 68, "end_char_pos": 140 }, { "type": "R", "before": "get the best services out of many choices using iterative learning", "after": "attract customers using an iterative learning process", "start_char_pos": 167, "end_char_pos": 233 }, { "type": "R", "before": "with uniform attractiveness", "after": "of restaurants with uniform fitness", "start_char_pos": 257, "end_char_pos": 284 }, { "type": "R", "before": "attractiveness", "after": "fitness", "start_char_pos": 304, "end_char_pos": 318 }, { "type": "R", "before": "Zipf-Pareto", "after": "Zipf", "start_char_pos": 364, "end_char_pos": 375 }, { "type": "R", "before": "attractiveness", "after": "fitness", "start_char_pos": 520, "end_char_pos": 534 }, { "type": "R", "before": "attractiveness", "after": "fitness", "start_char_pos": 589, "end_char_pos": 603 }, { "type": "R", "before": "a fitness", "after": "an ability", "start_char_pos": 622, "end_char_pos": 631 }, { "type": "R", "before": "but the distribution in", "after": "while the distribution of the", "start_char_pos": 808, "end_char_pos": 831 }, { "type": "R", "before": "is still given by power", "after": "still follows the Zipf", "start_char_pos": 852, "end_char_pos": 875 }, { "type": "D", "before": "city size distribution and", "after": null, "start_char_pos": 954, "end_char_pos": 980 }, { "type": "A", "before": null, "after": "in city size distribution", "start_char_pos": 1007, "end_char_pos": 1007 } ]
[ 0, 83, 298, 408, 536, 656, 736, 918 ]
1403.3212
1
We consider an incomplete market with a non-tradable stochastic factor and an investment problem with optimality criterion based on a functional which is a modification of a monotone mean-variance preferences. We formulate it as a stochastic differential game problem and use Hamilton Jacobi Bellman Isaacs equations to derive the optimal investment strategy and the value function. Finally , we show that our solution coincides with the solution to classical mean-variance problem with risk aversion coefficient which is dependent on stochastic factor .
We consider an incomplete market with a nontradable stochastic factor and a continuous time investment problem with an optimality criterion based on monotone mean-variance preferences. We formulate it as a stochastic differential game problem and use Hamilton-Jacobi-Bellman-Isaacs equations to find an optimal investment strategy and the value function. What is more , we show that our solution is also optimal for the classical Markowitz problem and every optimal solution for the classical Markowitz problem is optimal also for the monotone mean-variance preferences. These results are interesting because the original Markowitz functional is not monotone, and it was observed that in the case of a static one-period optimization problem the solutions for those two functionals are different. In addition, we determine explicit Markowitz strategies in the square root factor models .
[ { "type": "R", "before": "non-tradable", "after": "nontradable", "start_char_pos": 40, "end_char_pos": 52 }, { "type": "R", "before": "an", "after": "a continuous time", "start_char_pos": 75, "end_char_pos": 77 }, { "type": "A", "before": null, "after": "an", "start_char_pos": 102, "end_char_pos": 102 }, { "type": "D", "before": "a functional which is a modification of a", "after": null, "start_char_pos": 133, "end_char_pos": 174 }, { "type": "R", "before": "Hamilton Jacobi Bellman Isaacs equations to derive the", "after": "Hamilton-Jacobi-Bellman-Isaacs equations to find an", "start_char_pos": 277, "end_char_pos": 331 }, { "type": "R", "before": "Finally", "after": "What is more", "start_char_pos": 384, "end_char_pos": 391 }, { "type": "R", "before": "coincides with the solution to classical mean-variance problem with risk aversion coefficient which is dependent on stochastic factor", "after": "is also optimal for the classical Markowitz problem and every optimal solution for the classical Markowitz problem is optimal also for the monotone mean-variance preferences. These results are interesting because the original Markowitz functional is not monotone, and it was observed that in the case of a static one-period optimization problem the solutions for those two functionals are different. In addition, we determine explicit Markowitz strategies in the square root factor models", "start_char_pos": 420, "end_char_pos": 553 } ]
[ 0, 210, 383 ]
1403.4291
1
An importance sampling algorithm for copula models is introduced. The method improves Monte Carlo estimators when the functional of interest depends mainly on the behaviour of the underlying random vector when at least one of the components is large. Such problems often arise from dependence models in finance and insurance. The importance sampling framework we propose is general and can be easily implemented for all classes of copula models from which sampling is feasible. We show how the proposal distribution can be optimized to reduce the sampling error. In a case study inspired by a typical multivariate insurance application, we obtain variance reduction factors between 10 and 20 in comparison to standard Monte Carlo estimators.
An importance sampling approach for sampling copula models is introduced. We propose two algorithms that improve Monte Carlo estimators when the functional of interest depends mainly on the behaviour of the underlying random vector when at least one of the components is large. Such problems often arise from dependence models in finance and insurance. The importance sampling framework we propose is general and can be easily implemented for all classes of copula models from which sampling is feasible. We show how the proposal distribution of the two algorithms can be optimized to reduce the sampling error. In a case study inspired by a typical multivariate insurance application, we obtain variance reduction factors between 10 and 30 in comparison to standard Monte Carlo estimators.
[ { "type": "R", "before": "algorithm for", "after": "approach for sampling", "start_char_pos": 23, "end_char_pos": 36 }, { "type": "R", "before": "The method improves", "after": "We propose two algorithms that improve", "start_char_pos": 66, "end_char_pos": 85 }, { "type": "A", "before": null, "after": "of the two algorithms", "start_char_pos": 516, "end_char_pos": 516 }, { "type": "R", "before": "20", "after": "30", "start_char_pos": 690, "end_char_pos": 692 } ]
[ 0, 65, 250, 325, 477, 563 ]
1403.4329
1
The paper studies the properties of discrete time stochastic optimal control problems associated with the portfolio selectionproblem and related continuous time portfolio selection problems. We found that Merton's strategy that is optimal for continuous time model can be used effectively for the discrete market model that has sufficiently small time steps and approximate the continuous time model. After natural discretization, the Merton's strategy approximates the performance of the optimal strategy in discrete time model .
This paper studies the properties of discrete time stochastic optimal control problems associated with portfolio selection. We investigate if optimal continuous time strategies can be used effectively for a discrete time market after a straightforward discretization. We found that Merton's strategy approximates the performance of the optimal strategy in a discrete time model with the sufficiently small time steps
[ { "type": "R", "before": "The", "after": "This", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "the portfolio selectionproblem and related continuous time portfolio selection problems. We found that Merton's strategy that is optimal for continuous time model", "after": "portfolio selection. We investigate if optimal continuous time strategies", "start_char_pos": 102, "end_char_pos": 264 }, { "type": "R", "before": "the discrete market model that has sufficiently small time steps and approximate the continuous time model. After natural discretization, the", "after": "a discrete time market after a straightforward discretization. We found that", "start_char_pos": 293, "end_char_pos": 434 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 509, "end_char_pos": 509 }, { "type": "R", "before": ".", "after": "with the sufficiently small time steps", "start_char_pos": 530, "end_char_pos": 531 } ]
[ 0, 190, 400 ]
1403.4460
1
Economic integration, globalization and financial crises represent examples of processes whose understanding requires the analysis of the underlying network structure. Of particular interest is establishing whether a real economic network is in a state of (quasi)stationary equilibrium, i.e. characterized by smooth structural changes rather than abrupt transitions. While in the former case the behaviour of the system can be reasonably controlled and predicted, in the latter case this is generally impossible. Here we propose a method to assess whether a real economic network is in a quasi-stationary state by checking the consistency of its structural evolution with appropriate quasi-equilibrium maximum-entropy ensembles of graphs. As illustrative examples, we consider the International Trade Network (ITN) and the Dutch Interbank Network (DIN). We find that the ITN is an almost perfect example of quasi-equilibrium network, while the DIN is clearly out-of-equilibrium. In the latter, the entity of the deviation from quasi-stationarity contains precious information that allows us to identify remarkable early-warning signals of the interbank crisis of 2008. These early-warning signals involve certain dyadic and triadic topological properties, including dangerous ` debt loops' with different levels of interbank reciprocity.
Economic integration, globalization and financial crises represent examples of processes whose understanding requires the analysis of the underlying network structure. Of particular interest is establishing whether a real economic network is in a state of (quasi)stationary equilibrium, i.e. characterized by smooth structural changes rather than abrupt transitions. While in the former case the behaviour of the system can be reasonably controlled and predicted, in the latter case this is generally impossible. Here , we propose a method to assess whether a real economic network is in a quasi-stationary state by checking the consistency of its structural evolution with appropriate quasi-equilibrium maximum-entropy ensembles of graphs. As illustrative examples, we consider the International Trade Network (ITN) and the Dutch Interbank Network (DIN). We find that the ITN is an almost perfect example of quasi-equilibrium network, while the DIN is clearly out-of-equilibrium. In the latter, the entity of the deviation from quasi-stationarity contains precious information that allows us to identify remarkable early warning signals of the interbank crisis of 2008. These early warning signals involve certain dyadic and triadic topological properties, including dangerous ' debt loops' with different levels of interbank reciprocity.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 518, "end_char_pos": 518 }, { "type": "R", "before": "early-warning", "after": "early warning", "start_char_pos": 1115, "end_char_pos": 1128 }, { "type": "R", "before": "early-warning", "after": "early warning", "start_char_pos": 1176, "end_char_pos": 1189 }, { "type": "R", "before": "`", "after": "'", "start_char_pos": 1277, "end_char_pos": 1278 } ]
[ 0, 167, 366, 512, 739, 854, 979, 1169 ]
1403.5227
1
We introduce a model independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the process of Hawkes branching ratio estimation, proposed as a proxy for market endogeneity in recent publications and formerly estimated using numerical maximisation of likelihood . We employ this method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio , recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximisation . We employ our new method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
[ { "type": "R", "before": "model independent", "after": "model-independent", "start_char_pos": 15, "end_char_pos": 32 }, { "type": "R", "before": "process of", "after": "estimation of the", "start_char_pos": 329, "end_char_pos": 339 }, { "type": "R", "before": "estimation,", "after": ", recently", "start_char_pos": 363, "end_char_pos": 374 }, { "type": "D", "before": "in recent publications", "after": null, "start_char_pos": 418, "end_char_pos": 440 }, { "type": "R", "before": "maximisation of likelihood", "after": "likelihood maximisation", "start_char_pos": 480, "end_char_pos": 506 }, { "type": "R", "before": "this", "after": "our new", "start_char_pos": 519, "end_char_pos": 523 } ]
[ 0, 111, 283, 508 ]
1403.5623
1
Complex non-linear interactions between banks and assets we model by two time-dependent Erdos Renyi network models where each node, representing bank, can invest either to a single asset (model I) or multiple assets (model II). We use dynamical network approach to evaluate the collective financial failure---systemic risk---quantified by the fraction of active nodes. The systemic risk can be calculated over any future time period, divided on sub-periods, where within each sub-period banks may contiguously fail due to links to either (i) assets or (ii) other banks, controlled by two parameters, probability of internal failure p and threshold T_h ( ``solvency'' parameter). The systemic risk non-linearly increases with p and decreases with average network degree faster when all assets are equally distributed across banks than if assets are randomly distributed. The more inactive banks each bank can endure (smaller T_h), the smaller the systemic risk---for some T_h values in model I we report a discontinuity in systemic risk. When contiguous spreading becomes stochastic (ii) controlled by probability p_2---a condition for the bank to be solvent (active) is stochastic---with increasing p_2, the systemic risk decreases with both p and T_h . We analyse asset allocation for the U.S. banks.
Complex non-linear interactions between banks and assets we model by two time-dependent Erdos Renyi network models where each node, representing bank, can invest either to a single asset (model I) or multiple assets (model II). We use dynamical network approach to evaluate the collective financial failure---systemic risk---quantified by the fraction of active nodes. The systemic risk can be calculated over any future time period, divided on sub-periods, where within each sub-period banks may contiguously fail due to links to either (i) assets or (ii) other banks, controlled by two parameters, probability of internal failure p and threshold T_h ( "solvency" parameter). The systemic risk non-linearly increases with p and decreases with average network degree faster when all assets are equally distributed across banks than if assets are randomly distributed. The more inactive banks each bank can sustain (smaller T_h), the smaller the systemic risk---for some T_h values in model I we report a discontinuity in systemic risk. When contiguous spreading becomes stochastic (ii) controlled by probability p_2---a condition for the bank to be solvent (active) is stochastic---the systemic risk decreases with decreasing p_2 . We analyse asset allocation for the U.S. banks.
[ { "type": "R", "before": "``solvency''", "after": "\"solvency\"", "start_char_pos": 654, "end_char_pos": 666 }, { "type": "R", "before": "endure", "after": "sustain", "start_char_pos": 908, "end_char_pos": 914 }, { "type": "R", "before": "I", "after": "I", "start_char_pos": 1007, "end_char_pos": 1008 }, { "type": "R", "before": "stochastic---with increasing p_2, the", "after": "stochastic---the", "start_char_pos": 1186, "end_char_pos": 1223 }, { "type": "R", "before": "both pand T_h", "after": "decreasing p_2", "start_char_pos": 1253, "end_char_pos": 1266 } ]
[ 0, 227, 368, 678, 869, 1052 ]
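A toy, runnable reading of the threshold dynamics in this record. The abstract does not fully specify the update rule, so the sketch assumes a bank stays solvent while the fraction of its active neighbours on an Erdos-Renyi interbank graph is at least T_h, after internal failures strike with probability p; the asset channel (model I vs II) and the sub-period structure are deliberately omitted.

```python
import random

def fraction_active(n=500, k_avg=8.0, p=0.05, T_h=0.5, seed=0):
    """Fraction of active banks after threshold contagion on an
    Erdos-Renyi graph (hypothetical reading: a bank needs a fraction
    >= T_h of its neighbours active to remain solvent)."""
    rng = random.Random(seed)
    p_edge = k_avg / (n - 1)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    active = [rng.random() >= p for _ in range(n)]  # internal failures
    changed = True
    while changed:  # contiguous failures until a fixed point is reached
        changed = False
        for i in range(n):
            if active[i] and nbrs[i]:
                share = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if share < T_h:
                    active[i] = False
                    changed = True
    return sum(active) / n

# Smaller T_h (banks endure more inactive neighbours) -> more survivors.
print(fraction_active(T_h=0.3), fraction_active(T_h=0.7))
```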
1403.5987
1
By extending large deviation theory to sub-exponential contributions, we determine the fine structure in the probability distribution of the observable displacement of a bead coupled to a molecular motor. More generally, for any stochastic motion along a periodic substrate, this approach reveals a discrete symmetry of this distribution for which hidden degrees of freedom lead to a periodic modulation of the slope typically associated with the fluctuation theorem. Contrary to previous interpretations of experimental data, the mean force exerted by a molecular motor is unrelated to the long-time asymptotics of this slope and must rather be extracted from its short-time limit.
By considering subexponential contributions in large deviation theory, we determine the fine structure in the probability distribution of the observable displacement of a bead coupled to a molecular motor. More generally, for any stochastic motion along a periodic substrate, this approach reveals a discrete symmetry of this distribution for which hidden degrees of freedom lead to a periodic modulation of the slope typically associated with the fluctuation theorem. Contrary to previous interpretations of experimental data, the mean force exerted by a molecular motor is unrelated to the long-time asymptotics of this slope and must rather be extracted from its short-time limit.
[ { "type": "R", "before": "extending", "after": "considering subexponential contributions in", "start_char_pos": 3, "end_char_pos": 12 }, { "type": "D", "before": "to sub-exponential contributions", "after": null, "start_char_pos": 36, "end_char_pos": 68 } ]
[ 0, 205, 468 ]
1403.6222
1
As the total number of molecules in a system decreases, transitions in the qualitative behavior of autocatalytic chemical reaction dynamics may appear. By adapting a method based on a discrete-time Markov process, we provide here a general analytic tool for the reaction dynamics of systems with a small number of molecules. Bistability induced by the small-number effect in a two-component model is analyzed and excellent agreement with the simulated mean switching time is demonstrated. A novel transition involving the reversal of the chemical reaction flow is also found in a three-component model and quantified analytically using the proposed method.
Transitions in the qualitative behavior of chemical reaction dynamics with a decrease in molecule number have attracted much attention. Here, a method based on a Markov process with a tridiagonal transition matrix is applied to the analysis of this transition in reaction dynamics. The transition to bistability due to the small-number effect and the mean switching time between the bistable states are analytically calculated in agreement with numerical simulations. In addition, a novel transition involving the reversal of the chemical reaction flow is found in the model under an external flow, and also in a three-component model. The generality of this transition and its correspondence to biological phenomena are also discussed.
[ { "type": "R", "before": "As the total number of molecules in a system decreases, transitions in", "after": "Transitions in", "start_char_pos": 0, "end_char_pos": 70 }, { "type": "D", "before": "autocatalytic", "after": null, "start_char_pos": 99, "end_char_pos": 112 }, { "type": "R", "before": "may appear. By adapting", "after": "with a decrease in molecule number have attracted much attention. Here,", "start_char_pos": 140, "end_char_pos": 163 }, { "type": "A", "before": null, "after": "a Markov process with a tridiagonal transition matrix is applied to the analysis of this transition in reaction dynamics. The transition to bistability due to", "start_char_pos": 182, "end_char_pos": 182 }, { "type": "D", "before": "discrete-time Markov process , we provide here a general analytic tool for the reaction dynamics of systems with a small number of molecules. Bistability induced by the", "after": null, "start_char_pos": 187, "end_char_pos": 355 }, { "type": "R", "before": "in a two-component model is analyzed and excellent agreement with the simulated", "after": "and the", "start_char_pos": 376, "end_char_pos": 455 }, { "type": "R", "before": "is demonstrated. A", "after": "between the bistable states are analytically calculated in agreement with numerical simulations. In addition, a", "start_char_pos": 476, "end_char_pos": 494 }, { "type": "R", "before": "also found in", "after": "found in the model under an external flow, and also in", "start_char_pos": 568, "end_char_pos": 581 }, { "type": "R", "before": "and quantified analytically using the proposed method", "after": ". The generality of this transition and its correspondence to biological phenomena are also discussed", "start_char_pos": 606, "end_char_pos": 659 } ]
[ 0, 151, 328, 492 ]
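A Markov process with a tridiagonal transition matrix is a birth-death chain, whose stationary distribution follows directly from detailed balance: pi[n+1] * d[n+1] = pi[n] * b[n]. The sketch below evaluates this standard relation; the toy rates are hypothetical and are not the paper's reaction scheme.

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a birth-death chain on states 0..N.
    birth[n] is the rate n -> n+1 (n = 0..N-1) and death[n] the rate
    n+1 -> n.  Detailed balance gives pi[n+1]/pi[n] = birth[n]/death[n];
    products are accumulated in log space for numerical stability."""
    b, d = np.asarray(birth, float), np.asarray(death, float)
    log_pi = np.concatenate(([0.0], np.cumsum(np.log(b) - np.log(d))))
    pi = np.exp(log_pi - log_pi.max())
    return pi / pi.sum()

# Hypothetical autocatalytic-style rates: births slow at low copy
# number and fast at high copy number, giving a bimodal distribution.
N = 30
n = np.arange(N)
pi = birth_death_stationary(0.1 + 1.2 * (n / N) ** 2, 0.3 * np.ones(N))
peaks = [i for i in range(len(pi))
         if pi[i] >= pi[max(i - 1, 0)] and pi[i] >= pi[min(i + 1, len(pi) - 1)]]
print("bistable modes at states:", peaks)
```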
1403.7269
1
Many investment models in discrete or continuous-time settings boil down to maximizing an objective of the quantile function of the decision variable. This quantile optimization problem is known as the quantile formulation of the original investment problem. Under certain monotonicity assumptions, several schemes to solve such quantile optimization problems have been proposed in the literature. In this paper, we propose a change-of-variable and relaxation method to solve the quantile optimization problems without using the calculus of variations or making any monotonicity assumptions. The method is demonstrated through a portfolio choice problem under rank-dependent utility theory (RDUT). We show that solving a portfolio choice problem under RDUT reduces to solving a classical Merton portfolio choice problem under expected utility theory with the same utility function but a different pricing kernel explicitly determined by the given pricing kernel and probability weighting function. With this result, the feasibility, well-posedness, attainability and uniqueness issues for the portfolio choice problem under RDUT are solved. The method is applicable to general models with law-invariant preference measures including portfolio choice models under cumulative prospect theory (CPT) or RDUT, Yaari's dual model, Lopes' SP/A model, and optimal stopping models under CPT or RDUT.
Many investment models in discrete or continuous-time settings boil down to maximizing an objective of the quantile function of the decision variable. This quantile optimization problem is known as the quantile formulation of the original investment problem. Under certain monotonicity assumptions, several schemes to solve such quantile optimization problems have been proposed in the literature. In this paper, we propose a change-of-variable and relaxation method to solve the quantile optimization problems without using the calculus of variations or making any monotonicity assumptions. The method is demonstrated through a portfolio choice problem under rank-dependent utility theory (RDUT). We show that this problem is equivalent to a classical Merton portfolio choice problem under expected utility theory with the same utility function but a different pricing kernel explicitly determined by the given pricing kernel and probability weighting function. With this result, the feasibility, well-posedness, attainability and uniqueness issues for the portfolio choice problem under RDUT are solved. It is also shown that solving functional optimization problems may reduce to solving probabilistic optimization problems. The method is applicable to general models with law-invariant preference measures including portfolio choice models under cumulative prospect theory (CPT) or RDUT, Yaari's dual model, Lopes' SP/A model, and optimal stopping models under CPT or RDUT.
[ { "type": "R", "before": "solving a portfolio choice problem under RDUT reduces to solving", "after": "this problem is equivalent to", "start_char_pos": 711, "end_char_pos": 775 }, { "type": "A", "before": null, "after": ". It is also shown that solving functional optimization problems may reduce to solving probabilistic optimization problems", "start_char_pos": 1142, "end_char_pos": 1142 } ]
[ 0, 150, 258, 397, 591, 697, 999, 1144 ]
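To make "maximizing an objective of the quantile function" concrete, the rank-dependent utility objective has a standard quantile form (a textbook identity under a differentiable weighting function w, not a formula quoted from this record): for a payoff X with quantile function G and utility u,

\[ V(X) \;=\; \int_0^1 u\big(G(z)\big)\, w'(1-z)\, \mathrm{d}z , \]

so choosing the optimal payoff amounts to choosing the quantile function G subject to the budget constraint, which is exactly the quantile formulation.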
1403.7924
1
AQPs (aquaporins), the rapid water channels of cells, play a key role in maintaining the osmotic equilibrium of cells. In this paper, we report the dynamic mechanism of AQP osmosis at the molecular level. A theoretical model based on molecular dynamics was developed and verified against the published experimental data. The reflection coefficients (\sigma) of neutral molecules are mainly determined by their size relative to AQPs, and increase as a third power up to a constant value of 1. This model also indicates that the reflection coefficient of a completely impermeable solute can be smaller than 1. The H+ concentration of the solution can influence the driving force of the AQPs by changing the equivalent diameters of vestibules surrounded by loops with abundant polar amino acids. In this way, the pH of the solution can regulate the water permeability of AQPs. Therefore, an AQP may not only work as a switch to open or close, but as a rapid-response molecular valve to control its water flow. The vestibules can prevent channel blockage of AQPs by a primary screening before their constriction region. This model also provides a prediction tool for the structure of AQPs from the \sigma values of special solutes. The puzzling variance between the \sigma of erythrocyte AQP1 and the \sigma of oocyte-expressed AQP1 was also explained.
This work presents modified Kedem-Katchalsky equations for osmosis through a nano-pore. The osmotic reflection coefficient of a solute was found to be chiefly affected by the entrance of the pore, while the filtration reflection coefficient can be affected by both the entrance and the internal structure of the pore. Using an analytical method, we obtain the quantitative relationship between the osmotic reflection coefficient and the molecule size. The model is verified by comparing the theoretical results with the reported experimental data on aquaporin osmosis. Our work is expected to pave the way for a better understanding of osmosis in bio-systems and to give new ideas for designing new membranes with better performance.
[ { "type": "D", "before": "AQPs (aquaporins), the rapid water channels of cells, play a key role in maintaining osmotic equilibrium of cells. In this paper, we reported the dynamic mechanism of AQP osmosis at the molecular level. A theoretical model based on molecular dynamics was carried out and verified by the published experimental data. The reflection coefficients (", "after": null, "start_char_pos": 0, "end_char_pos": 345 }, { "type": "D", "before": "\\sigma", "after": null, "start_char_pos": 346, "end_char_pos": 352 }, { "type": "R", "before": ") of neutral molecules are mainly decided by their relative size with AQPs, and increase with a third power up to a constant value 1. This model also indicated that the reflection", "after": "This work presents a modified Kedem-Katchalsky equations for osmosis through nano-pore. osmotic reflection", "start_char_pos": 370, "end_char_pos": 549 }, { "type": "D", "before": "complete impermeable solute can be smaller than 1. The H+ concentration of solution can influence the driving force of the AQPs by changing the equivalent diameters of vestibules surrounded by loops with abundant polar amino acids. In this way, pH of solution can regulate water permeability of AQPs. Therefore, an AQP may not only work as a switch to open or close, but as a rapid response molecular valve to control its water flow. The vestibules can prevent the channel blockage of AQPs by a primary screening before their constriction region. This model also provides a prediction tool to the structure of AQPs by the", "after": null, "start_char_pos": 567, "end_char_pos": 1188 }, { "type": "D", "before": "\\sigma", "after": null, "start_char_pos": 1189, "end_char_pos": 1195 }, { "type": "D", "before": "s of special solutes. The puzzling variance between", "after": null, "start_char_pos": 1213, "end_char_pos": 1264 }, { "type": "D", "before": "\\sigma", "after": null, "start_char_pos": 1265, "end_char_pos": 1271 }, { "type": "D", "before": "to erythrocytes AQP1 and", "after": null, "start_char_pos": 1290, "end_char_pos": 1314 }, { "type": "D", "before": "\\sigma", "after": null, "start_char_pos": 1315, "end_char_pos": 1321 }, { "type": "R", "before": "to oocytes-expressing AQP1 was also explained", "after": "solute was found to be chiefly affected by the entrance of the pore while filtration reflection coefficient can be affected by both the entrance and the internal structure of the pore. Using an analytical method, we get the quantitative relationship between osmotic reflection coefficient and the molecule size. The model is verified by comparing the theoretical results with the reported experimental data of aquaporin osmosis. Our work is expected to pave the way for a better understanding of osmosis in bio-system and to give us new ideas in designing new membranes with better performance", "start_char_pos": 1340, "end_char_pos": 1385 } ]
[ 0, 114, 202, 315, 503, 617, 798, 867, 1000, 1113, 1234 ]
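For context, the classical Kedem-Katchalsky relations that this record's revised abstract modifies couple volume and solute fluxes through the reflection coefficient. The snippet evaluates the textbook form of those equations; it is not the paper's nano-pore modification, and the input numbers are illustrative only.

```python
def kedem_katchalsky(Lp, sigma, omega, dp, dpi, c_mean):
    """Classical Kedem-Katchalsky membrane transport equations:
    volume flux  Jv = Lp * (dp - sigma * dpi)
    solute flux  Js = omega * dpi + (1 - sigma) * c_mean * Jv
    where dp and dpi are the hydrostatic and osmotic pressure
    differences across the membrane."""
    Jv = Lp * (dp - sigma * dpi)
    Js = omega * dpi + (1.0 - sigma) * c_mean * Jv
    return Jv, Js

# Purely osmotic driving (dp = 0) across a membrane with sigma = 0.8.
Jv, Js = kedem_katchalsky(Lp=1e-12, sigma=0.8, omega=1e-10,
                          dp=0.0, dpi=2.5e5, c_mean=100.0)
print(Jv, Js)
```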
1403.8125
1
We develop momentum and contrarian strategies with stock selection rules based on maximum drawdown and consecutive recovery. The alternative strategies on monthly and weekly scales outperform the portfolios constructed by cumulative return, regardless of market universe. In monthly periods, the ranking rules associated with the maximum drawdown dominate other momentum strategies. The recovery-related selection rules are the best ranking criteria for weekly contrarian portfolio construction. The alternative portfolios are less risky in many reward-risk measures such as Sharpe ratio, VaR, CVaR, and maximum drawdown. The outperformance of the alternative strategies leads to higher factor-neutral alphas in the Fama-French three-factor model.
We test the predictability of asset prices using stock selection rules based on maximum drawdown and consecutive recovery. Monthly momentum- and weekly contrarian-style portfolios ranked by the alternative selection criteria are implemented in various asset classes. Regardless of market, the alternative ranking rules are superior in forecasting asset prices and capturing cross-sectional return differentials. In a monthly period, alternative portfolios constructed by maximum drawdown measures dominate other momentum portfolios, including the cumulative return-based momentum portfolios. Recovery-related stock selection criteria are the best ranking measures for predicting mean-reversion on a weekly scale. Prediction of future directions becomes more consistent, because the alternative portfolios are less risky in various reward-risk measures such as Sharpe ratio, VaR, CVaR and maximum drawdown. In the Carhart four-factor analysis, higher factor-neutral intercepts for the alternative strategies are further evidence for the robust prediction by the alternative stock selection rules.
[ { "type": "R", "before": "develop momentum and contrarian strategies with", "after": "test predictability on asset price using", "start_char_pos": 3, "end_char_pos": 50 }, { "type": "R", "before": "The alternative strategies in monthly and weekly scales outperform the portfolios constructed by cumulative return regardless of marketuniverse. In monthly periods, the ranking rules associated with the maximum drawdown", "after": "Monthly momentum- and weekly contrarian-style portfolios ranked by the alternative selection criteria are implemented in various asset classes. Regardless of market, the alternative ranking rules are superior in forecasting asset prices and capturing cross-sectional return differentials. In a monthly period, alternative portfolios constructed by maximum drawdown measures", "start_char_pos": 125, "end_char_pos": 344 }, { "type": "R", "before": "strategies. The recovery related selection rules", "after": "portfolios including the cumulative return-based momentum portfolios. Recovery-related stock selection criteria", "start_char_pos": 369, "end_char_pos": 417 }, { "type": "R", "before": "criteria for the weekly contrarian portfolio construction. The", "after": "measures for predicting mean-reversion in a weekly scale. Prediction on future directions becomes more consistent, because the", "start_char_pos": 439, "end_char_pos": 501 }, { "type": "R", "before": "many", "after": "various", "start_char_pos": 545, "end_char_pos": 549 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 603, "end_char_pos": 604 }, { "type": "R", "before": "The outperformance of the alternative strategies leads to the higher", "after": "In the Carhart four-factor analysis, higher", "start_char_pos": 627, "end_char_pos": 695 }, { "type": "R", "before": "alphas in the Fama-French three-factor model", "after": "intercepts for the alternative strategies are another evidence for the robust prediction by the alternative stock selection rules", "start_char_pos": 711, "end_char_pos": 755 } ]
[ 0, 124, 269, 380, 497, 626 ]
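Maximum drawdown, the first ranking measure in this record, has a standard definition; "consecutive recovery" is the paper's own measure, so the second function below is only a hypothetical stand-in (relative gain from the global trough) for illustration.

```python
import numpy as np

def max_drawdown(prices):
    """Largest peak-to-trough relative loss of a price series:
    max over t of 1 - P_t / max(P_0..P_t)."""
    p = np.asarray(prices, float)
    running_max = np.maximum.accumulate(p)
    return float(np.max(1.0 - p / running_max))

def recovery_since_trough(prices):
    """Hypothetical stand-in for 'consecutive recovery': relative gain
    from the global trough to the final price."""
    p = np.asarray(prices, float)
    return float(p[-1] / p[p.argmin()] - 1.0)

prices = [100, 104, 98, 91, 95, 103, 99, 107]
print(max_drawdown(prices), recovery_since_trough(prices))
# Ranking a universe: sort tickers by either statistic over a lookback window.
```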
1404.0284
1
Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the `ground truth' demand of individual appliances. We present `UK-DALE': an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole house and at 1/6 Hz for individual appliances. This is the first open-access UK dataset at this temporal resolution. We recorded from four homes, one of which was recorded for 499 days, the longest duration we are aware of for similar datasets. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the `ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole house and at 1/6 Hz for individual appliances. This is the first open-access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
[ { "type": "R", "before": "We present `", "after": "In this context,we present", "start_char_pos": 623, "end_char_pos": 635 }, { "type": "D", "before": "'", "after": null, "start_char_pos": 644, "end_char_pos": 645 }, { "type": "R", "before": "four homes", "after": "five houses", "start_char_pos": 905, "end_char_pos": 915 }, { "type": "R", "before": "499", "after": "655", "start_char_pos": 948, "end_char_pos": 951 }, { "type": "R", "before": "similar datasets", "after": "any energy dataset at this sample rate", "start_char_pos": 999, "end_char_pos": 1015 } ]
[ 0, 56, 99, 291, 432, 622, 817, 887, 1017 ]
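A small sketch of a routine computation on data sampled like this record's appliance channels: integrating 1/6 Hz power readings into energy. The duty-cycle numbers are invented for the example.

```python
def energy_kwh(power_watts, sample_period_s=6.0):
    """Energy in kWh from evenly spaced power readings in watts,
    e.g. a 1/6 Hz appliance channel: E = sum(P) * dt."""
    return sum(power_watts) * sample_period_s / 3.6e6

# One hour of a hypothetical 2 kW appliance duty-cycling at 10%:
readings = ([2000.0] + [0.0] * 9) * 60   # 600 samples, 6 s apart
print(f"{energy_kwh(readings):.3f} kWh")  # 0.200 kWh
```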
1404.0427
1
The current biochemical information processing systems behave in a pre-determined manner because all features are defined during the design phase. To make such unconventional computing systems reusable and programmable for biomedical applications, adaptation, learning, and self-modification based on external stimuli would be highly desirable. However, so far, it has been too challenging to implement these in real or simulated chemistries. In this paper we extend the chemical perceptron, a model previously proposed by the authors, to function as an analog instead of a binary system. The new analog asymmetric signal perceptron learns through feedback and supports Michaelis-Menten kinetics. The results show that our perceptron is able to learn linear and nonlinear (quadratic) functions of two inputs. To the best of our knowledge, it is the first simulated chemical system capable of doing so. The small number of species and reactions allows for a mapping to an actual wet implementation using DNA-strand displacement or deoxyribozymes. Our results are an important step toward actual biochemical systems that can learn and adapt.
The current biochemical information processing systems behave in a predetermined manner because all features are defined during the design phase. To make such unconventional computing systems reusable and programmable for biomedical applications, adaptation, learning, and self-modification based on external stimuli would be highly desirable. However, so far, it has been too challenging to implement these in wet chemistries. In this paper we extend the chemical perceptron, a model previously proposed by the authors, to function as an analog instead of a binary system. The new analog asymmetric signal perceptron learns through feedback and supports Michaelis-Menten kinetics. The results show that our perceptron is able to learn linear and nonlinear (quadratic) functions of two inputs. To the best of our knowledge, it is the first simulated chemical system capable of doing so. The small number of species and reactions and their simplicity allow for a mapping to an actual wet implementation using DNA-strand displacement or deoxyribozymes. Our results are an important step toward actual biochemical systems that can learn and adapt.
[ { "type": "R", "before": "pre-determined", "after": "predetermined", "start_char_pos": 67, "end_char_pos": 81 }, { "type": "R", "before": "real or simulated", "after": "wet", "start_char_pos": 412, "end_char_pos": 429 }, { "type": "A", "before": null, "after": "and their simplicity", "start_char_pos": 944, "end_char_pos": 944 } ]
[ 0, 146, 344, 442, 588, 696, 808, 901, 1046 ]
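As an in-silico analogy of the computational task this record describes (learning an analog function of two inputs through feedback), here is a plain delta-rule learner. It contains none of the paper's chemistry, no species and no Michaelis-Menten kinetics; the linear target below is hypothetical.

```python
import numpy as np

def train_analog_perceptron(xs, ys, lr=0.05, epochs=200):
    """Delta-rule learning of an analog output from two inputs:
    weights are nudged by (target - prediction) * input, the same
    feedback principle the chemical perceptron realises in chemistry."""
    w = np.zeros(3)                         # bias + two input weights
    X = np.column_stack([np.ones(len(xs)), xs])
    for _ in range(epochs):
        for x, y in zip(X, ys):
            w += lr * (y - w @ x) * x       # feedback: error times input
    return w

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, size=(200, 2))
ys = 0.3 + 0.5 * xs[:, 0] - 0.2 * xs[:, 1]  # hypothetical linear target
print(train_analog_perceptron(xs, ys).round(3))  # ~ [0.3, 0.5, -0.2]
```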
1404.0568
1
HiRE-RNA is a simplified, coarse-grained model, developed in recent years to address the question of RNA folding for the prediction of equilibrium configurations, dynamics and thermodynamics. Earlier versions of the model predicted simple folds such as hairpins and double helices. Important modifications in the force field now allow us to treat a much larger variety of structures thanks to the possibility for one base to form multiple contacts and non-canonical pairings, which are essential for the formation and the stability of structures such as pseudoknots, multiple helices, and the complex architectures of riboswitches.
HiRE-RNA is a simplified, coarse-grained RNA model for the prediction of equilibrium configurations, dynamics and thermodynamics. Using a reduced set of particles and detailed interactions accounting for base-pairing and stacking, we show that non-canonical and multiple base interactions are necessary to capture the full physical behavior of complex RNAs. In this paper we give a full account of the model and we present results on the folding, stability and free energy surfaces of 16 systems with 12 to 76 nucleotides of increasingly complex architectures, ranging from monomers to dimers, using a total of 850 \mus of simulation time.
[ { "type": "R", "before": "model, developed in recent years to address the question of RNA folding", "after": "RNA model", "start_char_pos": 41, "end_char_pos": 112 }, { "type": "R", "before": "Earlier versions of the model predicted simple folds such as hairpins and double helices. Important modifications in the force field now allow us to treat a much larger variety of structures thanks to the possibility of one base to form multiple contacts and", "after": "Using a reduced set of particles and detailed interactions accounting for base-pairing and stacking we show that", "start_char_pos": 192, "end_char_pos": 450 }, { "type": "R", "before": "pairings, which are essential for the formation and the stability of structures such as pseudoknots, multiple helices, and the complex architecturesof riboswitches", "after": "and multiple base interactions are necessary to capture the full physical behavior of complex RNAs. In this paper we give a full account of the model and we present results on the folding, stability and free energy surfaces of 16 systems with 12 to 76 nucleotides of increasingly complex architectures, ranging from monomers to dimers, using a total of 850", "start_char_pos": 465, "end_char_pos": 628 }, { "type": "A", "before": null, "after": "simulation time", "start_char_pos": 633, "end_char_pos": 633 } ]
[ 0, 191, 281 ]
1404.0601
1
In Figueroa-L\'opez et al. (2013), a second-order approximation for at-the-money (ATM) option prices is derived for a large class of exponential L\'evy models, with or without a Brownian component. The purpose of this article is twofold. First, we relax the regularity conditions imposed in Figueroa-L\'opez et al. (2013) on the L\'evy density to the weakest possible conditions for such an expansion to make sense. Second, we show that the formulas extend both to the case of "close-to-the-money" strikes and to the case where the continuous Brownian component is replaced by an independent stochastic volatility process with leverage.
In Figueroa-L\'opez et al. (2013), a second-order approximation for at-the-money (ATM) option prices is derived for a large class of exponential L\'evy models, with or without a Brownian component. The purpose of this article is twofold. First, we relax the regularity conditions imposed in Figueroa-L\'opez et al. (2013) on the L\'evy density to the weakest possible conditions for such an expansion to be well defined. Second, we show that the formulas extend both to the case of "close-to-the-money" strikes and to the case where the continuous Brownian component is replaced by an independent stochastic volatility process with leverage.
[ { "type": "R", "before": "make sense", "after": "be well defined", "start_char_pos": 404, "end_char_pos": 414 } ]
[ 0, 197, 237, 416 ]
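For intuition about ATM expansions of this kind, the pure-diffusion (Black-Scholes) special case has the well-known leading-order behaviour C_ATM ≈ S0 * sigma * sqrt(T / (2*pi)). The snippet compares it with the exact price; it illustrates only the flavour of short-maturity ATM asymptotics, not the paper's exponential-Levy formulas.

```python
from math import erf, sqrt, pi

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_atm_call(S0, sigma, T):
    """Exact Black-Scholes at-the-money call (K = S0, zero rates):
    C = S0 * (Phi(sigma*sqrt(T)/2) - Phi(-sigma*sqrt(T)/2))."""
    d = 0.5 * sigma * sqrt(T)
    return S0 * (norm_cdf(d) - norm_cdf(-d))

def bs_atm_leading(S0, sigma, T):
    """Leading-order short-maturity approximation S0*sigma*sqrt(T/(2*pi))."""
    return S0 * sigma * sqrt(T / (2.0 * pi))

S0, sigma = 100.0, 0.2
for T in (1 / 252, 1 / 52, 1 / 12):
    print(f"T={T:.4f}  exact={bs_atm_call(S0, sigma, T):.4f}  "
          f"approx={bs_atm_leading(S0, sigma, T):.4f}")
```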
1404.0763
1
Topological constraints can affect both the equilibrium and the dynamics of polymer systems, and can play a role in organization in the cell. Despite many theoretical conjectures, the effects of topological constraints on a single compact polymer have not been systematically studied. Here we use simulations to address this longstanding problem and find that topological constraints create a new equilibrium state of a globular polymer. In this state, which resembles the conjectured fractal (crumpled) globule, subchains of the polymer form largely unknotted and asymptotically compact crumples.
Topological constraints can affect both equilibrium and dynamic properties of polymer systems, and can play a role in the organization of chromosomes. Despite many theoretical studies, the effects of topological constraints on the equilibrium state of a single compact polymer have not been systematically studied. Here we use simulations to address this longstanding problem. We find that sufficiently long unknotted polymers differ from knotted ones in the spatial and topological states of their subchains. The unknotted globule has subchains that are mostly unknotted and form asymptotically compact crumples, R_G(s) \sim s^{1/3} [Grosberg et al., Journal de Physique, 1988, 49, 2095], but differs from the idealized hierarchy of self-similar, isolated and compact crumples.
[ { "type": "R", "before": "dynamics", "after": "dynamic properties", "start_char_pos": 56, "end_char_pos": 64 }, { "type": "R", "before": "in the cell. Despite of many theoretical conjectures,", "after": "of chromosomes. Despite many theoretical studies, the", "start_char_pos": 121, "end_char_pos": 174 }, { "type": "A", "before": null, "after": "the equilibrium state of", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": "and find that topological constraints create a new equilibrium state of a globular polymer. In this state, which resembles the conjectured fractal (crumpled ) globule", "after": ". We find that sufficiently long unknotted polymers differ from knotted ones in the spatial and topological states of their subchains. The unknotted globule has subchains that are mostly unknotted and form asymptotically compact R_G(s) \\sim s^{1/3", "start_char_pos": 338, "end_char_pos": 504 }, { "type": "A", "before": null, "after": "Grosberg et al., Journal de Physique, 1988", "start_char_pos": 505, "end_char_pos": 505 }, { "type": "R", "before": "subchains of a polymer form largely unknotted and asymptotically", "after": "49, 2095", "start_char_pos": 508, "end_char_pos": 572 }, { "type": "A", "before": null, "after": ", but differs from its idealized hierarchy of self-similar, isolated and", "start_char_pos": 573, "end_char_pos": 573 } ]
[ 0, 133, 276, 429 ]
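The subchain statistic behind the R_G(s) ~ s^{1/3} claim is easy to compute from any conformation. The sketch below measures mean subchain gyration radii; the toy chain is a plain random walk, which scales as s^{1/2} rather than the crumpled 1/3, and simply stands in for real simulation coordinates.

```python
import numpy as np

def gyration_radius(coords):
    """Radius of gyration of a set of 3D monomer coordinates."""
    r = np.asarray(coords, float)
    return float(np.sqrt(((r - r.mean(axis=0)) ** 2).sum(axis=1).mean()))

def mean_subchain_rg(coords, s):
    """Mean R_G over all contiguous subchains of s monomers; its scaling
    with s distinguishes crumpled (s^{1/3}) from ideal (s^{1/2}) globules."""
    r = np.asarray(coords, float)
    return float(np.mean([gyration_radius(r[i:i + s])
                          for i in range(len(r) - s + 1)]))

rng = np.random.default_rng(1)
chain = np.cumsum(rng.standard_normal((2000, 3)), axis=0)  # toy random walk
for s in (10, 40, 160, 640):
    print(s, round(mean_subchain_rg(chain, s), 2))
```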
1404.1027
1
Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for organisms, which unlike everyday computers operate at very low energies. In this paper, we establish a general framework to the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group, and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing, frequency response, and signal amplification.
Biological sensory systems react to changes in their surroundings. They are characterized by fast response and slow adaptation to varying environmental cues. Insofar as sensory adaptive systems map environmental changes to changes of their internal degrees of freedom, they can be regarded as computational devices manipulating information. Landauer established that information is ultimately physical, and its manipulation subject to the entropic and energetic bounds of thermodynamics. Thus the fundamental costs of biological sensory adaptation can be elucidated by tracking how the information the system has about its environment is altered. These bounds are particularly relevant for organisms, which unlike everyday computers operate at very low energies. In this paper, we establish a general framework for the thermodynamics of information processing in sensing. With it, we quantify how during sensory adaptation information about the past is erased, while information about the present is gathered. This process produces entropy larger than the amount of old information erased and has an energetic cost bounded by the amount of new information written to memory. We apply these principles to the E. coli chemotaxis pathway during binary ligand concentration changes. In this regime, we quantify the amount of information stored by each methyl group and show that receptors consume energy in the range of the information-theoretic minimum. Our work provides a basis for further inquiries into more complex phenomena, such as gradient sensing and frequency response.
[ { "type": "R", "before": "to", "after": "for", "start_char_pos": 811, "end_char_pos": 813 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1362, "end_char_pos": 1363 }, { "type": "R", "before": ", frequency response, and signal amplification", "after": "and frequency response", "start_char_pos": 1556, "end_char_pos": 1602 } ]
[ 0, 66, 157, 340, 487, 646, 762, 870, 1008, 1173, 1279, 1453 ]
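The information-theoretic minimum mentioned in this record is set by Landauer's bound: erasing one bit costs at least k_B * T * ln(2) of work. A one-line worked evaluation:

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bound(bits, T=310.0):
    """Minimum work (J) to erase `bits` bits at temperature T (K):
    k_B * T * ln(2) per bit, roughly 3e-21 J per bit at body temperature."""
    return bits * K_B * T * log(2.0)

print(f"{landauer_bound(1):.2e} J per bit at 310 K")
```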
1404.1051
1
Stock markets are efficient in the weak form in the sense that no significant autocorrelations can be identified in the returns. However, the microscopic mechanisms are unclear. We aim at understanding the impacts of order flows on the weak-form efficiency through computational experiments based on an empirical order-driven model. Three possible determinants embedded in the model are investigated, including the tail heaviness of relative prices of the placed orders characterized by the tail index \alpha_x, the degree of long memory in relative prices quantified by its Hurst index H_x, and the strength of long memory in order direction depicted by its Hurst index H_s. It is found that the degree of autocorrelations in returns (quantified by its Hurst index H_r) is negatively correlated with \alpha_x and H_x and positively correlated with H_s. In addition, the values of \alpha_x and H_x have negligible impacts on H_r, whereas H_s exhibits a dominating impact on H_r. Our results suggest that stock markets are complex adaptive systems and organize to a critical state in which the returns are not correlated.
Social and economic systems are complex adaptive systems, in which heterogeneous agents interact and evolve in an organized manner, and macroscopic laws emerge from microscopic properties. To understand the behaviors of complex systems, computational experiments based on physical and mathematical models provide useful tools. Here, we perform computational experiments using a phenomenological order-driven model called the modified Mike-Farmer (MMF) model to predict the impacts of order flows on the autocorrelations in ultra-high-frequency returns, quantified by the Hurst index H_r. Three possible determinants embedded in the MMF model are investigated, including the Hurst index H_s of order directions, the Hurst index H_x and the power-law tail index \alpha_x of the relative prices of placed orders. The computational experiments predict that H_r is negatively correlated with \alpha_x and H_x and positively correlated with H_s. In addition, the values of \alpha_x and H_x have negligible impacts on H_r, whereas H_s exhibits a dominating impact on H_r. The predictions of the MMF model on the dependence of H_r upon H_s and H_x are verified by the empirical results obtained from the order flow data of 43 Chinese stocks.
[ { "type": "R", "before": "Stock markets are efficient in the weak form in the sense that no significant autocorrelations can be identified in the returns. However, the microscopic mechanisms are unclear. We aim at understanding the impacts of order flows on the weak-form efficiency through", "after": "Social and economic systems are complex adaptive systems, in which heterogenous agents interact and evolve in a URLanized manner, and macroscopic laws emerge from microscopic properties. To understand the behaviors of complex systems,", "start_char_pos": 0, "end_char_pos": 264 }, { "type": "R", "before": "an empirical", "after": "physical and mathematical models provide a useful tools. Here, we perform computational experiments using a phenomenological", "start_char_pos": 300, "end_char_pos": 312 }, { "type": "A", "before": null, "after": "called the modified Mike-Farmer (MMF) to predict the impacts of order flows on the autocorrelations in ultra-high-frequency returns, quantified by Hurst index H_r", "start_char_pos": 332, "end_char_pos": 332 }, { "type": "A", "before": null, "after": "MMF", "start_char_pos": 379, "end_char_pos": 379 }, { "type": "R", "before": "tail heaviness of relative prices of the placed orders characterized by the", "after": "Hurst index H_s of order directions, the Hurst index H_x and the power-law", "start_char_pos": 418, "end_char_pos": 493 }, { "type": "R", "before": ", the degree of long memory in relative prices quantified by its Hurst index H_x, and the strength of long memory in order direction depicted by H_x. It is found that the degree of autocorrelations in returns (quantified by its Hurst index H_r )", "after": "of the relative prices of placed orders. The computational experiments predict that H_r", "start_char_pos": 514, "end_char_pos": 759 }, { "type": "R", "before": "Our results suggest that stock markets are complex adaptive systems and URLanize to a critical state in which the returns are not correlated", "after": "The predictions of the MMF model on the dependence of H_r upon H_s and H_x are verified by the empirical results obtained from the order flow data of 43 Chinese stocks", "start_char_pos": 968, "end_char_pos": 1108 } ]
[ 0, 128, 177, 334, 403, 842 ]
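Since every determinant in this record is a Hurst index, a self-contained estimator is useful context. The sketch below uses the standard diffusion-scaling method, std(Y_{t+tau} - Y_t) ~ tau^H for the integrated series Y; it is a generic estimator, not necessarily the one used in the paper.

```python
import numpy as np

def hurst_index(increments, lags=range(2, 100)):
    """Estimate the Hurst index H of an increment series (returns,
    signed order directions, ...) via the diffusion scaling
    std(Y_{t+tau} - Y_t) ~ tau^H of the integrated series Y,
    fitted by least squares in log-log coordinates."""
    y = np.cumsum(np.asarray(increments, float))
    taus = np.array(list(lags))
    stds = np.array([np.std(y[tau:] - y[:-tau]) for tau in taus])
    slope, _ = np.polyfit(np.log(taus), np.log(stds), 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_index(rng.standard_normal(10_000)))  # ~0.5 for i.i.d. noise
```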