doc_id (stringlengths 2-10) | revision_depth (stringclasses, 5 values) | before_revision (stringlengths 3-309k) | after_revision (stringlengths 5-309k) | edit_actions (list) | sents_char_pos (list)
---|---|---|---|---|---|
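Each row below pairs a before_revision string with an after_revision string and an edit_actions list, where every action carries a type ("R" replace, "A" add, "D" delete), the affected before/after text, and character offsets into before_revision. As a rough, hedged illustration only (not part of the dataset itself), the sketch below shows one way such actions could be applied to a source string; the field names follow the rows in this table, the helper name is hypothetical, and exact whitespace joining in the released data may differ.

```python
# Hypothetical helper, not shipped with the dataset: applies edit_actions
# ("R" = replace, "A" = add, "D" = delete) to a before_revision string.
# Offsets are assumed to index into before_revision; whitespace handling
# in the real data may differ slightly from a plain concatenation.
def apply_edit_actions(before_revision, edit_actions):
    pieces = []
    cursor = 0
    # Apply actions left to right so the character offsets stay valid.
    for action in sorted(edit_actions, key=lambda a: a["start_char_pos"]):
        start, end = action["start_char_pos"], action["end_char_pos"]
        pieces.append(before_revision[cursor:start])  # untouched span
        if action["type"] in ("R", "A"):              # replacement or insertion
            pieces.append(action["after"])
        # "D" contributes nothing; the deleted span is simply skipped.
        cursor = end
    pieces.append(before_revision[cursor:])           # trailing untouched text
    return "".join(pieces)


# Tiny self-contained usage example with made-up offsets:
before = "We study the price of Asian options."
actions = [{"type": "R", "before": "price", "after": "value",
            "start_char_pos": 13, "end_char_pos": 18}]
print(apply_edit_actions(before, actions))  # "We study the value of Asian options."
```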
1510.08161 | 2 | We study the price of Asian options with floating-strike when the underlying asset price follows a Markov-modulated (or regime-switching) geometric Brownian motion , where both the drift and diffusion coefficients depend on an independent continuous-time finite-state Markov chain. We propose an iterative procedure that converges to the option prices without recourse to solving a coupled PDE system . Our approach makes use of path properties of Brownian motion and the Fixed-Point Theorem. | We study the price of Asian options with floating-strike when the underlying asset price follows a Markov-modulated (or regime-switching) geometric Brownian motion and both the interest rate and diffusion coefficient depend on an independent continuous-time finite-state Markov chain. We propose an iterative procedure that converges to the option prices without recourse to solving a coupled PDE system . The method can also be applied to fixed-strike Asian options . Our approach makes use of path properties of Brownian motion and the Fixed-Point Theorem. | [
{
"type": "R",
"before": ", where both the drift and diffusion coefficients",
"after": "and both the interest rate and diffusion coefficient",
"start_char_pos": 164,
"end_char_pos": 213
},
{
"type": "A",
"before": null,
"after": ". The method can also be applied to fixed-strike Asian options",
"start_char_pos": 401,
"end_char_pos": 401
}
]
| [
0,
281,
403
]
|
1510.08161 | 3 | We study the price of Asian options with floating-strike when the underlying asset price follows a Markov-modulated (or regime-switching ) geometric Brownian motion and both the interest rate and diffusion coefficient depend on an independent continuous-time finite-state Markov chain. We propose an iterative procedure that converges to the option prices without recourse to solving a coupled PDE system. The method can also be applied to fixed-strike Asian options. Our approach makes use of path properties of Brownian motion and the Fixed-Point Theorem . | We characterize the price of an Asian option, a financial contract, as a fixed-point of a non-linear operator. In recent years, there has been interest in incorporating changes of regime into the parameters describing the evolution of the underlying asset price, namely the interest rate and the volatility, to model sudden exogenous events in the economy. Asian options are particularly interesting because the payoff depends on the integrated asset price. We study the case of both floating- and fixed-strike Asian call options with arithmetic averaging when the asset follows a regime-switching geometric Brownian motion with coefficients that depend on a Markov chain. The typical approach to finding the value of a financial option is to solve an associated system of coupled partial differential equations. Alternatively, we propose an iterative procedure that converges to the value of this contract with geometric rate using a classical fixed-point theorem . | [
{
"type": "R",
"before": "study",
"after": "characterize",
"start_char_pos": 3,
"end_char_pos": 8
},
{
"type": "R",
"before": "Asian options with floating-strike when the underlying asset price follows a Markov-modulated (or",
"after": "an Asian option, a financial contract, as a fixed-point of a non-linear operator. In recent years, there has been interest in incorporating changes of regime into the parameters describing the evolution of the underlying asset price, namely the interest rate and the volatility, to model sudden exogenous events in the economy. Asian options are particularly interesting because the payoff depends on the integrated asset price. We study the case of both floating- and fixed-strike Asian call options with arithmetic averaging when the asset follows a",
"start_char_pos": 22,
"end_char_pos": 119
},
{
"type": "D",
"before": ")",
"after": null,
"start_char_pos": 137,
"end_char_pos": 138
},
{
"type": "R",
"before": "and both the interest rate and diffusion coefficient depend on an independent continuous-time finite-state",
"after": "with coefficients that depend on a",
"start_char_pos": 165,
"end_char_pos": 271
},
{
"type": "R",
"before": "We",
"after": "The typical approach to finding the value of a financial option is to solve an associated system of coupled partial differential equations. Alternatively, we",
"start_char_pos": 286,
"end_char_pos": 288
},
{
"type": "R",
"before": "option prices without recourse to solving a coupled PDE system. The method can also be applied to fixed-strike Asian options. Our approach makes use of path properties of Brownian motion and the Fixed-Point Theorem",
"after": "value of this contract with geometric rate using a classical fixed-point theorem",
"start_char_pos": 342,
"end_char_pos": 556
}
]
| [
0,
285,
405,
467
]
|
1510.08237 | 1 | Motivation: Prediction of phenotypes from high-dimensional data is a crucial task in precision biology and medicine. Many technologies employ genomic biomarkers to characterize phenotypes. However, such elements are not sufficient to explain the underlying biology. To improve this, pathway analysis techniques have been proposed. Nevertheless, such methods have shown lack of precision in phenotypes classification. Results: Here we propose a novel methodology called MITHrIL (Mirna enrIched paTHway Impact anaLysis) for the analysis of signaling pathways . MITHrIL extends pathways by adding missing regulatory elements, such as microRNAs, and their interactions with genes. The method takes as input the expression values of genes and/or microRNAs and returns a list of pathways sorted according to their deregulation degee , together with the corresponding statistical significance (p-values). Our analysis shows that MITHrIL outperforms its competitors even in the worst case. In addition, our method is able to correctly classify sets of tumor samples drawn from TCGA (overall error rate equal to 0.90\%) . Availability: MITHrIL is freely available at the following URL: URL | Motivation: Prediction of phenotypes from high-dimensional data is a crucial task in precision biology and medicine. Many technologies employ genomic biomarkers to characterize phenotypes. However, such elements are not sufficient to explain the underlying biology. To improve this, pathway analysis techniques have been proposed. Nevertheless, such methods have shown lack of accuracy in phenotypes classification. Results: Here we propose a novel methodology called MITHrIL (Mirna enrIched paTHway Impact anaLysis) for the analysis of signaling pathways , which has built on top of the work of Tarca et al., 2009. MITHrIL extends pathways by adding missing regulatory elements, such as microRNAs, and their interactions with genes. The method takes as input the expression values of genes and/or microRNAs and returns a list of pathways sorted according to their deregulation degree , together with the corresponding statistical significance (p-values). Our analysis shows that MITHrIL outperforms its competitors even in the worst case. In addition, our method is able to correctly classify sets of tumor samples drawn from TCGA . Availability: MITHrIL is freely available at the following URL: URL | [
{
"type": "R",
"before": "precision",
"after": "accuracy",
"start_char_pos": 377,
"end_char_pos": 386
},
{
"type": "R",
"before": ".",
"after": ", which has built on top of the work of Tarca et al., 2009.",
"start_char_pos": 557,
"end_char_pos": 558
},
{
"type": "R",
"before": "degee",
"after": "degree",
"start_char_pos": 821,
"end_char_pos": 826
},
{
"type": "D",
"before": "(overall error rate equal to 0.90\\%)",
"after": null,
"start_char_pos": 1074,
"end_char_pos": 1110
}
]
| [
0,
116,
188,
265,
330,
416,
676,
897,
981,
1112
]
|
1510.08299 | 1 | Unlike traditional materials, living cells actively generate forces at the molecular scale that change their structure and mechanical properties. This nonequilibrium activity is essential for cellular function, and drives processes such as cell division. Single molecule studies have uncovered the detailed force kinetics of isolated motor proteins in-vitro , however their behavior in-vivo has been elusivedue to the complex environment inside the cell . Here, we quantify active force generation in living oocytesusing in-vivo optical trapping and laser interferometry of endogenous vesicles. We integrate an experimental and theoretical framework to connect mesoscopic measurements of nonequilibrium properties to the underlying molecular-scale force kinetics . Our results show that force generation by myosin-V drives the cytoplasmic-skeleton out-of-equilibrium (at frequencies below 300 Hz) and actively softens the environment. In vivo myosin-V activity generates a forceof F \sim 0.4 pN, with a power-stroke of length \Delta x \sim 20 nm and duration \tau \sim 300 \mus , that drives vesicle motion at v_v \sim 320 nm/s. This framework is widely applicable to quantify nonequilibrium properties of living cells and other soft active materials . | Active diffusion of intracellular components is emerging as an important process in cell biology. This process is mediated by complex assemblies of molecular motors and cytoskeletal filaments that drive force generation in the cytoplasm and facilitate enhanced motion. The kinetics of molecular motors have been precisely characterized in-vitro by single molecule approaches, however, their in-vivo behavior has remained elusive . Here, we study the myosin-V driven active diffusion of vesicles in mouse oocytes, where this process plays a key role in nuclear positioning during development, and combine an experimental and theoretical framework to extract molecular-scale force kinetics in-vivo (motor force, power-stroke , and velocity). We find that myosin-V induces rapid kicks of duration \tau \sim 300 \mus resulting in an average force of F \sim 0.4 pN on vesicles. Our results reveal that measuring in-vivo active fluctuations allows extraction of the underlying molecular motor activity and demonstrates a widely applicable mesoscopic framework to access molecular-scale force kinetics . | [
{
"type": "R",
"before": "Unlike traditional materials, living cells actively generate forces at the molecular scale that change their structure and mechanical properties. This nonequilibrium activity is essential for cellular function, and drives processes such as cell division. Single molecule studies have uncovered the detailed force kinetics of isolated motor proteins",
"after": "Active diffusion of intracellular components is emerging as an important process in cell biology. This process is mediated by complex assemblies of molecular motors and cytoskeletal filaments that drive force generation in the cytoplasm and facilitate enhanced motion. The kinetics of molecular motors have been precisely characterized",
"start_char_pos": 0,
"end_char_pos": 348
},
{
"type": "R",
"before": ", however their behavior",
"after": "by single molecule approaches, however, their",
"start_char_pos": 358,
"end_char_pos": 382
},
{
"type": "R",
"before": "has been elusivedue to the complex environment inside the cell",
"after": "behavior has remained elusive",
"start_char_pos": 391,
"end_char_pos": 453
},
{
"type": "R",
"before": "quantify active force generation in living oocytesusing in-vivo optical trapping and laser interferometry of endogenous vesicles. We integrate",
"after": "study the myosin-V driven active diffusion of vesicles in mouse oocytes, where this process plays a key role in nuclear positioning during development, and combine",
"start_char_pos": 465,
"end_char_pos": 607
},
{
"type": "R",
"before": "connect mesoscopic measurements of nonequilibrium properties to the underlying",
"after": "extract",
"start_char_pos": 653,
"end_char_pos": 731
},
{
"type": "R",
"before": ". Our results show that force generation by myosin-V drives the cytoplasmic-skeleton out-of-equilibrium (at frequencies below 300 Hz) and actively softens the environment. In vivo myosin-V activity generates a forceof F \\sim 0.4 pN, with a",
"after": "in-vivo (motor force,",
"start_char_pos": 763,
"end_char_pos": 1002
},
{
"type": "R",
"before": "of length \\Delta x \\sim 20 nm and",
"after": ", and velocity). We find that myosin-V induces rapid kicks of",
"start_char_pos": 1016,
"end_char_pos": 1049
},
{
"type": "R",
"before": ", that drives vesicle motion at v_v \\sim 320 nm/s. This framework is widely applicable to quantify nonequilibrium properties of living cells and other soft active materials",
"after": "resulting in an average force of F \\sim 0.4 pN on vesicles. Our results reveal that measuring in-vivo active fluctuations allows extraction of the underlying molecular motor activity and demonstrates a widely applicable mesoscopic framework to access molecular-scale force kinetics",
"start_char_pos": 1078,
"end_char_pos": 1250
}
]
| [
0,
145,
254,
455,
594,
764,
934,
1128
]
|
1510.08299 | 2 | Active diffusion of intracellular components is emerging as an important process in cell biology. This process is mediated by complex assemblies of molecular motors and cytoskeletal filaments that drive force generation in the cytoplasm and facilitate enhanced motion. The kinetics of molecular motors have been precisely characterized in-vitro by single molecule approaches, however, their in-vivo behavior has remained elusive. Here, we study the myosin-V driven active diffusion of vesicles in mouse oocytes, where this process plays a key role in nuclear positioning during development, and combine an experimental and theoretical framework to extract molecular-scale force kinetics in-vivo (motor force, power-stroke, and velocity) . We find that myosin-V induces rapid kicks of duration \tau \sim 300 \mus resulting in an average force of F \sim 0.4 pN on vesicles . Our results reveal that measuring in-vivo active fluctuations allows extraction of the underlying molecular motor activity and demonstrates a widely applicable mesoscopic framework to access molecular-scale force kinetics. | Active diffusion of intracellular components is emerging as an important process in cell biology. This process is mediated by complex assemblies of molecular motors and cytoskeletal filaments that drive force generation in the cytoplasm and facilitate enhanced motion. The kinetics of molecular motors have been precisely characterized in-vitro by single molecule approaches, however, their in-vivo behavior remains elusive. Here, we study the active diffusion of vesicles in mouse oocytes, where this process plays a key role in nuclear positioning during development, and combine an experimental and theoretical framework to extract molecular-scale force kinetics ( force, power-stroke, and velocity) of the in-vivo active process. Assuming a single dominant process, we find that the nonequilibrium activity induces rapid kicks of duration \tau \sim 300 \mus resulting in an average force of F \sim 0.4 pN on vesicles in in-vivo oocytes, remarkably similar to the kinetics of in-vitro myosin-V . Our results reveal that measuring in-vivo active fluctuations allows extraction of the molecular-scale activity in agreement with single-molecule studies and demonstrates a mesoscopic framework to access force kinetics. | [
{
"type": "R",
"before": "has remained",
"after": "remains",
"start_char_pos": 408,
"end_char_pos": 420
},
{
"type": "D",
"before": "myosin-V driven",
"after": null,
"start_char_pos": 449,
"end_char_pos": 464
},
{
"type": "R",
"before": "in-vivo (motor",
"after": "(",
"start_char_pos": 687,
"end_char_pos": 701
},
{
"type": "R",
"before": ". We find that myosin-V",
"after": "of the in-vivo active process. Assuming a single dominant process, we find that the nonequilibrium activity",
"start_char_pos": 737,
"end_char_pos": 760
},
{
"type": "A",
"before": null,
"after": "in in-vivo oocytes, remarkably similar to the kinetics of in-vitro myosin-V",
"start_char_pos": 871,
"end_char_pos": 871
},
{
"type": "R",
"before": "underlying molecular motor activity",
"after": "molecular-scale activity in agreement with single-molecule studies",
"start_char_pos": 961,
"end_char_pos": 996
},
{
"type": "D",
"before": "widely applicable",
"after": null,
"start_char_pos": 1016,
"end_char_pos": 1033
},
{
"type": "D",
"before": "molecular-scale",
"after": null,
"start_char_pos": 1065,
"end_char_pos": 1080
}
]
| [
0,
97,
268,
429,
738,
873
]
|
1510.08439 | 1 | We consider a stochastic control problem for a class of nonlinear kernels. More precisely, our problem of interest consists in the optimization , over a set of possibly non-dominated probability measures, of solutions of backward stochastic differential equations (BSDEs). Since BSDEs are non-linear generalizations of the traditional (linear) expectations, this problem can be understood as stochastic control of a family of nonlinear expectations, or equivalently of nonlinear kernels. Our first main contribution is to prove a dynamic pro- gramming principle for this control problem in an abstract setting, which we then use to provide a semimartingale characterization of the value function. We next explore several applications of our results. We first obtain a wellposedness result for second order BSDEs (as introduced in [ 76 ]) which does not require any regularity assumption on the terminal condition and the generator. Then we prove a non-linear optional decomposition in a robust setting, extending recent results of [ 63 ], which we then use to obtain a superhedging duality in uncertain, incomplete and non-linear financial markets. Finally, we relate, under addi- tional regularity assumptions, the value function to a viscosity solution of an appropriate path-dependent partial differential equation (PPDE). | We consider a stochastic control problem for a class of nonlinear kernels. More precisely, our problem of interest consists in the optimisation , over a set of possibly non-dominated probability measures, of solutions of backward stochastic differential equations (BSDEs). Since BSDEs are nonlinear generalisations of the traditional (linear) expectations, this problem can be understood as stochastic control of a family of nonlinear expectations, or equivalently of nonlinear kernels. Our first main contribution is to prove a dynamic programming principle for this control problem in an abstract setting, which we then use to provide a semi-martingale characterisation of the value function. We next explore several applications of our results. We first obtain a wellposedness result for second order BSDEs (as introduced in [ 86 ]) which does not require any regularity assumption on the terminal condition and the generator. Then we prove a nonlinear optional decomposition in a robust setting, extending recent results of [ 71 ], which we then use to obtain a super-hedging duality in uncertain, incomplete and nonlinear financial markets. Finally, we relate, under additional regularity assumptions, the value function to a viscosity solution of an appropriate path-dependent partial differential equation (PPDE). | [
{
"type": "R",
"before": "optimization",
"after": "optimisation",
"start_char_pos": 131,
"end_char_pos": 143
},
{
"type": "R",
"before": "non-linear generalizations",
"after": "nonlinear generalisations",
"start_char_pos": 289,
"end_char_pos": 315
},
{
"type": "R",
"before": "pro- gramming",
"after": "programming",
"start_char_pos": 538,
"end_char_pos": 551
},
{
"type": "R",
"before": "semimartingale characterization",
"after": "semi-martingale characterisation",
"start_char_pos": 642,
"end_char_pos": 673
},
{
"type": "R",
"before": "76",
"after": "86",
"start_char_pos": 832,
"end_char_pos": 834
},
{
"type": "R",
"before": "non-linear",
"after": "nonlinear",
"start_char_pos": 948,
"end_char_pos": 958
},
{
"type": "R",
"before": "63",
"after": "71",
"start_char_pos": 1033,
"end_char_pos": 1035
},
{
"type": "R",
"before": "superhedging",
"after": "super-hedging",
"start_char_pos": 1069,
"end_char_pos": 1081
},
{
"type": "R",
"before": "non-linear",
"after": "nonlinear",
"start_char_pos": 1119,
"end_char_pos": 1129
},
{
"type": "R",
"before": "addi- tional",
"after": "additional",
"start_char_pos": 1175,
"end_char_pos": 1187
}
]
| [
0,
74,
272,
487,
696,
749,
931,
1148
]
|
1510.08729 | 1 | In any physically constructible network there is a structure-function relationship between the geometry of the network and its dynamics. The network's geometry, i.e. its physical structure, constrains and bounds signaling and the flow of information through the network. In this paper we explore how the physical geometry of a network constrains and ultimately determines its dynamics. We construct a formal theoretical framework of the relationship between network structure and function and how information flows through a network. We show how a strictly local process at the scale of individual node pairs directly affects the behavior of the system at the whole network scale. Individual nodes responding to information from the upstream nodes it is connected to produce the observable emergent dynamics of the entire networkat a global scale, independent of and without any knowledge of what all the other nodes in the network may be doing. We then provide empirical evidence that at least some important examples of both naturally occurring and engineered networks are capable of approaching a state of optimal dynamical efficiency, one of the key results of the theory. While we progressively build up to what this formally means, informally it means that these networks have evolved or have been designed to optimize how they are able to handle the processing of information by matching the dynamical requirements of individual nodes to the flow of information (i.e. signals) between nodes in the network . In different ways, we use the theory to investigate properties of pyramidal neurons in biological neural networks in the visual cortex, the prevalence of the small world network topology, and the internet router network . | The functional and computational power of a network emerges as a property of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such a way that they produce emergent global network dynamics? There is in general a lack of sufficient theory capable of providing insights into universal physical principles that bridge individual node dynamics to the behavior of the network as a whole in a deterministic way. Here, we show that in any physically constructible geometric network, the geometry of the network constrains and bounds signaling and the flow of information through the network, and develop a theory derived from foundational principles of neural signaling that can both describe and predict the dynamics of geometric networks from considerations of node dynamics. We provide empirical evidence that at least some important examples of widely different naturally occurring and engineered networks , specifically, the prevalence of the small world network topology, axonal branching of pyramidal neurons in the visual cortex, and the internet router network , are capable of approaching a state of optimal dynamical efficiency, a concept that is derived from the theory. The framework we develop has a wide range of (non-biological) engineering and neurophysiological applications . | [
{
"type": "R",
"before": "In any physically constructible network there is a structure-function relationship between the geometry of the network and its dynamics. The network's geometry, i.e. its physical structure, constrains and bounds signaling and the flow of information through the network. In this paper we explore how the physical geometry of a network constrains and ultimately determines its dynamics. We construct a formal theoretical framework of the relationship between network structure and function and how information flows through a network. We show how a strictly local process at the scale of individual node pairs directly affects",
"after": "The functional and computational power of a network emerges as a property of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such a way that they produce emergent global network dynamics? There is in general a lack of sufficient theory capable of providing insights into universal physical principles that bridge individual node dynamics to",
"start_char_pos": 0,
"end_char_pos": 625
},
{
"type": "R",
"before": "system at the whole network scale. Individual nodes responding to information from the upstream nodes it is connected to produce the observable emergent dynamics of the entire networkat a global scale, independent of and without any knowledge of what all the other nodes in the network may be doing. We then",
"after": "network as a whole in a deterministic way. Here, we show that in any physically constructible geometric network, the geometry of the network constrains and bounds signaling and the flow of information through the network, and develop a theory derived from foundational principles of neural signaling that can both describe and predict the dynamics of geometric networks from considerations of node dynamics. We",
"start_char_pos": 646,
"end_char_pos": 953
},
{
"type": "R",
"before": "both",
"after": "widely different",
"start_char_pos": 1022,
"end_char_pos": 1026
},
{
"type": "R",
"before": "are capable of approaching a state of optimal dynamical efficiency, one of the key results of the theory. While we progressively build up to what this formally means, informally it means that these networks have evolved or have been designed to optimize how they are able to handle the processing of information by matching the dynamical requirements of individual nodes to the flow of information (i.e. signals) between nodes in the network . In different ways, we use the theory to investigate properties",
"after": ", specifically, the prevalence of the small world network topology, axonal branching",
"start_char_pos": 1071,
"end_char_pos": 1577
},
{
"type": "D",
"before": "biological neural networks in",
"after": null,
"start_char_pos": 1602,
"end_char_pos": 1631
},
{
"type": "D",
"before": "the prevalence of the small world network topology,",
"after": null,
"start_char_pos": 1651,
"end_char_pos": 1702
},
{
"type": "A",
"before": null,
"after": ", are capable of approaching a state of optimal dynamical efficiency, a concept that is derived from the theory. The framework we develop has a wide range of (non-biological) engineering and neurophysiological applications",
"start_char_pos": 1735,
"end_char_pos": 1735
}
]
| [
0,
136,
270,
385,
533,
680,
945,
1176,
1514
]
|
1510.08729 | 2 | The functional and computational power of a network emerges as a property of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such a way that they produce emergent global network dynamics? There is in general a lack of sufficient theory capable of providing insights into universal physical principles that bridge individual node dynamics to the behavior of the network as a whole in a deterministic way. Here, we show that in any physically constructible geometric network, the geometry of the network constrains and bounds signaling and the flow of information through the network , and develop a theory derived from foundational principles of neural signaling that can both describe and predict the dynamics of geometric networks from considerations of node dynamics. We provide empirical evidence that at least some important examples of widely different naturally occurring and engineered networks , specifically, the prevalence of the small world network topology, axonal branching of pyramidal neurons in the visual cortex, and the internet router network, are capable of approaching a state of optimal dynamical efficiency, a concept that is derived from the theory. The framework we develop has a wide range of (non-biological) engineering and neurophysiological applications . | The functional and computational power of a network emerges as a prop- erty of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such away that they produce emergent global network dynamics? Here, we address this question by developing a generalized theoretical framework that formally describes how in any physically constructible geometric network, the geome- try of the network i.e. its physical structure, constrains and bounds signaling and the flow of information through the network . This then in turn predicts the emergent dynamics of the network. Our results illustrate how the in- terplay between strictly local geometric and temporal process at the scale of individual node pairs directly affects the behavior of the whole network. We also provide empirical evidence that at least some important examples of widely different naturally occurring and engineered networks follow these theoretical principles . | [
{
"type": "R",
"before": "property",
"after": "prop- erty",
"start_char_pos": 65,
"end_char_pos": 73
},
{
"type": "R",
"before": "a way",
"after": "away",
"start_char_pos": 290,
"end_char_pos": 295
},
{
"type": "D",
"before": "There is in general a lack of sufficient theory capable of providing insights into universal physical principles that bridge individual node dynamics to the behavior of the network as a whole in a deterministic way.",
"after": null,
"start_char_pos": 348,
"end_char_pos": 563
},
{
"type": "R",
"before": "show that",
"after": "address this question by developing a generalized theoretical framework that formally describes how",
"start_char_pos": 573,
"end_char_pos": 582
},
{
"type": "R",
"before": "geometry",
"after": "geome- try",
"start_char_pos": 638,
"end_char_pos": 646
},
{
"type": "A",
"before": null,
"after": "i.e. its physical structure,",
"start_char_pos": 662,
"end_char_pos": 662
},
{
"type": "R",
"before": ", and develop a theory derived from foundational principles of neural signaling that can both describe and predict the dynamics of geometric networks from considerations of node dynamics. We",
"after": ". This then in turn predicts the emergent dynamics of the network. Our results illustrate how the in- terplay between strictly local geometric and temporal process at the scale of individual node pairs directly affects the behavior of the whole network. We also",
"start_char_pos": 743,
"end_char_pos": 933
},
{
"type": "R",
"before": ", specifically, the prevalence of the small world network topology, axonal branching of pyramidal neurons in the visual cortex, and the internet router network, are capable of approaching a state of optimal dynamical efficiency, a concept that is derived from the theory. The framework we develop has a wide range of (non-biological) engineering and neurophysiological applications",
"after": "follow these theoretical principles",
"start_char_pos": 1063,
"end_char_pos": 1444
}
]
| [
0,
174,
347,
563,
930,
1334
]
|
1510.08729 | 3 | The functional and computational power of a network emerges as a prop- erty of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such away that they produce emergent global network dynamics? Here, we address this question by developing a generalized theoretical framework that formally describes how in any physically constructible geometric network, the geome- try of the network i.e. its physical structure, constrains and bounds signaling and the flow of information through the network. This then in turn predicts the emergent dynamics of the network. Our results illustrate how the in- terplay between strictly local geometric and temporal process at the scale of individual node pairs directly affects the behavior of the whole network. We also provide empirical evidence that at least some important examples of widely different naturally occurring and engineered networks follow these theoretical principles . | Understanding how local interactions among connected nodes in a network result in global network dynamics and behaviors remains a critical open problem in network theory. This is important both for understanding complex networks, including the brain, and for the controlled design of networks intended to achieve a specific function. Here, we describe the construction and theoretical analysis of a framework derived from canonical neurophysiological principles that models the competing dynamics of incident signals into nodes along directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal refractory dynamics and processing times of the downstream node. One of the main theoretical results is the definition of a ratio between the speed of signaling or information flow, which is bounded by the spatial geometry, and the internal time it takes for individual nodes to process incoming signals. We show that an optimal ratio is one where the speed of information propagation between connected nodes does not exceed the internal dynamic time scale of the nodes. A mismatch of this ratio leads to sub-optimal signaling and information flows in a network, and even a breakdown in signaling all together . | [
{
"type": "R",
"before": "The functional and computational power of a network emerges as a prop- erty of the system operating as a coherent whole at a global scale, not at the scale of individual nodes. A natural question to ask is what are the physical principles that allow a collection of nodes to interact in such away that they produce emergent global network dynamics? Here, we address this question by developing a generalized theoretical framework that formally describes how in any physically constructible geometric network,",
"after": "Understanding how local interactions among connected nodes in a network result in global network dynamics and behaviors remains a critical open problem in network theory. This is important both for understanding complex networks, including the brain, and for the controlled design of networks intended to achieve a specific function. Here, we describe the construction and theoretical analysis of a framework derived from canonical neurophysiological principles that models the competing dynamics of incident signals into nodes along directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and",
"start_char_pos": 0,
"end_char_pos": 508
},
{
"type": "R",
"before": "geome- try of the network i.e. its physical structure, constrains and bounds signaling and the flow of information through the network. This then in turn predicts the emergent dynamics of the network. Our results illustrate how the in- terplay between strictly local geometric and temporal process at the scale of individual node pairs directly affects the behavior of the whole network. We also provide empirical evidence that at least some important examples of widely different naturally occurring and engineered networks follow these theoretical principles",
"after": "internal refractory dynamics and processing times of the downstream node. One of the main theoretical results is the definition of a ratio between the speed of signaling or information flow, which is bounded by the spatial geometry, and the internal time it takes for individual nodes to process incoming signals. We show that an optimal ratio is one where the speed of information propagation between connected nodes does not exceed the internal dynamic time scale of the nodes. A mismatch of this ratio leads to sub-optimal signaling and information flows in a network, and even a breakdown in signaling all together",
"start_char_pos": 513,
"end_char_pos": 1073
}
]
| [
0,
176,
348,
648,
713,
900
]
|
1510.08729 | 4 | Understanding how local interactions among connected nodes in a network result in global network dynamics and behaviors remains a critical open problem in network theory. This is important both for understanding complex networks, including the brain, and for the controlled design of networks intended to achieve a specific function. Here, we describe the construction and theoretical analysis of a framework derived from canonical neurophysiological principles that models the competing dynamics of incident signals into nodesalong directed edges in a network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal refractory dynamics and processing times of the downstream node. One of the main theoretical results is the definition of a ratio between the speed of signaling or information flow, which is bounded by the spatial geometry, and the internal time it takes for individual nodes to process incoming signals. We show that an optimal ratio is one where the speed of information propagation between connected nodes does not exceed the internal dynamic time scale of the nodes. A mismatch of this ratio leads to sub-optimal signaling and information flows in a network, and even a breakdown in signaling all together . | Networks are ubiquitous throughout science and engineering. A number of methods, including some from our own group, have explored how one goes about computing or predicting the dynamics of networks given information about internal models of individual nodes and network connectivity, possibly with additional information provided by statistical or descriptive metrics that characterize the network. But what can be inferred about network dynamics when there is no knowledge or information about the internal model or dynamics of participating nodes? Here, we explore how connected subsets of nodes competitively interact in order to activate a common downstream node they connect into. We achieve this by assuming a simple set of rules borrowed from neurophysiology. The model we develop reflects a local process from which global network dynamics emerges. We call this model a competitive refractory dynanics model. It is derived from a consideration of spatial and temporal summation in biological neurons, whereby summating post synaptic potentials (PSPs) along the dendritic tree contribute towards the membrane potential at the initial segment reaching a threshold potential. We first show how the 'winning node' or set of 'winning' nodes that achieve activation of a downstream node is computable by the model. We then derive a formal definition of optimized network signaling within our framework. We define a ratio between the signaling latencies on the edges of the network and the internal time it takes individual nodes to process incoming signals. We show that an optimal ratio is one where the speed of information propagation between connected nodes does not exceed the internal dynamic time scale of the nodes. We then show how we can use these results to arrive at a unique interpretation for the prevalence of the small world network topology in natural and engineered systems . | [
{
"type": "R",
"before": "Understanding how local interactions among connected nodes in a network result in global network dynamics and behaviors remains a critical open problem in network theory. This is important both for understanding complex networks, including the brain, and for the controlled design of networks intended to achieve a specific function. Here, we describe the construction and theoretical analysis of a framework derived from canonical neurophysiological principles that models the competing dynamics of incident signals into nodesalong directed edges in",
"after": "Networks are ubiquitous throughout science and engineering. A number of methods, including some from our own group, have explored how one goes about computing or predicting the dynamics of networks given information about internal models of individual nodes and network connectivity, possibly with additional information provided by statistical or descriptive metrics that characterize the network. But what can be inferred about network dynamics when there is no knowledge or information about the internal model or dynamics of participating nodes? Here, we explore how connected subsets of nodes competitively interact in order to activate a common downstream node they connect into. We achieve this by assuming a simple set of rules borrowed from neurophysiology. The model we develop reflects a local process from which global network dynamics emerges. We call this model a competitive refractory dynanics model. It is derived from a consideration of spatial and temporal summation in biological neurons, whereby summating post synaptic potentials (PSPs) along the dendritic tree contribute towards the membrane potential at the initial segment reaching a threshold potential. We first show how the 'winning node' or set of 'winning' nodes that achieve activation of a downstream node is computable by the model. We then derive a formal definition of optimized network signaling within our framework. We define",
"start_char_pos": 0,
"end_char_pos": 550
},
{
"type": "D",
"before": "network. The framework describes the dynamics between the offset in the latencies of propagating signals, which reflect the geometry of the edges and conduction velocities, and the internal refractory dynamics and processing times of the downstream node. One of the main theoretical results is the definition of a",
"after": null,
"start_char_pos": 553,
"end_char_pos": 866
},
{
"type": "R",
"before": "speed of signaling or information flow, which is bounded by the spatial geometry,",
"after": "signaling latencies on the edges of the network",
"start_char_pos": 885,
"end_char_pos": 966
},
{
"type": "D",
"before": "for",
"after": null,
"start_char_pos": 998,
"end_char_pos": 1001
},
{
"type": "R",
"before": "A mismatch of this ratio leads to sub-optimal signaling and information flows in a network, and even a breakdown in signaling all together",
"after": "We then show how we can use these results to arrive at a unique interpretation for the prevalence of the small world network topology in natural and engineered systems",
"start_char_pos": 1214,
"end_char_pos": 1352
}
]
| [
0,
170,
333,
561,
807,
1047,
1213
]
|
1510.08931 | 1 | Calmodulin (CaM) is a ubiquitous calcium binding protein consisting of two structurally similar domains with distinct stabilities, binding affinities, and flexibilities. We present coarse grained simulations that suggest the mechanism for the domain's allosteric transitions between the open and closed conformations depend on subtle differences in the folded state topology of the two domains. Throughout a wide temperature range, the simulated transition mechanism of the N-terminal domain (nCaM) follows a two-state transition mechanism while domain opening in the C-terminal domain (cCaM) involves unfolding and refolding of the tertiary structure. The appearance of the unfolded intermediate occurs at a higher temperature in nCaM than it does in cCaM . That is, we find that cCaM unfolds more readily along the transition route than nCaM. Furthermore, unfolding and refolding of the domain significantly slows the domain opening and closing rates of cCaM , a distinct scenario which can potentially influence the mechanism of calcium binding to each domain. | Calmodulin (CaM) is a ubiquitous calcium binding protein consisting of two structurally similar domains with distinct stabilities, binding affinities, and flexibilities. We present coarse grained simulations that suggest the mechanism for the domain's allosteric transitions between the open and closed conformations depend on subtle differences in the folded state topology of the two domains. Throughout a wide temperature range, the simulated transition mechanism of the N-terminal domain (nCaM) follows a two-state transition mechanism while domain opening in the C-terminal domain (cCaM) involves unfolding and refolding of the tertiary structure. The appearance of the unfolded intermediate occurs at a higher temperature in nCaM than it does in cCaM consistent with nCaM's higher thermal stability. Under approximate physiological conditions, the simulated unfolded state population of cCaM accounts for 10\% of the population with nearly all of the sampled transitions (approximately 95\%) unfolding and refolding during the conformational change. Transient unfolding significantly slows the domain opening and closing rates of cCaM . This potentially influences the mechanism of calcium binding to each domain. | [
{
"type": "R",
"before": ". That is, we find that cCaM unfolds more readily along the transition route than nCaM. Furthermore,",
"after": "consistent with nCaM's higher thermal stability. Under approximate physiological conditions, the simulated unfolded state population of cCaM accounts for 10\\% of the population with nearly all of the sampled transitions (approximately 95\\%)",
"start_char_pos": 757,
"end_char_pos": 857
},
{
"type": "R",
"before": "of the domain",
"after": "during the conformational change. Transient unfolding",
"start_char_pos": 882,
"end_char_pos": 895
},
{
"type": "R",
"before": ", a distinct scenario which can potentially influence",
"after": ". This potentially influences",
"start_char_pos": 961,
"end_char_pos": 1014
}
]
| [
0,
169,
394,
652,
844
]
|
1511.00026 | 1 | We consider a strictly pathwise setting for Delta hedging exotic options, based on F\"ollmer's pathwise It\=o calculus. Price trajectories are d-dimensional continuous functions whose pathwise quadratic variations and covariations are determined by a given local-volatility matrix. The existence of Delta hedging strategies in this pathwise setting is established via an existence result for a recursive scheme of parabolic Cauchy problems . Our main result establishes the nonexistence of pathwise arbitrage opportunities in a class of strategies containing these Delta hedging strategies and under relatively mild conditions on the local-volatility matrix. | We consider a strictly pathwise setting for Delta hedging exotic options, based on F\"ollmer's pathwise It\=o calculus. Price trajectories are d-dimensional continuous functions whose pathwise quadratic variations and covariations are determined by a given local volatility matrix. The existence of Delta hedging strategies in this pathwise setting is established via existence results for recursive schemes of parabolic Cauchy problems and via the existence of functional Cauchy problems on path space . Our main results establish the nonexistence of pathwise arbitrage opportunities in classes of strategies containing these Delta hedging strategies and under relatively mild conditions on the local volatility matrix. | [
{
"type": "R",
"before": "local-volatility",
"after": "local volatility",
"start_char_pos": 257,
"end_char_pos": 273
},
{
"type": "R",
"before": "an existence result for a recursive scheme",
"after": "existence results for recursive schemes",
"start_char_pos": 368,
"end_char_pos": 410
},
{
"type": "A",
"before": null,
"after": "and via the existence of functional Cauchy problems on path space",
"start_char_pos": 440,
"end_char_pos": 440
},
{
"type": "R",
"before": "result establishes",
"after": "results establish",
"start_char_pos": 452,
"end_char_pos": 470
},
{
"type": "R",
"before": "a class",
"after": "classes",
"start_char_pos": 527,
"end_char_pos": 534
},
{
"type": "R",
"before": "local-volatility",
"after": "local volatility",
"start_char_pos": 635,
"end_char_pos": 651
}
]
| [
0,
119,
281,
442
]
|
1511.00182 | 1 | In this paper we provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. The proposed method is based on a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by choosing a minimal seed list as starting point for the significance test. The null model is based on random seed lists of same length as a minimum seed list which generates the subnetwork; in this random seed list the nodes have approximately the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which significantly deviate from random on an appropriate set of statistics and hence make suggestions as to which of the many network sampling methods might capture useful information for a real world protein-protein interaction network. | Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein-protein interaction network. | [
{
"type": "R",
"before": "In this paper we",
"after": "Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We",
"start_char_pos": 0,
"end_char_pos": 16
},
{
"type": "D",
"before": "Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease.",
"after": null,
"start_char_pos": 218,
"end_char_pos": 372
},
{
"type": "R",
"before": "is based on",
"after": "uses",
"start_char_pos": 393,
"end_char_pos": 404
},
{
"type": "R",
"before": "choosing",
"after": "constructing",
"start_char_pos": 520,
"end_char_pos": 528
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 552,
"end_char_pos": 552
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 643,
"end_char_pos": 643
},
{
"type": "R",
"before": "which",
"after": "that",
"start_char_pos": 679,
"end_char_pos": 684
},
{
"type": "R",
"before": "approximately",
"after": "(approximately)",
"start_char_pos": 751,
"end_char_pos": 764
},
{
"type": "R",
"before": "significantly deviate",
"after": "deviate significantly",
"start_char_pos": 884,
"end_char_pos": 905
},
{
"type": "D",
"before": "hence make suggestions as to which of the many network sampling methods",
"after": null,
"start_char_pos": 958,
"end_char_pos": 1029
}
]
| [
0,
217,
372,
428,
594,
710,
832
]
|
1511.01207 | 1 | The paper studies market models based on trajectory spaces, properties of such models are obtained without recourse to probabilistic assumptions . For a given European option, an interval of rational pricesexists under a more general condition than the usual no-arbitrage requirement. The paper develops computational results in order to evaluate the option bounds; the global minmax optimization, defining the price interval, is reduced to a local minmax optimization via dynamic programming. A general class of trajectory sets is described for which the market model introduced by Britten Jones and Neuberger is nested as a particular case. We also develop a market model based on an operational setting constraining market movements and investor's portfolio rebalances. Numerical examples are presented, the effect of the presence of arbitrage on the price bounds is illustrated. | The paper studies sub and super-replication price bounds for contingent claims defined on general trajectory based market models. No prior probabilistic or topological assumptions are placed on the trajectory space, trading is assumed to take place at a finite number of occasions but not bounded in number nor necessarily equally spaced in time . For a given option, there exists an interval bounding the set of possible fair prices; such interval exists under more general conditions than the usual no-arbitrage requirement. The paper develops a backward recursive method to evaluate the option bounds; the global minmax optimization, defining the price interval, is reduced to a local minmax optimization via dynamic programming. Trajectory sets are introduced for which existing non-probabilistic markets models are nested as a particular case. Several examples are presented, the effect of the presence of arbitrage on the price bounds is illustrated. | [
{
"type": "R",
"before": "market models based on trajectory spaces, properties of such models are obtained without recourse to probabilistic assumptions",
"after": "sub and super-replication price bounds for contingent claims defined on general trajectory based market models. No prior probabilistic or topological assumptions are placed on the trajectory space, trading is assumed to take place at a finite number of occasions but not bounded in number nor necessarily equally spaced in time",
"start_char_pos": 18,
"end_char_pos": 144
},
{
"type": "R",
"before": "European option, an interval of rational pricesexists under a more general condition",
"after": "option, there exists an interval bounding the set of possible fair prices; such interval exists under more general conditions",
"start_char_pos": 159,
"end_char_pos": 243
},
{
"type": "R",
"before": "computational results in order",
"after": "a backward recursive method",
"start_char_pos": 304,
"end_char_pos": 334
},
{
"type": "R",
"before": "A general class of trajectory sets is described for which the market model introduced by Britten Jones and Neuberger is",
"after": "Trajectory sets are introduced for which existing non-probabilistic markets models are",
"start_char_pos": 494,
"end_char_pos": 613
},
{
"type": "R",
"before": "We also develop a market model based on an operational setting constraining market movements and investor's portfolio rebalances. Numerical",
"after": "Several",
"start_char_pos": 643,
"end_char_pos": 782
}
]
| [
0,
146,
284,
365,
493,
642,
772
]
|
1511.01238 | 1 | Complex systems may have billion components making consensus formation slow and difficult. Recently several overlapping stories emerged from various disciplines, including protein structures, neuroscience and social networks, showing that fast responses to known stimuli involve a network core of few, strongly connected nodes . In unexpected situations the core may fail to provide a coherent response, thus the stimulus propagates to the periphery of the network. Here the final response is determined by a large number of weakly connected nodes mobilizing the collective memory and opinion, i. e. the slow democracy exercising the 'wisdom of crowds' . This mechanism resembles to Kahneman's "Thinking, Fast and Slow" discriminating fast, pattern-based and slow, contemplative decision making. The generality of the response also shows that democracy is neither only a moral stance nor only a decision making technique, but a very efficient general learning strategy developed by complex systems during evolution.The duality of fast core and slow majority may increase our understanding of metabolic, signaling, ecosystem, swarming or market processes, as well as may help to construct novel methods to explore unusual network responses, deep-learning neural network structures and core-periphery targeting drug design strategies. (Illustrative videos can be downloaded from here: URL | I hypothesize that re-occurring prior experience of complex systems mobilizes a fast response, whose attractor is encoded by their strongly connected network core. In contrast, responses to novel stimuli are often slow and require the weakly connected network periphery. Upon repeated stimulus, peripheral network nodes remodel the network core that encodes the attractor of the new response. This "core-periphery learning" theory reviews and generalizes the heretofore fragmented knowledge on attractor formation by neural networks, periphery-driven innovation and a number of recent reports on the adaptation of protein, neuronal and social networks. The coreperiphery learning theory may increase our understanding of signaling, memory formation, information encoding and decision-making processes. Moreover, the power of network periphery-related 'wisdom of crowds' inventing creative, novel responses indicates that deliberative democracy is a slow yet efficient learning strategy developed as the success of a billion-year evolution. | [
{
"type": "R",
"before": "Complex systems may have billion components making consensus formation slow and difficult. Recently several overlapping stories emerged from various disciplines, including protein structures, neuroscience and social networks, showing that fast responses to known stimuli involve a network core of few, strongly connected nodes . In unexpected situations the core may fail to provide a coherent response, thus the stimulus propagates to the periphery of the network. Here the final response is determined by a large number of weakly connected nodes mobilizing the collective memory and opinion, i. e. the slow democracy exercising the",
"after": "I hypothesize that re-occurring prior experience of complex systems mobilizes a fast response, whose attractor is encoded by their strongly connected network core. In contrast, responses to novel stimuli are often slow and require the weakly connected network periphery. Upon repeated stimulus, peripheral network nodes remodel the network core that encodes the attractor of the new response. This \"core-periphery learning\" theory reviews and generalizes the heretofore fragmented knowledge on attractor formation by neural networks, periphery-driven innovation and a number of recent reports on the adaptation of protein, neuronal and social networks. The coreperiphery learning theory may increase our understanding of signaling, memory formation, information encoding and decision-making processes. Moreover, the power of network periphery-related",
"start_char_pos": 0,
"end_char_pos": 633
},
{
"type": "R",
"before": ". This mechanism resembles to Kahneman's \"Thinking, Fast and Slow\" discriminating fast, pattern-based and slow, contemplative decision making. The generality of the response also shows that democracy is neither only a moral stance nor only a decision making technique, but a very efficient general",
"after": "inventing creative, novel responses indicates that deliberative democracy is a slow yet efficient",
"start_char_pos": 653,
"end_char_pos": 950
},
{
"type": "R",
"before": "by complex systems during evolution.The duality of fast core and slow majority may increase our understanding of metabolic, signaling, ecosystem, swarming or market processes, as well as may help to construct novel methods to explore unusual network responses, deep-learning neural network structures and core-periphery targeting drug design strategies. (Illustrative videos can be downloaded from here: URL",
"after": "as the success of a billion-year evolution.",
"start_char_pos": 979,
"end_char_pos": 1386
}
]
| [
0,
90,
328,
465,
654,
795,
1015,
1332
]
|
1511.01667 | 1 | Molecular dynamics (MD) simulations using a graphics processing unit (GPU) has been employed in order to determine the conformational space of the methane-thiosulfonate spin label (MTSL) attached to the activation loop of the Aurora-A kinase protein and compared with quantum mechanical (QM) methods rooted on density functional theory (DFT). MD provided a wealth of information about interactions between the MTSL and the residues of the protein and on the different motional contributions to the overall dynamics of the MTSL . Data obtained from MD were seen to be in good agreement with those obtained from QM but the dynamics of the system revealed more interactions than those observed from QMmethods . A strong correlation between the tumbling of the protein and the transitions of the X4 and X5 dihedral angles of the MTSL , was observed with a consequent effect also the distribution of the nitroxide (NO) group in the space . Theoretical EPR spectra calculated from opportunely selected MD frames showing interactions between the MTSL and residues of the protein were seen to be in good agreement with the experimental EPR spectrum , indicating a predominance of some conformational states of the full spin-labelled system. This work is a starting point for deeper experimental and theoretical studies of the rotational and translational diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity. | Classical molecular dynamics (MD) simulations , within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU) , were employed to study the dynamics of the methane-thiosulfonate spin labelled (MTSL) Aurora-A kinase activation loop in a very short time and with good quality of the sampling. The MD simulation provided a wealth of information on the interactions between MTSL and protein residues, and on the different motional contributions to the overall dynamics of the MTSL that were validated using a multifrequency electron paramagnetic resonance (EPR) approach. The latter relayed on the frequency dependence of the resolution of the fast and slow motions of the spin probe and was used to distinguish the fast internal motion of the spin label from the slow protein tumbling. Data obtained from MD were in good agreement with those obtained from quantum mechanical (QM) methods, but more interactions within the dynamics of the system were revealed than from QM . A strong correlation between the tumbling of the protein and the transitions of the X4 dihedral angle of the MTSL was observed with a consequent effect on the distribution of the nitroxide (NO) group in space and time. The theoretical EPR spectra were calculated using selected configurations of MTSL probing different micro-environments of the protein characterized by different polarity. The comparison between the theoretical and experimental 9 GHz and 94 GHz EPR spectra revealed that some fits were in good agreement with the experimental EPR spectra , indicating a predominance of some conformational states of the full spin-labelled system. This work is a starting point for deeper experimental and theoretical studies of the diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity. | [
{
"type": "R",
"before": "Molecular",
"after": "Classical molecular",
"start_char_pos": 0,
"end_char_pos": 9
},
{
"type": "R",
"before": "using a graphics",
"after": ", within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic",
"start_char_pos": 36,
"end_char_pos": 52
},
{
"type": "R",
"before": "has been employed in order to determine the conformational space",
"after": ", were employed to study the dynamics",
"start_char_pos": 75,
"end_char_pos": 139
},
{
"type": "R",
"before": "label",
"after": "labelled",
"start_char_pos": 174,
"end_char_pos": 179
},
{
"type": "D",
"before": "attached to the activation loop of the",
"after": null,
"start_char_pos": 187,
"end_char_pos": 225
},
{
"type": "R",
"before": "protein and compared with quantum mechanical (QM) methods rooted on density functional theory (DFT). MD",
"after": "activation loop in a very short time and with good quality of the sampling. The MD simulation",
"start_char_pos": 242,
"end_char_pos": 345
},
{
"type": "R",
"before": "about interactions between the MTSL and the residues of the protein",
"after": "on the interactions between MTSL",
"start_char_pos": 379,
"end_char_pos": 446
},
{
"type": "A",
"before": null,
"after": "protein residues, and",
"start_char_pos": 451,
"end_char_pos": 451
},
{
"type": "R",
"before": ".",
"after": "that were validated using a multifrequency electron paramagnetic resonance (EPR) approach. The latter relayed on the frequency dependence of the resolution of the fast and slow motions of the spin probe and was used to distinguish the fast internal motion of the spin label from the slow protein tumbling.",
"start_char_pos": 528,
"end_char_pos": 529
},
{
"type": "D",
"before": "seen to be",
"after": null,
"start_char_pos": 557,
"end_char_pos": 567
},
{
"type": "R",
"before": "QM but",
"after": "quantum mechanical (QM) methods, but more interactions within",
"start_char_pos": 611,
"end_char_pos": 617
},
{
"type": "R",
"before": "revealed more interactions than those observed from QMmethods",
"after": "were revealed than from QM",
"start_char_pos": 645,
"end_char_pos": 706
},
{
"type": "R",
"before": "and X5 dihedral angles",
"after": "dihedral angle",
"start_char_pos": 796,
"end_char_pos": 818
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 831,
"end_char_pos": 832
},
{
"type": "R",
"before": "also",
"after": "on",
"start_char_pos": 871,
"end_char_pos": 875
},
{
"type": "R",
"before": "the space . Theoretical EPR spectra calculated from opportunely selected MD frames showing interactions between the MTSL and residues",
"after": "space and time. The theoretical EPR spectra were calculated using selected configurations of MTSL probing different micro-environments",
"start_char_pos": 924,
"end_char_pos": 1057
},
{
"type": "R",
"before": "were seen to be",
"after": "characterized by different polarity. The comparison between the theoretical and experimental 9 GHz and 94 GHz EPR spectra revealed that some fits were",
"start_char_pos": 1073,
"end_char_pos": 1088
},
{
"type": "R",
"before": "spectrum",
"after": "spectra",
"start_char_pos": 1133,
"end_char_pos": 1141
},
{
"type": "D",
"before": "rotational and translational",
"after": null,
"start_char_pos": 1319,
"end_char_pos": 1347
}
]
| [
0,
342,
708,
935,
1233
]
|
1511.01667 | 3 | Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study with low computational cost and good quality of the sampling the dynamics of the methane-thiosulfonate spin label (MTSL) attached to the activation loop of the Aurora-A kinase. MD provided a wealth of information about the timescale of the different motional contributions to the overall dynamics of the spin label . These data were validated by multi-frequency continuous-wave electron paramagnetic resonance (EPR) measurements, that relying on the frequency dependence of the fast and slow motions of the spin probe were used to distinguish the fast internal motion of the spin label from slow protein tumbling. It was found that the activation loop oscillated between two conformational states separated by 7 Angstrom and the average structures obtained from the MD trajectories showed the MTSL exposed to the solvent and probing the C-lobe of the protein . The theoretical 9 and 94 GHz EPR spectra were calculated using configurations representing the interactions between MTSL and water and the tyrosine residue 208 in the C-lobe ; and the comparison with experimental EPR spectra revealed that fits successfully reproduced the experimental spectra in agreement with the MD results . | Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study with low computational cost the dynamics of the methane-thiosulfonate spin label (MTSL) attached to the activation loop of Aurora-A kinase. MD provided information about the conformational space of MTSL in the protein environment; an isotropic and uniform distribution of orientations of the spin label in space was found due to the large exposure of the activation loop to the solvent water. A hydrodynamic approach was employed to determine the rotational protein tumbling, while the internal motion of the spin probe was determined from 94 GHz measurements that reflect the fast motions . The theoretical 9 GHz EPR spectra were calculated using configurations representing interactions between MTSL and water and the tyrosine residue 208 in the C-lobe and the comparison with experimental EPR spectra produced fits that successfully reproduced the experimental spectra in agreement with the average structures obtained from the MD trajectories showing the MTSL exposed to the solvent and probing the C-lobe of the protein . | [
{
"type": "R",
"before": "and good quality of the sampling the",
"after": "the",
"start_char_pos": 206,
"end_char_pos": 242
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 334,
"end_char_pos": 337
},
{
"type": "D",
"before": "a wealth of",
"after": null,
"start_char_pos": 367,
"end_char_pos": 378
},
{
"type": "R",
"before": "timescale of",
"after": "conformational space of MTSL in the protein environment; an isotropic and uniform distribution of orientations of",
"start_char_pos": 401,
"end_char_pos": 413
},
{
"type": "D",
"before": "different motional contributions to the overall dynamics of the",
"after": null,
"start_char_pos": 418,
"end_char_pos": 481
},
{
"type": "R",
"before": ". These data were validated by multi-frequency continuous-wave electron paramagnetic resonance (EPR) measurements, that relying on the frequency dependence of the fast and slow motions of the spin probe were used to distinguish the fast",
"after": "in space was found due to the large exposure of the activation loop to the solvent water. A hydrodynamic approach was employed to determine the rotational protein tumbling, while the",
"start_char_pos": 493,
"end_char_pos": 729
},
{
"type": "R",
"before": "label from slow protein tumbling. It was found that the activation loop oscillated between two conformational states separated by 7 Angstrom and the average structures obtained from the MD trajectories showed the MTSL exposed to the solvent and probing the C-lobe of the protein",
"after": "probe was determined from 94 GHz measurements that reflect the fast motions",
"start_char_pos": 758,
"end_char_pos": 1036
},
{
"type": "D",
"before": "and 94",
"after": null,
"start_char_pos": 1057,
"end_char_pos": 1063
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 1130,
"end_char_pos": 1133
},
{
"type": "D",
"before": ";",
"after": null,
"start_char_pos": 1213,
"end_char_pos": 1214
},
{
"type": "R",
"before": "revealed that fits",
"after": "produced fits that",
"start_char_pos": 1264,
"end_char_pos": 1282
},
{
"type": "R",
"before": "MD results",
"after": "average structures obtained from the MD trajectories showing the MTSL exposed to the solvent and probing the C-lobe of the protein",
"start_char_pos": 1354,
"end_char_pos": 1364
}
]
| [
0,
354,
494,
791,
1038,
1214
]
|
1511.01667 | 4 | Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study with low computational cost the dynamics of the methane-thiosulfonate spin label (MTSL) attached to the activation loop of Aurora-A kinase. MD provided information about the conformational space of MTSL in the protein environment; an isotropic and uniform distribution of orientations of the spin label in space was found due to the large exposure of the activation loop to the solvent water. A hydrodynamic approach was employed to determine the rotational protein tumbling, while the internal motion of the spin probe was determined from 94 GHz measurements that reflect the fast motions. The theoretical 9 GHz EPR spectra were calculated using configurations representing interactions between MTSL and water and the tyrosine residue 208 in the C-lobe and the comparison with experimental EPR spectra produced fits that successfully reproduced the experimental spectra in agreement with the average structures obtained from the MD trajectories showing the MTSL exposed to the solvent and probing the C-lobe of the protein . | The understanding of kinase structure is mostly based on protein crystallography, which is limited by the requirement to trap molecules within a crystal lattice. Characterisation of the conformations of the activation loop in solution, are important to enhance the understanding of molecular processes related to diseases and to support the discovery of small molecule kinase inhibitors. In this work, molecular dynamics simulations have been employed in order to study structure and dynamics of the activation loop of the Aurora-A kinase. The main conformational states were determined using a clustering analysis routine within the AMBER software and the predominant modes of motion of the activation were determined performing a principal component analysis that revealed different degree of flexibility within the activation loop . The 9 GHz EPR spectrum of the Aurora-A kinase was measured in order to study the dynamics of the MTSL spin label attached within the activation loop, MD provided information about the different motional contributions to the overall dynamics of the MTSL and about interactions between the MTSL and the residues of the protein. Data obtained from MD were seen to be in good agreement with those obtained from QM performed in previous work and with the experimental EPR data. This work is a starting point for deeper experimental and theoretical studies of the rotational and translational diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity . | [
{
"type": "R",
"before": "Classical molecular dynamics (MD) simulations, within the AMBER program package that runs entirely on a CUDA-enabled NVIDIA graphic processing unit (GPU), were employed to study with low computational cost the",
"after": "The understanding of kinase structure is mostly based on protein crystallography, which is limited by the requirement to trap molecules within a crystal lattice. Characterisation of the conformations of the activation loop in solution, are important to enhance the understanding of molecular processes related to diseases and to support the discovery of small molecule kinase inhibitors. In this work, molecular dynamics simulations have been employed in order to study structure and",
"start_char_pos": 0,
"end_char_pos": 209
},
{
"type": "D",
"before": "methane-thiosulfonate spin label (MTSL) attached to the",
"after": null,
"start_char_pos": 226,
"end_char_pos": 281
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 301,
"end_char_pos": 301
},
{
"type": "R",
"before": "MD provided information about the conformational space of MTSL in the protein environment; an isotropic and uniform distribution of orientations of the spin label in space was found due to the large exposure of",
"after": "The main conformational states were determined using a clustering analysis routine within the AMBER software and the predominant modes of motion of the activation were determined performing a principal component analysis that revealed different degree of flexibility within",
"start_char_pos": 319,
"end_char_pos": 529
},
{
"type": "R",
"before": "to the solvent water. A hydrodynamic approach was employed to determine the rotational protein tumbling, while the internal motion of the spin probe was determined from 94 GHz measurements that reflect the fast motions. The theoretical",
"after": ". The",
"start_char_pos": 550,
"end_char_pos": 785
},
{
"type": "R",
"before": "GHz EPR spectra were calculated using configurations representing interactions between MTSL and water and the tyrosine residue 208 in the C-lobe and the comparison with experimental EPR spectra produced fits that successfully reproduced the experimental spectra in agreement with the average structures obtained from",
"after": "GHz EPR spectrum of the Aurora-A kinase was measured in order to study the dynamics of the MTSL spin label attached within the activation loop, MD provided information about the different motional contributions to the overall dynamics of the MTSL and about interactions between the MTSL and",
"start_char_pos": 788,
"end_char_pos": 1104
},
{
"type": "R",
"before": "MD trajectories showing the MTSL exposed to the solvent and probing the C-lobe of the protein",
"after": "residues of the protein. Data obtained from MD were seen to be in good agreement with those obtained from QM performed in previous work and with the experimental EPR data. This work is a starting point for deeper experimental and theoretical studies of the rotational and translational diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity",
"start_char_pos": 1109,
"end_char_pos": 1202
}
]
| [
0,
318,
409,
571,
769
]
|
1511.01667 | 5 | The understanding of kinase structure is mostly based on protein crystallography, which is limited by the requirement to trap molecules within a crystal lattice. Characterisation of the conformations of the activation loop in solution , are important to enhance the understanding of molecular processes related to diseases and to support the discovery of small molecule kinase inhibitors. In this work, molecular dynamics simulations have been employed in order to study structure and dynamics of the activation loop of the Aurora-A kinase . The main conformational states were determined using a clustering analysis routine within the AMBER software and the predominant modes of motion of the activation were determined performing a principal component analysis that revealed different degree of flexibility within the activation loop . The 9 GHz EPR spectrum of the Aurora-A kinase was measured in order to study the dynamics of the MTSL spin label attached within the activation loop , MD provided information about the different motional contributions to the overall dynamics of the MTSL and about interactions between the MTSL and the residues of the protein . Data obtained from MD were seen to be in good agreement with those obtained from QM performed in previous work and with the experimental EPR data . This work is a starting point for deeper experimental and theoretical studies of the rotational and translational diffusion properties of the Aurora-A kinase protein related to its overall tumbling and biological activity. | The understanding of kinase structure is mostly based on protein crystallography, which is limited by the requirement to trap molecules within a crystal lattice. Therefore, characterisations of the conformational dynamics of the activation loop in solution are important to enhance the understanding of molecular processes related to diseases and to support the discovery of small molecule kinase inhibitors. In this work, we demonstrated that long molecular dynamics simulations exhaustively sampled all the conformational space of the activation loop of the Aurora-A kinase and of the methane-thiosulfonate spin label, introduced into the activation loop for the electron paramagnetic measurements. MD was used to determine structural fluctuations, order parameters and rotational correlation times of the motion of the activation loop and of the MTSL. Theoretical data obtained were used as input for the calculation of the room temperature 9 GHz continuous wave EPR of the Aurora-A kinase in solution and the comparison between simulated and experimental date revealed that the motion of the protein and spin label occurred on comparable timescales . This work is a starting point for deeper experimental and theoretical studies of the rotational and translational diffusion properties of the Aurora-A kinase protein related to its biological activity. | [
{
"type": "R",
"before": "Characterisation of the conformations",
"after": "Therefore, characterisations of the conformational dynamics",
"start_char_pos": 162,
"end_char_pos": 199
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 235,
"end_char_pos": 236
},
{
"type": "A",
"before": null,
"after": "we demonstrated that long",
"start_char_pos": 403,
"end_char_pos": 403
},
{
"type": "R",
"before": "have been employed in order to study structure and dynamics",
"after": "exhaustively sampled all the conformational space",
"start_char_pos": 435,
"end_char_pos": 494
},
{
"type": "R",
"before": ". The main conformational states were determined using a clustering analysis routine within the AMBER software and the predominant modes of motion of the activation were determined performing a principal component analysis that revealed different degree of flexibility within",
"after": "and of",
"start_char_pos": 541,
"end_char_pos": 816
},
{
"type": "A",
"before": null,
"after": "methane-thiosulfonate spin label, introduced into the",
"start_char_pos": 821,
"end_char_pos": 821
},
{
"type": "R",
"before": ". The",
"after": "for the electron paramagnetic measurements. MD was used to determine structural fluctuations, order parameters and rotational correlation times of the motion of the activation loop and of the MTSL. Theoretical data obtained were used as input for the calculation of the room temperature",
"start_char_pos": 838,
"end_char_pos": 843
},
{
"type": "R",
"before": "GHz EPR spectrum of the",
"after": "GHz continuous wave EPR of the",
"start_char_pos": 846,
"end_char_pos": 869
},
{
"type": "R",
"before": "kinase was measured in order to study the dynamics of the MTSL spin label attached within the activation loop , MD provided information about the different motional contributions to the overall dynamics of the MTSL and about interactions between the MTSL and",
"after": "kinase in solution and",
"start_char_pos": 879,
"end_char_pos": 1137
},
{
"type": "R",
"before": "residues",
"after": "comparison between simulated and experimental date revealed that the motion",
"start_char_pos": 1142,
"end_char_pos": 1150
},
{
"type": "R",
"before": ". Data obtained from MD were seen to be in good agreement with those obtained from QM performed in previous work and with the experimental EPR data",
"after": "and spin label occurred on comparable timescales",
"start_char_pos": 1166,
"end_char_pos": 1313
},
{
"type": "D",
"before": "overall tumbling and",
"after": null,
"start_char_pos": 1497,
"end_char_pos": 1517
}
]
| [
0,
161,
388,
542,
839,
1315
]
|
1511.01707 | 1 | The main purpose of this tutorial is to introduce the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state space models (SSMs) . Throughout the tutorial, we develop an implementation of the PMH algorithm (and the integrated particle filter) in the statistical programming language R (similar code for MATLAB and Python is also provided on GitHub) . Moreover, we provide the reader with some intuition to why the algorithm works and discuss some solutions to numerical problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian SSM with synthetic data and a nonlinear stochastic volatility model with real-world data. We conclude the tutorial by discussing important possible improvements to the algorithm and listing suitable references for further study. | We provide a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state space models (SSMs) together with a software implementation in the statistical programming language R. Throughout this tutorial, we develop an implementation of the PMH algorithm (and the integrated particle filter) , which is distributed as the package pmhtutorial available from the CRAN repository . Moreover, we provide the reader with some intuition for how the algorithm operates and discuss some solutions to numerical problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian SSM with synthetic data and a nonlinear stochastic volatility model with real-world data. We conclude the tutorial by discussing important possible improvements to the algorithm and we also list suitable references for further study. | [
{
"type": "R",
"before": "The main purpose of this tutorial is to introduce",
"after": "We provide a gentle introduction to",
"start_char_pos": 0,
"end_char_pos": 49
},
{
"type": "R",
"before": ". Throughout the",
"after": "together with a software implementation in the statistical programming language R. Throughout this",
"start_char_pos": 162,
"end_char_pos": 178
},
{
"type": "R",
"before": "in the statistical programming language R (similar code for MATLAB and Python is also provided on GitHub)",
"after": ", which is distributed as the package pmhtutorial available from the CRAN repository",
"start_char_pos": 276,
"end_char_pos": 381
},
{
"type": "R",
"before": "to why the algorithm works",
"after": "for how the algorithm operates",
"start_char_pos": 436,
"end_char_pos": 462
},
{
"type": "R",
"before": "listing",
"after": "we also list",
"start_char_pos": 807,
"end_char_pos": 814
}
]
| [
0,
163,
383,
541,
714
]
|
1511.01707 | 2 | We provide a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state space models (SSMs) together with a software implementation in the statistical programming language R. Throughout this tutorial, we develop an implementation of the PMH algorithm (and the integrated particle filter ), which is distributed as the package pmhtutorial available from the CRAN repository. Moreover , we provide the reader with some intuition for how the algorithm operates and discuss some solutions to numerical problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian SSM with synthetic data and a nonlinear stochastic volatility model with real-world data . We conclude the tutorial by discussing important possible improvements to the algorithm and we also list suitable references for further study . | This tutorial provides a gentle introduction to the particle Metropolis-Hastings (PMH) algorithm for parameter inference in nonlinear state-space models together with a software implementation in the statistical programming language R. We employ a step-by-step approach to develop an implementation of the PMH algorithm (and the particle filter within) together with the reader. This final implementation is also available as the package pmhtutorial on the CRAN repository. Throughout the tutorial , we provide some intuition as to how the algorithm operates and discuss some solutions to problems that might occur in practice. To illustrate the use of PMH, we consider parameter inference in a linear Gaussian state-space model with synthetic data and a nonlinear stochastic volatility model with real-world data . | [
{
"type": "R",
"before": "We provide",
"after": "This tutorial provides",
"start_char_pos": 0,
"end_char_pos": 10
},
{
"type": "R",
"before": "state space models (SSMs)",
"after": "state-space models",
"start_char_pos": 122,
"end_char_pos": 147
},
{
"type": "R",
"before": "Throughout this tutorial, we",
"after": "We employ a step-by-step approach to",
"start_char_pos": 231,
"end_char_pos": 259
},
{
"type": "R",
"before": "integrated particle filter ), which is distributed",
"after": "particle filter within) together with the reader. This final implementation is also available",
"start_char_pos": 316,
"end_char_pos": 366
},
{
"type": "R",
"before": "available from",
"after": "on",
"start_char_pos": 394,
"end_char_pos": 408
},
{
"type": "R",
"before": "Moreover",
"after": "Throughout the tutorial",
"start_char_pos": 430,
"end_char_pos": 438
},
{
"type": "R",
"before": "the reader with some intuition for",
"after": "some intuition as to",
"start_char_pos": 452,
"end_char_pos": 486
},
{
"type": "D",
"before": "numerical",
"after": null,
"start_char_pos": 544,
"end_char_pos": 553
},
{
"type": "R",
"before": "SSM",
"after": "state-space model",
"start_char_pos": 676,
"end_char_pos": 679
},
{
"type": "D",
"before": ". We conclude the tutorial by discussing important possible improvements to the algorithm and we also list suitable references for further study",
"after": null,
"start_char_pos": 765,
"end_char_pos": 909
}
]
| [
0,
230,
429,
592,
766
]
|
1511.03011 | 1 | Three-dimensional protein structures usually contain regions of local order, called secondary structure, such as \alpha-helices and \beta-sheets. Secondary structure is characterized by the local rotational state of the protein backbone, quantified by two dihedral angles called \phi and \psi. Particular types of secondary structure can generally be described by a single (diffuse) location on a two-dimensional plot drawn in the space of the angles \phi and \psi, called a Ramachandran plot. By contrast, a recently-discovered nanomaterial made from peptoids, structural isomers of peptides, displays a secondary-structure motif corresponding to %DIFDELCMD < {\it %%% two regions on the Ramachandran plot [Mannige %DIFDELCMD < {\it %%% et al. ,%DIFDELCMD < {\em %%% Nature 526, 415 (2015)]. In order to describe such `higher-order' secondary structure in a compact way , we introduce here a means of describing regions on the Ramachandran plot in terms of a single %DIFDELCMD < {\em %%% Ramachandran number , {R}, which is a structurally meaningful combination of \phi and \psi. We show that the potential applications of {R} are numerous: it can be used to describe the geometric content of protein structures, and can be used to draw diagrams that reveal, at a glance, the frequency of occurrence of regular secondary structures and disordered regions in large protein datasets. We propose that {R} might be used as an order parameter for protein geometry for a wide range of applications. | Three-dimensional protein structures usually contain regions of local order, called secondary structure, such as \alpha-helices and \beta-sheets. Secondary structure is characterized by the local rotational state of the protein backbone, quantified by two dihedral angles called \phi and \psi. Particular types of secondary structure can generally be described by a single (diffuse) location on a two-dimensional plot drawn in the space of the angles \phi and \psi, called a Ramachandran plot. By contrast, a recently-discovered nanomaterial made from peptoids, structural isomers of peptides, displays a secondary-structure motif corresponding to %DIFDELCMD < {\it %%% two regions on the Ramachandran plot [Mannige %DIFDELCMD < {\it %%% et al. %DIFDELCMD < {\em %%% , Nature 526, 415 (2015)]. In order to describe such `higher-order' secondary structure in a compact way we introduce here a means of describing regions on the Ramachandran plot in terms of a single %DIFDELCMD < {\em %%% Ramachandran number , {R}, which is a structurally meaningful combination of \phi and \psi. We show that the potential applications of {R} are numerous: it can be used to describe the geometric content of protein structures, and can be used to draw diagrams that reveal, at a glance, the frequency of occurrence of regular secondary structures and disordered regions in large protein datasets. We propose that {R} might be used as an order parameter for protein geometry for a wide range of applications. | [
{
"type": "R",
"before": "two",
"after": "two",
"start_char_pos": 670,
"end_char_pos": 673
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 745,
"end_char_pos": 746
},
{
"type": "R",
"before": "Nature",
"after": ", Nature",
"start_char_pos": 768,
"end_char_pos": 774
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 871,
"end_char_pos": 872
},
{
"type": "R",
"before": "Ramachandran number",
"after": "Ramachandran number",
"start_char_pos": 989,
"end_char_pos": 1008
}
]
| [
0,
145,
293,
493,
792,
1080,
1382
]
|
1511.03159 | 1 | We identify a large class of Orlicz spaces X for which the topology \sigma(X,X_n^\sim) fails the C-property . We also apply a variant of the C-property to establish a w^*-representation theorem for proper convex increasing functionals on dual Banach lattices that satisfy a suitable version of Delbaen's Fatou property ] . | We identify a large class of Orlicz spaces X for which the topology \sigma(X,X_n^\sim) fails the C-property introduced in 7 . We also establish a variant of the C-property and use it to prove a w^*-representation theorem for proper convex increasing functionals on dual Banach lattices that satisfy a suitable version of Delbaen's Fatou property . Our results apply, in particular, to risk measures on all Orlicz spaces over 0,1] which is not L_1[0,1] . | [
{
"type": "A",
"before": null,
"after": "introduced in",
"start_char_pos": 108,
"end_char_pos": 108
},
{
"type": "A",
"before": null,
"after": "7",
"start_char_pos": 109,
"end_char_pos": 109
},
{
"type": "R",
"before": "apply",
"after": "establish",
"start_char_pos": 120,
"end_char_pos": 125
},
{
"type": "R",
"before": "to establish",
"after": "and use it to prove",
"start_char_pos": 154,
"end_char_pos": 166
},
{
"type": "A",
"before": null,
"after": ". Our results apply, in particular, to risk measures on all Orlicz spaces over",
"start_char_pos": 321,
"end_char_pos": 321
},
{
"type": "A",
"before": null,
"after": "0,1",
"start_char_pos": 322,
"end_char_pos": 322
},
{
"type": "A",
"before": null,
"after": "which is not L_1[0,1]",
"start_char_pos": 324,
"end_char_pos": 324
}
]
| [
0,
111
]
|
1511.03876 | 1 | We present in this paper a new computation principle based on the use of prior information from multiple sources for computing the premium charged to a policyholder. Under this framework , we propose alternative collective and Bayes premiums and describe some approaches to compute them. Several examples illustrates the new framework for premium computation. | We present in this paper a new premium computation principle based on the use of prior information from multiple sources for computing the premium charged to a policyholder. Under this framework , based on the use of Ordered Weighted Averaging (OWA) operators , we propose alternative collective and Bayes premiums and describe some approaches to compute them. Several examples illustrates the new framework for premium computation. | [
{
"type": "A",
"before": null,
"after": "premium",
"start_char_pos": 31,
"end_char_pos": 31
},
{
"type": "A",
"before": null,
"after": ", based on the use of Ordered Weighted Averaging (OWA) operators",
"start_char_pos": 188,
"end_char_pos": 188
}
]
| [
0,
166,
289
]
|
1511.03965 | 1 | Recent works in quantitative evolution have shown that biological structures are constrained by selected phenotypes in unexpected ways . This is also observed in simulations of gene network evolution, where complex realistic traits naturally appear even if they have not been explicitly selected . An important biological example is the absolute discrimination between different ligand "qualities", such as immune decisions based on binding times to T cell receptors (TCRs) or Fc%DIFDELCMD < \epsilonRIs%%% . In evolutionary simulations, the phenomenon of absolute discrimination is not achieved without detrimental ligand antagonism : a "dog in the manger" effect in which ligands unable to trigger response prevent agonists to do so. A priori it seems paradoxical to improve ligand discrimination in a context of increased ligand antagonism, and how such contradictory phenotypes can be disentangled is unclear. Here we establish for the first time a direct mathematical causal link between absolute discriminationand ligand antagonism.Inspired by the famous discussion by Gould and Lewontin, we thus qualify antagonism as a "phenotypic spandrel": a phenotype existing as a necessary by-product of another phenotype. We exhibit a general model for absolute discrimination , and further show how addition of proofreading steps inverts the expected hierarchy of antagonism without fully cancelling it. Phenotypic spandrels reveal the internal feedbacks and constraints structuring response in signalling pathways, in very similar way to symmetries structuring physical laws\epsilonRIs . | %DIFDELCMD < \epsilonRIs%%% We consider the general problem of absolute discrimination between categories of ligands irrespective of their concentration. An instance of this problem is immune discrimination between self and not-self. We connect this problem to biochemical adaptation, and establish that ligand antagonism - the ability of sub threshold ligands to negatively impact response - is a necessary consequence of absolute discrimination.Thus antagonism constitutes a "phenotypic spandrel": a phenotype existing as a necessary by-product of another phenotype. We exhibit a simple analytic model of absolute discrimination displaying ligand antagonism, where antagonism strength is linear in distance from threshold. This contrasts with proofreading based models, where antagonism vanishes far from threshold and thus displays an inverted hierarchy of antagonism compared to simple model . The phenotypic spandrel studied here is expected to structure many decision pathways such as immune detection mediated by TCRs and Fc\epsilonRIs . | [
{
"type": "D",
"before": "Recent works in quantitative evolution have shown that biological structures are constrained by selected phenotypes in unexpected ways . This is also observed in simulations of gene network evolution, where complex realistic traits naturally appear even if they have not been explicitly selected . An important biological example is the absolute discrimination between different ligand \"qualities\", such as immune decisions based on binding times to T cell receptors (TCRs) or Fc",
"after": null,
"start_char_pos": 0,
"end_char_pos": 479
},
{
"type": "R",
"before": ". In evolutionary simulations, the phenomenon of absolute discrimination is not achieved without detrimental ligand antagonism : a \"dog in the manger\" effect in which ligands unable to trigger response prevent agonists to do so. A priori it seems paradoxical to improve ligand discrimination in a context of increased ligand antagonism, and how such contradictory phenotypes can be disentangled is unclear. Here we establish for the first time a direct mathematical causal link between absolute discriminationand ligand antagonism.Inspired by the famous discussion by Gould and Lewontin, we thus qualify antagonism as",
"after": "We consider the general problem of absolute discrimination between categories of ligands irrespective of their concentration. An instance of this problem is immune discrimination between self and not-self. We connect this problem to biochemical adaptation, and establish that ligand antagonism - the ability of sub threshold ligands to negatively impact response - is",
"start_char_pos": 507,
"end_char_pos": 1124
},
{
"type": "A",
"before": null,
"after": "necessary consequence of absolute discrimination.Thus antagonism constitutes a",
"start_char_pos": 1127,
"end_char_pos": 1127
},
{
"type": "R",
"before": "general model for absolute discrimination , and further show how addition of proofreading steps inverts the expected",
"after": "simple analytic model of absolute discrimination displaying ligand antagonism, where antagonism strength is linear in distance from threshold. This contrasts with proofreading based models, where antagonism vanishes far from threshold and thus displays an inverted",
"start_char_pos": 1233,
"end_char_pos": 1349
},
{
"type": "R",
"before": "without fully cancelling it. Phenotypic spandrels reveal the internal feedbacks and constraints structuring response in signalling pathways, in very similar way to symmetries structuring physical laws",
"after": "compared to simple model . The phenotypic spandrel studied here is expected to structure many decision pathways such as immune detection mediated by TCRs and Fc",
"start_char_pos": 1374,
"end_char_pos": 1574
}
]
| [
0,
136,
297,
735,
913,
1038,
1219,
1402
]
|
1511.03965 | 2 | We consider the general problem of absolute discrimination between categories of ligands irrespective of their concentration. An instance of this problem is immune discrimination between self and not-self . We connect this problem to biochemical adaptation , and establish thatligand antagonism - the ability of sub threshold ligands to negatively impact response - is a necessary consequence of absolute discrimination . Thus antagonism constitutes a "phenotypic spandrel": a phenotype existing as a necessary by-product of another phenotype. We exhibit a simple analytic model of absolute discrimination displaying ligand antagonism, where antagonism strength is linear in distance from threshold. This contrasts with proofreading based models , where antagonism vanishes far from threshold and thus displays an inverted hierarchy of antagonism compared to simple model . The phenotypic spandrel studied here is expected to structure many decision pathways such as immune detection mediated by TCRs and Fc \epsilonRIs . | We consider the general problem of sensitive and specific discrimination between biochemical species. An important instance is immune discrimination between self and not-self , where it is also observed experimentally that ligands just below discrimination threshold negatively impact response, a phenomenon called antagonism. We characterize mathematically the generic properties of such discrimination, first relating it to biochemical adaptation . Then, based on basic biochemical rules, we establish that, surprisingly, antagonism is a generic consequence of any strictly specific discrimination made independently from ligand concentration . Thus antagonism constitutes a "phenotypic spandrel": a phenotype existing as a necessary by-product of another phenotype. We exhibit a simple analytic model of discrimination displaying antagonism, where antagonism strength is linear in distance from detection threshold. This contrasts with traditional proofreading based models where antagonism vanishes far from threshold and thus displays an inverted hierarchy of antagonism compared to simpler models . The phenotypic spandrel studied here is expected to structure many decision pathways such as immune detection mediated by TCRs and FC \epsilonRIs , as well as endocrine signalling/disruption . | [
{
"type": "R",
"before": "absolute discrimination between categories of ligands irrespective of their concentration. An instance of this problem",
"after": "sensitive and specific discrimination between biochemical species. An important instance",
"start_char_pos": 35,
"end_char_pos": 153
},
{
"type": "R",
"before": ". We connect this problem",
"after": ", where it is also observed experimentally that ligands just below discrimination threshold negatively impact response, a phenomenon called antagonism. We characterize mathematically the generic properties of such discrimination, first relating it",
"start_char_pos": 205,
"end_char_pos": 230
},
{
"type": "R",
"before": ", and establish thatligand antagonism - the ability of sub threshold ligands to negatively impact response - is a necessary consequence of absolute discrimination",
"after": ". Then, based on basic biochemical rules, we establish that, surprisingly, antagonism is a generic consequence of any strictly specific discrimination made independently from ligand concentration",
"start_char_pos": 257,
"end_char_pos": 419
},
{
"type": "R",
"before": "absolute discrimination displaying ligand",
"after": "discrimination displaying",
"start_char_pos": 582,
"end_char_pos": 623
},
{
"type": "A",
"before": null,
"after": "detection",
"start_char_pos": 689,
"end_char_pos": 689
},
{
"type": "A",
"before": null,
"after": "traditional",
"start_char_pos": 721,
"end_char_pos": 721
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 748,
"end_char_pos": 749
},
{
"type": "R",
"before": "simple model",
"after": "simpler models",
"start_char_pos": 861,
"end_char_pos": 873
},
{
"type": "R",
"before": "Fc",
"after": "FC",
"start_char_pos": 1007,
"end_char_pos": 1009
},
{
"type": "A",
"before": null,
"after": ", as well as endocrine signalling/disruption",
"start_char_pos": 1022,
"end_char_pos": 1022
}
]
| [
0,
125,
206,
421,
543,
700,
875
]
|
1511.04096 | 1 | We consider a limit order book, where buyers and sellers register to trade a security at specific prices. The largest price the buyers on the book are willing to pay to buy the security is called the market bid price, and the smallest price the sellers on the book are willing to receive to sell the security is called the market ask price. Market ask price is always greater than the market bid price, and these prices move upwards and downwards due to new arrivals, market trades, and cancellations. When the two prices become equal, a trade occurs, and immediately after the trade, these prices bounce back, that is, the market bid price decreases and the market ask price increases. We model these two price processes as `` bouncing geometric Brownian motions ( GBM)'': that is, the price processes evolve according to two independent GBMs between trading times . We show that, under this model, the inter-trading times follow an inverse Gaussian distribution , and the logarithmic returns between consecutive trading times follow a normal inverse Gaussian distribution. We show that the logarithmic trading price process is a renewal reward process, and that, under a suitable scaling, this renewal reward process converges to a standard Brownian motion \to \to0 . Finally, we develop a GBM asymptotic model for trading prices, and derive a simple and effective prediction formula . We illustrate the effectiveness of the prediction methods with an example using real stock price data. | We consider a limit order book, where buyers and sellers register to trade a security at specific prices. The largest price buyers on the book are willing to offer is called the market bid price, and the smallest price sellers on the book are willing to accept is called the market ask price. Market ask price is always greater than market bid price, and these prices move upwards and downwards due to new arrivals, market trades, and cancellations. We model these two price processes as " bouncing geometric Brownian motions ( GBMs)", which are defined as exponentials of two mutually reflected Brownian motions. We then modify these bouncing GBMs to construct a discrete time stochastic process of trading times and trading prices, which is parameterized by a positive parameter \delta. Under this model, it is shown that the inter-trading times are inverse Gaussian distributed , and the logarithmic returns between consecutive trading times follow a normal inverse Gaussian distribution. Our main results show that the logarithmic trading price process is a renewal reward process, and under a suitable scaling, this process converges to a standard Brownian motion as \delta\to 0. We also prove that the modified ask and bid processes approach the original bouncing GBMs as \delta\to0 . Finally, we derive a simple and effective prediction formula for trading prices, and illustrate the effectiveness of the prediction formula with an example using real stock price data. | [
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 124,
"end_char_pos": 127
},
{
"type": "R",
"before": "pay to buy the security",
"after": "offer",
"start_char_pos": 162,
"end_char_pos": 185
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 241,
"end_char_pos": 244
},
{
"type": "R",
"before": "receive to sell the security",
"after": "accept",
"start_char_pos": 280,
"end_char_pos": 308
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 381,
"end_char_pos": 384
},
{
"type": "D",
"before": "When the two prices become equal, a trade occurs, and immediately after the trade, these prices bounce back, that is, the market bid price decreases and the market ask price increases.",
"after": null,
"start_char_pos": 502,
"end_char_pos": 686
},
{
"type": "R",
"before": "``",
"after": "\"",
"start_char_pos": 725,
"end_char_pos": 727
},
{
"type": "R",
"before": "GBM)'': that is, the price processes evolve according to two independent GBMs between trading times . We show that, under",
"after": "GBMs)\", which are defined as exponentials of two mutually reflected Brownian motions. We then modify these bouncing GBMs to construct a discrete time stochastic process of trading times and trading prices, which is parameterized by a positive parameter \\delta. Under",
"start_char_pos": 766,
"end_char_pos": 887
},
{
"type": "A",
"before": null,
"after": "it is shown that",
"start_char_pos": 900,
"end_char_pos": 900
},
{
"type": "R",
"before": "follow an inverse Gaussian distribution",
"after": "are inverse Gaussian distributed",
"start_char_pos": 925,
"end_char_pos": 964
},
{
"type": "R",
"before": "We",
"after": "Our main results",
"start_char_pos": 1076,
"end_char_pos": 1078
},
{
"type": "D",
"before": "that,",
"after": null,
"start_char_pos": 1160,
"end_char_pos": 1165
},
{
"type": "D",
"before": "renewal reward",
"after": null,
"start_char_pos": 1197,
"end_char_pos": 1211
},
{
"type": "A",
"before": null,
"after": "as \\delta",
"start_char_pos": 1260,
"end_char_pos": 1260
},
{
"type": "A",
"before": null,
"after": "0. We also prove that the modified ask and bid processes approach the original bouncing GBMs as \\delta",
"start_char_pos": 1264,
"end_char_pos": 1264
},
{
"type": "D",
"before": "develop a GBM asymptotic model for trading prices, and",
"after": null,
"start_char_pos": 1283,
"end_char_pos": 1337
},
{
"type": "R",
"before": ". We",
"after": "for trading prices, and",
"start_char_pos": 1387,
"end_char_pos": 1391
},
{
"type": "R",
"before": "methods",
"after": "formula",
"start_char_pos": 1439,
"end_char_pos": 1446
}
]
| [
0,
105,
340,
501,
686,
867,
1075,
1388
]
|
1511.04314 | 1 | Financial models are studied where each asset may potentially lose value relative to any other. To this end, the paradigm of a pre-determined num\'eraire is abandoned in favour of a symmetrical point of view where all assets have equal priority. This approach yields novel versions of the Fundamental Theorems of Asset Pricing, which clarify and extend non-classical pricing formulas used in the financial community. Furthermore, conditioning on non-devaluation, each asset can serve as proper num\'eraire and a classical no-arbitrage condition can be formulated. It is shown when and how these local conditions can be aggregated to a global no-arbitrage condition . | Financial models are studied where each asset may potentially lose value relative to any other. Conditioning on non-devaluation, each asset can serve as proper num\'eraire and classical valuation rules can be formulated. It is shown when and how these local valuation rules can be aggregated to obtain global arbitrage-free valuation formulas . | [
{
"type": "R",
"before": "To this end, the paradigm of a pre-determined num\\'eraire is abandoned in favour of a symmetrical point of view where all assets have equal priority. This approach yields novel versions of the Fundamental Theorems of Asset Pricing, which clarify and extend non-classical pricing formulas used in the financial community. Furthermore, conditioning",
"after": "Conditioning",
"start_char_pos": 96,
"end_char_pos": 442
},
{
"type": "R",
"before": "a classical no-arbitrage condition",
"after": "classical valuation rules",
"start_char_pos": 510,
"end_char_pos": 544
},
{
"type": "R",
"before": "conditions",
"after": "valuation rules",
"start_char_pos": 601,
"end_char_pos": 611
},
{
"type": "R",
"before": "a global no-arbitrage condition",
"after": "obtain global arbitrage-free valuation formulas",
"start_char_pos": 633,
"end_char_pos": 664
}
]
| [
0,
95,
245,
416,
563
]
|
1511.04768 | 1 | We study optimal investment problems with transaction costs under Kahneman and Tversky's cumulative prospective theory (CPT). A CPT investor makes investment decisions in a single-period discrete time financial market consisting of one risk-free asset and one risky asset, in which trading the risky asset incurs proportional costs. The objective is to seek the optimal investment to maximize the prospect value of the investor's final wealth. We have obtained explicit optimal investment to this problem in two examples. An economic analysis is conducted to investigate the impact of the transaction costs and risk aversion on the optimal investment . | We study optimal investment problems under the framework of cumulative prospect theory (CPT). A CPT investor makes investment decisions in a single-period financial market with transaction costs. The objective is to seek the optimal investment strategy that maximizes the prospect value of the investor's final wealth. We obtain the optimal investment strategy explicitly in two examples. An economic analysis is conducted to investigate the impact of the transaction costs and risk aversion on the optimal investment strategy . | [
{
"type": "R",
"before": "with transaction costs under Kahneman and Tversky's cumulative prospective",
"after": "under the framework of cumulative prospect",
"start_char_pos": 37,
"end_char_pos": 111
},
{
"type": "R",
"before": "discrete time financial market consisting of one risk-free asset and one risky asset, in which trading the risky asset incurs proportional",
"after": "financial market with transaction",
"start_char_pos": 187,
"end_char_pos": 325
},
{
"type": "R",
"before": "to maximize",
"after": "strategy that maximizes",
"start_char_pos": 381,
"end_char_pos": 392
},
{
"type": "R",
"before": "have obtained explicit optimal investment to this problem",
"after": "obtain the optimal investment strategy explicitly",
"start_char_pos": 447,
"end_char_pos": 504
},
{
"type": "A",
"before": null,
"after": "strategy",
"start_char_pos": 651,
"end_char_pos": 651
}
]
| [
0,
125,
332,
443,
521
]
|
1511.04863 | 1 | In an incomplete market, with incompleteness stemming from stochastic factors imperfectly correlated with the underlying stocks, we derive representations of homothetic forward investment performance processes (power, exponential and logarithmic) . We develop a connection with ergodic and infinite horizon quadratic BSDE, and with a risk-sensitive control problem. We also develop a connection, for large trading horizons, with a family of traditional homothetic value function processes . | In an incomplete market, with incompleteness stemming from stochastic factors imperfectly correlated with the underlying stocks, we derive representations of homothetic (power, exponential and logarithmic) forward performance processes in factor-form using ergodic BSDE. We also develop a connection between the forward processes and infinite horizon BSDE, and , moreover, with risk-sensitive optimization. In addition, we develop a connection, for large time horizons, with a family of classical homothetic value function processes with random endowments . | [
{
"type": "D",
"before": "forward investment performance processes",
"after": null,
"start_char_pos": 169,
"end_char_pos": 209
},
{
"type": "R",
"before": ". We",
"after": "forward performance processes in factor-form using ergodic BSDE. We also",
"start_char_pos": 247,
"end_char_pos": 251
},
{
"type": "R",
"before": "with ergodic",
"after": "between the forward processes",
"start_char_pos": 273,
"end_char_pos": 285
},
{
"type": "D",
"before": "quadratic",
"after": null,
"start_char_pos": 307,
"end_char_pos": 316
},
{
"type": "R",
"before": "with a",
"after": ", moreover, with",
"start_char_pos": 327,
"end_char_pos": 333
},
{
"type": "R",
"before": "control problem. We also",
"after": "optimization. In addition, we",
"start_char_pos": 349,
"end_char_pos": 373
},
{
"type": "R",
"before": "trading",
"after": "time",
"start_char_pos": 406,
"end_char_pos": 413
},
{
"type": "R",
"before": "traditional",
"after": "classical",
"start_char_pos": 441,
"end_char_pos": 452
},
{
"type": "A",
"before": null,
"after": "with random endowments",
"start_char_pos": 489,
"end_char_pos": 489
}
]
| [
0,
248,
365
]
|
1511.04935 | 1 | Tail risk measures such as the conditional value-at-risk are useful in the context of portfolio selection for quantifying potential losses in worst cases. However, for scenario-based problems these are problematic: because the value of a tail risk measure only depends on a small subset of the support of the distribution of asset returns, traditional scenario based methods, which spread scenarios evenly across the whole support of the distribution, yield very unstable solutions unless we use a very large number scenarios . In this paper we propose a problem-driven scenario generation methodology for portfolio selection problems using a tail risk measure where the the asset returns have elliptical or near-elliptical distribution. Our approach in effect prioritizes the construction of scenarios in the areas of the distributionwhich correspond to the tail losses of feasible portfolios. The methodology is shown to work particularly well when the distribution of assets returns are positively correlated and heavy-tailed, and the performance is shown to improve as we tighten the constraints on feasible assets . | Scenario generation is the construction of a discrete random vector to represent parameters of uncertain values in a stochastic program. Most approaches to scenario generation are distribution-driven, that is, they attempt to construct a random vector which captures well in a probabilistic sense the uncertainty. On the other hand, a problem-driven approach may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. There have been only a few problem-driven approaches proposed, and these have been heuristic in nature . In this paper we propose what is, as far as we are aware, the first analytic approach to problem-driven scenario generation . This approach applies to stochastic programs with a tail risk measure , such as conditional value-at-risk. Since tail risk measures only depend on the upper tail of a distribution, standard methods of scenario generation, which typically spread there scenarios evenly across the support of the solution, struggle to adequately represent tail risk well . | [
{
"type": "R",
"before": "Tail risk measures such as the conditional value-at-risk are useful in the context of portfolio selection for quantifying potential losses in worst cases. However, for scenario-based problems these are problematic: because the value of a tail risk measure only depends on a small subset of the support of the distribution of asset returns, traditional scenario based methods, which spread scenarios evenly across the whole support of the distribution, yield very unstable solutions unless we use a very large number scenarios",
"after": "Scenario generation is the construction of a discrete random vector to represent parameters of uncertain values in a stochastic program. Most approaches to scenario generation are distribution-driven, that is, they attempt to construct a random vector which captures well in a probabilistic sense the uncertainty. On the other hand, a problem-driven approach may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. There have been only a few problem-driven approaches proposed, and these have been heuristic in nature",
"start_char_pos": 0,
"end_char_pos": 525
},
{
"type": "R",
"before": "a",
"after": "what is, as far as we are aware, the first analytic approach to",
"start_char_pos": 553,
"end_char_pos": 554
},
{
"type": "R",
"before": "methodology for portfolio selection problems using",
"after": ". This approach applies to stochastic programs with",
"start_char_pos": 590,
"end_char_pos": 640
},
{
"type": "R",
"before": "where the the asset returns have elliptical or near-elliptical distribution. Our approach in effect prioritizes the construction of scenarios in the areas of the distributionwhich correspond to the tail losses of feasible portfolios. The methodology is shown to work particularly well when the distribution of assets returns are positively correlated and heavy-tailed, and the performance is shown to improve as we tighten the constraints on feasible assets",
"after": ", such as conditional value-at-risk. Since tail risk measures only depend on the upper tail of a distribution, standard methods of scenario generation, which typically spread there scenarios evenly across the support of the solution, struggle to adequately represent tail risk well",
"start_char_pos": 661,
"end_char_pos": 1118
}
]
| [
0,
154,
527,
737,
894
]
|
1511.04935 | 2 | Scenario generation is the construction of a discrete random vector to represent parameters of uncertain values in a stochastic program. Most approaches to scenario generation are distribution-driven, that is, they attempt to construct a random vector which captures well in a probabilistic sense the uncertainty. On the other hand, a problem-driven approach may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. There have been only a few problem-driven approaches proposed, and these have been heuristic in nature. In this paper we propose what is, as far as we are aware, the first analytic approach to problem-driven scenario generation . This approach applies to stochastic programs with a tail risk measure, such as conditional value-at-risk. Since tail risk measures only depend on the upper tail of a distribution , standard methods of scenario generation, which typically spread there scenarios evenly across the support of the solution, struggle to adequately represent tail risk well . | In this paper we propose a problem-driven scenario generation approach to the single-period portfolio selection problem which use tail risk measures such as conditional value-at-risk. Tail risk measures are useful for quantifying potential losses in worst cases. However, for scenario-based problems these are problematic: because the value of a tail risk measure only depends on a small subset of the support of the distribution of asset returns, traditional scenario based methods, which spread scenarios evenly across the whole support of the distribution, yield very unstable solutions unless we use a very large number of scenarios. The proposed approach works by prioritizing the construction of scenarios in the areas of a probability distribution which correspond to the tail losses of feasible portfolios. The proposed approach can be applied to difficult instances of the portfolio selection problem characterized by high-dimensions, non-elliptical distributions of asset returns, and the presence of integer variables. It is also observed that the methodology works better as the feasible set of portfolios becomes more constrained. Based on this fact, a heuristic algorithm based on the sample average approximation method is proposed. This algorithm works by adding artificial constraints to the problem which are gradually tightened, allowing one to telescope onto high quality solutions . | [
{
"type": "D",
"before": "Scenario generation is the construction of a discrete random vector to represent parameters of uncertain values in a stochastic program. Most approaches to scenario generation are distribution-driven, that is, they attempt to construct a random vector which captures well in a probabilistic sense the uncertainty. On the other hand, a problem-driven approach may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. There have been only a few problem-driven approaches proposed, and these have been heuristic in nature.",
"after": null,
"start_char_pos": 0,
"end_char_pos": 573
},
{
"type": "R",
"before": "what is, as far as we are aware, the first analytic approach to",
"after": "a",
"start_char_pos": 599,
"end_char_pos": 662
},
{
"type": "R",
"before": ". This approach applies to stochastic programs with a tail risk measure,",
"after": "approach to the single-period portfolio selection problem which use tail risk measures",
"start_char_pos": 698,
"end_char_pos": 770
},
{
"type": "R",
"before": "Since tail risk measures only depend on the upper tail of a distribution , standard methods of scenario generation, which typically spread there",
"after": "Tail risk measures are useful for quantifying potential losses in worst cases. However, for scenario-based problems these are problematic: because the value of a tail risk measure only depends on a small subset of the support of the distribution of asset returns, traditional scenario based methods, which spread",
"start_char_pos": 806,
"end_char_pos": 950
},
{
"type": "A",
"before": null,
"after": "whole",
"start_char_pos": 979,
"end_char_pos": 979
},
{
"type": "R",
"before": "solution, struggle to adequately represent tail risk well",
"after": "distribution, yield very unstable solutions unless we use a very large number of scenarios. The proposed approach works by prioritizing the construction of scenarios in the areas of a probability distribution which correspond to the tail losses of feasible portfolios. The proposed approach can be applied to difficult instances of the portfolio selection problem characterized by high-dimensions, non-elliptical distributions of asset returns, and the presence of integer variables. It is also observed that the methodology works better as the feasible set of portfolios becomes more constrained. Based on this fact, a heuristic algorithm based on the sample average approximation method is proposed. This algorithm works by adding artificial constraints to the problem which are gradually tightened, allowing one to telescope onto high quality solutions",
"start_char_pos": 995,
"end_char_pos": 1052
}
]
| [
0,
136,
313,
469,
573,
699,
805
]
|
1511.05712 | 1 | Characterizing the link between small-scale chromatin structure and large-scale chromosome conformation is a prerequisite for understanding transcription. Yet, it remains poorly characterized. We present a simple biophysical model , where chromosomes are described in terms of folding of a chromatin sequence with alternating blocks of fibers with different thickness. We demonstrate that chromosomes undergo prominent conformational changes when the two fibers form separate domains. Conversely, when small stretches of the thinner fiber are randomly distributed, they act as impurities and conformational changes can be observed only at small length and time scales. Our results bring a limit to the possibility of detecting variations in the behavior of chromosomes due to chromatin modifications, and suggest that the debate whether chromosomes expand upon transcription, which is fueled by conflicting experimental observations, can be reconciled by examining how transcribed loci are distributed. Finally, to validate our conclusions, we compare our results to experimental FISH data . | Characterizing the link between small-scale chromatin structure and large-scale chromosome folding during interphase is a prerequisite for understanding transcription. Yet, this link remains poorly investigated. Here, we introduce a simple biophysical model where interphase chromosomes are described in terms of the folding of chromatin sequences composed of alternating blocks of fibers with different thicknesses and flexibilities, and we use it to study the influence of sequence disorder on chromosome behaviors in space and time. By employing extensive computer simulations,we thus demonstrate that chromosomes undergo noticeable conformational changes only on length-scales smaller than 10^5 basepairs and time-scales shorter than a few seconds, and we suggest there might exist effective upper bounds to the detection of chromosome URLanization in eukaryotes. We prove the relevance of our framework by modeling recent experimental FISH data on murine chromosomes . | [
{
"type": "R",
"before": "conformation",
"after": "folding during interphase",
"start_char_pos": 91,
"end_char_pos": 103
},
{
"type": "R",
"before": "it remains poorly characterized. We present",
"after": "this link remains poorly investigated. Here, we introduce",
"start_char_pos": 160,
"end_char_pos": 203
},
{
"type": "R",
"before": ", where",
"after": "where interphase",
"start_char_pos": 231,
"end_char_pos": 238
},
{
"type": "R",
"before": "folding of a chromatin sequence with",
"after": "the folding of chromatin sequences composed of",
"start_char_pos": 277,
"end_char_pos": 313
},
{
"type": "R",
"before": "thickness. We",
"after": "thicknesses and flexibilities, and we use it to study the influence of sequence disorder on chromosome behaviors in space and time. By employing extensive computer simulations,we thus",
"start_char_pos": 358,
"end_char_pos": 371
},
{
"type": "R",
"before": "prominent conformational changes when the two fibers form separate domains. Conversely, when small stretches of the thinner fiber are randomly distributed, they act as impurities and conformational changes can be observed only at small length and time scales. Our results bring a limit to the possibility of detecting variations in the behavior of chromosomes due to chromatin modifications, and suggest that the debate whether chromosomes expand upon transcription, which is fueled by conflicting experimental observations, can be reconciled by examining how transcribed loci are distributed. Finally, to validate our conclusions, we compare our results to experimental FISH data",
"after": "noticeable conformational changes only on length-scales smaller than 10^5 basepairs and time-scales shorter than a few seconds, and we suggest there might exist effective upper bounds to the detection of chromosome URLanization in eukaryotes. We prove the relevance of our framework by modeling recent experimental FISH data on murine chromosomes",
"start_char_pos": 409,
"end_char_pos": 1089
}
]
| [
0,
154,
192,
368,
484,
668,
1002
]
|
1511.06032 | 1 | We introduce an optimal measure transformation problem for zero coupon bond prices based on dynamic relative entropy of probability measures. In the default-free case we prove the equivalence of the optimal measure transformation problem and an optimal stochastic control problem of Gombani and Runggaldier (Math. Financ. 23(4):659-686, 2013) for bond prices. We also consider the optimal measure transformation problem for defaultable bonds, futures contracts, and forward contracts. We provide financial interpretations of the optimal measure transformation problems in terms of the maximization of returns subject to a relative entropy penalty term. In general the solution of the optimal measure transformation problem is characterized by the solution of certain decoupled nonlinear forward-backward stochastic differential equations (FBSDEs) . In specific classes of models we show how these FBSDEs can be solved explicitly or at least numerically . | We introduce the entropic measure transform (EMT) problem for a general process and prove the existence of a unique optimal measure characterizing the solution. The density process of the optimal measure is characterized using a semimartingale BSDE under general conditions. The EMT is used to reinterpret the conditional entropic risk-measure and to obtain a convenient formula for the conditional expectation of a process which admits an affine representation under a related measure. The entropic measure transform is then used provide a new characterization of defaultable bond prices, forward prices, and futures prices when the asset is driven by a jump diffusion. The characterization of these pricing problems in terms of the EMT provides economic interpretations as a maximization of returns subject to a penalty for removing financial risk as expressed through the aggregate relative entropy . The EMT is shown to extend the optimal stochastic control characterization of default-free bond prices of Gombani and Runggaldier (Math. Financ. 23(4):659-686, 2013). These methods are illustrated numerically with an example in the defaultable bond setting . | [
{
"type": "R",
"before": "an optimal measure transformation problem for zero coupon bond prices based on dynamic relative entropy of probability measures. In the default-free case we prove the equivalence of the optimal measure transformation problem and an optimal stochastic control problem of Gombani and Runggaldier (Math. Financ. 23(4):659-686, 2013) for bond prices. We also consider",
"after": "the entropic measure transform (EMT) problem for a general process and prove the existence of a unique optimal measure characterizing the solution. The density process of",
"start_char_pos": 13,
"end_char_pos": 376
},
{
"type": "R",
"before": "transformation problem for defaultable bonds, futures contracts, and forward contracts. We provide financial interpretations of the optimal measure transformation",
"after": "is characterized using a semimartingale BSDE under general conditions. The EMT is used to reinterpret the conditional entropic risk-measure and to obtain a convenient formula for the conditional expectation of a process which admits an affine representation under a related measure. The entropic measure transform is then used provide a new characterization of defaultable bond prices, forward prices, and futures prices when the asset is driven by a jump diffusion. The characterization of these pricing",
"start_char_pos": 397,
"end_char_pos": 559
},
{
"type": "A",
"before": null,
"after": "EMT provides economic interpretations as a",
"start_char_pos": 585,
"end_char_pos": 585
},
{
"type": "R",
"before": "relative entropy penalty term. In general the solution of the optimal measure transformation problem is characterized by the solution of certain decoupled nonlinear forward-backward stochastic differential equations (FBSDEs)",
"after": "penalty for removing financial risk as expressed through the aggregate relative entropy",
"start_char_pos": 623,
"end_char_pos": 847
},
{
"type": "R",
"before": "In specific classes of models we show how these FBSDEs can be solved explicitly or at least numerically",
"after": "The EMT is shown to extend the optimal stochastic control characterization of default-free bond prices of Gombani and Runggaldier (Math. Financ. 23(4):659-686, 2013). These methods are illustrated numerically with an example in the defaultable bond setting",
"start_char_pos": 850,
"end_char_pos": 953
}
]
| [
0,
141,
313,
359,
484,
653,
849
]
|
1511.06482 | 1 | Bond graphs can be used to build thermodynamically-compliant hierarchical models of biomolec- ular systems. As bond graphs have been widely used to model, analyse and synthesise engineering systems, this paper suggests that they can play the same role in the modelling, analysis and syn- thesis of biomolecular systems. The particular structure of bond graphs arising from biomolecular systems is established and used to elucidate the relation between thermodynamically closed and open systems. Block diagram representations of the dynamics implied by these bond graphs are used to reveal implicit feedback structures and are linearised to allow the application of control- theoretical methods. Two concepts of modularity are examined: computational modularity where physical correct- ness is retained and behavioural modularity where module behaviour (such as ultrasensitivity) is retained. As well as providing computational modularity, bond graphs provide a natural formula- tion of behavioural modularity and reveal the sources of retroactivity. A bond graph approach to reducing retroactivity, and thus inter-module interaction, is shown to require a power supply such as that provided by the AT P = ADP + P i reaction. The MAPK cascade (Raf-MEK-ERK pathway) is used as an illustrative example which demon- strates how the computational modularity provided by the bond graph approach avoids the errors associated with assuming irreversible Michaelis-Menten kinetics and emphasises the necessity for a power supply to support behavioural modularity in signalling networks . | Bond graphs can be used to build thermodynamically-compliant hierarchical models of biomolecular systems. As bond graphs have been widely used to model, analyse and synthesise engineering systems, this paper suggests that they can play the same role in the modelling, analysis and synthesis of biomolecular systems. The particular structure of bond graphs arising from biomolecular systems is established and used to elucidate the relation between thermodynamically closed and open systems. Block diagram representations of the dynamics implied by these bond graphs are used to reveal implicit feedback structures and are linearised to allow the application of control-theoretical methods. Two concepts of modularity are examined: computational modularity where physical correctness is retained and behavioural modularity where module behaviour (such as ultrasensitivity) is retained. As well as providing computational modularity, bond graphs provide a natural formulation of behavioural modularity and reveal the sources of retroactivity. A bond graph approach to reducing retroactivity, and thus inter-module interaction, is shown to require a power supply such as that provided by the ATP = ADP + Pi reaction. The MAPK cascade (Raf-MEK-ERK pathway) is used as an illustrative example . | [
{
"type": "R",
"before": "biomolec- ular",
"after": "biomolecular",
"start_char_pos": 84,
"end_char_pos": 98
},
{
"type": "R",
"before": "syn- thesis",
"after": "synthesis",
"start_char_pos": 283,
"end_char_pos": 294
},
{
"type": "R",
"before": "control- theoretical",
"after": "control-theoretical",
"start_char_pos": 665,
"end_char_pos": 685
},
{
"type": "R",
"before": "correct- ness",
"after": "correctness",
"start_char_pos": 776,
"end_char_pos": 789
},
{
"type": "R",
"before": "formula- tion",
"after": "formulation",
"start_char_pos": 969,
"end_char_pos": 982
},
{
"type": "R",
"before": "AT P",
"after": "ATP",
"start_char_pos": 1198,
"end_char_pos": 1202
},
{
"type": "R",
"before": "P i",
"after": "Pi",
"start_char_pos": 1211,
"end_char_pos": 1214
},
{
"type": "D",
"before": "which demon- strates how the computational modularity provided by the bond graph approach avoids the errors associated with assuming irreversible Michaelis-Menten kinetics and emphasises the necessity for a power supply to support behavioural modularity in signalling networks",
"after": null,
"start_char_pos": 1299,
"end_char_pos": 1575
}
]
| [
0,
107,
319,
494,
694,
891,
1049,
1224
]
|
1511.06943 | 1 | In this paper we present a class of risk measures composed of coherent risk measures with generalized deviation measures . Based on the Limitedness axiom, we prove that this set is a sub-class of coherent risk measures. We present extensions of this result for the case of convex or co-monotone coherent risk measures. Under this perspective, we propose a specific formulation that generates, from any coherent measure, a generalized deviation based on the dispersion of results worse than it, which leads to a very interesting risk measure. Moreover, we present some examples of risk measures that lie in our proposed class . | The definition of risk is based on two main concepts: the possibility of loss, and variability. In this paper we present a composition of risk and deviation measures, which capt these two concepts . Based on the proposed Limitedness axiom, we prove that this set is a sub-class of coherent , convex or co-monotone risk measures, conform the properties of the two components . | [
{
"type": "A",
"before": null,
"after": "The definition of risk is based on two main concepts: the possibility of loss, and variability.",
"start_char_pos": 0,
"end_char_pos": 0
},
{
"type": "R",
"before": "class of risk measures composed of coherent risk measures with generalized deviation measures",
"after": "composition of risk and deviation measures, which capt these two concepts",
"start_char_pos": 28,
"end_char_pos": 121
},
{
"type": "A",
"before": null,
"after": "proposed",
"start_char_pos": 137,
"end_char_pos": 137
},
{
"type": "R",
"before": "risk measures. We present extensions of this result for the case of",
"after": ",",
"start_char_pos": 207,
"end_char_pos": 274
},
{
"type": "R",
"before": "coherent risk measures. Under this perspective, we propose a specific formulation that generates, from any coherent measure, a generalized deviation based on the dispersion of results worse than it, which leads to a very interesting risk measure. Moreover, we present some examples of risk measures that lie in our proposed class",
"after": "risk measures, conform the properties of the two components",
"start_char_pos": 297,
"end_char_pos": 626
}
]
| [
0,
221,
320,
543
]
|
1511.06943 | 2 | The definition of risk is based on two main concepts: the possibility of loss , and variability. In this paper we present a composition of risk and deviation measures, which capt these two concepts. Based on the proposed Limitedness axiom, we prove that this set is a sub-class of coherent, convex or co-monotone risk measures, conform the properties of the two components . | The intuition on risk is based on two main concepts: loss and variability. In this paper we present a composition of risk and deviation measures, which capt these two concepts. Based on the proposed Limitedness axiom, we prove that this composition is a coherent, convex or co-monotone risk measure, conform properties of the two components . We also provide examples of known and new risk measures constructed under this framework, in order to highlight the importance of our approach, specially the role of the Limitedness axiom . | [
{
"type": "R",
"before": "definition of",
"after": "intuition on",
"start_char_pos": 4,
"end_char_pos": 17
},
{
"type": "R",
"before": "the possibility of loss ,",
"after": "loss",
"start_char_pos": 54,
"end_char_pos": 79
},
{
"type": "R",
"before": "set is a sub-class of",
"after": "composition is a",
"start_char_pos": 259,
"end_char_pos": 280
},
{
"type": "R",
"before": "measures, conform the",
"after": "measure, conform",
"start_char_pos": 318,
"end_char_pos": 339
},
{
"type": "A",
"before": null,
"after": ". We also provide examples of known and new risk measures constructed under this framework, in order to highlight the importance of our approach, specially the role of the Limitedness axiom",
"start_char_pos": 373,
"end_char_pos": 373
}
]
| [
0,
96,
198
]
|
1511.06943 | 3 | The intuition on risk is based on two main concepts: loss and variability. In this paper we present a composition of risk and deviation measures, which capt these two concepts. Based on the proposed Limitedness axiom, we prove that this composition is a coherent, convex or co-monotone risk measure, conform properties of the two components . We also provide examples of known and new risk measures constructed under this framework , in order to highlight the importance of our approach, specially the role of the Limitedness axiom. | The intuition of risk is based on two main concepts: loss and variability. In this paper , we present a composition of risk and deviation measures, which contemplate these two concepts. Based on the proposed Limitedness axiom, we prove that this resulting composition, based on properties of the two components , is a coherent risk measure. Similar results for the cases of convex and co-monotone risk measures are exposed. We also provide examples of known and new risk measures constructed under this framework in order to highlight the importance of our approach, especially the role of the Limitedness axiom. | [
{
"type": "R",
"before": "on",
"after": "of",
"start_char_pos": 14,
"end_char_pos": 16
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 89,
"end_char_pos": 89
},
{
"type": "R",
"before": "capt",
"after": "contemplate",
"start_char_pos": 153,
"end_char_pos": 157
},
{
"type": "R",
"before": "composition is a coherent, convex or co-monotone risk measure, conform",
"after": "resulting composition, based on",
"start_char_pos": 238,
"end_char_pos": 308
},
{
"type": "R",
"before": ".",
"after": ", is a coherent risk measure. Similar results for the cases of convex and co-monotone risk measures are exposed.",
"start_char_pos": 342,
"end_char_pos": 343
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 433,
"end_char_pos": 434
},
{
"type": "R",
"before": "specially",
"after": "especially",
"start_char_pos": 489,
"end_char_pos": 498
}
]
| [
0,
74,
177,
343
]
|
1511.07230 | 1 | In this paper, we focus on model-free pricing and robust hedging of options depending on the local time , consistent with Vanilla options. This problem is classically approached by means of the Skorokhod embedding problem (SEP), which consists in representing a given probability on the real line as the distribution of a Brownian motion stopped at a chosen stopping time . By using the stochastic control approach initiated by Galichon, Henry-Labordere and Touzi, we recover the optimal hedging strategies and the corresponding prices given by Vallois ' embeddings to the SEP . Furthermore, we extend the analysis to the two-marginal case . We provide a construction of two-marginal embedding and some examples for which the robust superhedging problem is solved . Finally, a special multi-marginal case is studied, where we construct a Markov martingale and compute its explicit generator . In particular, we provide a new example of fake Brownian motion. | In this paper, we focus on model-free pricing and robust hedging of options depending on the local time when one or more marginals of the underlying price process are known . By using the stochastic control approach initiated in Galichon, Henry-Labord\`ere and Touzi, we identify the optimal hedging strategies and the corresponding prices in the one-marginal case. As a by-product, we recover the property that the Vallois solutions to the Skorokhod embedding problem (SEP) maximize and minimize the expectation of any convex function of the local time . Furthermore, we extend the analysis to the two-marginal case , where we provide candidates for the optimal hedging strategies, and, we construct a new solution to the two-marginal SEP as a generalization of the Vallois embedding . Finally, a special multi-marginal case is studied, where the stopping times given by Vallois are well-ordered. In the n-marginal case, we solve the robust hedging problem as it essentially reduces to the one-marginal case. In the full marginal setting, we construct a remarkable Markov martingale and compute its generator explicitly . In particular, this provides a new example of fake Brownian motion. | [
{
"type": "R",
"before": ", consistent with Vanilla options. This problem is classically approached by means of the Skorokhod embedding problem (SEP), which consists in representing a given probability on the real line as the distribution of a Brownian motion stopped at a chosen stopping time",
"after": "when one or more marginals of the underlying price process are known",
"start_char_pos": 104,
"end_char_pos": 371
},
{
"type": "R",
"before": "by Galichon, Henry-Labordere",
"after": "in Galichon, Henry-Labord\\`ere",
"start_char_pos": 425,
"end_char_pos": 453
},
{
"type": "R",
"before": "recover",
"after": "identify",
"start_char_pos": 468,
"end_char_pos": 475
},
{
"type": "R",
"before": "given by Vallois ' embeddings to the SEP",
"after": "in the one-marginal case. As a by-product, we recover the property that the Vallois solutions to the Skorokhod embedding problem (SEP) maximize and minimize the expectation of any convex function of the local time",
"start_char_pos": 536,
"end_char_pos": 576
},
{
"type": "R",
"before": ". We provide a construction of",
"after": ", where we provide candidates for the optimal hedging strategies, and, we construct a new solution to the",
"start_char_pos": 640,
"end_char_pos": 670
},
{
"type": "R",
"before": "embedding and some examples for which the robust superhedging problem is solved",
"after": "SEP as a generalization of the Vallois embedding",
"start_char_pos": 684,
"end_char_pos": 763
},
{
"type": "R",
"before": "we construct a",
"after": "the stopping times given by Vallois are well-ordered. In the n-marginal case, we solve the robust hedging problem as it essentially reduces to the one-marginal case. In the full marginal setting, we construct a remarkable",
"start_char_pos": 823,
"end_char_pos": 837
},
{
"type": "R",
"before": "explicit generator",
"after": "generator explicitly",
"start_char_pos": 872,
"end_char_pos": 890
},
{
"type": "R",
"before": "we provide",
"after": "this provides",
"start_char_pos": 908,
"end_char_pos": 918
}
]
| [
0,
138,
373,
641,
765,
892
]
|
1511.07230 | 2 | In this paper, we focus on model-free pricing and robust hedging of options depending on the local time when one or more marginals of the underlying price process are known. By using the stochastic control approach initiated in Galichon, Henry-Labord\`ere and Touzi, we identify the optimal hedging strategies and the corresponding prices in the one-marginal case . As a by-product, we recover the property that the Valloissolutions to the Skorokhod embedding problem (SEP) maximize and minimize the expectation of any convex function of the local time. Furthermore, we extend the analysis to the two-marginal case, where we provide candidates for the optimal hedging strategies, and, we construct a new solution to the two-marginal SEP as a generalization of the Vallois embedding . Finally, a special multi-marginal case is studied, where the stopping times given by Vallois are well-ordered . In the n-marginal case, we solve the robust hedging problem as it essentially reduces to the one-marginal case. In the full marginal setting, we construct a remarkable Markov martingale and compute its generator explicitly. In particular, this provides a new example of fake Brownian motion. | In this paper, we provide some results on Skorokhod embedding with local time and its applications to the robust hedging problem in finance. First we investigate the robust hedging of options depending on the local time by using the recently introduced stochastic control approach , in order to identify the optimal hedging strategies , as well as the market models that realize the extremal no-arbitrage prices . As a by-product, the optimality of Vallois' Skorokhod embeddings is recovered. In addition, under appropriate conditions, we derive a new solution to the two-marginal Skorokhod embedding as a generalization of the Vallois solution. It turns out from our analysis that one needs to relax the monotonicity assumption on the embedding functions in order to embed a larger class of marginal distributions . Finally, in a full-marginal setting where the stopping times given by Vallois are well-ordered , we construct a remarkable Markov martingale which provides a new example of fake Brownian motion. | [
{
"type": "R",
"before": "focus on model-free pricing and robust hedging",
"after": "provide some results on Skorokhod embedding with local time and its applications to the robust hedging problem in finance. First we investigate the robust hedging",
"start_char_pos": 18,
"end_char_pos": 64
},
{
"type": "R",
"before": "when one or more marginals of the underlying price process are known. By using the",
"after": "by using the recently introduced",
"start_char_pos": 104,
"end_char_pos": 186
},
{
"type": "R",
"before": "initiated in Galichon, Henry-Labord\\`ere and Touzi, we",
"after": ", in order to",
"start_char_pos": 215,
"end_char_pos": 269
},
{
"type": "R",
"before": "and the corresponding prices in the one-marginal case",
"after": ", as well as the market models that realize the extremal no-arbitrage prices",
"start_char_pos": 310,
"end_char_pos": 363
},
{
"type": "R",
"before": "we recover the property that the Valloissolutions to the Skorokhod embedding problem (SEP) maximize and minimize the expectation of any convex function of the local time. Furthermore, we extend the analysis to the two-marginal case, where we provide candidates for the optimal hedging strategies, and, we construct",
"after": "the optimality of Vallois' Skorokhod embeddings is recovered. In addition, under appropriate conditions, we derive",
"start_char_pos": 383,
"end_char_pos": 697
},
{
"type": "R",
"before": "SEP",
"after": "Skorokhod embedding",
"start_char_pos": 733,
"end_char_pos": 736
},
{
"type": "R",
"before": "embedding",
"after": "solution. It turns out from our analysis that one needs to relax the monotonicity assumption on the embedding functions in order to embed a larger class of marginal distributions",
"start_char_pos": 772,
"end_char_pos": 781
},
{
"type": "R",
"before": "a special multi-marginal case is studied,",
"after": "in a full-marginal setting",
"start_char_pos": 793,
"end_char_pos": 834
},
{
"type": "R",
"before": ". In the n-marginal case, we solve the robust hedging problem as it essentially reduces to the one-marginal case. In the full marginal setting, we",
"after": ", we",
"start_char_pos": 894,
"end_char_pos": 1040
},
{
"type": "R",
"before": "and compute its generator explicitly. In particular, this",
"after": "which",
"start_char_pos": 1082,
"end_char_pos": 1139
}
]
| [
0,
173,
365,
553,
783,
895,
1007,
1119
]
|
1511.07540 | 1 | This study demonstrates that incorrect values are entered into pairwise comparisons matrix for processing . In the current situation, it leads to pairwise comparisons rating scale paradox. A solution , based on normalization, is proposed . | This study demonstrates that incorrect data are entered into a pairwise comparisons matrix for processing into weights for the data collected by a rating scale. Unprocessed rating scale data lead to a paradox. A solution to it , based on normalization, is proposed . This is an essential correction for virtually all pairwise comparisons methods using rating scales. The illustration of the relative error currently, taking place, is discussed . | [
{
"type": "R",
"before": "values",
"after": "data",
"start_char_pos": 39,
"end_char_pos": 45
},
{
"type": "A",
"before": null,
"after": "a",
"start_char_pos": 63,
"end_char_pos": 63
},
{
"type": "R",
"before": ". In the current situation, it leads to pairwise comparisons rating scale",
"after": "into weights for the data collected by a rating scale. Unprocessed rating scale data lead to a",
"start_char_pos": 107,
"end_char_pos": 180
},
{
"type": "A",
"before": null,
"after": "to it",
"start_char_pos": 201,
"end_char_pos": 201
},
{
"type": "A",
"before": null,
"after": ". This is an essential correction for virtually all pairwise comparisons methods using rating scales. The illustration of the relative error currently, taking place, is discussed",
"start_char_pos": 240,
"end_char_pos": 240
}
]
| [
0,
108,
189
]
|
1511.07773 | 1 | For the last two decades, most financial markets have undergone an evolution toward electronification. The market for corporate bonds is one of the last major financial markets to follow this unavoidable path. Traditionally quote-driven ( that is , dealer-driven) rather than order-driven, the market for corporate bonds is still mainly dominated by voice trading, but a lot of electronic platforms have emerged that make it possible for buy-side agents to simultaneously request several dealers for quotes, or even directly trade with other buy-siders. The research presented in this article is based on a large proprietary database of requests for quotes (RFQ) sent, through the multi-dealer-to-client (MD2C) platforms operated by Bloomberg Fixed Income Trading and Tradeweb , to one of the major liquidity providers in European corporate bonds. Our goal is (i) to model the RFQ process on these platforms and the resulting competition between dealers, (ii) to use the RFQ database in order to implicit from our model the behavior of both dealers and clients on MD2C platforms , and (iii) to study the influence of several bond characteristics on the behavior of market participants . | For the last two decades, most financial markets have undergone an evolution toward electronification. The market for corporate bonds is one of the last major financial markets to follow this unavoidable path. Traditionally quote-driven ( i.e. , dealer-driven) rather than order-driven, the market for corporate bonds is still mainly dominated by voice trading, but a lot of electronic platforms have emerged . These electronic platforms make it possible for buy-side agents to simultaneously request several dealers for quotes, or even directly trade with other buy-siders. The research presented in this article is based on a large proprietary database of requests for quotes (RFQ) sent, through the multi-dealer-to-client (MD2C) platform operated by Bloomberg Fixed Income Trading , to one of the major liquidity providers in European corporate bonds. Our goal is (i) to model the RFQ process on these platforms and the resulting competition between dealers, and (ii) to use our model in order to implicit from the RFQ database the behavior of both dealers and clients on MD2C platforms . | [
{
"type": "R",
"before": "that is",
"after": "i.e.",
"start_char_pos": 239,
"end_char_pos": 246
},
{
"type": "R",
"before": "that",
"after": ". These electronic platforms",
"start_char_pos": 412,
"end_char_pos": 416
},
{
"type": "R",
"before": "platforms",
"after": "platform",
"start_char_pos": 711,
"end_char_pos": 720
},
{
"type": "D",
"before": "and Tradeweb",
"after": null,
"start_char_pos": 764,
"end_char_pos": 776
},
{
"type": "A",
"before": null,
"after": "and",
"start_char_pos": 955,
"end_char_pos": 955
},
{
"type": "R",
"before": "the RFQ database",
"after": "our model",
"start_char_pos": 968,
"end_char_pos": 984
},
{
"type": "R",
"before": "our model the",
"after": "the RFQ database the",
"start_char_pos": 1011,
"end_char_pos": 1024
},
{
"type": "D",
"before": ", and (iii) to study the influence of several bond characteristics on the behavior of market participants",
"after": null,
"start_char_pos": 1080,
"end_char_pos": 1185
}
]
| [
0,
102,
209,
553,
847
]
|
1511.08068 | 1 | The properties of the interbank market have been discussed widely in the literature . However a proper model selection between URLanizations of the network in a small number of blocks, for example bipartite, core-periphery, and modular , has not been performed. In this paper, by inferring a Stochastic Block Model on the e-MID interbank market in the period 2010-2014 , we show that in normal conditions the network URLanized either as a bipartite structure or as a three community structure, where a group of intermediaries mediates between borrowers and lenders . In exceptional conditions, such as after LTRO, one of the most important unconventional measure by ECB at the beginning of 2012, the most likely structure becomes a random one and only in 2014 the e-MID market went back to a normal URLanization. By investigating the strategy of individual banks, we show that the disappearance of many lending banks and the strategy switch of a very small set of banks from borrower to lender is likely at the origin of this structural change. | The topological properties of interbank networks have been discussed widely in the literature mainly because of their relevance for systemic risk. Here we propose to use the Stochastic Block Model to investigate and perform a model selection among several possible two URLanizations of the network : these include bipartite, core-periphery, and modular structures. We apply our method to the e-MID interbank market in the period 2010-2014 and we show that in normal conditions the most likely URLanization is a bipartite structure . In exceptional conditions, such as after LTRO, one of the most important unconventional measures by ECB at the beginning of 2012, the most likely structure becomes a random one and only in 2014 the e-MID market went back to a normal URLanization. By investigating the strategy of individual banks, we explore possible explanations and we show that the disappearance of many lending banks and the strategy switch of a very small set of banks from borrower to lender is likely at the origin of this structural change. | [
{
"type": "R",
"before": "properties of the interbank market",
"after": "topological properties of interbank networks",
"start_char_pos": 4,
"end_char_pos": 38
},
{
"type": "R",
"before": ". However a proper model selection between",
"after": "mainly because of their relevance for systemic risk. Here we propose to use the Stochastic Block Model to investigate and perform a model selection among several possible two",
"start_char_pos": 84,
"end_char_pos": 126
},
{
"type": "R",
"before": "in a small number of blocks, for example",
"after": ": these include",
"start_char_pos": 156,
"end_char_pos": 196
},
{
"type": "R",
"before": ", has not been performed. In this paper, by inferring a Stochastic Block Model on",
"after": "structures. We apply our method to",
"start_char_pos": 236,
"end_char_pos": 317
},
{
"type": "R",
"before": ",",
"after": "and",
"start_char_pos": 369,
"end_char_pos": 370
},
{
"type": "R",
"before": "network URLanized either as",
"after": "most likely URLanization is",
"start_char_pos": 409,
"end_char_pos": 436
},
{
"type": "D",
"before": "or as a three community structure, where a group of intermediaries mediates between borrowers and lenders",
"after": null,
"start_char_pos": 459,
"end_char_pos": 564
},
{
"type": "R",
"before": "measure",
"after": "measures",
"start_char_pos": 655,
"end_char_pos": 662
},
{
"type": "A",
"before": null,
"after": "we explore possible explanations and",
"start_char_pos": 864,
"end_char_pos": 864
}
]
| [
0,
85,
261,
566,
812
]
|
1511.08194 | 1 | For every adapted, c\`{agl\`{a}d } process (strategy) G and typical c\'rdl\'rg price paths whose jumps are no greater than some c>0 we define integral G\cdot S as a limit of simple integrals. | For every adapted, gl\`{a}d } c\`agl\`ad process (strategy) G and typical c\`adl\`ag price paths whose jumps satisfy some mild growth condition we define integral G\cdot S as a limit of simple integrals. | [
{
"type": "D",
"before": "c\\`{a",
"after": null,
"start_char_pos": 19,
"end_char_pos": 24
},
{
"type": "A",
"before": null,
"after": "c\\`agl\\`ad",
"start_char_pos": 35,
"end_char_pos": 35
},
{
"type": "R",
"before": "c\\'rdl\\'rg",
"after": "c\\`adl\\`ag",
"start_char_pos": 69,
"end_char_pos": 79
},
{
"type": "R",
"before": "are no greater than some c>0",
"after": "satisfy some mild growth condition",
"start_char_pos": 104,
"end_char_pos": 132
}
]
| [
0
]
|
1511.08621 | 1 | It was proposed earlier that Pfcrmp (Plasmodium falciparum chloroquine resistance marker protein) may be the chloroquine target protein in nucleus. In this communication, further evidence is presented to support the view that Pfcrmp may play akey role in chloroquine antimalarial actions as well as resistance development. | It was proposed earlier that Pfcrmp (Plasmodium falciparum chloroquine resistance marker protein) may be the chloroquine 's target protein in nucleus. In this communication, further evidence is presented to support the view that Pfcrmp may play a key role in chloroquine antimalarial actions as well as resistance development. | [
{
"type": "A",
"before": null,
"after": "'s",
"start_char_pos": 121,
"end_char_pos": 121
},
{
"type": "R",
"before": "akey",
"after": "a key",
"start_char_pos": 243,
"end_char_pos": 247
}
]
| [
0,
148
]
|
1512.00268 | 1 | BACKGROUND : The field of 3D chromatin interaction mapping is changing our point of view on the genome . Despite the increase in the number of studies, there are surprisingly few network analyses performed on these datasets and the network topology is rarely considered . Assortativity is a network property that has been widely used in the social sciences to measure the probability of nodes with similar values of a specific feature to interact preferentially. We propose a new approach, the Chromatin feature Assortativity Score (ChAS ), to integrate the epigenomic landscape of a specific cell type with its chromatin interaction network. RESULTS: We analyse two chromatin interaction datasets for embryonic stem cells , which were generated with two very recent promoter capture HiC methods. These datasets define networksof interactions amongst promoters and between promoters and other genomic loci . We calculate the presence of a collection of 78 chromatin features in the chromatin fragments that constitute the nodes of the network. Looking at the ChAS of these epigenomic features in the interaction networks, we find Polycomb Group proteins and associated histone marks to be the most assortative factors . Remarkably, we observe higher ChAS of the actively elongating form of RNA Polymerase 2 compared to the inactive forms in interactions between promoters and other distal elements, suggesting an important role for active elongation in promoter-enhancer contacts. CONCLUSIONS: Our method can be used to compare the association of epigenomic features to different type of genomic contacts. Furthermore, it facilitates the comparison of any number of chromatin interaction datasets in the context of the corresponding epigenomic landscape . | Background : The field of 3D chromatin interaction mapping is changing our point of view on the genome , paving the way for new insights into URLanization. Network analysis is a natural and powerful way of modelling chromatin interactions . Assortativity is a network property that has been widely used in the social sciences to measure the probability of nodes with similar values of a specific feature to interact preferentially. We propose a new approach, using Chromatin feature Assortativity (ChAs ), to integrate the epigenomic landscape of a specific cell type with its chromatin interaction network. Results: We use high-resolution Promoter Capture Hi-C and Hi-Cap data as well as ChIA-PET data from embryonic stem cells to generate promoter-centered interaction networks . We calculate the presence of a collection of 78 chromatin features in the chromatin fragments constituting the nodes of the network. Based on the ChAs of these epigenomic features calculated in 4 different interaction networks, we find Polycomb Group proteins and associated histone marks to play a prominent role . Remarkably, in promoter-centered networks, we observe higher ChAs of the actively elongating form of RNA Polymerase 2 compared to inactive forms in interactions between promoters and other elements. Conclusions: Contacts amongst promoters and between promoters and other elements have different characteristic epigenomic features. Using ChAs we identify a possible role of the elongating form of RNAPII in enhancer activity. Our approach facilitates the study of multiple genome-wide epigenomic profiles, considering network topology and allowing for the comparison of any number of chromatin interaction networks . | [
{
"type": "R",
"before": "BACKGROUND",
"after": "Background",
"start_char_pos": 0,
"end_char_pos": 10
},
{
"type": "R",
"before": ". Despite the increase in the number of studies, there are surprisingly few network analyses performed on these datasets and the network topology is rarely considered",
"after": ", paving the way for new insights into URLanization. Network analysis is a natural and powerful way of modelling chromatin interactions",
"start_char_pos": 103,
"end_char_pos": 269
},
{
"type": "R",
"before": "the",
"after": "using",
"start_char_pos": 490,
"end_char_pos": 493
},
{
"type": "R",
"before": "Score (ChAS",
"after": "(ChAs",
"start_char_pos": 526,
"end_char_pos": 537
},
{
"type": "R",
"before": "RESULTS: We analyse two chromatin interaction datasets for",
"after": "Results: We use high-resolution Promoter Capture Hi-C and Hi-Cap data as well as ChIA-PET data from",
"start_char_pos": 643,
"end_char_pos": 701
},
{
"type": "R",
"before": ", which were generated with two very recent promoter capture HiC methods. These datasets define networksof interactions amongst promoters and between promoters and other genomic loci",
"after": "to generate promoter-centered interaction networks",
"start_char_pos": 723,
"end_char_pos": 905
},
{
"type": "R",
"before": "that constitute",
"after": "constituting",
"start_char_pos": 1002,
"end_char_pos": 1017
},
{
"type": "R",
"before": "Looking at the ChAS",
"after": "Based on the ChAs",
"start_char_pos": 1044,
"end_char_pos": 1063
},
{
"type": "R",
"before": "in the",
"after": "calculated in 4 different",
"start_char_pos": 1093,
"end_char_pos": 1099
},
{
"type": "R",
"before": "be the most assortative factors",
"after": "play a prominent role",
"start_char_pos": 1186,
"end_char_pos": 1217
},
{
"type": "A",
"before": null,
"after": "in promoter-centered networks,",
"start_char_pos": 1232,
"end_char_pos": 1232
},
{
"type": "R",
"before": "ChAS",
"after": "ChAs",
"start_char_pos": 1251,
"end_char_pos": 1255
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 1320,
"end_char_pos": 1323
},
{
"type": "R",
"before": "distal elements, suggesting an important role for active elongation in promoter-enhancer contacts. CONCLUSIONS: Our method can be used to compare the association of epigenomic features to different type of genomic contacts. Furthermore, it facilitates",
"after": "elements. Conclusions: Contacts amongst promoters and between promoters and other elements have different characteristic epigenomic features. Using ChAs we identify a possible role of the elongating form of RNAPII in enhancer activity. Our approach facilitates the study of multiple genome-wide epigenomic profiles, considering network topology and allowing for",
"start_char_pos": 1383,
"end_char_pos": 1634
},
{
"type": "R",
"before": "datasets in the context of the corresponding epigenomic landscape",
"after": "networks",
"start_char_pos": 1689,
"end_char_pos": 1754
}
]
| [
0,
104,
271,
462,
642,
796,
907,
1043,
1481,
1606
]
|
1512.00327 | 1 | The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, redundant new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. For this we explain and discuss a selection of over eighty privacy metrics and introduce a categorization based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on eight questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement. | The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature makes an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method on how to choose privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement. | [
{
"type": "R",
"before": "redundant",
"after": "instead of using existing metrics,",
"start_char_pos": 381,
"end_char_pos": 390
},
{
"type": "R",
"before": "For this",
"after": "To this end,",
"start_char_pos": 564,
"end_char_pos": 572
},
{
"type": "R",
"before": "a categorization",
"after": "categorizations",
"start_char_pos": 653,
"end_char_pos": 669
},
{
"type": "R",
"before": "eight",
"after": "nine",
"start_char_pos": 857,
"end_char_pos": 862
}
]
| [
0,
164,
252,
367,
471,
563,
781,
1012
]
|
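Reader's note on the record layout above: each row carries doc_id, revision_depth, the before_revision and after_revision abstracts, an edit_actions list and a sents_char_pos list. Every edit action seems to describe one span edit on before_revision: type "R" replaces, "A" adds (before is null and start_char_pos equals end_char_pos) and "D" deletes (after is null), with before_revision[start_char_pos:end_char_pos] matching the quoted "before" text; the span lengths in the 1512.00327 record above (for example "redundant", 9 characters, 390 minus 381) are consistent with this reading. A minimal Python sketch under those assumptions follows; the helper name apply_edit_actions and the right-to-left application order are the sketch's own choices, not anything defined by the dataset.

    def apply_edit_actions(before_revision, edit_actions):
        """Rebuild an approximation of after_revision from one record.

        Assumes every start_char_pos/end_char_pos indexes the original
        before_revision string and that spans do not overlap; applying the
        edits right to left keeps the remaining offsets valid without
        tracking a running shift.
        """
        text = before_revision
        for action in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
            start, end = action["start_char_pos"], action["end_char_pos"]
            replacement = action["after"] or ""  # "D" actions carry after == null
            text = text[:start] + replacement + text[end:]
        return text

The output can differ from the stored after_revision by whitespace around pure deletions (dropping a word this way can leave a doubled space that the stored text does not have). Records sharing a doc_id also appear to chain: the after_revision at one revision_depth matches, up to such whitespace, the before_revision at the next depth, as the 1512.01698 series further down illustrates.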
1512.00496 | 1 | Metastasis is a process of cell migration that can be collective and guided by chemical cues. Viewing metastasis in this way, as a physical phenomenon, allows one to draw upon insights from other studies of collective sensing and migration in cell biology. Here we review recent progress in the study of cell sensing and migration as collective phenomena, including in the context of metastatic cells. We describe simple physical models of sensing and migration , and we survey the experimental evidence that cells operate near the purely physical limitsto their behavior . We conclude by contrasting cells' sensory abilities with their sensitivity to drugs, and suggesting potential alternatives to cell-death-based cancer therapies. | Metastasis is a process of cell migration that can be collective and guided by chemical cues. Viewing metastasis in this way, as a physical phenomenon, allows one to draw upon insights from other studies of collective sensing and migration in cell biology. Here we review recent progress in the study of cell sensing and migration as collective phenomena, including in the context of metastatic cells. We describe simple physical models that yield the limits to the precision of cell sensing , and we review experimental evidence that cells operate near these limits. Models of collective migration are surveyed in order understand how collective metastatic invasion can occur . We conclude by contrasting cells' sensory abilities with their sensitivity to drugs, and suggesting potential alternatives to cell-death-based cancer therapies. | [
{
"type": "R",
"before": "of sensing and migration",
"after": "that yield the limits to the precision of cell sensing",
"start_char_pos": 437,
"end_char_pos": 461
},
{
"type": "R",
"before": "survey the",
"after": "review",
"start_char_pos": 471,
"end_char_pos": 481
},
{
"type": "R",
"before": "the purely physical limitsto their behavior",
"after": "these limits. Models of collective migration are surveyed in order understand how collective metastatic invasion can occur",
"start_char_pos": 528,
"end_char_pos": 571
}
]
| [
0,
93,
256,
401,
573
]
|
1512.01088 | 1 | Network inference is advancing rapidly , and new methods are proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against "gold standard" (true-values ) purpose-designed synthetic and real-world (validated) biological networks. In this paper we aim to assess the impact of taking into consideration topological and information-theoretic complexity aspects in the evaluation of the final accuracy of an inference procedure. Specifically, we will compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concept from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to turn in a performance that is better than random guesswork. Therefore special care should be taken to suit the method used to the specific purpose . Finally, we show that evaluations from data generated representing different underlying topologies have different signatures that can be used to better choose a network reconstruction method. | Network inference is a rapidly advancing field, with new methods being proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against gold standard (true values ) purpose-designed synthetic and real-world (validated) biological networks. In this paper we aim to assess the impact of taking into consideration aspects of topological and information content in the evaluation of the final accuracy of an inference procedure. Specifically, we will compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to turn in a performance that is superior to random guesswork. Therefore special care should be taken to suit the method to the purpose at hand . Finally, we show that evaluations from data generated using different underlying topologies have different signatures that can be used to better choose a network reconstruction method. | [
{
"type": "R",
"before": "advancing rapidly , and new methods are",
"after": "a rapidly advancing field, with new methods being",
"start_char_pos": 21,
"end_char_pos": 60
},
{
"type": "R",
"before": "\"gold standard\" (true-values",
"after": "gold standard",
"start_char_pos": 521,
"end_char_pos": 549
},
{
"type": "A",
"before": null,
"after": "(true values",
"start_char_pos": 550,
"end_char_pos": 550
},
{
"type": "R",
"before": "topological and information-theoretic complexity aspects",
"after": "aspects of topological and information content",
"start_char_pos": 699,
"end_char_pos": 755
},
{
"type": "R",
"before": "concept",
"after": "concepts",
"start_char_pos": 1174,
"end_char_pos": 1181
},
{
"type": "R",
"before": "better than",
"after": "superior to",
"start_char_pos": 1458,
"end_char_pos": 1469
},
{
"type": "R",
"before": "used to the specific purpose",
"after": "to the purpose at hand",
"start_char_pos": 1546,
"end_char_pos": 1574
},
{
"type": "R",
"before": "representing",
"after": "using",
"start_char_pos": 1631,
"end_char_pos": 1643
}
]
| [
0,
89,
239,
455,
627,
822,
1050,
1210,
1487,
1576
]
|
1512.01230 | 1 | Culture impacts every aspect of society. This paper studies the impact of one of the most important dimensions of culture -- the tension between individualism and collectivism -- on some of the most important aspects of society -- size and wealth. We present a mathematical model of the consequences of individualism and collectivism and derive implications of the model for the size and wealth of the society and for the wealth distribution within the society. Our model provides a useful lens for examining some empirical data; simple regressions suggest that our model explains a significant portion of the data . | This paper presents a dynamic model to study the impact on the economic outcomes in different societies during the Malthusian Era of individualism (time spent working alone) and collectivism (complementary time spent working with others). The model is driven by opposing forces: a greater degree of collectivism provides a higher safety net for low quality workers but a greater degree of individualism allows high quality workers to leave larger bequests. The model suggests that more individualistic societies display smaller populations, greater per capita income and greater income inequality. Some (limited) historical evidence is consistent with these predictions . | [
{
"type": "R",
"before": "Culture impacts every aspect of society. This paper studies the impact of one of the most important dimensions of culture -- the tension between individualism and collectivism -- on some of the most important aspects of society -- size and wealth. We present a mathematical model of the consequences of individualism and collectivism and derive implications of the model for the size and wealth of the society and for the wealth distribution within the society. Our model provides a useful lens for examining some empirical data; simple regressions suggest that our model explains a significant portion of the data",
"after": "This paper presents a dynamic model to study the impact on the economic outcomes in different societies during the Malthusian Era of individualism (time spent working alone) and collectivism (complementary time spent working with others). The model is driven by opposing forces: a greater degree of collectivism provides a higher safety net for low quality workers but a greater degree of individualism allows high quality workers to leave larger bequests. The model suggests that more individualistic societies display smaller populations, greater per capita income and greater income inequality. Some (limited) historical evidence is consistent with these predictions",
"start_char_pos": 0,
"end_char_pos": 614
}
]
| [
0,
40,
247,
461,
529
]
|
1512.01488 | 1 | We provide a Fundamental Theorem of Asset Pricing and a Superhedging Theorem for a model independent discrete time financial market with proportional transaction costs. We consider a probability-free version of the No Robust Arbitrage condition introduced in Schachermayer ['04] and show that this is equivalent to the existence of Consistent Price Systems. Moreover, we prove that the superhedging price for a claim g coincides with the frictionless superhedging price of g for a suitable process in the bid-ask spread. | We provide a Fundamental Theorem of Asset Pricing and a Superhedging Theorem for a model independent discrete time financial market with proportional transaction costs. We consider a probability-free version of the Robust No Arbitrage condition introduced in Schachermayer ['04] and show that this is equivalent to the existence of Consistent Price Systems. Moreover, we prove that the superhedging price for a claim g coincides with the frictionless superhedging price of g for a suitable process in the bid-ask spread. | [
{
"type": "R",
"before": "No Robust",
"after": "Robust No",
"start_char_pos": 215,
"end_char_pos": 224
}
]
| [
0,
168,
357
]
|
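The sents_char_pos list looks like sentence boundary offsets into before_revision: it starts at 0, each later value sits just past a sentence-ending full stop, and the end of the final sentence has no entry. In the 1512.01488 record above, the values 0, 168 and 357 line up with its three sentences under this reading, and its single edit action, which rewrites "No Robust" starting at offset 215, falls inside the second sentence as expected. A small sketch built on that assumption (the helper name is the sketch's own):

    def split_sentences(before_revision, sents_char_pos):
        # Treat sents_char_pos as boundary offsets into before_revision and
        # append the string length, since the final sentence has no entry.
        bounds = list(sents_char_pos) + [len(before_revision)]
        return [before_revision[start:end].strip() for start, end in zip(bounds, bounds[1:])]

The strip() call absorbs the single space separating sentences, so it does not matter whether a boundary value points at that space or at the first character of the next sentence.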
1512.01609 | 1 | This paper considers a cost minimization problem for data centers with N servers and randomly arriving service requests. A central router decides which server to use for each new request. Each server has three types of states (active, idle, setup) with different costs and time durations. The servers operate asynchronously over their own states and can choose one of multiple sleep modes when idle. We develop an online distributed control algorithm so that each server makes its own decisions and the overall time average cost is near optimal with probability 1. The algorithm does not need probability information for the arrival rate or job sizes. Next, an improved algorithm that uses a single queue is developed via a "virtualization" technique . The improvement is shown to provide the same (near optimal) costs while significantly reducing delay, as shown in simulations . | This paper considers a cost minimization problem for data centers with N servers and randomly arriving service requests. A central router decides which server to use for each new request. Each server has three types of states (active, idle, setup) with different costs and time durations. The servers operate asynchronously over their own states and can choose one of multiple sleep modes when idle. We develop an online distributed control algorithm so that each server makes its own decisions , the request queues are bounded and the overall time average cost is near optimal with probability 1. The algorithm does not need probability information for the arrival rate or job sizes. Next, an improved algorithm that uses a single queue is developed via a "virtualization" technique which is shown to provide the same (near optimal) costs . Simulation experiments on a real data center traffic trace demonstrate the efficiency of our algorithm compared to other existing algorithms . | [
{
"type": "A",
"before": null,
"after": ", the request queues are bounded",
"start_char_pos": 495,
"end_char_pos": 495
},
{
"type": "R",
"before": ". The improvement",
"after": "which",
"start_char_pos": 752,
"end_char_pos": 769
},
{
"type": "R",
"before": "while significantly reducing delay, as shown in simulations",
"after": ". Simulation experiments on a real data center traffic trace demonstrate the efficiency of our algorithm compared to other existing algorithms",
"start_char_pos": 820,
"end_char_pos": 879
}
]
| [
0,
120,
187,
288,
399,
565,
652,
753
]
|
1512.01698 | 1 | This note gives a simple construction of the pathwise stochastic integral \int_0^t\phi d\omega for a continuous integrand \phi and continuous price path \omega . | This note gives a simple construction of the pathwise stochastic integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator. A basic version of It\^o's lemma shows that this definition agrees with F\"ollmer's . | [
{
"type": "R",
"before": "and",
"after": "whose variation index is finite and a",
"start_char_pos": 127,
"end_char_pos": 130
},
{
"type": "A",
"before": null,
"after": "as integrator. A basic version of It\\^o's lemma shows that this definition agrees with F\\\"ollmer's",
"start_char_pos": 160,
"end_char_pos": 160
}
]
| [
0
]
|
1512.01698 | 2 | This note gives a simple construction of the pathwise stochastic integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator. A basic version of It\^o's lemma shows that this definition agrees with F\"ollmer's . | This note gives a simple construction of the pathwise It\^o integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator. A basic version of It\^o's lemma shows that this definition agrees with F\"ollmer's . Finally, we show the existence of \int_0^t\phi d\omega for a c\`adl\`ag \phi with a finite variation index and a c\`adl\`ag integrator \omega with jumps bounded by 1 in absolute value . | [
{
"type": "R",
"before": "stochastic",
"after": "It\\^o",
"start_char_pos": 54,
"end_char_pos": 64
},
{
"type": "A",
"before": null,
"after": ". Finally, we show the existence of \\int_0^t\\phi d\\omega for a c\\`adl\\`ag \\phi with a finite variation index and a c\\`adl\\`ag integrator \\omega with jumps bounded by 1 in absolute value",
"start_char_pos": 293,
"end_char_pos": 293
}
]
| [
0,
208
]
|
1512.01698 | 3 | This note gives a simple construction of the pathwise It\^o integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator. A basic version of It\^o's lemma shows that this definition agrees with F\"ollmer's. Finally, we show the existence of \int_0^t\phi d\omega for a c\`adl\`ag \phi with a finite variation index and a c\`adl\`ag integrator \omega with jumps bounded by 1 in absolute value . | This paper gives a simple construction of the pathwise It\^o integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator. The definition is pathwise in that neither \phi nor \omega are assumed to be paths of stochastic processes, and the It\^o integral exists almost surely in a non-probabilistic financial sense. A basic version of It\^o's lemma shows that our definition of the It\^o integral agrees with F\"ollmer's. Finally, we propose a tentative definition and show the existence of \int_0^t\phi d\omega for a c\`adl\`ag integrand \phi with a finite variation index and a c\`adl\`ag integrator \omega with jumps bounded in a predictable manner . | [
{
"type": "R",
"before": "note",
"after": "paper",
"start_char_pos": 5,
"end_char_pos": 9
},
{
"type": "A",
"before": null,
"after": "The definition is pathwise in that neither \\phi nor \\omega are assumed to be paths of stochastic processes, and the It\\^o integral exists almost surely in a non-probabilistic financial sense.",
"start_char_pos": 204,
"end_char_pos": 204
},
{
"type": "R",
"before": "this definition",
"after": "our definition of the It\\^o integral",
"start_char_pos": 249,
"end_char_pos": 264
},
{
"type": "A",
"before": null,
"after": "propose a tentative definition and",
"start_char_pos": 302,
"end_char_pos": 302
},
{
"type": "A",
"before": null,
"after": "integrand",
"start_char_pos": 363,
"end_char_pos": 363
},
{
"type": "R",
"before": "by 1 in absolute value",
"after": "in a predictable manner",
"start_char_pos": 453,
"end_char_pos": 475
}
]
| [
0,
203,
289
]
|
1512.01698 | 4 | This paper gives a simple construction of the pathwise It\^o integral \int_0^t\phi d\omega for a continuous integrand \phi whose variation index is finite and a continuous price path \omega as integrator . The definition is pathwise in that neither \phi nor \omega are assumed to be paths of stochastic processes, and the It\^o integral exists almost surely in a non-probabilistic financial sense. A basic version of It\^o's lemma shows that our definition of the It\^o integral agrees with F\"ollmer's. Finally, we propose a tentative definition and show the existence of \int_0^t\phi d\omega for a c\`adl\`ag integrand \phi with a finite variation index and a c\`adl\`ag integrator \omega with jumps bounded in a predictable manner. | This paper gives several simple constructions of the pathwise Ito integral \int_0^t\phi d\omega for an integrand \phi and a price path \omega as integrator , with \phi and \omega satisfying various topological and analytical conditions. The definitions are purely pathwise in that neither \phi nor \omega are assumed to be paths of stochastic processes, and the Ito integral exists almost surely in a non-probabilistic financial sense. For example, one of the results shows the existence of \int_0^t\phi d\omega for a cadlag integrand \phi and a cadlag integrator \omega with jumps bounded in a predictable manner. | [
{
"type": "R",
"before": "a simple construction",
"after": "several simple constructions",
"start_char_pos": 17,
"end_char_pos": 38
},
{
"type": "R",
"before": "It\\^o",
"after": "Ito",
"start_char_pos": 55,
"end_char_pos": 60
},
{
"type": "R",
"before": "a continuous integrand \\phi whose variation index is finite and a continuous",
"after": "an integrand \\phi and a",
"start_char_pos": 95,
"end_char_pos": 171
},
{
"type": "R",
"before": ". The definition is",
"after": ", with \\phi and \\omega satisfying various topological and analytical conditions. The definitions are purely",
"start_char_pos": 204,
"end_char_pos": 223
},
{
"type": "R",
"before": "It\\^o",
"after": "Ito",
"start_char_pos": 322,
"end_char_pos": 327
},
{
"type": "R",
"before": "A basic version of It\\^o's lemma shows that our definition of the It\\^o integral agrees with F\\\"ollmer's. Finally, we propose a tentative definition and show the",
"after": "For example, one of the results shows the",
"start_char_pos": 398,
"end_char_pos": 559
},
{
"type": "R",
"before": "c\\`adl\\`ag integrand \\phi with a finite variation index and a c\\`adl\\`ag",
"after": "cadlag integrand \\phi and a cadlag",
"start_char_pos": 600,
"end_char_pos": 672
}
]
| [
0,
205,
397,
503
]
|
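The four 1512.01698 records above track one abstract across successive revision depths; each depth's after_revision reappears, up to minor whitespace, as the next depth's before_revision. A sketch for materialising every version of such a chain, reusing the apply_edit_actions helper sketched earlier (names again the sketch's own):

    def all_versions(revisions):
        # revisions: the records sharing one doc_id. Each reconstruction uses
        # that record's own stored before_revision rather than the previous
        # output, so the approximate whitespace handling in apply_edit_actions
        # cannot make the character offsets drift between depths.
        ordered = sorted(revisions, key=lambda r: int(r["revision_depth"]))  # int() in case depth is stored as a string
        return {r["revision_depth"]: apply_edit_actions(r["before_revision"], r["edit_actions"])
                for r in ordered}

Feeding each record its own stored before_revision is a deliberate choice: chaining the reconstructed text forward would only be safe if the rebuild were exact to the character, which the whitespace caveat noted earlier rules out.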
1512.01790 | 1 | The sequence of amino acid monomers in the primary structure of protein is decided by the corresponding sequence of codons (triplets of nucleic acid monomers) on the template messenger RNA (mRNA). The polymerization of a protein, by incorporation of the successive amino acid monomers, is carried out by a molecular machine called ribosome. Transfer RNA (tRNA) molecules, each species of which is "charged" with a specific amino acid, enters the ribosome and participates in the reading of the codon by the ribosome. Both mis-reading of mRNA codon and prior mis-charging of a tRNA can lead to "mis-sense" error, i. e,. erroneous substitution of a correct amino acid monomer by an incorrect one during the synthesis of a protein. We develop a theoretical model of protein synthesis that allows for both types of contributions to the "mis-sense" error. We report exact analytical formulae for several quantities that characterize the interplay of mis-charging of tRNA and mis-reading of mRNA. The average rate of elongation of a protein is given by a generalized Michaelis-Menten-like formula. We discuss the main implications of these results. These formulae will be very useful in future in analyzing the data collected during experimental investigations of this phenomenon{\it . | The sequence of amino acid monomers in the primary structure of a protein is decided by the corresponding sequence of codons (triplets of nucleic acid monomers) on the template messenger RNA (mRNA). The polymerization of a protein, by incorporation of the successive amino acid monomers, is carried out by a molecular machine called ribosome. We develop a stochastic kinetic model that captures the possibilities of mis-reading of mRNA codon and prior mis-charging of a tRNA . By a combination of analytical and numerical methods we obtain the distribution of the times taken for incorporation of the successive amino acids in the growing protein in this mathematical model. The corresponding exact analytical expression for the average rate of elongation of a nascent protein is a `biologically motivated' generalization of the{\it Michaelis-Menten formula for the average rate of enzymatic reactions. This generalized Michaelis-Menten-like formula (and the exact analytical expressions for a few other quantities) that we report here display the interplay of four different branched pathways corresponding to selection of four different types of tRNA . | [
{
"type": "A",
"before": null,
"after": "a",
"start_char_pos": 64,
"end_char_pos": 64
},
{
"type": "R",
"before": "Transfer RNA (tRNA) molecules, each species of which is \"charged\" with a specific amino acid, enters the ribosome and participates in the reading of the codon by the ribosome. Both",
"after": "We develop a stochastic kinetic model that captures the possibilities of",
"start_char_pos": 342,
"end_char_pos": 522
},
{
"type": "R",
"before": "can lead to \"mis-sense\" error, i. e,. erroneous substitution of a correct amino acid monomer by an incorrect one during the synthesis of a protein. We develop a theoretical model of protein synthesis that allows for both types of contributions to the \"mis-sense\" error. We report exact analytical formulae for several quantities that characterize the interplay of mis-charging of tRNA and mis-reading of mRNA. The",
"after": ". By a combination of analytical and numerical methods we obtain the distribution of the times taken for incorporation of the successive amino acids in the growing protein in this mathematical model. The corresponding exact analytical expression for the",
"start_char_pos": 582,
"end_char_pos": 995
},
{
"type": "R",
"before": "protein is given by a generalized Michaelis-Menten-like formula. We discuss the main implications of these results. These formulae will be very useful in future in analyzing the data collected during experimental investigations of this phenomenon",
"after": "nascent protein is a `biologically motivated' generalization of the",
"start_char_pos": 1028,
"end_char_pos": 1274
},
{
"type": "A",
"before": null,
"after": "Michaelis-Menten formula",
"start_char_pos": 1279,
"end_char_pos": 1279
},
{
"type": "A",
"before": null,
"after": "for the average rate of enzymatic reactions. This generalized Michaelis-Menten-like formula (and the exact analytical expressions for a few other quantities) that we report here display the interplay of four different branched pathways corresponding to selection of four different types of tRNA",
"start_char_pos": 1280,
"end_char_pos": 1280
}
]
| [
0,
197,
341,
517,
729,
851,
991,
1092,
1143
]
|
1512.02233 | 1 | Here, we present the World Trade Atlas 1870-2013, a collection of annual world trade maps in which distances incorporate the different dimensions that affect international trade , beyond mere geography. The atlas provides us with information regarding the long-term evolution of the international trade system and demonstrates that, in terms of trade, the world is not flat , but hyperbolic . The departure from flatness has been increasing since World War I, meaning that differences in trade distances are growing and trade networks are becoming more hierarchical. Smaller-scale economies are moving away from other countries except for the largest economies; meanwhile those large economies are increasing their chances of becoming connected worldwide. At the same time, Preferential Trade Agreements do not fit in perfectly with natural communities within the trade space and have not necessarily reduced internal trade barriers. We discuss an interpretation in terms of globalization, hierarchization, and localization; three simultaneous forces that shape the international trade system. | Here, we present the World Trade Atlas 1870-2013, a collection of annual world trade maps in which distance combines economic size and the different dimensions that affect international trade beyond mere geography. Trade distances, which are based on a gravity model predicting the existence of significant trade channels, are such that the closer countries are in trade space, the greater their chance of becoming connected. The atlas provides us with information regarding the long-term evolution of the international trade system and demonstrates that, in terms of trade, the world is not flat but hyperbolic, as a reflection of its complex architecture . The departure from flatness has been increasing since World War I, meaning that differences in trade distances are growing and trade networks are becoming more hierarchical. Smaller-scale economies are moving away from other countries except for the largest economies; meanwhile those large economies are increasing their chances of becoming connected worldwide. At the same time, Preferential Trade Agreements do not fit in perfectly with natural communities within the trade space and have not necessarily reduced internal trade barriers. We discuss an interpretation in terms of globalization, hierarchization, and localization; three simultaneous forces that shape the international trade system. | [
{
"type": "R",
"before": "distances incorporate",
"after": "distance combines economic size and",
"start_char_pos": 99,
"end_char_pos": 120
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 178,
"end_char_pos": 179
},
{
"type": "A",
"before": null,
"after": "Trade distances, which are based on a gravity model predicting the existence of significant trade channels, are such that the closer countries are in trade space, the greater their chance of becoming connected.",
"start_char_pos": 203,
"end_char_pos": 203
},
{
"type": "R",
"before": ", but hyperbolic",
"after": "but hyperbolic, as a reflection of its complex architecture",
"start_char_pos": 375,
"end_char_pos": 391
}
]
| [
0,
202,
393,
567,
662,
756,
934,
1025
]
|
1512.02937 | 1 | The number of molecules involved in a cell or subcellular structure is sometimes rather small. In this situation, ordinary macroscopic-level fluctuations can be overwhelmed by non-negligible large fluctuations, which results in drastic changes in chemical-reaction dynamics and the resulting statistics compared to those observed under a large system size with a large number of molecules. Thus, analyses based on the average behavior of molecules valid at a macroscopic level (i.e., with a large-number limit) will not necessarily hold when considering a small size system containing a much smaller number of molecules . In order to understand how salient changes emerge from fluctuations in molecular number, we here quantitatively define small-number effect by focusing on a "mesoscopic" level, in which the concentration distribution is distinguishable both from micro- and macroscopic ones, and propose a criterion for determining whether or not such an effect can emerge in a given chemical reaction network. Using the proposed criterion, we systematically derive a list of motifs of chemical reaction networks that can show small-number effects, which includes motifs showing emergence of the power law and the bimodal distribution observable in a mesoscopic regime with respect to molecule number . Motif analysis revealed that autocatalytic reactions are essential for the emergence of the small-number effect . The list of motifs provided herein is helpful in the search for candidates of biochemical reactions with a small-number effect for possible biological functions, as well as for designing a reaction system whose behavior can change drastically depending on molecule number, rather than concentration. | The number of molecules involved in a cell or subcellular structure is sometimes rather small. In this situation, ordinary macroscopic-level fluctuations can be overwhelmed by non-negligible large fluctuations, which results in drastic changes in chemical-reaction dynamics and statistics compared to those observed under a macroscopic system (i.e., with a large number of molecules ) . In order to understand how salient changes emerge from fluctuations in molecular number, we here quantitatively define small-number effect by focusing on a `mesoscopic' level, in which the concentration distribution is distinguishable both from micro- and macroscopic ones, and propose a criterion for determining whether or not such an effect can emerge in a given chemical reaction network. Using the proposed criterion, we systematically derive a list of motifs of chemical reaction networks that can show small-number effects, which includes motifs showing emergence of the power law and the bimodal distribution observable in a mesoscopic regime with respect to molecule number . The list of motifs provided herein is helpful in the search for candidates of biochemical reactions with a small-number effect for possible biological functions, as well as for designing a reaction system whose behavior can change drastically depending on molecule number, rather than concentration. | [
{
"type": "D",
"before": "the resulting",
"after": null,
"start_char_pos": 278,
"end_char_pos": 291
},
{
"type": "R",
"before": "large system size with a large number of molecules. Thus, analyses based on the average behavior of molecules valid at a macroscopic level",
"after": "macroscopic system",
"start_char_pos": 338,
"end_char_pos": 476
},
{
"type": "R",
"before": "large-number limit) will not necessarily hold when considering a small size system containing a much smaller",
"after": "large",
"start_char_pos": 491,
"end_char_pos": 599
},
{
"type": "A",
"before": null,
"after": ")",
"start_char_pos": 620,
"end_char_pos": 620
},
{
"type": "R",
"before": "\"mesoscopic\"",
"after": "`mesoscopic'",
"start_char_pos": 779,
"end_char_pos": 791
},
{
"type": "D",
"before": ". Motif analysis revealed that autocatalytic reactions are essential for the emergence of the small-number effect",
"after": null,
"start_char_pos": 1306,
"end_char_pos": 1419
}
]
| [
0,
94,
389,
622,
1015,
1421
]
|
1512.04741 | 1 | The Multi Variate Mixture Dynamics model is a tractable, dynamical, arbitrage-free multivariate model characterized by transparency on the dependence structure, since closed form formulae for terminal correlations, average correlations and copula function are available. It also allows for complete decorrelation between assets and instantaneous variances. Each single asset is modelled according to a lognormal mixture dynamics model, and this univariate version is widely used in the industry due to its flexibility and accuracy. The same property holds for the multivariate process of all assets, whose density is a mixture of multivariate basic densities. This allows for consistency of single asset and index/portfolio smile. In this paper, we generalize the MVMD model by introducing shifted dynamics and we propose a definition of implied correlation under this model. We investigate whether the model is able to consistently reproduce the implied volatility of FX cross rates , once the single components are calibrated to univariate shifted lognormal mixture dynamics models . We compare the performance of the shifted MVMD model in terms of implied correlation with those of the shifted Simply Correlated Mixture Dynamics model where the dynamics of the single assets are connected naively by introducing correlation among their Brownian motions. Finally, we introduce a model with uncertain volatilities and correlation. The Markovian projection of this model is a generalization of the shifted MVMD model. | The Multi Variate Mixture Dynamics model is a tractable, dynamical, arbitrage-free multivariate model characterized by transparency on the dependence structure, since closed form formulae for terminal correlations, average correlations and copula function are available. It also allows for complete decorrelation between assets and instantaneous variances. Each single asset is modelled according to a lognormal mixture dynamics model, and this univariate version is widely used in the industry due to its flexibility and accuracy. The same property holds for the multivariate process of all assets, whose density is a mixture of multivariate basic densities. This allows for consistency of single asset and index/portfolio smile. In this paper, we generalize the MVMD model by introducing shifted dynamics and we propose a definition of implied correlation under this model. We investigate whether the model is able to consistently reproduce the implied volatility of FX cross rates once the single components are calibrated to univariate shifted lognormal mixture dynamics models . We consider in particular the case of the Chinese renminbi FX rate, showing that the shifted MVMD model correctly recovers the CNY/EUR smile given the EUR/USD smile and the USD/CNY smile, thus highlighting that the model can also work as an arbitrage free volatility smile extrapolation tool for cross currencies that may not be liquid or fully observable . We compare the performance of the shifted MVMD model in terms of implied correlation with those of the shifted Simply Correlated Mixture Dynamics model where the dynamics of the single assets are connected naively by introducing correlation among their Brownian motions. Finally, we introduce a model with uncertain volatilities and correlation. The Markovian projection of this model is a generalization of the shifted MVMD model. | [
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 984,
"end_char_pos": 985
},
{
"type": "A",
"before": null,
"after": ". We consider in particular the case of the Chinese renminbi FX rate, showing that the shifted MVMD model correctly recovers the CNY/EUR smile given the EUR/USD smile and the USD/CNY smile, thus highlighting that the model can also work as an arbitrage free volatility smile extrapolation tool for cross currencies that may not be liquid or fully observable",
"start_char_pos": 1084,
"end_char_pos": 1084
}
]
| [
0,
270,
356,
531,
659,
730,
875,
1086,
1357,
1432
]
|
1512.04880 | 1 | Two-component feedback loops are dynamical systems arising in mathematical biology that describe the time evolution of interacting molecules diffusing on a graph . These dynamical systems closely resemble a Hamiltonian system in \Bbb R^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a ratio of the diffusion coefficients. The ratio therefore measures the obstruction preventing a two-component feedback loop from being Hamiltonian (where the ratio equals one) . To generalise two-component feedback loops to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their biological interpretation motivates new problems in Floer theory, mirror symmetry, and the study of \Bbb D-K\"{a}hler manifolds . | Two-component feedback loops (TCFLs) are dynamical systems arising in mathematical biology that describe the time evolution of pairs of interacting molecules using complex network theory . These dynamical systems closely resemble a Hamiltonian system in \Bbb R^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a number called the Turing instability parameter. The Turing instability parameter therefore measures the obstruction preventing a TCFL from being Hamiltonian where the Turing instability parameter equals one . To generalise TCFLs to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their biological interpretation motivates new problems in Floer theory, mirror symmetry, and the study of \Bbb D-K\"{a}hler manifolds . Since many questions in complex network theory can be translated into the topological setting, this paper therefore serves to bring a selection of ideas from biology to pure mathematics . | [
{
"type": "A",
"before": null,
"after": "(TCFLs)",
"start_char_pos": 29,
"end_char_pos": 29
},
{
"type": "R",
"before": "interacting molecules diffusing on a graph",
"after": "pairs of interacting molecules using complex network theory",
"start_char_pos": 120,
"end_char_pos": 162
},
{
"type": "R",
"before": "ratio of the diffusion coefficients. The ratio",
"after": "number called the Turing instability parameter. The Turing instability parameter",
"start_char_pos": 337,
"end_char_pos": 383
},
{
"type": "R",
"before": "two-component feedback loop",
"after": "TCFL",
"start_char_pos": 432,
"end_char_pos": 459
},
{
"type": "R",
"before": "(where the ratio equals one)",
"after": "where the Turing instability parameter equals one",
"start_char_pos": 483,
"end_char_pos": 511
},
{
"type": "R",
"before": "two-component feedback loops",
"after": "TCFLs",
"start_char_pos": 528,
"end_char_pos": 556
},
{
"type": "A",
"before": null,
"after": ". Since many questions in complex network theory can be translated into the topological setting, this paper therefore serves to bring a selection of ideas from biology to pure mathematics",
"start_char_pos": 935,
"end_char_pos": 935
}
]
| [
0,
164,
373,
513,
696
]
|
1512.04880 | 2 | Two-component feedback loops (TCFLs) are dynamical systems arising in mathematical biology that describe the time evolution of pairs of interacting molecules using complex network theory. These dynamical systems closely resemble a Hamiltonian system in %DIFDELCMD < \Bbb %%% R ^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a number called the Turing instability parameter. The Turing instability parameter therefore measures the obstruction preventing a TCFL from being Hamiltonian where the Turing instability parameter equals one. To generalise TCFLs to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their biological interpretation motivates new problems in Floer theory, mirror symmetry, and the study of %DIFDELCMD < \Bbb %%% D-K \"{a}hler manifolds . Since many questions in complex network theory can be translated into the topological setting, this paper therefore serves to bring a selection of ideas from biology to pure mathematics . | Networks of planar Hamiltonian systems closely resemble Hamiltonian system in %DIFDELCMD < \Bbb %%% \mathbb{R ^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a number called the Turing instability parameter. To generalise these dynamical systems to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their physical interpretation motivates new problems in Floer theory, mirror symmetry, and the study of %DIFDELCMD < \Bbb %%% \mathbb{D \"{a}hler manifolds . | [
{
"type": "R",
"before": "Two-component feedback loops (TCFLs) are dynamical systems arising in mathematical biology that describe the time evolution of pairs of interacting molecules using complex network theory. These dynamical",
"after": "Networks of planar Hamiltonian",
"start_char_pos": 0,
"end_char_pos": 203
},
{
"type": "D",
"before": "a",
"after": null,
"start_char_pos": 229,
"end_char_pos": 230
},
{
"type": "R",
"before": "R",
"after": "\\mathbb{R",
"start_char_pos": 275,
"end_char_pos": 276
},
{
"type": "R",
"before": "The Turing instability parameter therefore measures the obstruction preventing a TCFL from being Hamiltonian where the Turing instability parameter equals one. To generalise TCFLs",
"after": "To generalise these dynamical systems",
"start_char_pos": 426,
"end_char_pos": 605
},
{
"type": "R",
"before": "biological",
"after": "physical",
"start_char_pos": 856,
"end_char_pos": 866
},
{
"type": "R",
"before": "D-K",
"after": "\\mathbb{D",
"start_char_pos": 978,
"end_char_pos": 981
},
{
"type": "D",
"before": ". Since many questions in complex network theory can be translated into the topological setting, this paper therefore serves to bring a selection of ideas from biology to pure mathematics",
"after": null,
"start_char_pos": 1002,
"end_char_pos": 1189
}
]
| [
0,
187,
425,
585,
745
]
|
1512.04880 | 3 | Networks of planar Hamiltonian systems closely resemble Hamiltonian system in R^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a number called the Turing instability parameter. To generalise these dynamical systems to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their physical interpretation motivates new problems in Floer theory, mirror symmetry, and the study of \mathbb{D-K\"{a}hler manifolds} . | Certain dissipative physical systems closely resemble Hamiltonian systems in R^{2n}, but with the canonical equation for one of the variables in each conjugate pair rescaled by a real parameter. To generalise these dynamical systems to symplectic manifolds in this paper we introduce and study the properties of deformed Hamiltonian vector fields on Lagrangian fibrations. We describe why these objects have some interesting applications to symplectic geometry and discuss how their physical interpretation motivates new problems in -K\"{a}hler manifolds} mathematics . | [
{
"type": "R",
"before": "Networks of planar Hamiltonian",
"after": "Certain dissipative physical",
"start_char_pos": 0,
"end_char_pos": 30
},
{
"type": "R",
"before": "system",
"after": "systems",
"start_char_pos": 68,
"end_char_pos": 74
},
{
"type": "R",
"before": "number called the Turing instability",
"after": "real",
"start_char_pos": 180,
"end_char_pos": 216
},
{
"type": "D",
"before": "Floer theory, mirror symmetry, and the study of \\mathbb{D",
"after": null,
"start_char_pos": 566,
"end_char_pos": 623
},
{
"type": "A",
"before": null,
"after": "mathematics",
"start_char_pos": 646,
"end_char_pos": 646
}
]
| [
0,
227,
405
]
|
1512.05015 | 1 | We consider continuous-time stochastic optimal control problems featuring Conditional Value-at-Risk (CVaR) in the objective. The major difficulty in these problems arises from time-inconsistency, which prevents us from directly using dynamic programming. To resolve this challenge, we convert to an equivalent bilevel optimization problem in which the inner optimization problem is standard stochastic control. Furthermore, we provide conditions under which the outer objective function is convex and differentiable. We compute the outer objective's value via a Hamilton-Jacobi-Bellman equation and its gradient via the viscosity solution of a linear parabolic equation, which allows us to perform gradient descent. The significance of this result is that we provide an efficient dynamic programming-based algorithm for optimal control of CVaR without lifting the state-space. To broaden the applicability of the proposed algorithm, we provide convergent approximation schemes in cases where our key assumptions do not hold and characterize relevant suboptimality bounds. In addition, we extend our method to a more general class of risk metrics, which includes mean-variance and median-deviation. We also demonstrate a concrete application to portfolio optimization under CVaR constraints. Our results contribute an efficient framework for solving time-inconsistent CVaR-based dynamic optimization. | We consider continuous-time stochastic optimal control problems featuring Conditional Value-at-Risk (CVaR) in the objective. The major difficulty in these problems arises from time-inconsistency, which prevents us from directly using dynamic programming. To resolve this challenge, we convert to an equivalent bilevel optimization problem in which the inner optimization problem is standard stochastic control. Furthermore, we provide conditions under which the outer objective function is convex and differentiable. We compute the outer objective's value via a Hamilton-Jacobi-Bellman equation and its gradient via the viscosity solution of a linear parabolic equation, which allows us to perform gradient descent. The significance of this result is that we provide an efficient dynamic programming-based algorithm for optimal control of CVaR without lifting the state-space. To broaden the applicability of the proposed algorithm, we propose convergent approximation schemes in cases where our key assumptions do not hold and characterize relevant suboptimality bounds. In addition, we extend our method to a more general class of risk metrics, which includes mean-variance and median-deviation. We also demonstrate a concrete application to portfolio optimization under CVaR constraints. Our results contribute an efficient framework for solving time-inconsistent CVaR-based sequential optimization. | [
{
"type": "R",
"before": "provide",
"after": "propose",
"start_char_pos": 936,
"end_char_pos": 943
},
{
"type": "R",
"before": "dynamic",
"after": "sequential",
"start_char_pos": 1378,
"end_char_pos": 1385
}
]
| [
0,
124,
254,
410,
516,
715,
876,
1071,
1197,
1290
]
|
1512.05066 | 1 | This paper examined the size difference of avalanches among industrial sectors triggered by demand by using the production-inventory model and the observed data. Also, we investigated how each industrial sector can be affected in terms of network topology by using the control theory. We obtained the following results. (1) The size of avalanches are diverse depending on sectors where demands are given . (2) The simulated avalanche size for the policies actually conducted correspond well to the ex-post evaluations of the policies. (3) The expectation to get involved into avalanches are diverse depending on sectors. (4) Service sectors and small firms are difficult to be indirectly affected by fiscal policies. On the other hand, construction, manufacturing, and wholesale sectors are well affected by fiscal policies. (5) If we need to clip a network without losing the effect of the fiscal policy, we can clip the network by the descending order of firms' capital size . | This study examine the difference in the size of avalanches among industries triggered by demand shocks, which can be rephrased by control of the economy or fiscal policy, and by using the production-inventory model and observed data. We obtain the following results. (1) The size of avalanches follows power law . (2) The mean sizes of avalanches for industries are diverse but their standard deviations highly overlap. (3) We compare the simulation with an input-output table and with the actual policies. They are compatible . | [
{
"type": "R",
"before": "paper examined the size difference",
"after": "study examine the difference in the size",
"start_char_pos": 5,
"end_char_pos": 39
},
{
"type": "R",
"before": "industrial sectors",
"after": "industries",
"start_char_pos": 60,
"end_char_pos": 78
},
{
"type": "R",
"before": "by",
"after": "shocks, which can be rephrased by control of the economy or fiscal policy, and by",
"start_char_pos": 99,
"end_char_pos": 101
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 143,
"end_char_pos": 146
},
{
"type": "R",
"before": "Also, we investigated how each industrial sector can be affected in terms of network topology by using the control theory. We obtained",
"after": "We obtain",
"start_char_pos": 162,
"end_char_pos": 296
},
{
"type": "R",
"before": "are diverse depending on sectors where demands are given",
"after": "follows power law",
"start_char_pos": 347,
"end_char_pos": 403
},
{
"type": "R",
"before": "simulated avalanche size for the policies actually conducted correspond well to the ex-post evaluations of the policies. (3) The expectation to get involved into avalanches are diverse depending on sectors. (4) Service sectors and small firms are difficult to be indirectly affected by fiscal policies. On the other hand, construction, manufacturing, and wholesale sectors are well affected by fiscal policies. (5) If we need to clip a network without losing the effect of the fiscal policy, we can clip the network by the descending order of firms' capital size",
"after": "mean sizes of avalanches for industries are diverse but their standard deviations highly overlap. (3) We compare the simulation with an input-output table and with the actual policies. They are compatible",
"start_char_pos": 414,
"end_char_pos": 976
}
]
| [
0,
161,
284,
319,
534,
620,
716,
824
]
|
1512.05602 | 1 | Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about the existence of a well-defined reference mechanical state at which bone structure is adapted and stable. The dynamic nature of bone tissues and wide range of timescales at which mechanical adaptation is observed clinically and experimentally make it difficult to define such a reference state . We propose here a mechanostat theory that takes into account the cellular origin of bone's mechanosensitivity . This theory includes (i) a cell-specific reference state that enables mechanosensing cells to gauge a mechanical stimulus and to respond to it; (ii) a rapid, but partial desensitisation of the mechanosensing cells to the mechanical stimulus; and (iii) the replacement of the mechanosensing cells during bone remodelling, which resets the cell-specific reference state. The cell-specific reference state we propose is assumed to be encoded in the cells during their formation. It provides a long-lasting memory of the current mechanical stimulus gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. Our proposed cell-based mechanostat theory gives a cellular interpretation of the different phenomena and timescales occurring during the mechanical adaptation of bone tissues . The consideration of osteocyte desensitisation and osteocyte replacement enables to resolve several shortcomings of the standard mechanostat theory. | Bone's mechanostat theory describes the adaptation of bone tissues to their mechanical environment. Many experiments have investigated and observed such structural adaptation. However, there is still much uncertainty about how to define the reference mechanical state at which bone structure is adapted and stable. Clinical and experimental observations show that this reference state varies both in space and in time, over a wide range of timescales . We propose an osteocyte-based mechanostat theory that links various timescales of structural adaptation with various dynamic features of the osteocyte network in bone . This theory assumes that osteocytes are formed adapted to their current local mechanical environment through modulation of morphological and genotypic osteocyte properties involved in mechanical sensitivity. We distinguish two main types of physiological responses by which osteocytes subsequently modify the reference mechanical state. One is the replacement of osteocytes during bone remodelling, which occurs over the long timescales of bone turnover. The other is cell desensitisation responses, which occur more rapidly and reversibly during an osteocyte's lifetime. The novelty of this theory is to propose that long-lasting morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is gradually reset by bone remodelling. We test this theory by simulating long-term mechanical disuse (modelling spinal cord injury), and short-term mechanical loadings (modelling daily exercises) with a mathematical model. 
The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture the different phenomena and timescales observed during the mechanical adaptation of bone tissues , lending support to this theory. | [
{
"type": "R",
"before": "the existence of a well-defined",
"after": "how to define the",
"start_char_pos": 223,
"end_char_pos": 254
},
{
"type": "R",
"before": "The dynamic nature of bone tissues and",
"after": "Clinical and experimental observations show that this reference state varies both in space and in time, over a",
"start_char_pos": 329,
"end_char_pos": 367
},
{
"type": "D",
"before": "at which mechanical adaptation is observed clinically and experimentally make it difficult to define such a reference state",
"after": null,
"start_char_pos": 393,
"end_char_pos": 516
},
{
"type": "R",
"before": "here a",
"after": "an osteocyte-based",
"start_char_pos": 530,
"end_char_pos": 536
},
{
"type": "R",
"before": "takes into account the cellular origin of bone's mechanosensitivity",
"after": "links various timescales of structural adaptation with various dynamic features of the osteocyte network in bone",
"start_char_pos": 561,
"end_char_pos": 628
},
{
"type": "R",
"before": "includes (i) a cell-specific reference state that enables mechanosensing cells to gauge a mechanical stimulus and to respond to it; (ii) a rapid, but partial desensitisation of",
"after": "assumes that osteocytes are formed adapted to their current local mechanical environment through modulation of morphological and genotypic osteocyte properties involved in mechanical sensitivity. We distinguish two main types of physiological responses by which osteocytes subsequently modify the reference mechanical state. One is",
"start_char_pos": 643,
"end_char_pos": 819
},
{
"type": "D",
"before": "mechanosensing cells to the mechanical stimulus; and (iii) the",
"after": null,
"start_char_pos": 824,
"end_char_pos": 886
},
{
"type": "R",
"before": "the mechanosensing cells",
"after": "osteocytes",
"start_char_pos": 902,
"end_char_pos": 926
},
{
"type": "R",
"before": "resets the cell-specific reference state. The cell-specific reference state we propose is assumed to be encoded in the cells during their formation. It provides a",
"after": "occurs over the long timescales of bone turnover. The other is cell desensitisation responses, which occur more rapidly and reversibly during an osteocyte's lifetime. The novelty of this theory is to propose that",
"start_char_pos": 958,
"end_char_pos": 1120
},
{
"type": "R",
"before": "memory of the current mechanical stimulus",
"after": "morphological and genotypic osteocyte properties provide a material basis for a long-term mechanical memory of bone that is",
"start_char_pos": 1134,
"end_char_pos": 1175
},
{
"type": "R",
"before": "Our proposed cell-based mechanostat theory gives a cellular interpretation of",
"after": "The consideration of osteocyte desensitisation and of osteocyte replacement by remodelling is able to capture",
"start_char_pos": 1397,
"end_char_pos": 1474
},
{
"type": "R",
"before": "occurring",
"after": "observed",
"start_char_pos": 1514,
"end_char_pos": 1523
},
{
"type": "R",
"before": ". The consideration of osteocyte desensitisation and osteocyte replacement enables to resolve several shortcomings of the standard mechanostat",
"after": ", lending support to this",
"start_char_pos": 1573,
"end_char_pos": 1715
}
]
| [
0,
99,
175,
328,
518,
630,
774,
872,
999,
1106,
1212,
1396,
1574
]
|
1512.05924 | 1 | In this paper, we study a class of quadratic-exponential growth BSDEs with jumps. The quadratic structure was introduced by Barrieu & El Karoui (2013) and yields a very useful universal bound on the possible solutions. With the bounded terminal condition as well as an additional local Lipschitz continuity, we give a simple and streamlined proof for the existence and the uniqueness of the solution . The universal bound and the stability result for the locally Lipschitz BSDEs with coefficients in the BMO space enable us to show the strong convergence of a sequence of globally Lipschitz BSDEs . The result is then used to generalize the existing results on the Malliavin's differentiability of the quadratic BSDEs in the diffusion setup to the quadratic-exponential growth BSDEs with jumps . | We investigate a class of quadratic-exponential growth BSDEs with jumps. The quadratic structure introduced by Barrieu & El Karoui (2013) yields the universal bound on the possible solutions. With a bounded terminal condition and local Lipschitz continuity, we give a simple and streamlined proof for the existence as well as the uniqueness of the solution without using the comparison principle. The properties of locally Lipschitz BSDEs with coefficients in BMO space enable us to show the strong convergence of a sequence of globally Lipschitz BSDEs to the interested one, which is then used to give sufficient conditions for the Malliavin's differentiability . | [
{
"type": "R",
"before": "In this paper, we study",
"after": "We investigate",
"start_char_pos": 0,
"end_char_pos": 23
},
{
"type": "D",
"before": "was",
"after": null,
"start_char_pos": 106,
"end_char_pos": 109
},
{
"type": "R",
"before": "and yields a very useful",
"after": "yields the",
"start_char_pos": 151,
"end_char_pos": 175
},
{
"type": "R",
"before": "the",
"after": "a",
"start_char_pos": 224,
"end_char_pos": 227
},
{
"type": "R",
"before": "as well as an additional",
"after": "and",
"start_char_pos": 255,
"end_char_pos": 279
},
{
"type": "R",
"before": "and",
"after": "as well as",
"start_char_pos": 365,
"end_char_pos": 368
},
{
"type": "R",
"before": ". The universal bound and the stability result for the",
"after": "without using the comparison principle. The properties of",
"start_char_pos": 400,
"end_char_pos": 454
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 500,
"end_char_pos": 503
},
{
"type": "R",
"before": ". The result",
"after": "to the interested one, which",
"start_char_pos": 597,
"end_char_pos": 609
},
{
"type": "R",
"before": "generalize the existing results on the",
"after": "give sufficient conditions for the",
"start_char_pos": 626,
"end_char_pos": 664
},
{
"type": "D",
"before": "of the quadratic BSDEs in the diffusion setup to the quadratic-exponential growth BSDEs with jumps",
"after": null,
"start_char_pos": 695,
"end_char_pos": 793
}
]
| [
0,
81,
218,
401,
598
]
|
1512.05924 | 2 | We investigate a class of quadratic-exponential growth BSDEs with jumps. The quadratic structure introduced by Barrieu & El Karoui (2013) yields the universal bound on the possible solutions. With a bounded terminal condition and local Lipschitz continuity , we give a simple and streamlined proof for the existence as well as the uniqueness of the solution without using the comparison principle. The properties of locally Lipschitz BSDEs with coefficients in BMO space enable us to show the strong convergence of a sequence of globally Lipschitz BSDEs to the interested one , which is then used to give sufficient conditions for the Malliavin's differentiability. | We investigate a class of quadratic-exponential growth BSDEs with jumps. The quadratic structure introduced by Barrieu & El Karoui (2013) yields the universal bounds on the possible solutions. With local Lipschitz continuity and the so-called A_gamma-condition for the comparison principle to hold, we prove the existence of a unique solution under the general quadratic-exponential structure. We have also shown that the strong convergence occurs under more general (not necessarily monotone) sequence of drivers , which is then applied to give the sufficient conditions for the Malliavin's differentiability. | [
{
"type": "R",
"before": "bound",
"after": "bounds",
"start_char_pos": 159,
"end_char_pos": 164
},
{
"type": "D",
"before": "a bounded terminal condition and",
"after": null,
"start_char_pos": 197,
"end_char_pos": 229
},
{
"type": "R",
"before": ", we give a simple and streamlined proof for the existence as well as the uniqueness of the solution without using the comparison principle. The properties of locally Lipschitz BSDEs with coefficients in BMO space enable us to show",
"after": "and",
"start_char_pos": 257,
"end_char_pos": 488
},
{
"type": "A",
"before": null,
"after": "so-called A_gamma-condition for the comparison principle to hold, we prove the existence of a unique solution under the general quadratic-exponential structure. We have also shown that the",
"start_char_pos": 493,
"end_char_pos": 493
},
{
"type": "R",
"before": "of a sequence of globally Lipschitz BSDEs to the interested one",
"after": "occurs under more general (not necessarily monotone) sequence of drivers",
"start_char_pos": 513,
"end_char_pos": 576
},
{
"type": "R",
"before": "used to give",
"after": "applied to give the",
"start_char_pos": 593,
"end_char_pos": 605
}
]
| [
0,
72,
191,
397
]
|
1512.06151 | 1 | In this paper, we investigate the non-linear Black--Scholes equation: u_t+ax^2u_{xx}+bx^3u_{xx}^2+c(xu_x-u)=0,\quad a,b>0,\ c\geq0. and show that one can be reduced to the equation u_t+(u_{xx}+u_x)^2=0 by an appropriate point transformation of variables. For the last equation, we study the group-theoretic properties, namely, we find the maximal algebra of invariance of its in Lie sense, carry out the symmetry reduction and seek for a number of exact group-invariant solutions of this equation. Using the obtained results , we get a number of exact solutions of the Black--Scholes equation . | In this paper, we investigate the non-linear Black--Scholes equation: u_t+ax^2u_{xx}+bx^3u_{xx}^2+c(xu_x-u)=0,\quad a,b>0,\ c\geq0. and show that the one can be reduced to the equation u_t+(u_{xx}+u_x)^2=0 by an appropriate point transformation of variables. For the resulting equation, we study the group-theoretic properties, namely, we find the maximal algebra of invariance of its in Lie sense, carry out the symmetry reduction and seek for a number of exact group-invariant solutions of the equation. Using the results obtained , we get a number of exact solutions of the Black--Scholes equation under study and apply the ones to resolving several boundary value problems with appropriate from the economic point of view terminal and boundary conditions . | [
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 146,
"end_char_pos": 146
},
{
"type": "R",
"before": "last",
"after": "resulting",
"start_char_pos": 264,
"end_char_pos": 268
},
{
"type": "R",
"before": "this",
"after": "the",
"start_char_pos": 484,
"end_char_pos": 488
},
{
"type": "R",
"before": "obtained results",
"after": "results obtained",
"start_char_pos": 509,
"end_char_pos": 525
},
{
"type": "A",
"before": null,
"after": "under study and apply the ones to resolving several boundary value problems with appropriate from the economic point of view terminal and boundary conditions",
"start_char_pos": 594,
"end_char_pos": 594
}
]
| [
0,
255,
498
]
|
1512.06159 | 1 | In this paper, we investigate the implication of non-stationary market microstructure noise to integrated volatility estimation, provide statistical tools to test stationarity and non-stationarity in market microstructure noise , and discuss how to measure liquidity risk using high frequency financial data. In particular, we discuss the impact of non-stationary microstructure noise on TSRV (Two-Scale Realized Variance) estimator , and design three test statistics by exploiting the edge effectsand asymptotic approximation. The asymptotic distributions of these test statistics are provided under both stationary and non-stationary noise assumptionsrespectively, and we empirically measure aggregate liquidity risks by these test statistics from 2006 to 2013. As byproducts, functional dependence and endogenous market microstructure noise are briefly discussed. Our empirical study indicates the prevalence of non-stationary market microstructure noise in the New York Stock Exchange. | In this paper, we provide non-parametric statistical tools to test stationarity of microstructure noise in general hidden Ito semimartingales , and discuss how to measure liquidity risk using high frequency financial data. In particular, we investigate the impact of non-stationary microstructure noise on some volatility estimators , and design three complementary tests by exploiting edge effects, information aggregation of local estimates and high-frequency asymptotic approximation. The asymptotic distributions of these tests are available under both stationary and non-stationary assumptions, thereby enable us to conservatively control type-I errors and meanwhile ensure the proposed tests enjoy the asymptotically optimal statistical power. Besides it also enables us to empirically measure aggregate liquidity risks by these test statistics . As byproducts, functional dependence and endogenous microstructure noise are briefly discussed. Simulation with a realistic configuration corroborates our theoretical results, and our empirical study indicates the prevalence of non-stationary microstructure noise in New York Stock Exchange. | [
{
"type": "R",
"before": "investigate the implication of non-stationary market microstructure noise to integrated volatility estimation, provide",
"after": "provide non-parametric",
"start_char_pos": 18,
"end_char_pos": 136
},
{
"type": "R",
"before": "and non-stationarity in market microstructure noise",
"after": "of microstructure noise in general hidden Ito semimartingales",
"start_char_pos": 176,
"end_char_pos": 227
},
{
"type": "R",
"before": "discuss",
"after": "investigate",
"start_char_pos": 327,
"end_char_pos": 334
},
{
"type": "R",
"before": "TSRV (Two-Scale Realized Variance) estimator",
"after": "some volatility estimators",
"start_char_pos": 388,
"end_char_pos": 432
},
{
"type": "R",
"before": "test statistics by exploiting the edge effectsand",
"after": "complementary tests by exploiting edge effects, information aggregation of local estimates and high-frequency",
"start_char_pos": 452,
"end_char_pos": 501
},
{
"type": "R",
"before": "test statistics are provided",
"after": "tests are available",
"start_char_pos": 566,
"end_char_pos": 594
},
{
"type": "R",
"before": "noise assumptionsrespectively, and we",
"after": "assumptions, thereby enable us to conservatively control type-I errors and meanwhile ensure the proposed tests enjoy the asymptotically optimal statistical power. Besides it also enables us to",
"start_char_pos": 636,
"end_char_pos": 673
},
{
"type": "R",
"before": "from 2006 to 2013.",
"after": ".",
"start_char_pos": 745,
"end_char_pos": 763
},
{
"type": "D",
"before": "market",
"after": null,
"start_char_pos": 816,
"end_char_pos": 822
},
{
"type": "R",
"before": "Our",
"after": "Simulation with a realistic configuration corroborates our theoretical results, and our",
"start_char_pos": 867,
"end_char_pos": 870
},
{
"type": "D",
"before": "market",
"after": null,
"start_char_pos": 930,
"end_char_pos": 936
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 961,
"end_char_pos": 964
}
]
| [
0,
308,
527,
763,
866
]
|
1512.06454 | 1 | The discrete-time multifactor Vasicek model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters which change over time. Therefore, the model parameters should be understood as being time-dependent , or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull-White extended multifactor Vasicek models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As a numerical example we fit Swiss interest rates using CRC multifactor Vasicek models. | The discrete-time multifactor Vasicek model is a tractable Gaussian spot rate model. Typically, two- or three-factor versions allow one to capture the dependence structure between yields with different times to maturity in an appropriate way. In practice, re-calibration of the model to the prevailing market conditions leads to model parameters that change over time. Therefore, the model parameters should be understood as being time-dependent or even stochastic. Following the consistent re-calibration (CRC) approach, we construct models as concatenations of yield curve increments of Hull-White extended multifactor Vasicek models with different parameters. The CRC approach provides attractive tractable models that preserve the no-arbitrage premise. As a numerical example , we fit Swiss interest rates using CRC multifactor Vasicek models. | [
{
"type": "A",
"before": null,
"after": "one",
"start_char_pos": 132,
"end_char_pos": 132
},
{
"type": "R",
"before": "which",
"after": "that",
"start_char_pos": 343,
"end_char_pos": 348
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 444,
"end_char_pos": 445
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 780,
"end_char_pos": 780
}
]
| [
0,
84,
239,
366,
465,
662,
756
]
|
1512.06479 | 1 | A central task in analyzing complex dynamics is to determine the loci of information storage and the communication topology of information flows within a system. Over the last decade and a half, diagnostics for the latter have come to be dominated by the transfer entropy. Via straightforward examples, we show that it and a derivative quantity, the causation entropy, do not, in fact, quantify the flow of information. At one and the same time they can overestimate flow or underestimate influence. We isolate why this is the case and propose alternate measures for information flow. An auxiliary consequencereveals that the proliferation of networks as a now-common theoretical model for large-scale systems in concert with the use of transfer-like entropies has shoehorned dyadic relationships into our structural interpretation of organization and behavior of complex systems , despite the occurrence of polyadic dependencies. The net result is that much of the organization of complex systems goes undetected. | A central task in analyzing complex dynamics is to determine the loci of information storage and the communication topology of information flows within a system. Over the last decade and a half, diagnostics for the latter have come to be dominated by the transfer entropy. Via straightforward examples, we show that it and a derivative quantity, the causation entropy, do not, in fact, quantify the flow of information. At one and the same time they can overestimate flow or underestimate influence. We isolate why this is the case and propose several avenues to alternate measures for information flow. We also address an auxiliary consequence: The proliferation of networks as a now-common theoretical model for large-scale systems , in concert with the use of transfer-like entropies , has shoehorned dyadic relationships into our structural interpretation of organization and behavior of complex systems . This interpretation thus fails to include the effects of polyadic dependencies. The net result is that much of the organization of complex systems may go undetected. | [
{
"type": "A",
"before": null,
"after": "several avenues to",
"start_char_pos": 544,
"end_char_pos": 544
},
{
"type": "R",
"before": "An auxiliary consequencereveals that the",
"after": "We also address an auxiliary consequence: The",
"start_char_pos": 586,
"end_char_pos": 626
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 711,
"end_char_pos": 711
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 763,
"end_char_pos": 763
},
{
"type": "R",
"before": ", despite the occurrence",
"after": ". This interpretation thus fails to include the effects",
"start_char_pos": 883,
"end_char_pos": 907
},
{
"type": "R",
"before": "goes",
"after": "may go",
"start_char_pos": 1001,
"end_char_pos": 1005
}
]
| [
0,
161,
272,
419,
499,
585,
933
]
|
1512.07043 | 1 | Several results on the sign properties of Metzler matrices are obtained. It is first established that checking the sign-stability of a Metzler sign-matrix can be either characterized in terms of the Hurwitz stability of the unit sign-matrix in the corresponding qualitative class, or in terms of the acyclicity of the graph associated with the sign-pattern . Similar results are obtained for the case of block-matrices and mixed-matrices, the latter containing both sign patterns and fixed real entries. The problem of assessing the sign-stability of the convex full of a finite family of Metzler matrices is also solved, and a necessary and sufficient condition for the existence of a common Lyapunov function for all the matrices in the convex hull is obtained. The notion of relative sign-stability is also introduced and a sufficient condition for the relative sign-stability of Metzler matrices is proposed . Several applications of the results are discussed in the last section. | Several results on the sign properties of Metzler matrices are obtained. It is first established that checking the sign-stability of a Metzler sign-matrix can be either characterized in terms of the Hurwitz stability of the unit sign-matrix in the corresponding qualitative class, or in terms the negativity of the diagonal elements of the Metzler sign-matrix and the acyclicity of the associated directed graph . Similar results are obtained for the case of Metzler block-matrices and Metzler mixed-matrices, the latter being a class of Metzler matrices containing both sign- and real-type entries. The problem of assessing the sign-stability of the convex hull of a finite and summable family of Metzler matrices is also solved, and a necessary and sufficient condition for the existence of common Lyapunov functions for all the matrices in the convex hull is obtained. The concept of sign-stability is then generalized to the concept of Ker_+(B)-sign-stability, a problem that arises in analysis of certain jump Markov processes. A sufficient condition for the Ker_+(B)-sign-stability of Metzler sign-matrices is obtained and formulated using inverses of sign-matrices and the concept of L^+-matrices . Several applications of the results are discussed in the last section. | [
{
"type": "R",
"before": "of the",
"after": "the negativity of the diagonal elements of the Metzler sign-matrix and the",
"start_char_pos": 293,
"end_char_pos": 299
},
{
"type": "R",
"before": "graph associated with the sign-pattern",
"after": "associated directed graph",
"start_char_pos": 318,
"end_char_pos": 356
},
{
"type": "A",
"before": null,
"after": "Metzler",
"start_char_pos": 404,
"end_char_pos": 404
},
{
"type": "A",
"before": null,
"after": "Metzler",
"start_char_pos": 424,
"end_char_pos": 424
},
{
"type": "R",
"before": "containing both sign patterns and fixed real",
"after": "being a class of Metzler matrices containing both sign- and real-type",
"start_char_pos": 452,
"end_char_pos": 496
},
{
"type": "R",
"before": "full",
"after": "hull",
"start_char_pos": 564,
"end_char_pos": 568
},
{
"type": "A",
"before": null,
"after": "and summable",
"start_char_pos": 581,
"end_char_pos": 581
},
{
"type": "R",
"before": "a common Lyapunov function",
"after": "common Lyapunov functions",
"start_char_pos": 687,
"end_char_pos": 713
},
{
"type": "R",
"before": "notion of relative",
"after": "concept of",
"start_char_pos": 771,
"end_char_pos": 789
},
{
"type": "R",
"before": "also introduced and a",
"after": "then generalized to the concept of Ker_+(B)-sign-stability, a problem that arises in analysis of certain jump Markov processes. A",
"start_char_pos": 808,
"end_char_pos": 829
},
{
"type": "R",
"before": "relative sign-stability of Metzler matrices is proposed",
"after": "Ker_+(B)-sign-stability of Metzler sign-matrices is obtained and formulated using inverses of sign-matrices and the concept of L^+-matrices",
"start_char_pos": 859,
"end_char_pos": 914
}
]
| [
0,
72,
505,
766,
916
]
|
1512.07043 | 2 | Several results on the sign properties of Metzler matrices are obtained. It is first established that checking the sign-stability of a Metzler sign-matrix can be either characterized in terms of the Hurwitz stability of the unit sign-matrix in the corresponding qualitative class, or in terms the negativity of the diagonal elements of the Metzler sign-matrix and the acyclicity of the associated directed graph. Similar results are obtained for the case of Metzler block-matrices and Metzler mixed-matrices, the latter being a class of Metzler matrices containing both sign- and real-type entries. The problem of assessing the sign-stability of the convex hull of a finite and summable family of Metzler matrices is also solved, and a necessary and sufficient condition for the existence of common Lyapunov functions for all the matrices in the convex hull is obtained. The concept of sign-stability is then generalized to the concept of Ker_+(B)-sign-stability, a problem that arises in analysis of certain jump Markov processes. A sufficient condition for the Ker_+(B)-sign-stability of Metzler sign-matrices is obtained and formulated using inverses of sign-matrices and the concept of L^+-matrices. Several applications of the results are discussed in the last section. | Several results about sign properties of Metzler matrices are obtained. It is first established that checking the sign-stability of a Metzler sign-matrix can be either characterized in terms of the Hurwitz stability of the unit sign-matrix in the corresponding qualitative class, or in terms the negativity of the diagonal elements of the Metzler sign-matrix and the acyclicity of the associated directed graph. Similar results are obtained for the case of Metzler block-matrices and Metzler mixed-matrices, the latter being a class of Metzler matrices containing both sign- and real-type entries. The problem of assessing the sign-stability of the convex hull of a finite and summable family of Metzler matrices is also solved, and a necessary and sufficient condition for the existence of common Lyapunov functions for all the matrices in the convex hull is obtained. The concept of sign-stability is then generalized to the concept of Ker_+(B)-sign-stability, a problem that arises in the analysis of certain jump Markov processes. A sufficient condition for the Ker_+(B)-sign-stability of Metzler sign-matrices is obtained and formulated using inverses of sign-matrices and the concept of L^+-matrices. Several applications of the results are discussed in the last section. | [
{
"type": "R",
"before": "on the",
"after": "about",
"start_char_pos": 16,
"end_char_pos": 22
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 989,
"end_char_pos": 989
}
]
| [
0,
72,
412,
598,
870,
1032,
1204
]
|
1512.07337 | 1 | Initial margin involves funding costs that have to be transferred to client side derivatives pricing. This article extends the liability-side pricing theory with an exogenously determined initial margin profile or endogenously , delta approximated initial margin. In the former case, margin valuation adjustment (MVA) is defined as the liability-side discounted expected margin profile, while in the latter, an extended partial differential equation is derived and solved by finite difference for an all-in fair value, decomposable into coherent CVA, FVA and MVA. Initial margin funding charge enters the PDE's delta term as a cost, irrespective of being a long or short position, effectively a tax that cannot be transferred between two clearing members, or a bid-ask spread a market maker charges uncollateralized clients or fellow financial counterparties . An IM multiplier is applied to calibrate to historical data and CCP specific requirements to allow portfolio incremental pricing. At 5-day margin period of risk, 99-percentile, and 150 bp margin funding cost, a standalone at-the-money uncollateralized swap of 10 year maturity shows about 2 basis point equivalent charge. The model illustrates that recent CME-LCH basis spread widening is related to elevated MVA accompanying dealers' hedging of customer flows. | This article prices OTC derivatives with either an exogenously determined initial margin profile or endogenously approximated initial margin. In the former case, margin valuation adjustment (MVA) is defined as the liability-side discounted expected margin profile, while in the latter, an extended partial differential equation is derived and solved for an all-in fair value, decomposable into coherent CVA, FVA and MVA. For uncollateralized customer trades, MVA can be transferred to the customer via an extension of the liability-side pricing theory. For BCBS-IOSCO covered OTC derivatives, a market maker has to charge financial counterparties a bid-ask spread to transfer its funding cost . An IM multiplier is applied to calibrate to external IM models to allow portfolio incremental pricing. In particular, a link to ISDA SIMM for equity, commodity and fx risks is established through the PDE with its vega and curvature IM components captured fully. Numerical examples are given for swaps and equity portfolios and offer a plausible attribution of recent CME-LCH basis spread widening to elevated MVA accompanying dealers' hedging of customer flows. | [
{
"type": "R",
"before": "Initial margin involves funding costs that have to be transferred to client side derivatives pricing. This article extends the liability-side pricing theory with",
"after": "This article prices OTC derivatives with either",
"start_char_pos": 0,
"end_char_pos": 161
},
{
"type": "D",
"before": ", delta",
"after": null,
"start_char_pos": 227,
"end_char_pos": 234
},
{
"type": "D",
"before": "by finite difference",
"after": null,
"start_char_pos": 472,
"end_char_pos": 492
},
{
"type": "R",
"before": "Initial margin funding charge enters the PDE's delta term as a cost, irrespective of being a long or short position, effectively a tax that cannot be transferred between two clearing members, or a",
"after": "For uncollateralized customer trades, MVA can be transferred to the customer via an extension of the liability-side pricing theory. For BCBS-IOSCO covered OTC derivatives, a market maker has to charge financial counterparties a",
"start_char_pos": 564,
"end_char_pos": 760
},
{
"type": "R",
"before": "spread a market maker charges uncollateralized clients or fellow financial counterparties",
"after": "spread to transfer its funding cost",
"start_char_pos": 769,
"end_char_pos": 858
},
{
"type": "R",
"before": "historical data and CCP specific requirements",
"after": "external IM models",
"start_char_pos": 905,
"end_char_pos": 950
},
{
"type": "R",
"before": "At 5-day margin period of risk, 99-percentile, and 150 bp margin funding cost, a standalone at-the-money uncollateralized swap of 10 year maturity shows about 2 basis point equivalent charge. The model illustrates that",
"after": "In particular, a link to ISDA SIMM for equity, commodity and fx risks is established through the PDE with its vega and curvature IM components captured fully. Numerical examples are given for swaps and equity portfolios and offer a plausible attribution of",
"start_char_pos": 991,
"end_char_pos": 1209
},
{
"type": "D",
"before": "is related",
"after": null,
"start_char_pos": 1247,
"end_char_pos": 1257
}
]
| [
0,
101,
263,
563,
860,
990,
1182
]
|
1512.07644 | 1 | Several biological tissues undergo changes in their geometry and in their bulk material properties by modelling and remodelling processes. Modelling synthesises tissue in some regions and removes tissue in others. Remodelling overwrites old tissue material properties with newly formed, immature tissue properties. As a result, tissues are made up of different ``patches'' , i.e., adjacent tissue regions of different ages and different material properties, within evolving boundaries. In this paper, generalised equations governing the spatio-temporal evolution of such tissues are developed within the continuum model. These equations take into account nonconservative, discontinuous surface mass balance due to creation and destruction of material at moving interfaces, and bulk balance due to tissue maturation. These equations make it possible to model patchy tissue states and their evolution without explicitly maintaining a record of when/where resorption and formation processes occurred. The time evolution of spatially averaged tissue properties is derived systematically by integration. These spatially-averaged equations cannot be written in closed form as they retain traces that tissue destruction is localised at tissue boundaries. The formalism developed in this paper is applied to bone tissues, which exhibit strong material heterogeneities due to their slow mineralisation and remodelling processes. Evolution equations are proposed in particular for osteocyte density and bone mineral density. Effective average equations for bone mineral density (BMD) and tissue mineral density (TMD) are derived using a mean-field approximation. The error made by this approximation when remodelling patchy tissue is investigated . | Several biological tissues undergo changes in their geometry and in their bulk material properties by modelling and remodelling processes. Modelling synthesises tissue in some regions and removes tissue in others. Remodelling overwrites old tissue material properties with newly formed, immature tissue properties. As a result, tissues are made up of different "patches" , i.e., adjacent tissue regions of different ages and different material properties, within evolving boundaries. In this paper, generalised equations governing the spatio-temporal evolution of such tissues are developed within the continuum model. These equations take into account nonconservative, discontinuous surface mass balance due to creation and destruction of material at moving interfaces, and bulk balance due to tissue maturation. These equations make it possible to model patchy tissue states and their evolution without explicitly maintaining a record of when/where resorption and formation processes occurred. The time evolution of spatially averaged tissue properties is derived systematically by integration. These spatially-averaged equations cannot be written in closed form as they retain traces that tissue destruction is localised at tissue boundaries. The formalism developed in this paper is applied to bone tissues, which exhibit strong material heterogeneities due to their slow mineralisation and remodelling processes. Evolution equations are proposed in particular for osteocyte density and bone mineral density. Effective average equations for bone mineral density (BMD) and tissue mineral density (TMD) are derived using a mean-field approximation. The error made by this approximation when remodelling patchy tissue is investigated . 
The specific time signatures of BMD or TMD during remodelling events may provide a way to detect these events occurring at lower, unseen spatial resolutions from microCT scans . | [
{
"type": "R",
"before": "``patches''",
"after": "\"patches\"",
"start_char_pos": 361,
"end_char_pos": 372
},
{
"type": "A",
"before": null,
"after": ". The specific time signatures of BMD or TMD during remodelling events may provide a way to detect these events occurring at lower, unseen spatial resolutions from microCT scans",
"start_char_pos": 1737,
"end_char_pos": 1737
}
]
| [
0,
138,
213,
314,
485,
620,
815,
997,
1098,
1247,
1419,
1514,
1652
]
|
1512.08098 | 1 | Given two families of continuous functions u and v on a topological space X, we define a preorder R=R(u,v) on X by the condition that any member of u is an R-increasing and any member of v is an R-decreasing function. It turns out that if the topological space X is quasi-compact and sequentially compact, then any element x\in X is R-dominated by an R-maximal element m\in X: xRm . In particular, since the (n-1)-dimensional simplex is a compact subset of the real n-dimensional vector space, then considering its members as portfolios consisting of n financial assets, we obtain the classical 1952 result of Harry Markowitz that any portfolio is dominated by an efficient portfolio. Moreover, several other examples of possible application of this general setup are presented. | Given two families of continuous functions u and v on a topological space X, we define a preorder R=R(u,v) on X by the condition that any member of u is an R-increasing and any member of v is an R-decreasing function. It turns out that if the topological space X is quasi-compact and sequentially compact, then any element of X is R-dominated by an R-maximal element of X . In particular, since the (n-1)-dimensional simplex is a compact subset of the real n-dimensional vector space, then considering its members as portfolios consisting of n financial assets, we obtain the classical 1952 result of Harry Markowitz that any portfolio is dominated by an efficient portfolio. Moreover, several other examples of possible application of this general setup are presented. | [
{
"type": "R",
"before": "x\\in",
"after": "of",
"start_char_pos": 323,
"end_char_pos": 327
},
{
"type": "R",
"before": "m\\in X: xRm",
"after": "of X",
"start_char_pos": 369,
"end_char_pos": 380
}
]
| [
0,
217,
684
]
|
1512.08291 | 1 | We extend the two-dimension (2D) ePlace algorithm to a flat, analytic, mixed-size placement algorithm ePlace-3D , for three-dimension integrated circuits (3D-ICs) . Nonlinear optimizationis applied over the entire cuboid domain. Specifically, we develop (1) eDensity-3D: an electrostatics based 3D placement density function with globally uniform smoothness (2) a 3D numerical solution with improved spectral formulation (3) a 3D nonlinear pre conditioner for convergence acceleration (4) interleaved 2D-3D placement for quality and efficiency enhancement. We integrate all the features into our 3D-IC placement prototype ePlace-3D. Compared to the leading placers mPL6-3D and NTUplace3-3D , our algorithm produces 6.44\% and 37.15\% shorter wirelength, 9.11\% and 10.27\% fewer vertical interconnects (VI) while runs 2.55x faster and 0.30x slower on average of all the ten IBM-PLACE circuits , respectively. We also validate ePlace-3D on the large-scale modern mixed-size (MMS) 3D circuits , which shows high performance and scalability. | We propose a flat, analytic, mixed-size placement algorithm ePlace-3D for three-dimension integrated circuits (3D-ICs) using nonlinear optimization. Our contributions are (1) electrostatics based 3D density function with globally uniform smoothness (2) 3D numerical solution with improved spectral formulation (3) 3D nonlinear pre-conditioner for convergence acceleration (4) interleaved 2D-3D placement for efficiency enhancement. Our placer outperforms the leading work mPL6-3D and NTUplace3-3D with 6.44\% and 37.15\% shorter wirelength, 9.11\% and 10.27\% fewer 3D vertical interconnects (VI) on average of IBM-PLACE circuits . Validation on the large-scale modern mixed-size (MMS) 3D circuits shows high performance and scalability. | [
{
"type": "R",
"before": "extend the two-dimension (2D) ePlace algorithm to",
"after": "propose",
"start_char_pos": 3,
"end_char_pos": 52
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 112,
"end_char_pos": 113
},
{
"type": "R",
"before": ". Nonlinear optimizationis applied over the entire cuboid domain. Specifically, we develop",
"after": "using nonlinear optimization. Our contributions are",
"start_char_pos": 163,
"end_char_pos": 253
},
{
"type": "D",
"before": "eDensity-3D: an",
"after": null,
"start_char_pos": 258,
"end_char_pos": 273
},
{
"type": "D",
"before": "placement",
"after": null,
"start_char_pos": 298,
"end_char_pos": 307
},
{
"type": "D",
"before": "a",
"after": null,
"start_char_pos": 362,
"end_char_pos": 363
},
{
"type": "D",
"before": "a",
"after": null,
"start_char_pos": 425,
"end_char_pos": 426
},
{
"type": "R",
"before": "pre conditioner",
"after": "pre-conditioner",
"start_char_pos": 440,
"end_char_pos": 455
},
{
"type": "D",
"before": "quality and",
"after": null,
"start_char_pos": 521,
"end_char_pos": 532
},
{
"type": "R",
"before": "We integrate all the features into our 3D-IC placement prototype ePlace-3D. Compared to the leading placers",
"after": "Our placer outperforms the leading work",
"start_char_pos": 557,
"end_char_pos": 664
},
{
"type": "R",
"before": ", our algorithm produces",
"after": "with",
"start_char_pos": 690,
"end_char_pos": 714
},
{
"type": "A",
"before": null,
"after": "3D",
"start_char_pos": 779,
"end_char_pos": 779
},
{
"type": "D",
"before": "while runs 2.55x faster and 0.30x slower",
"after": null,
"start_char_pos": 808,
"end_char_pos": 848
},
{
"type": "D",
"before": "all the ten",
"after": null,
"start_char_pos": 863,
"end_char_pos": 874
},
{
"type": "R",
"before": ", respectively. We also validate ePlace-3D",
"after": ". Validation",
"start_char_pos": 894,
"end_char_pos": 936
},
{
"type": "D",
"before": ", which",
"after": null,
"start_char_pos": 992,
"end_char_pos": 999
}
]
| [
0,
228,
556,
632,
909
]
|
1512.08609 | 1 | We investigate the stationary characteristics of an M/G/1 retrial queue where the server, subject to active failures, primarily attends incoming calls and directs outgoing calls only when idle. On finding the server unavailable ( busy or failed), inbound calls join the orbit and reattempt for service at exponentially-distributed time intervals. The system stability condition and probability generating functions of the number of calls in orbit and system are derived and evaluated numerically in the context of mean system size, server availability, failure frequency , and orbit waiting time. | Efficient use of call center operators through technological innovations more often come at the expense of added operation management issues. In this paper, the stationary characteristics of an M/G/1 retrial queue is investigated where the single server, subject to active failures, primarily attends incoming calls and directs outgoing calls only when idle. The incoming calls arriving at the server follow a Poisson arrival process, while outgoing calls are made in an exponentially distributed time. On finding the server unavailable ( either busy or temporarily broken down), incoming calls intrinsically join the virtual orbit from which they re-attempt for service at exponentially distributed time intervals. The system stability condition along with probability generating functions for the joint queue length distribution of the number of calls in the orbit and the state of the server are derived and evaluated numerically in the context of mean system size, server availability, failure frequency and orbit waiting time. | [
{
"type": "R",
"before": "We investigate the",
"after": "Efficient use of call center operators through technological innovations more often come at the expense of added operation management issues. In this paper, the",
"start_char_pos": 0,
"end_char_pos": 18
},
{
"type": "R",
"before": "where the",
"after": "is investigated where the single",
"start_char_pos": 72,
"end_char_pos": 81
},
{
"type": "A",
"before": null,
"after": "The incoming calls arriving at the server follow a Poisson arrival process, while outgoing calls are made in an exponentially distributed time.",
"start_char_pos": 194,
"end_char_pos": 194
},
{
"type": "R",
"before": "busy or failed), inbound calls join the orbit and reattempt",
"after": "either busy or temporarily broken down), incoming calls intrinsically join the virtual orbit from which they re-attempt",
"start_char_pos": 231,
"end_char_pos": 290
},
{
"type": "R",
"before": "exponentially-distributed",
"after": "exponentially distributed",
"start_char_pos": 306,
"end_char_pos": 331
},
{
"type": "R",
"before": "and",
"after": "along with",
"start_char_pos": 379,
"end_char_pos": 382
},
{
"type": "A",
"before": null,
"after": "for the joint queue length distribution",
"start_char_pos": 416,
"end_char_pos": 416
},
{
"type": "R",
"before": "orbit and system",
"after": "the orbit and the state of the server",
"start_char_pos": 443,
"end_char_pos": 459
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 573,
"end_char_pos": 574
}
]
| [
0,
193,
347
]
|
1601.00175 | 1 | We study the problem of selling an asset near its ultimate maximum in the minimax setting. The regret-based notion of a perfect stopping time is introduced. The related selling rule improves any earlier one and cannot be improved by further delay. A perfect stopping time is unique and has the following form: one should sell the asset if its price deviates from the running maximum by a certain time-dependent quantity. The result is applicable to a quite general price model . | We study the problem of selling an asset near its ultimate maximum in the minimax setting. The regret-based notion of a perfect stopping time is introduced. A perfect stopping time is uniquely characterized by its optimality properties and has the following form: one should sell the asset if its price deviates from the running maximum by a certain time-dependent quantity. The related selling rule improves any earlier one and cannot be improved by further delay. The results, which are applicable to a quite general price model , are illustrated by several examples . | [
{
"type": "D",
"before": "The related selling rule improves any earlier one and cannot be improved by further delay.",
"after": null,
"start_char_pos": 157,
"end_char_pos": 247
},
{
"type": "R",
"before": "unique",
"after": "uniquely characterized by its optimality properties",
"start_char_pos": 275,
"end_char_pos": 281
},
{
"type": "R",
"before": "result is",
"after": "related selling rule improves any earlier one and cannot be improved by further delay. The results, which are",
"start_char_pos": 425,
"end_char_pos": 434
},
{
"type": "A",
"before": null,
"after": ", are illustrated by several examples",
"start_char_pos": 477,
"end_char_pos": 477
}
]
| [
0,
90,
156,
247,
420
]
|
1601.00991 | 1 | We present explicit formulas - that are also computer code - for 101 real-life quantitative trading alphas. Their average holding period approximately ranges 0.6-6.4 days. The average pair-wise correlation of these alphas is low, 15.9\%. The returns are strongly correlated with volatility, but have no significant dependence on turnover, directly confirming an earlier result by two of us based on a more indirect empirical analysis. We further find empirically that turnover has poor explanatory power for alpha correlations. | We present explicit formulas - that are also computer code - for 101 real-life quantitative trading alphas. Their average holding period approximately ranges 0.6-6.4 days. The average pair-wise correlation of these alphas is low, 15.9\%. The returns are strongly correlated with volatility, but have no significant dependence on turnover, directly confirming an earlier result based on a more indirect empirical analysis. We further find empirically that turnover has poor explanatory power for alpha correlations. | [
{
"type": "D",
"before": "by two of us",
"after": null,
"start_char_pos": 377,
"end_char_pos": 389
}
]
| [
0,
107,
171,
237,
434
]
|
1601.01753 | 1 | Geography effect is investigated for the Chinese stock market , based on the daily data of individual stocks. Companies located around the stock markets are found to greatly contribute to the markets in the geographical sector. A geographical correlation is introduced to quantify the geography effect on the stock correlation, which is observed to approach steady as the company location moves to the northeast China . Stock distance effect is further studied, where companies are found to more likely set their headquarters close to each other. In the normal market environment, the stock correlation decays with the stock distance , but is independent of the stock distance in and after the financial crisis . | Geography effect is investigated for the Chinese stock market including the Shanghai and Shenzhen stock markets , based on the daily data of individual stocks. The Shanghai city and the Guangdong province can be identified in the stock geographical sector. By investigating a geographical correlation on a geographical parameter, the stock location is found to have an impact on the financial dynamics, except for the financial crisis time of the Shenzhen market . Stock distance effect is further studied, with a crossover behavior observed for the stock distance distribution. The probability of the short distance is much greater than that of the long distance. The average stock correlation is found to weakly decay with the stock distance for the Shanghai stock market, but stays nearly stable for different stock distance for the Shenzhen stock market . | [
{
"type": "A",
"before": null,
"after": "including the Shanghai and Shenzhen stock markets",
"start_char_pos": 62,
"end_char_pos": 62
},
{
"type": "R",
"before": "Companies located around the stock markets are found to greatly contribute to the markets in the",
"after": "The Shanghai city and the Guangdong province can be identified in the stock",
"start_char_pos": 111,
"end_char_pos": 207
},
{
"type": "R",
"before": "A geographical correlation is introduced to quantify the geography effect on the stock correlation, which is observed to approach steady as the company location moves to the northeast China",
"after": "By investigating a geographical correlation on a geographical parameter, the stock location is found to have an impact on the financial dynamics, except for the financial crisis time of the Shenzhen market",
"start_char_pos": 229,
"end_char_pos": 418
},
{
"type": "R",
"before": "where companies are found to more likely set their headquarters close to each other. In the normal market environment, the stock correlation decays",
"after": "with a crossover behavior observed for the stock distance distribution. The probability of the short distance is much greater than that of the long distance. The average stock correlation is found to weakly decay",
"start_char_pos": 463,
"end_char_pos": 610
},
{
"type": "R",
"before": ", but is independent of the stock distance in and after the financial crisis",
"after": "for the Shanghai stock market, but stays nearly stable for different stock distance for the Shenzhen stock market",
"start_char_pos": 635,
"end_char_pos": 711
}
]
| [
0,
110,
228,
420,
547
]
|
1601.02160 | 1 | Despite the central role that antibodies play in the adaptive immune system and in biotechnology, much remains unknown about the quantitative relationship between an antibody's amino acid sequence and its antigen binding affinity. Here we describe a new experimental approach, called Tite-Seq, that is capable of measuring binding titration curves and corresponding affinities for thousands of variant antibodies in parallel. The measurement of titration curves eliminates the confounding effects of antibody expression and stability inherent to standard deep mutational scanning assays. We demonstrate Tite-Seq on the CDR1H and CDR3H regions of a well-studied scFv antibody. Our data sheds light on the structural basis for antigen binding affinity , and suggests a dominant role for CDR1H in establishing antibody stability. Tite-Seq fills a large gap in the ability to measure critical aspects of the adaptive immune system, and can be readily used for studying sequence-affinity landscapes in other protein systems. | Despite the central role that antibodies play in the adaptive immune system and in biotechnology, much remains unknown about the quantitative relationship between an antibody's amino acid sequence and its antigen binding affinity. Here we describe a new experimental approach, called Tite-Seq, that is capable of measuring binding titration curves and corresponding affinities for thousands of variant antibodies in parallel. The measurement of titration curves eliminates the confounding effects of antibody expression and stability that arise in standard deep mutational scanning assays. We demonstrate Tite-Seq on the CDR1H and CDR3H regions of a well-studied scFv antibody. Our data shed light on the structural basis for antigen binding affinity and suggests a role for secondary CDR loops in establishing antibody stability. Tite-Seq fills a large gap in the ability to measure critical aspects of the adaptive immune system, and can be readily used for studying sequence-affinity landscapes in other protein systems. | [
{
"type": "R",
"before": "inherent to",
"after": "that arise in",
"start_char_pos": 534,
"end_char_pos": 545
},
{
"type": "R",
"before": "sheds",
"after": "shed",
"start_char_pos": 685,
"end_char_pos": 690
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 750,
"end_char_pos": 751
},
{
"type": "R",
"before": "dominant role for CDR1H",
"after": "role for secondary CDR loops",
"start_char_pos": 767,
"end_char_pos": 790
}
]
| [
0,
230,
425,
587,
675,
826
]
|
1601.02240 | 1 | The sizes in a population of genetically identical%DIFDELCMD < {\it %%% Escherichia coli cells is known to vary, independent of growth stage. Multiple genes and proteins involved in cell division and DNA segregation are involved in regulating cell size. At the same time, physical factors such as temperature, growth rate and population size also appear to affect cell sizes. How these physical factors interact with the genetically encoded ones to produce such effects is however not clearly understood . Here, we have developed a multi-scale model of bacterial DNA replication coupled to cell division in the context of a logistically growing population. DNA replication is modeled as the stochastic dynamics of replication forks (RFs) which transition probabilistically between two states, stalled and recovered. In the model stalled RFs or incomplete DNA replication results in aberrant cell division. Simulating this model , we demonstrate that the cell-size variability of cultures depends strongly on population size and the growth phase . Our model also predicts the variability in cell sizes is independent of growth temperature between 22^o and 42^oC. To test the model, we perform experiments with%DIFDELCMD < {\it %%% E. coli strains mutant for recA, sulA and slmA, factors known to affect replication fork dynamics and coupling to cell division. Our validated model of RFstalling based cell division, can reproduce the variability in cell size distribution seen in these mutant strains, and provides a predictive tool to further examine stochastic effects in bacterial cell size regulation . | The %DIFDELCMD < {\it %%% variability in cell size of an isogenic population of Escherichia coli has been widely reported in experiment. The probability density function (PDF) of cell lengths has been variously described by exponential and lognormal functions. While temperature, population density and growth rate have all been shown to affect E. coli cell size distributions, and recent models have validated a link between growth rate and cell size through DNA replication, cell size variability is thought to emerge from growth rate variability. A mechanistic link that could distinguish the source of stochasticity, could improve our understanding of cell size regulation . Here, we have developed a population dynamics model of individual cell division based on the BCD, birth, chromosome replication and division model, with DNA replication based on the Cooper and Helmstetter (CH) multi-fork replication. In our model, stochasticity in the model arises solely from the dynamics of DNA replication forks. We model the forks as two-state systems: stalled and recovered . Our model %DIFDELCMD < {\it %%% predicts an increase in cell size variability with growth rate, consistent with previous experimental reports. We perturb the model to test the effect of increased replication fork (RF) stalling frequency, or uncoupling RF stalling from the cell-division machinery. Indeed, despite ignoring DNA and protein segregation asymmetry, the model can faithfully reproduce quantitative changes in cell size distributions. In our model, multi-fork replication produces multiplicative 'noise' and provides a mechanism linking growth rate and cell size variability . | [
{
"type": "D",
"before": "sizes in a population of genetically identical",
"after": null,
"start_char_pos": 4,
"end_char_pos": 50
},
{
"type": "D",
"before": "Escherichia coli",
"after": null,
"start_char_pos": 72,
"end_char_pos": 88
},
{
"type": "R",
"before": "cells is known to vary, independent of growth stage. Multiple genes and proteins involved in cell division and DNA segregation are involved in regulating cell size. At the same time, physical factors such as temperature, growth rate and population size also appear to affect cell sizes. How these physical factors interact with the genetically encoded ones to produce such effects is however not clearly understood",
"after": "variability in cell size of an isogenic population of Escherichia coli has been widely reported in experiment. The probability density function (PDF) of cell lengths has been variously described by exponential and lognormal functions. While temperature, population density and growth rate have all been shown to affect E. coli cell size distributions, and recent models have validated a link between growth rate and cell size through DNA replication, cell size variability is thought to emerge from growth rate variability. A mechanistic link that could distinguish the source of stochasticity, could improve our understanding of cell size regulation",
"start_char_pos": 89,
"end_char_pos": 503
},
{
"type": "R",
"before": "multi-scale model of bacterial DNA replication coupled to cell division in the context of a logistically growing population. DNA replication is modeled as the stochastic dynamics of replication forks (RFs) which transition probabilistically between two states, stalled and recovered. In",
"after": "population dynamics model of individual cell division based on the BCD, birth, chromosome replication and division model, with DNA replication based on the Cooper and Helmstetter (CH) multi-fork replication. In our model, stochasticity in the model arises solely from the dynamics of DNA replication forks. We model",
"start_char_pos": 532,
"end_char_pos": 818
},
{
"type": "R",
"before": "model stalled RFs or incomplete DNA replication results in aberrant cell division. Simulating this model , we demonstrate that the cell-size variability of cultures depends strongly on population size and the growth phase",
"after": "forks as two-state systems: stalled and recovered",
"start_char_pos": 823,
"end_char_pos": 1044
},
{
"type": "D",
"before": "also predicts the variability in cell sizes is independent of growth temperature between 22^o and 42^oC. To test the model, we perform experiments with",
"after": null,
"start_char_pos": 1057,
"end_char_pos": 1208
},
{
"type": "D",
"before": "E. coli",
"after": null,
"start_char_pos": 1230,
"end_char_pos": 1237
},
{
"type": "R",
"before": "strains mutant for recA, sulA and slmA, factors known to affect replication fork dynamics and coupling to cell division. Our validated model of RFstalling based cell division, can reproduce the variability",
"after": "predicts an increase in cell size variability with growth rate, consistent with previous experimental reports. We perturb the model to test the effect of increased replication fork (RF) stalling frequency, or uncoupling RF stalling from the cell-division machinery. Indeed, despite ignoring DNA and protein segregation asymmetry, the model can faithfully reproduce quantitative changes",
"start_char_pos": 1238,
"end_char_pos": 1443
},
{
"type": "R",
"before": "distribution seen in these mutant strains,",
"after": "distributions. In our model, multi-fork replication produces multiplicative 'noise'",
"start_char_pos": 1457,
"end_char_pos": 1499
},
{
"type": "R",
"before": "predictive tool to further examine stochastic effects in bacterial cell size regulation",
"after": "mechanism linking growth rate and cell size variability",
"start_char_pos": 1515,
"end_char_pos": 1602
}
]
| [
0,
141,
253,
375,
505,
656,
815,
905,
1046,
1161,
1358
]
|
1601.02578 | 1 | Biochemical interactions are inherently stochastic: the time for the reactions to fire and which reaction fires next are both random variables. It has been argued that natural systems use stochasticity to perform functions that would be impossible in a deterministic setting. However, the mechanisms used by cells to compute in a noisy environment are not well understood. We explore the range of probabilistic behaviours that can be engineered with Chemical Reaction Networks (CRNs). We show that at steady state CRNs are able to "program" any distribution with support in N^m, with m \geq 1. We present an algorithm to systematically program a CRN so that its stochastic semantics at steady state approximates a particular distribution with arbitrarily small error . We also give optimized schemes for special distributions, including the uniform distribution. Finally, we formulate a calculus that is complete for finite support distributions, and that can be compiled to a restricted class of CRNs that at steady state realize those distributions . We illustrate the approach on an example of drug resistance in bacteria . | We explore the range of probabilistic behaviours that can be engineered with Chemical Reaction Networks (CRNs). We show that at steady state CRNs are able to "program" any distribution with finite support in N^m, with m \geq 1. Moreover, any distribution with countable infinite support can be approximated with arbitrarily small error under the L^1 norm . We also give optimized schemes for special distributions, including the uniform distribution. Finally, we formulate a calculus to compute on distributions that is complete for finite support distributions, and can be compiled to a restricted class of CRNs that at steady state realize those distributions . | [
{
"type": "D",
"before": "Biochemical interactions are inherently stochastic: the time for the reactions to fire and which reaction fires next are both random variables. It has been argued that natural systems use stochasticity to perform functions that would be impossible in a deterministic setting. However, the mechanisms used by cells to compute in a noisy environment are not well understood.",
"after": null,
"start_char_pos": 0,
"end_char_pos": 372
},
{
"type": "A",
"before": null,
"after": "finite",
"start_char_pos": 563,
"end_char_pos": 563
},
{
"type": "R",
"before": "We present an algorithm to systematically program a CRN so that its stochastic semantics at steady state approximates a particular distribution with",
"after": "Moreover, any distribution with countable infinite support can be approximated with",
"start_char_pos": 595,
"end_char_pos": 743
},
{
"type": "A",
"before": null,
"after": "under the L^1 norm",
"start_char_pos": 768,
"end_char_pos": 768
},
{
"type": "A",
"before": null,
"after": "to compute on distributions",
"start_char_pos": 898,
"end_char_pos": 898
},
{
"type": "D",
"before": "that",
"after": null,
"start_char_pos": 954,
"end_char_pos": 958
},
{
"type": "D",
"before": ". We illustrate the approach on an example of drug resistance in bacteria",
"after": null,
"start_char_pos": 1054,
"end_char_pos": 1127
}
]
| [
0,
143,
275,
372,
484,
594,
770,
864,
1055
]
|
1601.03435 | 1 | The dual risk model is a popular model in finance and insurance, which is mainly used to model the wealth process of a venture capital or high tech company. Optimal dividends have been extensively studied in the literature for the dual risk model. It is well known that the value function of this optimal control problem does not yield closed-form formulas except in some special cases. In this paper, we study the asymptotics of the optimal dividends problem when the parameters go to either zero or infinity. Our results provide us insights to the optimal strategies and the optimal values when the parameters are extreme. | The dual risk model is a popular model in finance and insurance, which is often used to model the wealth process of a venture capital or high tech company. Optimal dividends have been extensively studied in the literature for the dual risk model. It is well known that the value function of this optimal control problem does not yield closed-form solutions except in some special cases. In this paper, we study the asymptotics of the optimal dividends problem when the parameters of the model go to either zero or infinity. Our results provide insights to the optimal strategies and the optimal values when the parameters are extreme. | [
{
"type": "R",
"before": "mainly",
"after": "often",
"start_char_pos": 74,
"end_char_pos": 80
},
{
"type": "R",
"before": "formulas",
"after": "solutions",
"start_char_pos": 348,
"end_char_pos": 356
},
{
"type": "A",
"before": null,
"after": "of the model",
"start_char_pos": 480,
"end_char_pos": 480
},
{
"type": "D",
"before": "us",
"after": null,
"start_char_pos": 532,
"end_char_pos": 534
}
]
| [
0,
156,
247,
386,
511
]
|
1601.04043 | 1 | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continously . Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription ' Don't Optimize, Simply Randomize' . 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking. | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously . Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription , " Don't Simply Optimize, Also Randomize, perhaps best described by the term - Randomoptimization" . 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking. | [
{
"type": "R",
"before": "continously",
"after": "continuously",
"start_char_pos": 574,
"end_char_pos": 585
},
{
"type": "R",
"before": "'",
"after": ", \"",
"start_char_pos": 1082,
"end_char_pos": 1083
},
{
"type": "R",
"before": "Optimize, Simply Randomize'",
"after": "Simply Optimize, Also Randomize, perhaps best described by the term - Randomoptimization\"",
"start_char_pos": 1090,
"end_char_pos": 1117
}
]
| [
0,
45,
237,
587,
915,
1184,
1208,
1236
]
|
1601.04043 | 2 | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription, " Don't Simply Optimize, Also Randomize, perhaps best described by the term - Randomoptimization" . 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking. | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription, ' Don't Simply Optimize, Also Randomize, perhaps best described by the term - Randoptimization' . 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking. | [
{
"type": "R",
"before": "\"",
"after": "'",
"start_char_pos": 1083,
"end_char_pos": 1084
},
{
"type": "R",
"before": "Randomoptimization\"",
"after": "Randoptimization'",
"start_char_pos": 1161,
"end_char_pos": 1180
}
]
| [
0,
45,
237,
587,
915,
1247,
1271,
1299
]
|
1601.04043 | 3 | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription, ' Don't Simply Optimize, Also Randomize, perhaps best described by the term - Randoptimization '. 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking . | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. The example we fully develop is the news-vendor inventory management problem with demand uncertainty with the prescription, " Don't Simply Optimize, Also Randomize, perhaps best described by the term - Randoptimization " . | [
{
"type": "R",
"before": "This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes. We consider a number of examples and develop both the theoretical framework and empirical tests where such an approach might be helpful, with the common prescription, '",
"after": "The example we fully develop is the news-vendor inventory management problem with demand uncertainty with the prescription, \"",
"start_char_pos": 238,
"end_char_pos": 1084
},
{
"type": "R",
"before": "'. 1. Newsvendor Inventory Management Problem 2. School Admissions. 3. Journal Submissions. 4. Job Candidate Selection. 5. Stock Picking",
"after": "\"",
"start_char_pos": 1178,
"end_char_pos": 1314
}
]
| [
0,
45,
237,
587,
915,
1245,
1269,
1297
]
|
1601.04043 | 4 | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control and hence in the decision making process could potentially offset the uncertainty inherent in the environment and yield better outcomes. The example we fully develop is the news-vendor inventory management problem with demand uncertainty with the prescription, "Don't Simply Optimize, Also Randomize , perhaps best described by the term - Randoptimization" . | We can overcome uncertainty with uncertainty. Using randomness in our choices and in what we control , and hence in the decision making process , could potentially offset the uncertainty inherent in the environment and yield better outcomes. The example we develop in greater detail is the news-vendor inventory management problem with demand uncertainty . We briefly discuss areas, where such an approach might be helpful, with the common prescription, "Don't Simply Optimize, Also Randomize ; perhaps best described by the term - Randoptimization" . 1. News-vendor Inventory Management 2. School Admissions 3. Journal Submissions 4. Job Candidate Selection 5. Stock Picking 6. Monetary Policy This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes . | [
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 101,
"end_char_pos": 101
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 143,
"end_char_pos": 143
},
{
"type": "R",
"before": "fully develop",
"after": "develop in greater detail",
"start_char_pos": 255,
"end_char_pos": 268
},
{
"type": "R",
"before": "with the",
"after": ". We briefly discuss areas, where such an approach might be helpful, with the common",
"start_char_pos": 341,
"end_char_pos": 349
},
{
"type": "R",
"before": ",",
"after": ";",
"start_char_pos": 403,
"end_char_pos": 404
},
{
"type": "A",
"before": null,
"after": ". 1. News-vendor Inventory Management 2. School Admissions 3. Journal Submissions 4. Job Candidate Selection 5. Stock Picking 6. Monetary Policy This methodology is suitable for the social sciences since the primary source of uncertainty are the members of the system themselves and presently, no methods are known to fully determine the outcomes in such an environment, which perhaps would require being able to read the minds of everyone involved and to anticipate their actions continuously. Admittedly, we are not qualified to recommend whether such an approach is conducive for the natural sciences, unless perhaps, bounds can be established on the levels of uncertainty in a system and it is shown conclusively that a better understanding of the system and hence improved decision making will not alter the outcomes",
"start_char_pos": 460,
"end_char_pos": 460
}
]
| [
0,
45,
239
]
|
1601.04557 | 1 | In this paper we describe a useful risk management tool to analyse annuity and life insurance portfolios where mortality is modelled stochastically . Yet, there exists a fast and numerically stable algorithm to derive loss distributions exactly, even for large portfolios. We provide various estimation procedures based on publicly available data . The model allows for various other applications, including mortality forecasts . Compared to the Lee-Carter model, we have a more flexible framework, get tighter bounds and can directly extract several sources of uncertainty. Straight-forward model validation techniques are available. | Using an extended version of the credit risk model CreditRisk+, we develop a flexible framework with numerous applications amongst which we find stochastic mortality modelling, forecasting of death causes as well as profit and loss modelling of life insurance and annuity portfolios which can be used in (partial) internal models under Solvency II . Yet, there exists a fast and numerically stable algorithm to derive loss distributions exactly, even for large portfolios. We provide various estimation procedures based on publicly available data . Compared to the Lee-Carter model, we have a more flexible framework, get tighter bounds and can directly extract several sources of uncertainty. Straight-forward model validation techniques are available. | [
{
"type": "R",
"before": "In this paper we describe a useful risk management tool to analyse annuity and life insurance portfolios where mortality is modelled stochastically",
"after": "Using an extended version of the credit risk model CreditRisk+, we develop a flexible framework with numerous applications amongst which we find stochastic mortality modelling, forecasting of death causes as well as profit and loss modelling of life insurance and annuity portfolios which can be used in (partial) internal models under Solvency II",
"start_char_pos": 0,
"end_char_pos": 147
},
{
"type": "D",
"before": ". The model allows for various other applications, including mortality forecasts",
"after": null,
"start_char_pos": 347,
"end_char_pos": 427
}
]
| [
0,
149,
272,
348,
574
]
|
1601.05012 | 1 | We show from a simple model that a country's technological development can be measured by the logarithm of the number of productsit makes. We show that much of the income gaps among countries are due to differences in technology, as measured by this simple metric. Finally, we show that the so-called Economic Complexity Index (ECI), a recently proposed measure of collective knowhow, is in fact an estimate of this simple metric (with correlation above 0.9) . | Researchers developed the Economic Complexity Index (ECI) as a measure of the overall sophistication of a country's products. They argued that this measure explains economic growth better than the conventional variables such as human capital. This paper suggests a simpler measure of production complexity, the logarithm of product diversification, which has a natural foundation in information theory: it measures the information needed to encode the knowledge required to make a country's products. This measure explains well the income differences between countries. It has a basic link with ECI that is strongly supported by the data . | [
{
"type": "R",
"before": "We show from a simple model that",
"after": "Researchers developed the Economic Complexity Index (ECI) as a measure of the overall sophistication of",
"start_char_pos": 0,
"end_char_pos": 32
},
{
"type": "R",
"before": "technological development can be measured by the logarithm of the number of productsit makes. We show that much of the income gaps among countries are due to differences in technology, as measured by this simple metric. Finally, we show that the so-called Economic Complexity Index (ECI), a recently proposed measure of collective knowhow, is in fact an estimate of this simple metric (with correlation above 0.9)",
"after": "products. They argued that this measure explains economic growth better than the conventional variables such as human capital. This paper suggests a simpler measure of production complexity, the logarithm of product diversification, which has a natural foundation in information theory: it measures the information needed to encode the knowledge required to make a country's products. This measure explains well the income differences between countries. It has a basic link with ECI that is strongly supported by the data",
"start_char_pos": 45,
"end_char_pos": 458
}
]
| [
0,
138,
264
]
|
1601.05098 | 1 | Several studies assert that the random access procedure of the Long Term Evolution (LTE) cellular standard may not be effective whenever a massive number of synchronous connection attempts are performed by terminals, as may happen in a typical Internet of Things or Smart City scenario. Nevertheless, simulation studies in real deployment scenarios are missing because many system-level simulators do not implement the LTE random access procedure in detail. In this paper, we propose a patch for the LTE module of ns-3, one of the most prominent open-source network simulators, to improve the accuracy of the routine that simulates the LTE Random Access Channel (RACH). The patched version of the random access procedure is compared with the default one and the issues arising from massive synchronous access from mobile terminals in LTE are assessed with a simulation campaign. | Several studies assert that the random access procedure of the Long Term Evolution (LTE) cellular standard may not be effective whenever a massive number of simultaneous connection attempts are performed by terminals, as may happen in a typical Internet of Things or Smart City scenario. Nevertheless, simulation studies in real deployment scenarios are missing because many system-level simulators do not implement the LTE random access procedure in detail. In this paper, we propose a patch for the LTE module of ns-3, one of the most prominent open-source network simulators, to improve the accuracy of the routine that simulates the LTE Random Access Channel (RACH). The patched version of the random access procedure is compared with the default one and the issues arising from massive simultaneous access from mobile terminals in LTE are assessed via a simulation campaign. | [
{
"type": "R",
"before": "synchronous",
"after": "simultaneous",
"start_char_pos": 157,
"end_char_pos": 168
},
{
"type": "R",
"before": "synchronous",
"after": "simultaneous",
"start_char_pos": 790,
"end_char_pos": 801
},
{
"type": "R",
"before": "with",
"after": "via",
"start_char_pos": 851,
"end_char_pos": 855
}
]
| [
0,
286,
457,
669
]
|
1601.05519 | 1 | We present a stochastic framework to study signal transmission in a generic two-step cascade S \rightarrow X \rightarrow Y. Starting from a set of Langevin equations obeying Gaussian noise processes we calculate the variance and covariance associated with the system components . These quantities are then used to calculate the net synergy within the purview of partial information decomposition. We show that redundancy in information transmission is essentially an important consequence of Markovian property of the two-step cascade motif . | We present a stochastic framework to study signal transmission in a generic two-step cascade S \rightarrow X \rightarrow Y. Starting from a set of Langevin equations obeying Gaussian noise processes we calculate the variance and covariance while considering both linear and nonlinear production terms for different biochemical species of the cascade . These quantities are then used to calculate the net synergy within the purview of partial information decomposition. We show that redundancy in information transmission is essentially an important consequence of Markovian property of the two-step cascade motif . We also show that redundancy increases fidelity of the signalling pathway . | [
{
"type": "R",
"before": "associated with the system components",
"after": "while considering both linear and nonlinear production terms for different biochemical species of the cascade",
"start_char_pos": 240,
"end_char_pos": 277
},
{
"type": "A",
"before": null,
"after": ". We also show that redundancy increases fidelity of the signalling pathway",
"start_char_pos": 541,
"end_char_pos": 541
}
]
| [
0,
279,
396
]
|
1601.07900 | 1 | The parastatistic distribution of a total debt owed to a large number of creditors is considered in relation to the duration of these debts. Calculation debt process depends from the fractal dimension of economic system in which this process takes place. Two actual variants of these dimensions are investigated. Critical values for these variants are determined. Such values are representing the levels after that borrower bankruptcy occurs. The calculation of the critical value is performed by two independent methods: as the point where the entropy of the system reaches its maximum value, and as the point where the chemical potential is zero, which corresponds to the termination of payments on the debt. Both methods lead to the same critical value. When the velocity of money circulation decreases , it is found for what dimensions critical debt value increases and for what decreases with decrease . | Parastatistic distribution of a total debt owed to a large number of creditors considered in relation to the duration of these debts. The process of debt calculation depends on the fractal dimension of economic system in which this process takes place. Two actual variants of these dimensions are investigated. Critical values for these variants are determined. These critical values represent the levels after that borrower bankruptcy occurs. The calculation of the critical value is performed by two independent methods: as the point where the entropy of the system reaches its maximum value, and as the point where the chemical potential is zero, which corresponds to the termination of payments on the debt. Both methods lead to the same critical value. When the velocity of money circulation decrease , it is found for what dimensions critical debt value is increased and for what it is decreased in the case when the velocity of money circulation is increased . | [
{
"type": "R",
"before": "The parastatistic",
"after": "Parastatistic",
"start_char_pos": 0,
"end_char_pos": 17
},
{
"type": "D",
"before": "is",
"after": null,
"start_char_pos": 83,
"end_char_pos": 85
},
{
"type": "R",
"before": "Calculation debt process depends from",
"after": "The process of debt calculation depends on",
"start_char_pos": 141,
"end_char_pos": 178
},
{
"type": "R",
"before": "Such values are representing",
"after": "These critical values represent",
"start_char_pos": 364,
"end_char_pos": 392
},
{
"type": "R",
"before": "decreases",
"after": "decrease",
"start_char_pos": 796,
"end_char_pos": 805
},
{
"type": "R",
"before": "increases",
"after": "is increased",
"start_char_pos": 860,
"end_char_pos": 869
},
{
"type": "R",
"before": "decreases with decrease",
"after": "it is decreased in the case when the velocity of money circulation is increased",
"start_char_pos": 883,
"end_char_pos": 906
}
]
| [
0,
140,
254,
312,
363,
442,
710,
756
]
|
1602.00235 | 1 | The realised characteristics for Discretisation-Invariant (DI) swaps satisfy the aggregation propertywhen restricted to multivariate processes that are ] deterministic functions of martingales. Hence DI swaps have model-free fair values, in that a no-arbitrage assumption for forward prices is sufficient to derive exact option replication portfolios with neither jump nor discrete monitoring errors. This restriction allows the characterisation of a vector space of DI swaps which provides a great variety of risk premia . A sub-class consists of pay-offs with fair values that are also free from numerical integration errors over option strikes , where exact pricing and hedging is possible via dynamic trading strategies on a few simple puts and calls. An SP 500 empirical study on higher-moment and other DI swaps concludes. | Realised pay-offs for discretisation-invariant swaps are those which satisfy a restricted `aggregation property' of Neuberger 2012] for twice continuously differentiable deterministic functions of a multivariate martingale. They are initially characterised as solutions to a second-order system of PDEs, then those pay-offs based on martingale and log-martingale processes alone form a vector space. Hence there exist an infinite variety of other variance and higher-moment risk premia that are less prone to bias than standard variance swaps because their option replication portfolios have no discrete-monitoring or jump errors . Their fair values are also independent of the monitoring partition. A sub-class consists of pay-offs with fair values that are further free from numerical integration errors over option strikes . Here exact pricing and hedging is possible via dynamic trading strategies on a few vanilla puts and calls. An S P 500 empirical study on higher-moment and other DI swaps concludes. | [
{
"type": "R",
"before": "The realised characteristics for Discretisation-Invariant (DI) swaps satisfy the aggregation propertywhen restricted to multivariate processes that are",
"after": "Realised pay-offs for discretisation-invariant swaps are those which satisfy a restricted `aggregation property' of Neuberger",
"start_char_pos": 0,
"end_char_pos": 151
},
{
"type": "A",
"before": null,
"after": "2012",
"start_char_pos": 152,
"end_char_pos": 152
},
{
"type": "A",
"before": null,
"after": "for twice continuously differentiable",
"start_char_pos": 154,
"end_char_pos": 154
},
{
"type": "R",
"before": "martingales. Hence DI swaps have model-free fair values, in that a no-arbitrage assumption for forward prices is sufficient to derive exact",
"after": "a multivariate martingale. They are initially characterised as solutions to a second-order system of PDEs, then those pay-offs based on martingale and log-martingale processes alone form a vector space. Hence there exist an infinite variety of other variance and higher-moment risk premia that are less prone to bias than standard variance swaps because their",
"start_char_pos": 182,
"end_char_pos": 321
},
{
"type": "R",
"before": "with neither jump nor discrete monitoring errors. This restriction allows the characterisation of a vector space of DI swaps which provides a great variety of risk premia",
"after": "have no discrete-monitoring or jump errors",
"start_char_pos": 352,
"end_char_pos": 522
},
{
"type": "A",
"before": null,
"after": "Their fair values are also independent of the monitoring partition.",
"start_char_pos": 525,
"end_char_pos": 525
},
{
"type": "R",
"before": "also",
"after": "further",
"start_char_pos": 585,
"end_char_pos": 589
},
{
"type": "R",
"before": ", where",
"after": ". Here",
"start_char_pos": 649,
"end_char_pos": 656
},
{
"type": "R",
"before": "simple",
"after": "vanilla",
"start_char_pos": 735,
"end_char_pos": 741
},
{
"type": "R",
"before": "SP",
"after": "S",
"start_char_pos": 761,
"end_char_pos": 763
},
{
"type": "A",
"before": null,
"after": "P",
"start_char_pos": 764,
"end_char_pos": 764
}
]
| [
0,
194,
401,
524,
757
]
|
1602.00509 | 1 | The efficiency of intracellular transport of cargo from specific source to target locations is strongly dependent upon molecular motor assisted motion along cytoskeleton filaments, microtubules and actin filaments. Radial transport along microtubules and lateral transport along the filaments of the actin cortex underneath the cell membrane are characteristic for cells with a centrosome. Here we show that this specific URLanization for ballistic transport in conjunction with intermittent diffusion realizes a spatially inhomogeneous intermittent search strategythat is in general optimal for small thicknesses of the actin cortex. We prove optimality in terms of mean first passage times for three different, frequently encountered intracellular transport tasks: i) the narrow escapeproblem (e.g. transport of cargo to a synapse or other specific region of the cell membrane, ii) reaction kinetics enhancement (e.g. binding of a mobile particle with a immobile or mobile target within the cell, iii) the reaction-escape problem (e. g. release of cargo at a synapse after intracellular vesicle pairing. The results indicate that living cells realize optimal search strategies for various intracellular transport problems %DIFDELCMD < {\it %%% economically through a spatial URLanization that involves only a narrow actin cortex rather than a cell body filled with randomly oriented actin filaments. | We consider random search processes alternating stochastically between diffusion and ballistic motion, in which the distribution function of ballistic motion directions varies from point to point in space. The specific space dependence of the directional distribution together with the switching rates between the two modes of motion establishes a spatially inhomogeneous search strategy. We show that the mean first passage times for several standard search problems - narrow escape, reaction partner finding, reaction-escape - can be minimized with a directional distribution that is reminiscent of the URLanization of the cytoskeleton filaments of cells with a centrosome: radial ballistic transport from center to periphery and back, and ballistic transport in random directions within a concentric shell of thickness \Delta_{\rm opt that living cells realize efficient search strategies for various intracellular transport problems %DIFDELCMD < {\it %%% economically through a spatial URLanization that involves radial microtubules in the central region and only a narrow actin cortex rather than a cell body filled with randomly oriented actin filaments. | [
{
"type": "R",
"before": "The efficiency of intracellular transport of cargo from specific source to target locations is strongly dependent upon molecular motor assisted motion along cytoskeleton filaments, microtubules and actin filaments. Radial transport along microtubules and lateral transport along the filaments of the actin cortex underneath the cell membrane are characteristic for cells with a centrosome. Here we show that this specific URLanization for ballistic transport in conjunction with intermittent diffusion realizes a spatially inhomogeneous intermittent search strategythat is in general optimal for small thicknesses of the actin cortex. We prove optimality in terms of",
"after": "We consider random search processes alternating stochastically between diffusion and ballistic motion, in which the distribution function of ballistic motion directions varies from point to point in space. The specific space dependence of the directional distribution together with the switching rates between the two modes of motion establishes a spatially inhomogeneous search strategy. We show that the",
"start_char_pos": 0,
"end_char_pos": 666
},
{
"type": "R",
"before": "three different, frequently encountered intracellular transport tasks: i) the narrow escapeproblem (e.g. transport of cargo to a synapse or other specific region of the cell membrane, ii) reaction kinetics enhancement (e.g. binding of a mobile particle with a immobile or mobile target within the cell, iii) the",
"after": "several standard search problems - narrow escape, reaction partner finding,",
"start_char_pos": 696,
"end_char_pos": 1007
},
{
"type": "R",
"before": "problem (e. g. release of cargo at a synapse after intracellular vesicle pairing. The results indicate",
"after": "- can be minimized with a directional distribution that is reminiscent of the URLanization of the cytoskeleton filaments of cells with a centrosome: radial ballistic transport from center to periphery and back, and ballistic transport in random directions within a concentric shell of thickness \\Delta_{\\rm opt",
"start_char_pos": 1024,
"end_char_pos": 1126
},
{
"type": "R",
"before": "optimal",
"after": "efficient",
"start_char_pos": 1153,
"end_char_pos": 1160
},
{
"type": "R",
"before": "economically",
"after": "economically",
"start_char_pos": 1246,
"end_char_pos": 1258
},
{
"type": "A",
"before": null,
"after": "radial microtubules in the central region and",
"start_char_pos": 1304,
"end_char_pos": 1304
}
]
| [
0,
214,
389,
634,
1105
]
|
1602.00570 | 1 | We consider an investor faced with the classical portfolio problem of optimal investment in a log- Brownian stock and a fixed-interest bond, but constrained to choose portfolio and consumption strategies which reduce a dynamic shortfall risk measure. For continuous and discrete-time financial markets we investigate the loss in expected utility of intermediate consumption and terminal wealth caused by imposing a dynamic risk constraint. We derive the dynamic programming equations for the resulting stochastic optimal control problems and solve them numerically. Our numerical results indicate that the loss of portfolio performance is quite small while the risk is reduced considerably. We also investigate discretization effects and the loss in performance if trading is possible at discrete time points only. | We consider an investor faced with the classical portfolio problem of optimal investment in a log-Brownian stock and a fixed-interest bond, but constrained to choose portfolio and consumption strategies that reduce a dynamic shortfall risk measure. For continuous and discrete-time financial markets we investigate the loss in expected utility of intermediate consumption and terminal wealth caused by imposing a dynamic risk constraint. We derive the dynamic programming equations for the resulting stochastic optimal control problems and solve them numerically. Our numerical results indicate that the loss of portfolio performance is quite small while the risk is notably reduced. In particular, we investigate time discretization effects and the loss in performance if trading is possible at discrete time points only. | [
{
"type": "R",
"before": "log- Brownian",
"after": "log-Brownian",
"start_char_pos": 94,
"end_char_pos": 107
},
{
"type": "R",
"before": "which",
"after": "that",
"start_char_pos": 204,
"end_char_pos": 209
},
{
"type": "R",
"before": "reduced considerably. We also investigate",
"after": "notably reduced. In particular, we investigate time",
"start_char_pos": 669,
"end_char_pos": 710
}
]
| [
0,
250,
439,
565,
690
]
|
1602.00570 | 2 | We consider an investor faced with the classical portfolio problem of optimal investment in a log-Brownian stock and a fixed-interest bond, but constrained to choose portfolio and consumption strategies that reduce a dynamic shortfall risk measure. For continuous and discrete-time financial markets we investigate the loss in expected utility of intermediate consumption and terminal wealth caused by imposing a dynamic risk constraint. We derive the dynamic programming equations for the resulting stochastic optimal control problems and solve them numerically. Our numerical results indicate that the loss of portfolio performance is quite small while the risk is notably reduced. In particular, we investigate time discretization effects and the loss in performance if trading is possible at discrete time points only . | We consider an investor facing a classical portfolio problem of optimal investment in a log-Brownian stock and a fixed-interest bond, but constrained to choose portfolio and consumption strategies that reduce a dynamic shortfall risk measure. For continuous- and discrete-time financial markets we investigate the loss in expected utility of intermediate consumption and terminal wealth caused by imposing a dynamic risk constraint. We derive the dynamic programming equations for the resulting stochastic optimal control problems and solve them numerically. Our numerical results indicate that the loss of portfolio performance is not too large while the risk is notably reduced. We then investigate time discretization effects and find that the loss of portfolio performance resulting from imposing a risk constraint is typically bigger than the loss resulting from infrequent trading . | [
{
"type": "R",
"before": "faced with the",
"after": "facing a",
"start_char_pos": 24,
"end_char_pos": 38
},
{
"type": "R",
"before": "continuous",
"after": "continuous-",
"start_char_pos": 253,
"end_char_pos": 263
},
{
"type": "R",
"before": "quite small",
"after": "not too large",
"start_char_pos": 637,
"end_char_pos": 648
},
{
"type": "R",
"before": "In particular, we",
"after": "We then",
"start_char_pos": 684,
"end_char_pos": 701
},
{
"type": "R",
"before": "the loss in performance if trading is possible at discrete time points only",
"after": "find that the loss of portfolio performance resulting from imposing a risk constraint is typically bigger than the loss resulting from infrequent trading",
"start_char_pos": 746,
"end_char_pos": 821
}
]
| [
0,
248,
437,
563,
683
]
|
1602.00619 | 1 | We derive a semi-analytic solution for a stock loan driven by a hyper-exponential model (HEM) in which the lender is allowed to liquidate when the stock drops to a sufficiently low price. To do so, we extend a result of N. Cai et al. regarding the generalized Laplace transform of the first passage time of the HEM to two flat barriers, which is of independent interest . | We derive a " semi-analytic " solution for a stock loan in which the lender forces liquidation when the loan-to-collateral ratio drops beneath a certain threshold. We use this to study the sensitivity of the contract to model parameters . | [
{
"type": "A",
"before": null,
"after": "\"",
"start_char_pos": 12,
"end_char_pos": 12
},
{
"type": "A",
"before": null,
"after": "\"",
"start_char_pos": 27,
"end_char_pos": 27
},
{
"type": "D",
"before": "driven by a hyper-exponential model (HEM)",
"after": null,
"start_char_pos": 54,
"end_char_pos": 95
},
{
"type": "R",
"before": "is allowed to liquidate when the stock drops to a sufficiently low price. To do so, we extend a result of N. Cai et al. regarding the generalized Laplace transform of the first passage time of the HEM to two flat barriers, which is of independent interest",
"after": "forces liquidation when the loan-to-collateral ratio drops beneath a certain threshold. We use this to study the sensitivity of the contract to model parameters",
"start_char_pos": 116,
"end_char_pos": 371
}
]
| [
0,
189
]
|