Dataset fields (repeated for every record below, in this order):
  doc_id: string, lengths 2 to 10
  revision_depth: string, 5 distinct values (classes)
  before_revision: string, lengths 3 to 309k
  after_revision: string, lengths 5 to 309k
  edit_actions: list
  sents_char_pos: list
1407.7559
1
This paper builds upon the fundamental paper by \mbox{%DIFAUXCMD niwa2009 ] provides the unique possibility to analyze the relative aggregation/folding propensity of the elements of the entire Escherichia coli (E. coli) proteome in a cell-free standardized microenvironment. The hardness of the problem comes from the superposition between the driving forces of intra- and inter-molecule interactions and it is mirrored by the evidences of shift from folding to aggregation phenotypes by single-point mutations \mbox{%DIFAUXCMD doi:10.1021/ja1116233 state-of-the-art classification methods coming from the field of structural pattern recognition, with the aim to compare different representations of the same proteins of the Niwa et al. data base , going from pure sequence to chemico-physical labeled (contact) graphs . By this comparison, we are able to identify some interesting general properties of protein universe, going from the confirming of a threshold size around 250 residues ( discriminating "easily foldable" from " difficultly foldable" molecules consistent with other independent data on protein domains architecture) to the relevance of contact graphs eigenvalue ordering for folding behavior discrimination and characterization of the E. coli data. The soundness of the experimental results presented in this paper is proved by the statistically relevant relationships discovered among the chemico-physical description of proteins and the developed cost matrix of substitution used in the various discrimination systems.
This paper builds upon the fundamental work of Niwa et al. 34], which provides the unique possibility to analyze the relative aggregation/folding propensity of the elements of the entire Escherichia coli (E. coli) proteome in a cell-free standardized microenvironment. The hardness of the problem comes from the superposition between the driving forces of intra- and inter-molecule interactions and it is mirrored by the evidences of shift from folding to aggregation phenotypes by single-point mutations 10 . Here we apply several state-of-the-art classification methods coming from the field of structural pattern recognition, with the aim to compare different representations of the same proteins gathered from the Niwa et al. data base ; such representations include sequences and labeled (contact) graphs enriched with chemico-physical attributes . By this comparison, we are able to identify also some interesting general properties of proteins. Notably, (i) we suggest a threshold around 250 residues discriminating "easily foldable" from " hardly foldable" molecules consistent with other independent experiments, and (ii) we highlight the relevance of contact graph spectra for folding behavior discrimination and characterization of the E. coli solubility data. The soundness of the experimental results presented in this paper is proved by the statistically relevant relationships discovered among the chemico-physical description of proteins and the developed cost matrix of substitution used in the various discrimination systems.
[ { "type": "R", "before": "paper by \\mbox{%DIFAUXCMD niwa2009", "after": "work of Niwa et al.", "start_char_pos": 39, "end_char_pos": 73 }, { "type": "A", "before": null, "after": "34", "start_char_pos": 74, "end_char_pos": 74 }, { "type": "A", "before": null, "after": ", which", "start_char_pos": 75, "end_char_pos": 75 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD doi:10.1021/ja1116233", "after": "10", "start_char_pos": 511, "end_char_pos": 549 }, { "type": "A", "before": null, "after": ". Here we apply several", "start_char_pos": 550, "end_char_pos": 550 }, { "type": "R", "before": "of", "after": "gathered from", "start_char_pos": 719, "end_char_pos": 721 }, { "type": "R", "before": ", going from pure sequence to chemico-physical", "after": "; such representations include sequences and", "start_char_pos": 748, "end_char_pos": 794 }, { "type": "A", "before": null, "after": "enriched with chemico-physical attributes", "start_char_pos": 820, "end_char_pos": 820 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 867, "end_char_pos": 867 }, { "type": "R", "before": "protein universe, going from the confirming of a threshold size", "after": "proteins. Notably, (i) we suggest a threshold", "start_char_pos": 907, "end_char_pos": 970 }, { "type": "D", "before": "(", "after": null, "start_char_pos": 991, "end_char_pos": 992 }, { "type": "R", "before": "difficultly", "after": "hardly", "start_char_pos": 1033, "end_char_pos": 1044 }, { "type": "R", "before": "data on protein domains architecture) to", "after": "experiments, and (ii) we highlight", "start_char_pos": 1099, "end_char_pos": 1139 }, { "type": "R", "before": "graphs eigenvalue ordering", "after": "graph spectra", "start_char_pos": 1165, "end_char_pos": 1191 }, { "type": "A", "before": null, "after": "solubility", "start_char_pos": 1264, "end_char_pos": 1264 } ]
[ 0, 274, 532, 822, 1270 ]
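The first record above makes the field semantics concrete: each entry in edit_actions describes a replacement ("R"), addition ("A"), or deletion ("D") over a character span of before_revision, and sents_char_pos appears to list sentence-start character offsets into before_revision. The Python sketch below shows one plausible way to rebuild after_revision from a record and to slice before_revision into sentences; the helper names apply_edits and split_sentences are illustrative only (not part of any published loader), and the code assumes the offsets index the original before_revision string and that edit_actions is sorted by start_char_pos, as in the records shown here.

from typing import Any

def apply_edits(before: str, edit_actions: list[dict[str, Any]]) -> str:
    # Rebuild the revised text from before_revision plus the edit_actions list.
    # Assumes actions are listed in ascending start_char_pos order and that the
    # offsets index the original string, so applying them right-to-left keeps
    # earlier offsets valid.
    text = before
    for act in reversed(edit_actions):
        replacement = act["after"] or ""  # "after" is null for pure deletions ("D")
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text

def split_sentences(before: str, sents_char_pos: list[int]) -> list[str]:
    # sents_char_pos seems to hold sentence-start offsets (the first entry is 0);
    # consecutive offsets then delimit individual sentences of before_revision.
    bounds = list(sents_char_pos) + [len(before)]
    return [before[a:b].strip() for a, b in zip(bounds, bounds[1:]) if before[a:b].strip()]

Under these assumptions, apply_edits(before_revision, edit_actions) for the first record (doc_id 1407.7559) should reproduce its stored after_revision, and split_sentences(before_revision, [0, 274, 532, 822, 1270]) should recover its sentence segments.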
1407.7725
1
In this paper we focus on pricing of structured products in energy markets using utility indifference pricing approach . In particular, we compute the buyer 's price of such derivatives for an agent investing in the forward market , whose preferences are described by an exponential utility function. Such a price is characterized in terms of continuous viscosity solutions of suitable non-linear PDEs. This provides an effective way to compute both an optimal exercise strategy for the structured product and a portfolio strategy to partially hedge the financial position. In the complete market case, the financial hedge turns out to be perfect and the PDE reduces to particular cases already treated in the literature. Moreover, in a model with two assets and constant correlation , we obtain a representation of the price as the value function of an auxiliary simpler optimization problem under a risk neutral probability, that can be viewed as a perturbation of the minimal entropy martingale measure. Finally, numerical results are provided.
In this paper we study the pricing and hedging of structured products in energy markets , such as swing and virtual gas storage, using the exponential utility indifference pricing approach in a general incomplete multivariate market model driven by finitely many stochastic factors. The buyer of such contracts is allowed to trade in the forward market in order to hedge the risk of his position. We fully characterize the buyer's utility indifference price of a given product in terms of continuous viscosity solutions of suitable nonlinear PDEs. This gives a way to identify reasonable candidates for the optimal exercise strategy for the structured product as well as for the corresponding hedging strategy. Moreover, in a model with two correlated assets, one traded and one nontraded , we obtain a representation of the price as the value function of an auxiliary simpler optimization problem under a risk neutral probability, that can be viewed as a perturbation of the minimal entropy martingale measure. Finally, numerical results are provided.
[ { "type": "R", "before": "focus on pricing", "after": "study the pricing and hedging", "start_char_pos": 17, "end_char_pos": 33 }, { "type": "R", "before": "using", "after": ", such as swing and virtual gas storage, using the exponential", "start_char_pos": 75, "end_char_pos": 80 }, { "type": "R", "before": ". In particular, we compute the buyer 's price of such derivatives for an agent investing", "after": "in a general incomplete multivariate market model driven by finitely many stochastic factors. The buyer of such contracts is allowed to trade", "start_char_pos": 119, "end_char_pos": 208 }, { "type": "R", "before": ", whose preferences are described by an exponential utility function. Such a price is characterized", "after": "in order to hedge the risk of his position. We fully characterize the buyer's utility indifference price of a given product", "start_char_pos": 231, "end_char_pos": 330 }, { "type": "R", "before": "non-linear", "after": "nonlinear", "start_char_pos": 386, "end_char_pos": 396 }, { "type": "R", "before": "provides an effective way to compute both an", "after": "gives a way to identify reasonable candidates for the", "start_char_pos": 408, "end_char_pos": 452 }, { "type": "R", "before": "and a portfolio strategy to partially hedge the financial position. In the complete market case, the financial hedge turns out to be perfect and the PDE reduces to particular cases already treated in the literature.", "after": "as well as for the corresponding hedging strategy.", "start_char_pos": 506, "end_char_pos": 721 }, { "type": "R", "before": "assets and constant correlation", "after": "correlated assets, one traded and one nontraded", "start_char_pos": 752, "end_char_pos": 783 } ]
[ 0, 120, 300, 402, 573, 721, 1006 ]
1407.8033
1
This paper deals with the relations among structural, topological, and chemical properties of the E.Coli proteome from the vantage point of the solubility/aggregation propensities of proteins. Each E.Coli protein is initially represented according to its known folded 3D shape. This step consists basically in representing the available E.Coli proteins in terms of graphs. We first analyze those graphs by considering pure topological characterizations, i.e., by analyzing the mass fractal dimension and the distribution underlying both shortest paths and vertex degrees. Results confirm the general architectural principles of proteins. Successively, we focus on the statistical properties of a representation of such graphs in terms of vectors composed of several numerical features, which we extracted from their structural representation. We found that protein size is the main discriminator for the solubility, while however there are other factors that help explaining the solubility . We finally analyze such data through a novel one-class classifier, with the aim of discriminating among very and poorly soluble proteins. Results are encouraging and consolidate the potential of pattern recognition techniques when employed to describe complex biological systems.
This paper deals with the relations among structural, topological, and chemical properties of the E.Coli proteome from the vantage point of the solubility/aggregation propensity of proteins. Each E.Coli protein is initially represented according to its known folded 3D shape. This step consists in representing the available E.Coli proteins in terms of graphs. We first analyze those graphs by considering pure topological characterizations, i.e., by analyzing the mass fractal dimension and the distribution underlying both shortest paths and vertex degrees. Results confirm the general architectural principles of proteins. Successively, we focus on the statistical properties of a representation of such graphs in terms of vectors composed of several numerical features, which we extracted from their structural representation. We found that protein size is the main discriminator for the solubility, while however there are other factors that help explaining the solubility degree . We finally analyze such data through a novel one-class classifier, with the aim of discriminating among very and poorly soluble proteins. Results are encouraging and consolidate the potential of pattern recognition techniques when employed to describe complex biological systems.
[ { "type": "R", "before": "propensities", "after": "propensity", "start_char_pos": 167, "end_char_pos": 179 }, { "type": "D", "before": "basically", "after": null, "start_char_pos": 297, "end_char_pos": 306 }, { "type": "A", "before": null, "after": "degree", "start_char_pos": 990, "end_char_pos": 990 } ]
[ 0, 192, 277, 372, 571, 637, 842, 992, 1130 ]
1407.8083
1
We consider the problem of robustly determining the m slowest dynamical modes of a reversible dynamical system, with a particular focus on the analysis of equilibrium molecular dynamics simulations. We show that the problem can be formulated as the variational optimization of a single scalar functional , a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow dynamics of the system. While a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system's propagator, we show that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. Furthermore, this overfitting can be detected and avoided through cross-validation in which the GMRQ is evaluated for the purpose of model selection on data that was held out during training . These result make it possible to , for the first time, construct a unified, consistent objective function for the parameterization of Markov state models for protein dynamics which captures the tradeoff between systematic and statistical errors.
Markov state models (MSMs) are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow dynamical modes of biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes , a generalized matrix Rayleigh quotient (GMRQ), which measures the ability of a rank-m projection operator to capture the slow subspace of the system. It is shown that a variational theorem bounds the GMRQ from above by the sum of the first m eigenvalues of the system's propagator, but that this bound can be violated when the requisite matrix elements are estimated subject to statistical uncertainty. This overfitting can be detected and avoided through cross-validation . These result make it possible to construct Markov state models for protein dynamics in a way that appropriately captures the tradeoff between systematic and statistical errors.
[ { "type": "R", "before": "We consider the problem of robustly determining the m slowest", "after": "Markov state models (MSMs) are a widely used method for approximating the eigenspectrum of the molecular dynamics propagator, yielding insight into the long-timescale statistical kinetics and slow", "start_char_pos": 0, "end_char_pos": 61 }, { "type": "R", "before": "a reversible dynamical system, with a particular focus on the analysis of equilibrium molecular dynamics simulations. We show that the problem can be formulated as the variational optimization of a single scalar functional", "after": "biomolecular systems. However, the lack of a unified theoretical framework for choosing between alternative models has hampered progress, especially for non-experts applying these methods to novel biological systems. Here, we consider cross-validation with a new objective function for estimators of these slow dynamical modes", "start_char_pos": 81, "end_char_pos": 303 }, { "type": "R", "before": "dynamics", "after": "subspace", "start_char_pos": 432, "end_char_pos": 440 }, { "type": "R", "before": "While", "after": "It is shown that", "start_char_pos": 456, "end_char_pos": 461 }, { "type": "R", "before": "we show", "after": "but", "start_char_pos": 577, "end_char_pos": 584 }, { "type": "R", "before": "Furthermore, this", "after": "This", "start_char_pos": 702, "end_char_pos": 719 }, { "type": "D", "before": "in which the GMRQ is evaluated for the purpose of model selection on data that was held out during training", "after": null, "start_char_pos": 785, "end_char_pos": 892 }, { "type": "R", "before": ", for the first time, construct a unified, consistent objective function for the parameterization of", "after": "construct", "start_char_pos": 928, "end_char_pos": 1028 }, { "type": "R", "before": "which", "after": "in a way that appropriately", "start_char_pos": 1070, "end_char_pos": 1075 } ]
[ 0, 198, 455, 701, 894 ]
1407.8300
1
In stochastic portfolio theory, a relative arbitrage is an equity portfolio which outperforms a benchmark portfolio over a specified horizon. When the market is diverse and sufficiently volatile, and the benchmark is the market or a buy-and-hold portfolio, functionally generated portfolios provide a systematic way of constructing relative arbitrages. In this paper we show that if the market portfolio is replaced by the equal or entropy weighted portfolio among many others, no relative arbitrages can be constructed using functionally generated portfolios. We also introduce and study a shaped-constrained optimization problem for functionally generated portfolios in the spirit of maximum likelihood estimation of a log-concave density.
In stochastic portfolio theory, a relative arbitrage is an equity portfolio which is guaranteed to outperform a benchmark portfolio over a finite horizon. When the market is diverse and sufficiently volatile, and the benchmark is the market or a buy-and-hold portfolio, functionally generated portfolios introduced by Fernholz provide a systematic way of constructing relative arbitrages. In this paper we show that if the market portfolio is replaced by the equal or entropy weighted portfolio among many others, no relative arbitrages can be constructed under the same conditions using functionally generated portfolios. We also introduce and study a shaped-constrained optimization problem for functionally generated portfolios in the spirit of maximum likelihood estimation of a log-concave density.
[ { "type": "R", "before": "outperforms", "after": "is guaranteed to outperform", "start_char_pos": 82, "end_char_pos": 93 }, { "type": "R", "before": "specified", "after": "finite", "start_char_pos": 123, "end_char_pos": 132 }, { "type": "A", "before": null, "after": "introduced by Fernholz", "start_char_pos": 291, "end_char_pos": 291 }, { "type": "A", "before": null, "after": "under the same conditions", "start_char_pos": 521, "end_char_pos": 521 } ]
[ 0, 141, 353, 562 ]
1408.0915
1
Myosin-V is a highly processive dimeric protein that walks with 36nm steps along actin tracks, powered by coordinated ATP hydrolysis reactions in the two myosin heads. No previous theoretical models of the myosin-V walk reproduce all the observed trends of velocity and run length with [ADP], [ATP] and external forcing. In particular, a result that has eluded all theoretical studies based upon rigorous physical chemistry is that run length decreases with both increasing [ADP] and [ATP]. We introduce a novel model comparison framework to ascertain which mechanisms in existing models reproduce which experimental trends and hence guide development of models that can reproduce them all. We formulate models as reaction networks between distinct mechanochemical states with energetically determined transition rates. For each network architecture, we compare predictions for velocity and run length to a subset of experimentally measured values, and fit unknown parameters using a bespoke MCSA optimization routine. Finally we determine which experimental trends are replicated by the best-fit model for each architecture. Only two models capture them all: one involving [ADP]-dependent mechanical detachment, and another including [ADP]-dependent futile cycling and nucleotide pocket collapse. Comparing model-predicted and experimentally observed kinetic transition rates favors the latter.
Myosin-V is a highly processive dimeric protein that walks with 36nm steps along actin tracks, powered by coordinated ATP hydrolysis reactions in the two myosin heads. No previous theoretical models of the myosin-V walk reproduce all the observed trends of velocity and run-length with [ADP], [ATP] and external forcing. In particular, a result that has eluded all theoretical studies based upon rigorous physical chemistry is that run length decreases with both increasing [ADP] and [ATP]. We systematically analyse which mechanisms in existing models reproduce which experimental trends and use this information to guide the development of models that can reproduce them all. We formulate models as reaction networks between distinct mechanochemical states with energetically determined transition rates. For each network architecture, we compare predictions for velocity and run length to a subset of experimentally measured values, and fit unknown parameters using a bespoke MCSA optimization routine. Finally we determine which experimental trends are replicated by the best-fit model for each architecture. Only two models capture them all: one involving [ADP]-dependent mechanical detachment, and another including [ADP]-dependent futile cycling and nucleotide pocket collapse. Comparing model-predicted and experimentally observed kinetic transition rates favors the latter.
[ { "type": "R", "before": "run length", "after": "run-length", "start_char_pos": 270, "end_char_pos": 280 }, { "type": "R", "before": "introduce a novel model comparison framework to ascertain", "after": "systematically analyse", "start_char_pos": 494, "end_char_pos": 551 }, { "type": "R", "before": "hence guide", "after": "use this information to guide the", "start_char_pos": 628, "end_char_pos": 639 } ]
[ 0, 167, 320, 490, 690, 819, 1018, 1125, 1297 ]
1408.1327
1
The radical pair model for avian magnetoreception has been significantly efficacious in explaining the magnetosensitive behavior of chemical compass . In this model, we have a multi-spin system evolving under a specific Hamiltonian assisted by neurological spin-dependent recombination channels which give an elegant compass action that many species are belived to be using . In this study, we analyze the radical pair model form a microscopic spin transitional point of view and establish the role of nuclear and environmental decoherence in radical pair spin dynamics. We identify the spin interplay between singlet state and three triplet states of radical due to Zeeman and hyperfine and examine the distinctive roles of nuclear and environmental decoherence from this perspective. Additionally, we revisit some of the earlier results concerning radical pair model from this fresh outlook and provide more comprehensive explanation to those. The approach is aimed to equip us more for solid state emulation of avian compass and design long coherehce time physical systems .
The radical pair model has been successful in explaining behavioral characteristics of the geomagnetic compass believed to underlie the navigation capability of certain avian species . In this study, the spin dynamics of the radical pair model and decoherence therein are interpreted from a microscopic state transition point of view . This helps to elucidate the interplay between the hyperfine and Zeeman interactions that enables the avian compass, and the distinctive effects of nuclear and environmental decoherence on it. Using a quantum information theoretic quantifier of coherence, we find that nuclear decoherence induces new structure in the spin dynamics without materially affecting the compass action; environmental decoherence, on the other hand, completely disrupts it .
[ { "type": "R", "before": "for avian magnetoreception has been significantly efficacious in explaining the magnetosensitive behavior of chemical compass . In this model, we have a multi-spin system evolving under a specific Hamiltonian assisted by neurological spin-dependent recombination channels which give an elegant compass action that many species are belived to be using", "after": "has been successful in explaining behavioral characteristics of the geomagnetic compass believed to underlie the navigation capability of certain avian species", "start_char_pos": 23, "end_char_pos": 373 }, { "type": "R", "before": "we analyze the", "after": "the spin dynamics of the", "start_char_pos": 391, "end_char_pos": 405 }, { "type": "R", "before": "form a microscopic spin transitional", "after": "and decoherence therein are interpreted from a microscopic state transition", "start_char_pos": 425, "end_char_pos": 461 }, { "type": "R", "before": "and establish the role of nuclear and environmental decoherence in radical pair spin dynamics. We identify the spin interplay between singlet state and three triplet states of radical due to Zeeman and hyperfine and examine the distinctive roles", "after": ". This helps to elucidate the interplay between the hyperfine and Zeeman interactions that enables the avian compass, and the distinctive effects", "start_char_pos": 476, "end_char_pos": 721 }, { "type": "R", "before": "from this perspective. Additionally, we revisit some of the earlier results concerning radical pair model from this fresh outlook and provide more comprehensive explanation to those. The approach is aimed to equip us more for solid state emulation of avian compass and design long coherehce time physical systems", "after": "on it. Using a quantum information theoretic quantifier of coherence, we find that nuclear decoherence induces new structure in the spin dynamics without materially affecting the compass action; environmental decoherence, on the other hand, completely disrupts it", "start_char_pos": 763, "end_char_pos": 1075 } ]
[ 0, 150, 375, 570, 785, 945 ]
1408.1382
1
This paper studies the utility maximization problem on consumption with addictive habit formation in the markets with proportional transaction costs and unbounded random endowment. To model the proportional transaction costs, we adopt Kabanov's multi-asset framework with a cash account. At the terminal time t=T, the investor can receive an unbounded random endowment for which we propose a new definition of acceptable portfolio processes depending on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem for a family of workable contingent claims using the acceptable portfolios and random endowment which enables us to obtain the consumption budget constraint result under the market frictions. With the path dependence reduction and the embedding approach, the existence and uniqueness of the optimal consumption are proved using the auxiliary primal and dual processes and the convex duality analysis.
This paper studies the utility maximization problem on consumption with addictive habit formation in the market with proportional transaction costs and unbounded random endowment. To model the proportional transaction costs, we adopt the Kabanov's multi-asset framework with a cash account. At the terminal time t=T, the investor can receive an unbounded random endowment for which we propose a new definition of acceptable portfolio processes depending on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem for a family of workable contingent claims using the acceptable portfolios and random endowment which enables us to obtain the consumption budget constraint result under the market frictions. With the path dependence reduction and the embedding approach, the existence and uniqueness of the optimal consumption are proved using the auxiliary primal and dual processes and the convex duality analysis.
[ { "type": "R", "before": "markets", "after": "market", "start_char_pos": 105, "end_char_pos": 112 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 235, "end_char_pos": 235 } ]
[ 0, 180, 288, 499, 732 ]
1408.1382
2
This paper studies the utility maximization problem on consumption with addictive habit formation in the market with proportional transaction costs and unbounded random endowment . To model the proportional transaction costs, we adopt the Kabanov's multi-asset framework with a cash account. At the terminal time t= T, the investor can receive an unbounded random endowment for which we propose a new definition of acceptable portfolio processes depending on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem for a family of workable contingent claims using the acceptable portfolios and random endowment which enables us to obtain the consumption budget constraint result under the market frictions. With the path dependence reduction and the embedding approach, the existence and uniqueness of the optimal consumption are proved using the auxiliary primal and dual processes and the convex duality analysis .
This paper studies the optimal consumption under the addictive habit formation preference in markets with transaction costs and unbounded random endowments . To model the proportional transaction costs, we adopt the Kabanov's multi-asset framework with a cash account. At the terminal time T, the investor can receive unbounded random endowments for which we propose a new definition of acceptable portfolios based on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem using the acceptable portfolios which enables us to obtain the consumption budget constraint condition under market frictions. Applying the path dependence reduction and the embedding approach, the existence and uniqueness of the optimal consumption are obtained using some auxiliary processes and the duality approach. As an application of the duality theory, the market isomorphism with special discounting factors is also discussed in the sense that the original optimal consumption with habit formation is equivalent to the standard optimal consumption problem without habits impact, however, in a modified isomorphic market model .
[ { "type": "R", "before": "utility maximization problem on consumption with", "after": "optimal consumption under the", "start_char_pos": 23, "end_char_pos": 71 }, { "type": "R", "before": "in the market with proportional", "after": "preference in markets with", "start_char_pos": 98, "end_char_pos": 129 }, { "type": "R", "before": "endowment", "after": "endowments", "start_char_pos": 169, "end_char_pos": 178 }, { "type": "D", "before": "t=", "after": null, "start_char_pos": 313, "end_char_pos": 315 }, { "type": "R", "before": "an unbounded random endowment", "after": "unbounded random endowments", "start_char_pos": 344, "end_char_pos": 373 }, { "type": "R", "before": "portfolio processes depending", "after": "portfolios based", "start_char_pos": 426, "end_char_pos": 455 }, { "type": "D", "before": "for a family of workable contingent claims", "after": null, "start_char_pos": 545, "end_char_pos": 587 }, { "type": "D", "before": "and random endowment", "after": null, "start_char_pos": 620, "end_char_pos": 640 }, { "type": "R", "before": "result under the", "after": "condition under", "start_char_pos": 702, "end_char_pos": 718 }, { "type": "R", "before": "With", "after": "Applying", "start_char_pos": 737, "end_char_pos": 741 }, { "type": "R", "before": "proved using the auxiliary primal and dual", "after": "obtained using some auxiliary", "start_char_pos": 860, "end_char_pos": 902 }, { "type": "R", "before": "convex duality analysis", "after": "duality approach. As an application of the duality theory, the market isomorphism with special discounting factors is also discussed in the sense that the original optimal consumption with habit formation is equivalent to the standard optimal consumption problem without habits impact, however, in a modified isomorphic market model", "start_char_pos": 921, "end_char_pos": 944 } ]
[ 0, 180, 291, 503, 736 ]
1408.1382
3
This paper studies the optimal consumption under the addictive habit formation preference in markets with transaction costs and unbounded random endowments. To model the proportional transaction costs, we adopt the Kabanov's multi-asset framework with a cash account. At the terminal time T, the investor can receive unbounded random endowments for which we propose a new definition of acceptable portfolios based on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem using the acceptable portfolios which enables us to obtain the consumption budget constraint condition under market frictions. Applying the path dependence reduction and the embedding approach, the existence and uniqueness of the optimal consumption are obtained using some auxiliary processes and the duality approach . As an application of the duality theory, the market isomorphism with special discounting factors is also discussed in the sense that the original optimal consumption with habit formation is equivalent to the standard optimal consumption problem without habits impact, however, in a modified isomorphic market model.
This paper studies the optimal consumption under the addictive habit formation preference in markets with transaction costs and unbounded random endowments. To model the proportional transaction costs, we adopt the Kabanov's multi-asset framework with a cash account. At the terminal time T, the investor can receive unbounded random endowments for which we propose a new definition of acceptable portfolios based on the strictly consistent price system (SCPS). We prove a type of super-hedging theorem using the acceptable portfolios which enables us to obtain the consumption budget constraint condition under market frictions. Applying the path dependence reduction and the embedding approach, we obtain the existence and uniqueness of the optimal consumption using some auxiliary processes and the duality analysis . As an application of the duality theory, the market isomorphism with special discounting factors is also discussed in the sense that the original optimal consumption with habit formation is equivalent to the standard optimal consumption problem without habits impact, however, in a modified isomorphic market model.
[ { "type": "A", "before": null, "after": "we obtain", "start_char_pos": 697, "end_char_pos": 697 }, { "type": "D", "before": "are obtained", "after": null, "start_char_pos": 754, "end_char_pos": 766 }, { "type": "R", "before": "approach", "after": "analysis", "start_char_pos": 814, "end_char_pos": 822 } ]
[ 0, 156, 267, 461, 629, 824 ]
1408.2761
1
The human immuno-deficiency virus sub-type 1 (HIV-1) is evolving to keep up with a changing fitness landscape, due to the various drugs introduced to stop the virus's replication. As the virus adapts, the information the virus encodes about its environment must change, and this change is reflected in the amino-acid composition of proteins, as well as changes in viral RNAs, binding sites, and splice sites. Information can also be encoded in the interaction between residues in a single protein as well as across proteins, leading to a change in the epistatic patterns that can affect how the virus can change in the future. Measuring epistasis usually requires fitness measurements that are difficult to obtain in high-throughput . Here we show that epistasis can be inferred from the pair-wise information between residues, and study how epistasis and information have changed over the long-term. Using HIV-1 protease sequence data from public databases covering the years 1998-2006 (from both treated and untreated subjects), we show that drug treatment has increased the protease's per-site entropies on average. At the same time, the sum of mutual entropies across all pairs of residues within the protease shows a significant increase over the years, indicating an increase in epistasis in response to treatment, a trend not seen within sequences from untreated subjects. Our findings suggest that information theory can be an important tool to study long-term trends in the evolution of macromolecules.
Epistatic interactions between residues in a protein determine its adaptability as well the shape of its evolutionary trajectory. While several studies have shown that strong selection pressures enrich epistatic interactions in an evolving protein, little is known about how epistatic interactions themselves change over time when selection pressures are not constant. Obtaining fitness data to measure epistasis in a protein over significant and evolutionarily relevant time scales is a daunting task, rendering a study of the long-term evolution of epistasis in a protein currently infeasible . Here we analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using genomic sequences collected for almost a decade from treated and untreated patients, to understand how proteins adapt to a changing environment of treatment over time. We show that mutual information between pairs of residues is a necessary (but not sufficient) condition for epistasis, allowing us to use information as a proxy for epistasis. We analyze the "fossils" of the evolutionary trajectories of a protein contained in the sequence data, and show that epistatic interactions continue to enrich in the HIV-1 protease as more potent drugs enter the market. While initially epistatic interactions are likely to constrain the evolvability of a protein, we find no evidence that the HIV-1 protease has reached its potential for adaptation after 9 years of adapting to a changing drug environment. The protein is able to encode information about novel and more potent drugs using epistatic interactions, while maintaining sufficient diversity and thermostability. We propose that this mechanism is central to protein evolution not just in HIV-1 protease, but for any protein adapting to rapidly changing conditions
[ { "type": "R", "before": "The human immuno-deficiency virus sub-type 1 (HIV-1) is evolving to keep up with a changing fitness landscape, due to the various drugs introduced to stop the virus's replication. As the virus adapts, the information the virus encodes about its environment must change, and this change is reflected in the amino-acid composition of proteins, as well as changes in viral RNAs, binding sites, and splice sites. Information can also be encoded in the interaction", "after": "Epistatic interactions", "start_char_pos": 0, "end_char_pos": 459 }, { "type": "R", "before": "single protein as well as across proteins, leading to a change in the epistatic patterns that can affect how the virus can change in the future. Measuring epistasis usually requires fitness measurements that are difficult to obtain in high-throughput", "after": "protein determine its adaptability as well the shape of its evolutionary trajectory. While several studies have shown that strong selection pressures enrich epistatic interactions in an evolving protein, little is known about how epistatic interactions themselves change over time when selection pressures are not constant. Obtaining fitness data to measure epistasis in a protein over significant and evolutionarily relevant time scales is a daunting task, rendering a study of the long-term evolution of epistasis in a protein currently infeasible", "start_char_pos": 482, "end_char_pos": 732 }, { "type": "R", "before": "show that epistasis can be inferred from the pair-wise information between residues, and study how epistasis and information have changed over the long-term. Using HIV-1 protease sequence data from public databases covering the years 1998-2006 (from both", "after": "analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using genomic sequences collected for almost a decade from", "start_char_pos": 743, "end_char_pos": 997 }, { "type": "R", "before": "subjects), we show that drug treatment has increased the protease's per-site entropies on average. At the same time, the sum of mutual entropies across all", "after": "patients, to understand how proteins adapt to a changing environment of treatment over time. We show that mutual information between", "start_char_pos": 1020, "end_char_pos": 1175 }, { "type": "R", "before": "within the protease shows a significant increase over the years, indicating an increase in epistasis in response to treatment, a trend not seen within sequences from untreated subjects. Our findings suggest that information theory can be an important tool to study long-term trends in the evolution of macromolecules.", "after": "is a necessary (but not sufficient) condition for epistasis, allowing us to use information as a proxy for epistasis. We analyze the \"fossils\" of the evolutionary trajectories of a protein contained in the sequence data, and show that epistatic interactions continue to enrich in the HIV-1 protease as more potent drugs enter the market. While initially epistatic interactions are likely to constrain the evolvability of a protein, we find no evidence that the HIV-1 protease has reached its potential for adaptation after 9 years of adapting to a changing drug environment. The protein is able to encode information about novel and more potent drugs using epistatic interactions, while maintaining sufficient diversity and thermostability. 
We propose that this mechanism is central to protein evolution not just in HIV-1 protease, but for any protein adapting to rapidly changing conditions", "start_char_pos": 1194, "end_char_pos": 1511 } ]
[ 0, 179, 408, 626, 734, 900, 1118, 1379 ]
1408.2761
2
Epistatic interactions between residues in a proteindetermine its adaptability as well the shape of its evolutionary trajectory. While several studies have shown that strong selection pressures enrich epistatic interactions in an evolving protein, little is known about how epistatic interactions themselves change over time when selection pressures are not constant. Obtaining fitness data to measure epistasis in a protein over significant and evolutionarily relevant time scales is a daunting task, rendering a study of the long-term evolution of epistasis in a proteincurrently infeasible . Here we analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using genomic sequences collected for almost a decade from treated and untreated patients, to understand how proteins adapt to a changing environment of treatment over time. We show that mutual information between pairs of residues is a necessary (but not sufficient) condition for epistasis , allowing us to use information as a proxy for epistasis . We analyze the "fossils" of the evolutionary trajectories of a protein contained in the sequence data, and show that epistatic interactions continue to enrich in the HIV-1 protease as more potent drugs enter the market. While initially epistatic interactions are likely to constrain the evolvability of a protein, we find no evidence that the HIV-1 protease has reached its potential for adaptation after 9 years of adapting to a changing drug environment . The protein is able to encode information about novel and more potent drugs using epistatic interactions, while maintaining sufficient diversity and thermostability. We propose that this mechanism is central to protein evolution not just in HIV-1 protease, but for any protein adapting to rapidly changingconditions
Epistatic interactions between residues determine a protein's adaptability and shape its evolutionary trajectory. When a protein experiences a changed environment, it is under strong selection to find a peak in the new fitness landscape. It has been shown that strong selection increases epistatic interactions as well as the ruggedness of the fitness landscape, but little is known about how the epistatic interactions change under selection in the long-term evolution of a protein . Here we analyze the evolution of epistasis in the protease of the human immunodeficiency virus type 1 (HIV-1) using protease sequences collected for almost a decade from both treated and untreated patients, to understand how epistasis changes and how those changes impact the long-term evolvability of a protein. We use an information-theoretic proxy for epistasis that quantifies the co-variation between sites, and show that positive information is a necessary (but not sufficient) condition that detects epistasis in most cases . We analyze the "fossils" of the evolutionary trajectories of the protein contained in the sequence data, and show that epistasis continues to enrich under strong selection, but not for proteins whose environment is unchanged. The increase in epistasis compensates for the information loss due to sequence variability brought about by treatment, and facilitates adaptation in the increasingly rugged fitness landscape of treatment. While epistasis is thought to enhance evolvability via valley-crossing early-on in adaptation, it can hinder adaptation later when the landscape has turned rugged. However, we find no evidence that the HIV-1 protease has reached its potential for evolution after 9 years of adapting to a drug environment that itself is constantly changing.
[ { "type": "R", "before": "in a proteindetermine its adaptability as well the shape of", "after": "determine a protein's adaptability and shape", "start_char_pos": 40, "end_char_pos": 99 }, { "type": "R", "before": "While several studies have", "after": "When a protein experiences a changed environment, it is under strong selection to find a peak in the new fitness landscape. It has been", "start_char_pos": 129, "end_char_pos": 155 }, { "type": "R", "before": "pressures enrich epistatic interactions in an evolving protein,", "after": "increases epistatic interactions as well as the ruggedness of the fitness landscape, but", "start_char_pos": 184, "end_char_pos": 247 }, { "type": "R", "before": "epistatic interactions themselves change over time when selection pressures are not constant. Obtaining fitness data to measure epistasis in a protein over significant and evolutionarily relevant time scales is a daunting task, rendering a study of", "after": "the epistatic interactions change under selection in", "start_char_pos": 274, "end_char_pos": 522 }, { "type": "R", "before": "epistasis in a proteincurrently infeasible", "after": "a protein", "start_char_pos": 550, "end_char_pos": 592 }, { "type": "R", "before": "genomic", "after": "protease", "start_char_pos": 711, "end_char_pos": 718 }, { "type": "A", "before": null, "after": "both", "start_char_pos": 764, "end_char_pos": 764 }, { "type": "R", "before": "proteins adapt to a changing environment of treatment over time. We show that mutual information between pairs of residues", "after": "epistasis changes and how those changes impact the long-term evolvability of a protein. We use an information-theoretic proxy for epistasis that quantifies the co-variation between sites, and show that positive information", "start_char_pos": 815, "end_char_pos": 937 }, { "type": "R", "before": "for epistasis , allowing us to use information as a proxy for epistasis", "after": "that detects epistasis in most cases", "start_char_pos": 984, "end_char_pos": 1055 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 1119, "end_char_pos": 1120 }, { "type": "R", "before": "epistatic interactions continue to enrich in the HIV-1 protease as more potent drugs enter the market. While initially epistatic interactions are likely to constrain the evolvability of a protein,", "after": "epistasis continues to enrich under strong selection, but not for proteins whose environment is unchanged. The increase in epistasis compensates for the information loss due to sequence variability brought about by treatment, and facilitates adaptation in the increasingly rugged fitness landscape of treatment. While epistasis is thought to enhance evolvability via valley-crossing early-on in adaptation, it can hinder adaptation later when the landscape has turned rugged. However,", "start_char_pos": 1175, "end_char_pos": 1371 }, { "type": "R", "before": "adaptation", "after": "evolution", "start_char_pos": 1446, "end_char_pos": 1456 }, { "type": "R", "before": "changing drug environment . The protein is able to encode information about novel and more potent drugs using epistatic interactions, while maintaining sufficient diversity and thermostability. We propose that this mechanism is central to protein evolution not just in HIV-1 protease, but for any protein adapting to rapidly changingconditions", "after": "drug environment that itself is constantly changing.", "start_char_pos": 1488, "end_char_pos": 1831 } ]
[ 0, 128, 367, 594, 879, 1057, 1277, 1515, 1681 ]
1408.3114
1
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine, since they control many vital physiological functions. The aim of this work is to propose a fully stochastic model describing the main characteristics of a multiple channel system , in which ion movement is coupled with a Poisson--Nernst--Planck equation . Exclusion forces are considered and different nondimensionalization procedure, supported by numerical simulation, are discussed. Both cases of nano and micro channels are considered .
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine, since they control many vital physiological functions. The aim of this work is on one hand to propose a fully stochastic and discrete model describing the main characteristics of a multiple channel system . The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; we have considered in addition the influence of exclusion forces. On the other hand, we have discussed about the nondimensionalization of the stochastic system by using real physical parameters, all supported by numerical simulations. The specific features of both cases of micro and nano channels have been have been taken in due consideration with particular attention to the latter case in order to show that it is necessary to consider inside the channels a discrete and stochastic model for ions movement .
[ { "type": "A", "before": null, "after": "on one hand", "start_char_pos": 195, "end_char_pos": 195 }, { "type": "A", "before": null, "after": "and discrete", "start_char_pos": 226, "end_char_pos": 226 }, { "type": "R", "before": ", in which ion movement is coupled with a Poisson--Nernst--Planck equation . Exclusion forces are considered and different nondimensionalization procedure,", "after": ". The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; we have considered in addition the influence of exclusion forces. On the other hand, we have discussed about the nondimensionalization of the stochastic system by using real physical parameters, all", "start_char_pos": 298, "end_char_pos": 453 }, { "type": "R", "before": "simulation, are discussed. Both cases of nano and micro channels are considered", "after": "simulations. The specific features of both cases of micro and nano channels have been have been taken in due consideration with particular attention to the latter case in order to show that it is necessary to consider inside the channels a discrete and stochastic model for ions movement", "start_char_pos": 477, "end_char_pos": 556 } ]
[ 0, 170, 503 ]
1408.3114
2
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine , since they control many vital physiological functions. The aim of this work is on one hand to propose a fully stochastic and discrete model describing the main characteristics of a multiple channel system. The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; we have considered in addition the influence of exclusion forces. On the other hand, we have discussed about the nondimensionalization of the stochastic system by using real physical parameters, all supported by numerical simulations. The specific features of both cases of micro and nano channels have been have been taken in due consideration with particular attention to the latter case in order to show that it is necessary to consider inside the channels a discrete and stochastic model for ions movement .
Ion channels are of major interest and form an area of intensive research in the fields of biophysics and medicine since they control many vital physiological functions. The aim of this work is on one hand to propose a fully stochastic and discrete model describing the main characteristics of a multiple channel system. The movement of the ions is coupled, as usual, with a Poisson equation for the electrical field; we have considered , in addition, the influence of exclusion forces. On the other hand, we have discussed about the nondimensionalization of the stochastic system by using real physical parameters, all supported by numerical simulations. The specific features of both cases of micro- and nanochannels have been taken in due consideration with particular attention to the latter case in order to show that it is necessary to consider a discrete and stochastic model for ions movement inside the channels .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 115, "end_char_pos": 116 }, { "type": "R", "before": "in addition", "after": ", in addition,", "start_char_pos": 439, "end_char_pos": 450 }, { "type": "R", "before": "micro and nano channels have been have been", "after": "micro- and nanochannels have been", "start_char_pos": 694, "end_char_pos": 737 }, { "type": "D", "before": "inside the channels", "after": null, "start_char_pos": 860, "end_char_pos": 879 }, { "type": "A", "before": null, "after": "inside the channels", "start_char_pos": 930, "end_char_pos": 930 } ]
[ 0, 171, 322, 419, 485, 654 ]
1408.3650
1
This paper builds a model of high-frequency equity returns in clock time by separately modeling the dynamics of trade-time returns and trade arrivals. Our main contributions are threefold. First, we characterize the distributional behavior of high-frequency asset returns both in clock time and trade time and show that when controlling for pre-scheduled market news events, trade-time returns are well characterized by a Gaussian distribution at very fine time scales. Second, we develop a structured and parsimonious model of clock-time returns by subordinating a trade-time Gaussian distribution with a trade arrival process that is associated with a modified Markov-Switching Multifractal Duration (MSMD) model of Chen et al. (2013). Our modification of the MSMD model provides a much better characterization of high-frequency inter-trade durations than the original model of Chen et al. (2013). Over-dispersion in this distribution of inter-trade durations leads to leptokurtosis and volatility clustering in clock-time returns, even when trade-time returns are Gaussian. Finally, we use our model to extrapolate the empirical relationship between trade rate and volatility in an effort to understand conditions of market failure. Our model finds that physical separation of financial markets maintains a natural ceiling on systemic volatility and promotes market stability .
This paper builds a model of high-frequency equity returns by separately modeling the dynamics of trade-time returns and trade arrivals. Our main contributions are threefold. First, we characterize the distributional behavior of high-frequency asset returns both in ordinary clock time and in trade time. We show that when controlling for pre-scheduled market news events, trade-time returns of the highly liquid near-month E-mini S P 500 futures contract are well characterized by a Gaussian distribution at very fine time scales. Second, we develop a structured and parsimonious model of clock-time returns by subordinating a trade-time Gaussian distribution with a trade arrival process that is associated with a modified Markov-Switching Multifractal Duration (MSMD) model . This model provides an excellent characterization of high-frequency inter-trade durations . Over-dispersion in this distribution of inter-trade durations leads to leptokurtosis and volatility clustering in clock-time returns, even when trade-time returns are Gaussian. Finally, we use our model to extrapolate the empirical relationship between trade rate and volatility in an effort to understand conditions of market failure. Our model suggests that the 1,200 km physical separation of financial markets in Chicago and New York/New Jersey provides a natural ceiling on systemic volatility and may contribute to market stability during periods of extremely heavy trading .
[ { "type": "D", "before": "in clock time", "after": null, "start_char_pos": 59, "end_char_pos": 72 }, { "type": "A", "before": null, "after": "ordinary", "start_char_pos": 280, "end_char_pos": 280 }, { "type": "R", "before": "trade time and", "after": "in trade time. We", "start_char_pos": 296, "end_char_pos": 310 }, { "type": "A", "before": null, "after": "of the highly liquid near-month E-mini S", "start_char_pos": 395, "end_char_pos": 395 }, { "type": "A", "before": null, "after": "P 500 futures contract", "start_char_pos": 396, "end_char_pos": 396 }, { "type": "R", "before": "of Chen et al. (2013). Our modification of the MSMD model provides a much better", "after": ". This model provides an excellent", "start_char_pos": 718, "end_char_pos": 798 }, { "type": "R", "before": "than the original model of Chen et al. (2013).", "after": ".", "start_char_pos": 856, "end_char_pos": 902 }, { "type": "R", "before": "finds that", "after": "suggests that the 1,200 km", "start_char_pos": 1249, "end_char_pos": 1259 }, { "type": "R", "before": "maintains", "after": "in Chicago and New York/New Jersey provides", "start_char_pos": 1301, "end_char_pos": 1310 }, { "type": "R", "before": "promotes market stability", "after": "may contribute to market stability during periods of extremely heavy trading", "start_char_pos": 1356, "end_char_pos": 1381 } ]
[ 0, 150, 188, 472, 740, 902, 1079, 1238 ]
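A short illustrative note on the subordination mechanism described in the revised abstract above: summing i.i.d. Gaussian trade-time returns over a random, over-dispersed number of trades per clock interval already produces leptokurtic clock-time returns. The Python/NumPy sketch below uses a negative-binomial trade count purely as a stand-in for the paper's MSMD-based arrival process; the parameter values are arbitrary assumptions and this is not the authors' model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only for illustration (not calibrated to any data).
sigma_trade = 1e-4        # per-trade return standard deviation
n_intervals = 20_000      # number of clock-time intervals
mean_trades, dispersion = 8.0, 2.0

# Over-dispersed trade counts per interval (stand-in for the MSMD arrival process).
p = dispersion / (dispersion + mean_trades)
trade_counts = rng.negative_binomial(dispersion, p, size=n_intervals)

# Clock-time return of an interval = sum of its Gaussian trade-time returns.
boundaries = np.cumsum(trade_counts)
all_trade_returns = rng.normal(0.0, sigma_trade, size=int(boundaries[-1]))
clock_returns = np.array([seg.sum() for seg in np.split(all_trade_returns, boundaries[:-1])])

def excess_kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0

# Positive excess kurtosis appears even though trade-time returns are Gaussian.
print("excess kurtosis of clock-time returns:", round(float(excess_kurtosis(clock_returns)), 2))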
1408.3774
1
We study partial hedging for game options in markets with transaction costs bounded from below. More precisely, we assume that the investor's transaction costs for each trade are the minimum between proportional transaction costs and a fixed transaction costs. We prove that in the continuous time Black--Scholes (BS) model, there exists a trading strategy which minimizes the shortfall risk. Furthermore, the trading strategy is given by a dynamical programming algorithm .
We study partial hedging for game options in markets with transaction costs bounded from below. More precisely, we assume that the investor's transaction costs for each trade are the maximum between proportional transaction costs and a fixed transaction costs. We prove that in the continuous time Black--Scholes (BS) model, there exists a trading strategy which minimizes the shortfall risk. Furthermore, we use binomial models in order to provide numerical schemes for the calculation of the shortfall risk and the corresponding optimal portfolio in the BS model .
[ { "type": "R", "before": "minimum", "after": "maximum", "start_char_pos": 183, "end_char_pos": 190 }, { "type": "R", "before": "the trading strategy is given by a dynamical programming algorithm", "after": "we use binomial models in order to provide numerical schemes for the calculation of the shortfall risk and the corresponding optimal portfolio in the BS model", "start_char_pos": 406, "end_char_pos": 472 } ]
[ 0, 95, 260, 392 ]
1408.3873
1
We evaluate a version of the recently-proposed Optimized Dissimilarity Space Embedding (ODSE) classification system that operates in the input space of sequences of generic objects. The ODSE system has been originally presented as a labeled graph classification system . However, since it is founded on the dissimilarity space representation of the input data, the classifier can be easily adapted to any input domain where it is possible to define a meaningful dissimilarity measure. We demonstrate the effectiveness of the ODSE classifier for sequences considering an application dealing with recognition of the solubility degree of the Escherichia coli proteome. Overall, the obtained results, which we stress that have been achieved with no context-dependent tuning of the ODSE system, confirm the validity and generality of the ODSE-based approach for structured data classification.
We evaluate a version of the recently-proposed classification system named Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space of sequences of generic objects. The ODSE system has been originally presented as a classification system for patterns represented as labeled graphs . However, since ODSE is founded on the dissimilarity space representation of the input data, the classifier can be easily adapted to any input domain where it is possible to define a meaningful dissimilarity measure. Here we demonstrate the effectiveness of the ODSE classifier for sequences by considering an application dealing with the recognition of the solubility degree of the Escherichia coli proteome. Solubility, or analogously aggregation propensity, is an important property of protein molecules, which is intimately related to the mechanisms underlying the chemico-physical process of folding. Each protein of our dataset is initially associated with a solubility degree and it is represented as a sequence of symbols, denoting the 20 amino acid residues. The herein obtained computational results, which we stress that have been achieved with no context-dependent tuning of the ODSE system, confirm the validity and generality of the ODSE-based approach for structured data classification.
[ { "type": "A", "before": null, "after": "classification system named", "start_char_pos": 47, "end_char_pos": 47 }, { "type": "D", "before": "classification system", "after": null, "start_char_pos": 95, "end_char_pos": 116 }, { "type": "R", "before": "labeled graph classification system", "after": "classification system for patterns represented as labeled graphs", "start_char_pos": 234, "end_char_pos": 269 }, { "type": "R", "before": "it", "after": "ODSE", "start_char_pos": 287, "end_char_pos": 289 }, { "type": "R", "before": "We", "after": "Here we", "start_char_pos": 486, "end_char_pos": 488 }, { "type": "A", "before": null, "after": "by", "start_char_pos": 556, "end_char_pos": 556 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 597, "end_char_pos": 597 }, { "type": "R", "before": "Overall, the obtained", "after": "Solubility, or analogously aggregation propensity, is an important property of protein molecules, which is intimately related to the mechanisms underlying the chemico-physical process of folding. Each protein of our dataset is initially associated with a solubility degree and it is represented as a sequence of symbols, denoting the 20 amino acid residues. The herein obtained computational", "start_char_pos": 669, "end_char_pos": 690 } ]
[ 0, 182, 271, 485, 668 ]
1408.4618
1
We propose to test the assumption that interconnections across financial institutions can be explained by a diversification motive. This idea stems from the empirical evidence of the existence of long-term exposures that cannot be explained by a liquidity motive (maturity or currency mismatch). We model endogenous interconnections of heterogenous financial institutions facing regulatory constraints using a maximization of their expected utility. Both theoretical and simulation-based results are compared to a stylized genuine financial network. The diversification motive appears to plausibly explain interconnections among key players. Using our model, the impact of regulation on interconnections between major banks -currently discussed at the Basel Committee on Banking Supervision- is analyzed.
We test the hypothesis that interconnections across financial institutions can be explained by a diversification motive. This idea stems from the empirical evidence of the existence of long-term exposures that cannot be explained by a liquidity motive (maturity or currency mismatch). We model endogenous interconnections of heterogenous financial institutions facing regulatory constraints using a maximization of their expected utility. Both theoretical and simulation-based results are compared to a stylized genuine financial network. The diversification motive appears to plausibly explain interconnections among key players. Using our model, the impact of regulation on interconnections between banks -currently discussed at the Basel Committee on Banking Supervision- is analyzed.
[ { "type": "R", "before": "propose to test the assumption", "after": "test the hypothesis", "start_char_pos": 3, "end_char_pos": 33 }, { "type": "D", "before": "major", "after": null, "start_char_pos": 712, "end_char_pos": 717 } ]
[ 0, 131, 295, 449, 549, 641 ]
1408.4848
1
With model uncertainty characterized by a convex, possibly non-dominated set of probability measures, the investor minimizes the cost of hedging a path dependent contingent claim with a given expected success ratio, in a discrete-time, semi-static market of stocks and options. We prove duality results that link the problem of quantile hedging to a randomized composite hypothesis test . Then by assuming a compact path space , an arbitrage-free discretization of the market is proposed as an approximation. The discretized market has a dominating measure, which enables us to calculate the quantile hedging price and the associated hedging strategy by using the generalized Neyman-Pearson Lemma. Finally, the performance of the approximate hedging strategy in the original market and the convergence of the quantile hedging price are analyzed.
With model uncertainty characterized by a convex, possibly non-dominated set of probability measures, the investor minimizes the cost of hedging a path dependent contingent claim with given expected success ratio, in a discrete-time, semi-static market of stocks and options. Based on duality results which link quantile hedging to a randomized composite hypothesis test , an arbitrage-free discretization of the market is proposed as an approximation. The discretized market has a dominating measure, which guarantees the existence of the optimal hedging strategy and enables numerical calculation of the quantile hedging price by applying the generalized Neyman-Pearson Lemma. Finally, the performance in the original market of the approximating hedging strategy and the convergence of the approximating quantile hedging price are analyzed.
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 184, "end_char_pos": 185 }, { "type": "R", "before": "We prove duality results that link the problem of", "after": "Based on duality results which link", "start_char_pos": 278, "end_char_pos": 327 }, { "type": "D", "before": ". Then by assuming a compact path space", "after": null, "start_char_pos": 387, "end_char_pos": 426 }, { "type": "R", "before": "enables us to calculate the", "after": "guarantees the existence of the optimal hedging strategy and enables numerical calculation of the", "start_char_pos": 564, "end_char_pos": 591 }, { "type": "R", "before": "price and", "after": "price by applying", "start_char_pos": 609, "end_char_pos": 618 }, { "type": "D", "before": "associated hedging strategy by using the", "after": null, "start_char_pos": 623, "end_char_pos": 663 }, { "type": "D", "before": "of the approximate hedging strategy", "after": null, "start_char_pos": 723, "end_char_pos": 758 }, { "type": "A", "before": null, "after": "of the approximating hedging strategy", "start_char_pos": 782, "end_char_pos": 782 }, { "type": "A", "before": null, "after": "approximating", "start_char_pos": 810, "end_char_pos": 810 } ]
[ 0, 277, 388, 508, 697 ]
1408.4848
2
With model uncertainty characterized by a convex, possibly non-dominated set of probability measures, the investor minimizes the cost of hedging a path dependent contingent claim with given expected success ratio, in a discrete-time, semi-static market of stocks and options. Based on duality results which link quantile hedging to a randomized composite hypothesis test, an arbitrage-free discretization of the market is proposed as an approximation. The discretized market has a dominating measure, which guarantees the existence of the optimal hedging strategy and enables numerical calculation of the quantile hedging price by applying the generalized Neyman-Pearson Lemma. Finally, the performance in the original market of the approximating hedging strategy and the convergence of the approximating quantile hedging price are analyzed .
With model uncertainty characterized by a convex, possibly non-dominated set of probability measures, the agent minimizes the cost of hedging a path dependent contingent claim with given expected success ratio, in a discrete-time, semi-static market of stocks and options. Based on duality results which link quantile hedging to a randomized composite hypothesis test, an arbitrage-free discretization of the market is proposed as an approximation. The discretized market has a dominating measure, which guarantees the existence of the optimal hedging strategy and helps numerical calculation of the quantile hedging price . As the discretization becomes finer, the approximate quantile hedging price converges and the hedging strategy is asymptotically optimal in the original market .
[ { "type": "R", "before": "investor", "after": "agent", "start_char_pos": 106, "end_char_pos": 114 }, { "type": "R", "before": "enables", "after": "helps", "start_char_pos": 568, "end_char_pos": 575 }, { "type": "R", "before": "by applying the generalized Neyman-Pearson Lemma. Finally, the performance in the original market of the approximating hedging strategy and the convergence of the approximating quantile hedging price are analyzed", "after": ". As the discretization becomes finer, the approximate quantile hedging price converges and the hedging strategy is asymptotically optimal in the original market", "start_char_pos": 628, "end_char_pos": 840 } ]
[ 0, 275, 451, 677 ]
1408.5109
1
Many methods have been developed for finding the commonalities between URLanisms to study their phylogeny. The structure of metabolic networks also reveal important insights into metabolic capacity of species as well as into the habitats where they have evolved. Horizontal gene transfer brings two species, which have evolved in similar environmental condition or lifestyle, close to each other in phylogenetic study based on metabolic network topology. We construct metabolic networks of 79 fully URLanisms and compare their architectures. We use spectral density of normalized Laplacian matrix for comparing structure of networks. The eigenvalues of this matrix not only reflects the global architecture of a network , but also the local topologies which are produced by different graph evolutionary processes like motif duplication or joining. A divergence measure on spectral densities is used to quantify the distances between different metabolic networks and a split tree is constructed from these distances to analyze the phylogeny . In our analysis we show more interest on the species, who belong to different classes but come in the vicinity of each other in phylogeny . With this focus, we reveal interesting insights into the phylogenetic commonality between URLanisms.
Many methods have been developed for finding the commonalities between URLanisms to study their phylogeny. The structure of metabolic networks also reveals valuable insights into metabolic capacity of species as well as into the habitats where they have evolved. We construct metabolic networks of 79 fully URLanisms and compare their architectures. We use spectral density of normalized Laplacian matrix for comparing structure of networks. The eigenvalues of this matrix reflect not only the global architecture of a network but also the local topologies that are produced by different graph evolutionary processes like motif duplication or joining. A divergence measure on spectral densities is used to quantify the distances between various metabolic networks, and a split network is constructed to analyze the phylogeny from these distances . In our analysis , we show more interest on the species, which belong to different classes , but come in the vicinity of each other in phylogeny . We try to explore if they have evolved in similar environmental condition or lifestyle . With this focus, we reveal interesting insights into the phylogenetic commonality between URLanisms.
[ { "type": "R", "before": "reveal important", "after": "reveals valuable", "start_char_pos": 148, "end_char_pos": 164 }, { "type": "D", "before": "Horizontal gene transfer brings two species, which have evolved in similar environmental condition or lifestyle, close to each other in phylogenetic study based on metabolic network topology.", "after": null, "start_char_pos": 263, "end_char_pos": 454 }, { "type": "R", "before": "not only reflects", "after": "reflect not only", "start_char_pos": 665, "end_char_pos": 682 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 720, "end_char_pos": 721 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 752, "end_char_pos": 757 }, { "type": "R", "before": "different metabolic networks", "after": "various metabolic networks,", "start_char_pos": 933, "end_char_pos": 961 }, { "type": "R", "before": "tree is constructed from these distances", "after": "network is constructed", "start_char_pos": 974, "end_char_pos": 1014 }, { "type": "A", "before": null, "after": "from these distances", "start_char_pos": 1040, "end_char_pos": 1040 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1059, "end_char_pos": 1059 }, { "type": "R", "before": "who", "after": "which", "start_char_pos": 1098, "end_char_pos": 1101 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1130, "end_char_pos": 1130 }, { "type": "A", "before": null, "after": ". We try to explore if they have evolved in similar environmental condition or lifestyle", "start_char_pos": 1183, "end_char_pos": 1183 } ]
[ 0, 106, 262, 454, 541, 633, 847, 1042, 1185 ]
1408.5109
2
Many methods have been developed for finding the commonalities between URLanisms to study their phylogeny. The structure of metabolic networks also reveals valuable insights into metabolic capacity of species as well as into the habitats where they have evolved. We construct metabolic networks of 79 fully URLanisms and compare their architectures. We use spectral density of normalized Laplacian matrix for comparing structure of networks. The eigenvalues of this matrix reflect not only the global architecture of a network but also the local topologies that are produced by different graph evolutionary processes like motif duplication or joining. A divergence measure on spectral densities is used to quantify the distances between various metabolic networks, and a split network is constructed to analyze the phylogeny from these distances. In our analysis, we show more interest on the species, which belong to different classes, but come in the vicinity of each other in phylogeny. We try to explore if they have evolved in similar environmental condition or lifestyle . With this focus, we reveal interesting insights into the phylogenetic commonality between URLanisms.
Many methods have been developed for finding the commonalities between URLanisms to study their phylogeny. The structure of metabolic networks also reveal valuable insights into metabolic capacity of species as well as into the habitats where they have evolved. We constructed metabolic networks of 79 fully URLanisms and compared their architectures. We used spectral density of normalized Laplacian matrix for comparing the structure of networks. The eigenvalues of this matrix reflect not only the global architecture of a network but also the local topologies that are produced by different graph evolutionary processes like motif duplication or joining. A divergence measure on spectral densities is used to quantify the distances between various metabolic networks, and a split network is constructed to analyze the phylogeny from these distances. In our analysis, we focus on the species, which belong to different classes, but appear more related to each other in the phylogeny. We tried to explore whether they have evolved under similar environmental conditions or have similar life histories . With this focus, we have obtained interesting insights into the phylogenetic commonality between URLanisms.
[ { "type": "R", "before": "reveals", "after": "reveal", "start_char_pos": 148, "end_char_pos": 155 }, { "type": "R", "before": "construct", "after": "constructed", "start_char_pos": 266, "end_char_pos": 275 }, { "type": "R", "before": "compare", "after": "compared", "start_char_pos": 321, "end_char_pos": 328 }, { "type": "R", "before": "use", "after": "used", "start_char_pos": 353, "end_char_pos": 356 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 419, "end_char_pos": 419 }, { "type": "R", "before": "show more interest", "after": "focus", "start_char_pos": 868, "end_char_pos": 886 }, { "type": "R", "before": "come in the vicinity of", "after": "appear more related to", "start_char_pos": 942, "end_char_pos": 965 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 980, "end_char_pos": 980 }, { "type": "R", "before": "try to explore if", "after": "tried to explore whether", "start_char_pos": 995, "end_char_pos": 1012 }, { "type": "R", "before": "in similar environmental condition or lifestyle", "after": "under similar environmental conditions or have similar life histories", "start_char_pos": 1031, "end_char_pos": 1078 }, { "type": "R", "before": "reveal", "after": "have obtained", "start_char_pos": 1101, "end_char_pos": 1107 } ]
[ 0, 106, 262, 349, 442, 652, 847, 991, 1080 ]
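As a companion to the two 1408.5109 abstracts above, the following Python sketch shows one way to compare two networks through the spectral density of their normalized Laplacians. The random test graphs and the choice of Jensen-Shannon divergence are assumptions made here for illustration; the paper's own divergence measure, the actual metabolic networks, and the split-network construction are not reproduced. Pairwise divergences computed this way would be the distances fed into such a phylogenetic reconstruction.

import numpy as np
import networkx as nx

def laplacian_spectral_density(G, bins=40):
    # Eigenvalues of the normalized Laplacian lie in [0, 2]; return a binned density.
    L = nx.normalized_laplacian_matrix(G).toarray()
    eig = np.linalg.eigvalsh(L)
    hist, _ = np.histogram(eig, bins=bins, range=(0.0, 2.0))
    return hist / hist.sum()

def jensen_shannon(p, q, eps=1e-12):
    # A symmetric divergence between two binned spectral densities (one possible choice).
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Stand-in graphs; in the paper these would be metabolic networks of different species.
G1 = nx.gnp_random_graph(200, 0.05, seed=1)
G2 = nx.barabasi_albert_graph(200, 5, seed=2)
print("spectral divergence between the two graphs:",
      round(jensen_shannon(laplacian_spectral_density(G1), laplacian_spectral_density(G2)), 4))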
1408.5269
1
Prion diseases are invariably fatal and highly infectious neurodegenerative diseases that affect a wide variety of mammalian species such as sheep and goats, cattle, deer, elks, humans and mice etc., but rabbits have a low susceptibility to be infected by prion diseases with respect to other species. The stability of rabbit prion protein is due to its highly ordered beta2-alpha2 loop (PLoS One 5(10) e13273 (2010); Journal of Biological Chemistry 285(41) 31682-31693 (2010)) and a helix-capping motif within this loop (PLoS One 8(5) e63047 (2013)). The beta2-alpha2 loop has been a focus in prion studies. For this loop we found a salt bridge linkage ASP177-ARG163 (O-N) (Journal of Theoretical Biology 342 (7 February 2014) 70-82 (2014)). Some scientists said on the 2FJ3.pdb NMR file of the rabbit prion protein, the distance of ASP177-ARG163 (O-N) gives the salt bridge of about 10 angstroms which is nearly null in terms of energy thus think our result is wrong. This opinion is clearly wrong simply due to the 3O79.pdb X-ray file of the rabbit prion protein has this salt bridge. This article is to present very strong evidences (mainly at 300 K room temperature, and other temperatures such as 350 K, 450 K, 500 K) to support this salt bridge result and at the same time we emphasize that all our numerical experiments are completely reproducible.
Prion diseases are invariably fatal and highly infectious neurodegenerative diseases that affect a wide variety of mammalian species such as sheep and goats, cattle, deer, elks, humans and mice etc., but rabbits have a low susceptibility to be infected by prion diseases with respect to other species. The stability of rabbit prion protein is due to its highly ordered \beta2-\alpha2 loop (PLoS One 5(10) e13273 (2010); Journal of Biological Chemistry 285(41) 31682-31693 (2010)) and a hydrophobic staple helix-capping motif (PNAS 107(46) 19808-19813 (2010); PLoS One 8(5) e63047 (2013)). The \beta2-\alpha2 loop and the tail of Helix 3 it interacts with have been a focus in prion protein structure studies. For this loop we found a salt bridge linkage ASP177-ARG163 (O-N) (Journal of Theoretical Biology 342 (7 February 2014) 70-82 (2014)). Some scientists said on the 2FJ3.pdb NMR file of the rabbit prion protein, the distance of ASP177-ARG163 (O-N) gives the salt bridge of about 10 \AA which is nearly null in terms of energy and such a salt bridge is not observed in their work. But, from the 3O79.pdb X-ray file of the rabbit prion protein, we can clearly observe this salt bridge. This article analyses the NMR and X-ray structures and gives an answer to the above question: the salt bridge presents at pH 6.5 in the X-ray structure is simply gone at pH 4.5 in the NMR structure is simply due to the different pH values that impact electrostatics at the salt bridge and hence also impact the structures. Moreover, some molecular dynamics simulation results of the X-ray structure are reported in this article to reveal the secrets of the structural stability of rabbit prion protein.
[ { "type": "R", "before": "beta2-alpha2 loop", "after": "\\beta", "start_char_pos": 369, "end_char_pos": 386 }, { "type": "A", "before": null, "after": "2-", "start_char_pos": 387, "end_char_pos": 387 }, { "type": "A", "before": null, "after": "\\alpha", "start_char_pos": 388, "end_char_pos": 388 }, { "type": "A", "before": null, "after": "2 loop (", "start_char_pos": 389, "end_char_pos": 389 }, { "type": "R", "before": "and a", "after": ") and a hydrophobic staple", "start_char_pos": 479, "end_char_pos": 484 }, { "type": "R", "before": "within this loop", "after": "(PNAS 107(46) 19808-19813 (2010);", "start_char_pos": 505, "end_char_pos": 521 }, { "type": "R", "before": ". The beta2-alpha2 loop has", "after": "). The", "start_char_pos": 551, "end_char_pos": 578 }, { "type": "A", "before": null, "after": "\\beta", "start_char_pos": 579, "end_char_pos": 579 }, { "type": "A", "before": null, "after": "2-", "start_char_pos": 580, "end_char_pos": 580 }, { "type": "A", "before": null, "after": "\\alpha", "start_char_pos": 581, "end_char_pos": 581 }, { "type": "A", "before": null, "after": "2 loop and the tail of Helix 3 it interacts with have", "start_char_pos": 582, "end_char_pos": 582 }, { "type": "A", "before": null, "after": "protein structure", "start_char_pos": 605, "end_char_pos": 605 }, { "type": "A", "before": null, "after": "(", "start_char_pos": 680, "end_char_pos": 680 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 747, "end_char_pos": 747 }, { "type": "R", "before": "angstroms", "after": "\\AA", "start_char_pos": 895, "end_char_pos": 904 }, { "type": "R", "before": "thus think our result is wrong. This opinion is clearly wrong simply due to", "after": "and such a salt bridge is not observed in their work. But, from", "start_char_pos": 945, "end_char_pos": 1020 }, { "type": "R", "before": "has", "after": ", we can clearly observe", "start_char_pos": 1073, "end_char_pos": 1076 }, { "type": "R", "before": "is to present very strong evidences (mainly at 300 K room temperature, and other temperatures such as 350 K, 450 K, 500 K) to support this salt bridge result and at the same time we emphasize that all our numerical experiments are completely reproducible", "after": "analyses the NMR and X-ray structures and gives an answer to the above question: the salt bridge presents at pH 6.5 in the X-ray structure is simply gone at pH 4.5 in the NMR structure is simply due to the different pH values that impact electrostatics at the salt bridge and hence also impact the structures. Moreover, some molecular dynamics simulation results of the X-ray structure are reported in this article to reveal the secrets of the structural stability of rabbit prion protein", "start_char_pos": 1108, "end_char_pos": 1362 } ]
[ 0, 301, 419, 552, 614, 749, 976, 1094 ]
1408.5585
1
Hierarchical analysis is considered and a novel, multilevel model is presented in order to explore causality, chance and complexity in financial economics. A coupled system of models is used to describe multilevel interactions, consistent with market data: the top-level is described by shared risk factors, the next level combines shared risk factors with information variables and bottom-up agent generated structurebased on the framework of arbitrage pricing theory, the lowest level is that of agents generating the prices of individual traded assets and a mechanism for emergence or innovation is considered . Concepts in the hierarchy of complexity are interrogated via five causation classes and the concept of actors, who serve as exemplars for types of causation, is reviewed .
Hierarchical analysis is considered and a multilevel model is presented in order to explore causality, chance and complexity in financial economics. A coupled system of models is used to describe multilevel interactions, consistent with market data: the lowest level is occupied by agents generating the prices of individual traded assets; the next level entails aggregation of stocks into markets; the third level combines shared risk factors with information variables and bottom-up , agent-generated structure, consistent with conditions for no-arbitrage pricing theory; the fourth level describes market factors which originate in the greater economy and the highest levels are described by regulated market structure and the customs and ethics which define the nature of acceptable transactions. A mechanism for emergence or innovation is considered and causal sources are discussed in terms of five causation classes .
[ { "type": "D", "before": "novel,", "after": null, "start_char_pos": 42, "end_char_pos": 48 }, { "type": "R", "before": "top-level is described by shared risk factors, the next level", "after": "lowest level is occupied by agents generating the prices of individual traded assets; the next level entails aggregation of stocks into markets; the third level", "start_char_pos": 261, "end_char_pos": 322 }, { "type": "R", "before": "agent generated structurebased on the framework of arbitrage pricing theory, the lowest level is that of agents generating the prices of individual traded assets and a", "after": ", agent-generated structure, consistent with conditions for no-arbitrage pricing theory; the fourth level describes market factors which originate in the greater economy and the highest levels are described by regulated market structure and the customs and ethics which define the nature of acceptable transactions. A", "start_char_pos": 393, "end_char_pos": 560 }, { "type": "R", "before": ". Concepts in the hierarchy of complexity are interrogated via", "after": "and causal sources are discussed in terms of", "start_char_pos": 613, "end_char_pos": 675 }, { "type": "D", "before": "and the concept of actors, who serve as exemplars for types of causation, is reviewed", "after": null, "start_char_pos": 699, "end_char_pos": 784 } ]
[ 0, 155, 256, 307 ]
1408.5618
1
We present the symmetric thermal optimal path (TOPS) method to determine the time-dependent lead-lag relationship between two stochastic time series. This novel version of the previously introduced TOP method alleviates some inconsistencies by imposing that the lead-lag relationship should be invariant with respect to a time reversal of the time series after a change of sign. This means that, if `X comes before Y', this transforms into `Y comes before X' under a time reversal. We show that previously proposed bootstrap test lacks power and leads too often to a lack of rejection of the null that there is no lead-lag correlation when it is present. We introduce instead two novel tests. The first free energy p-value \rho criterion quantifies the probability that a given lead-lag structure could be obtained from random time series with similar characteristics except of the lead-lag information. The second self-consistent test embodies the idea that, for the lead-lag path to be significant, synchronising the two time series using the time varying lead-lag path should lead to a statistically significant correlation. We perform intensive synthetic tests to demonstrate their performance and limitations. Finally, we apply the TOPS method with the two new tests to the time dependent lead-lag structures of house price and monetary policy of the United Kingdom (UK) and United States (US) from 1991 to 2011. The TOPS approach stresses the importance of accounting for change of regimes, so that similar pieces of information or policies may have drastically different impacts and developments, conditional on the economic, financial and geopolitical conditions. This study reinforces the view that the hypothesis of statistical stationarity is highly questionable.
We present the symmetric thermal optimal path (TOPS) method to determine the time-dependent lead-lag relationship between two stochastic time series. This novel version of the previously introduced TOP method alleviates some inconsistencies by imposing that the lead-lag relationship should be invariant with respect to a time reversal of the time series after a change of sign. This means that, if `X comes before Y', this transforms into `Y comes before X' under a time reversal. We show that previously proposed bootstrap test lacks power and leads too often to a lack of rejection of the null that there is no lead-lag correlation when it is present. We introduce instead two novel tests. The first the free energy p-value \rho criterion quantifies the probability that a given lead-lag structure could be obtained from random time series with similar characteristics except for the lead-lag information. The second self-consistent test embodies the idea that, for the lead-lag path to be significant, synchronizing the two time series using the time varying lead-lag path should lead to a statistically significant correlation. We perform intensive synthetic tests to demonstrate their performance and limitations. Finally, we apply the TOPS method with the two new tests to the time dependent lead-lag structures of house price and monetary policy of the United Kingdom (UK) and United States (US) from 1991 to 2011. The TOPS approach stresses the importance of accounting for change of regimes, so that similar pieces of information or policies may have drastically different impacts and developments, conditional on the economic, financial and geopolitical conditions. This study reinforces the view that the hypothesis of statistical stationarity is highly questionable.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 703, "end_char_pos": 703 }, { "type": "R", "before": "of", "after": "for", "start_char_pos": 876, "end_char_pos": 878 }, { "type": "R", "before": "synchronising", "after": "synchronizing", "start_char_pos": 1002, "end_char_pos": 1015 } ]
[ 0, 149, 378, 481, 654, 692, 904, 1128, 1215, 1418, 1672 ]
1408.6513
1
Using a structural default model considered in Lipton and Sepp ( 2009 ) we propose a new approach to introducing correlated jumps into this framework. As the result we extend a set of the tractable Levy models which represent the jumps as a sum of the idiosyncratic and common parts. So far in the literature only the discrete and exponential jumps were considered using Marshall and Olkin (1967) method , and only in one and two-dimensional cases. We present realization of our approach in two and three dimensional cases, i. e. compute joint survival probabilities Q of two or three counterparties. We also extend the model by taking into account mutual liabilities of the counterparties and demonstrate their influence on Q. The latter, however, requires a more detailed analysis which will be published elsewhere .
The structural default model of Lipton and Sepp , 2009 is generalized for a set of banks with mutual interbank liabilities whose assets are driven by correlated Levy processes with idiosyncratic and common components. The multi-dimensional problem is made tractable via a novel computational method, which generalizes the one-dimensional fractional partial differential equation method of Itkin, 2014 to the two- and three-dimensional cases. This method is unconditionally stable and of the second order of approximation in space and time; in addition, for many popular Levy models it has linear complexity in each dimension. Marginal and joint survival probabilities for two and three banks with mutual liabilities are computed. The effects of mutual liabilities are discussed, and numerical examples are given to illustrate these effects .
[ { "type": "R", "before": "Using a", "after": "The", "start_char_pos": 0, "end_char_pos": 7 }, { "type": "R", "before": "considered in", "after": "of", "start_char_pos": 33, "end_char_pos": 46 }, { "type": "R", "before": "(", "after": ",", "start_char_pos": 63, "end_char_pos": 64 }, { "type": "R", "before": ") we propose a new approach to introducing correlated jumps into this framework. As the result we extend a set of the tractable Levy models which represent the jumps as a sum of the", "after": "is generalized for a set of banks with mutual interbank liabilities whose assets are driven by correlated Levy processes with", "start_char_pos": 70, "end_char_pos": 251 }, { "type": "R", "before": "parts. So far in the literature only the discrete and exponential jumps were considered using Marshall and Olkin (1967) method , and only in one and two-dimensional cases. We present realization of our approach in two and three dimensional cases, i. e. compute", "after": "components. The multi-dimensional problem is made tractable via a novel computational method, which generalizes the one-dimensional fractional partial differential equation method of Itkin, 2014 to the two- and three-dimensional cases. This method is unconditionally stable and of the second order of approximation in space and time; in addition, for many popular Levy models it has linear complexity in each dimension. Marginal and", "start_char_pos": 277, "end_char_pos": 537 }, { "type": "R", "before": "Q of two or three counterparties. We also extend the model by taking into account mutual liabilities of the counterparties and demonstrate their influence on Q. The latter, however, requires a more detailed analysis which will be published elsewhere", "after": "for two and three banks with mutual liabilities are computed. The effects of mutual liabilities are discussed, and numerical examples are given to illustrate these effects", "start_char_pos": 567, "end_char_pos": 816 } ]
[ 0, 150, 283, 448, 600, 727 ]
1408.6637
1
We introduce two new estimators of the bivariate Hurst exponent in the power-law cross-correlations setting -- the cross-periodogram and X-Whittle estimators . As the spectrum-based estimators are dependent on the part of the spectrum taken into consideration during estimation, a simulation study showing the performance of the estimators under varying bandwidth parameter as well as correlation between processes and their specification is provided as well. The newly introduced estimators are less biased than the already existent averaged periodogram estimator which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
We introduce two new estimators of the bivariate Hurst exponent in the power-law cross-correlations setting -- the cross-periodogram and local X-Whittle estimators -- as generalizations of their univariate counterparts . As the spectrum-based estimators are dependent on a part of the spectrum taken into consideration during estimation, a simulation study showing performance of the estimators under varying bandwidth parameter as well as correlation between processes and their specification is provided as well. The newly introduced estimators are less biased than the already existent averaged periodogram estimator which, however, has slightly lower variance. The spectrum-based estimators can serve as a good complement to the popular time domain estimators.
[ { "type": "A", "before": null, "after": "local", "start_char_pos": 137, "end_char_pos": 137 }, { "type": "A", "before": null, "after": "-- as generalizations of their univariate counterparts", "start_char_pos": 159, "end_char_pos": 159 }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 212, "end_char_pos": 215 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 308, "end_char_pos": 311 } ]
[ 0, 161, 461, 611 ]
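To make the frequency-domain idea in the 1408.6637 abstracts above concrete, the sketch below computes a cross-periodogram at the Fourier frequencies and fits a naive log-log slope over the lowest m frequencies. The synthetic series, the bandwidth m, and the simple least-squares fit are illustrative assumptions; the paper's actual cross-periodogram and local X-Whittle estimators involve a specific objective and bias analysis that are not reproduced here.

import numpy as np

def cross_periodogram(x, y):
    # Cross-periodogram I_xy(lambda_j) at Fourier frequencies lambda_j = 2*pi*j/n, j = 1..n/2.
    n = len(x)
    freqs = 2.0 * np.pi * np.arange(1, n // 2 + 1) / n
    wx = np.fft.rfft(x - x.mean())[1:n // 2 + 1]
    wy = np.fft.rfft(y - y.mean())[1:n // 2 + 1]
    return freqs, (wx * np.conj(wy)) / (2.0 * np.pi * n)

# Toy pair of series sharing a persistent common component (not a true power-law process).
rng = np.random.default_rng(0)
common = np.cumsum(rng.standard_normal(4096)) * 0.01
x = common + rng.standard_normal(4096)
y = common + rng.standard_normal(4096)

freqs, I_xy = cross_periodogram(x, y)
m = 64                                    # bandwidth: number of low frequencies used (arbitrary here)
slope = np.polyfit(np.log(freqs[:m]), np.log(np.abs(I_xy[:m])), 1)[0]
print("log-log slope of |I_xy| at low frequencies:", round(float(slope), 3))
print("implied bivariate Hurst exponent (1 - slope)/2:", round((1.0 - float(slope)) / 2.0, 3))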
1408.6799
1
We develop the stochastic Perron's method in the framework of stochastic target games , in which one player tries to find a strategy such that the state process almost-surely reaches a given target no matter which action is chosen by the other player. Within this framework, the stochastic Perron's method produces a viscosity sub-solution (super-solution) of a Hamilton-Jacobi-Bellman (HJB) equation. Using a comparison result, we characterize the value as a viscosity solution to the HJB equation.
We develop the stochastic Perron's method (see e.g. arXiv: 1212.2170) in the framework of stochastic target games (arXiv: 1307.5606) , in which one player tries to find a strategy such that the state process almost-surely reaches a given target no matter which action is chosen by the other player. Within this framework, the stochastic Perron's method produces a viscosity sub-solution (super-solution) of a Hamilton-Jacobi-Bellman (HJB) equation. Using a comparison result, we characterize the value as a viscosity solution to the HJB equation.
[ { "type": "A", "before": null, "after": "(see e.g. arXiv: 1212.2170)", "start_char_pos": 42, "end_char_pos": 42 }, { "type": "A", "before": null, "after": "(arXiv: 1307.5606)", "start_char_pos": 87, "end_char_pos": 87 } ]
[ 0, 253, 403 ]
1408.6799
2
We develop the stochastic Perron 's method (see e.g. arXiv: 1212.2170) in the framework of stochastic target games (arXiv: 1307.5606) , in which one player tries to find a strategy such that the state process almost-surely reaches a given target no matter which action is chosen by the other player. Within this framework, the stochastic Perron's method produces a viscosity sub-solution (super-solution) of a Hamilton-Jacobi-Bellman (HJB) equation. Using a comparison result, we characterize the value as a viscosity solution to the HJB equation .
We extend the stochastic Perron method to analyze the framework of stochastic target games , in which one player tries to find a strategy such that the state process almost surely reaches a given target no matter which action is chosen by the other player. Within this framework, our method produces a viscosity sub-solution (super-solution) of a Hamilton-Jacobi-Bellman (HJB) equation. We then characterize the value function as a viscosity solution to the HJB equation using a comparison result and a byproduct to obtain the dynamic programming principle .
[ { "type": "R", "before": "develop", "after": "extend", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "R", "before": "'s method (see e.g. arXiv: 1212.2170) in", "after": "method to analyze", "start_char_pos": 33, "end_char_pos": 73 }, { "type": "D", "before": "(arXiv: 1307.5606)", "after": null, "start_char_pos": 115, "end_char_pos": 133 }, { "type": "R", "before": "almost-surely", "after": "almost surely", "start_char_pos": 209, "end_char_pos": 222 }, { "type": "R", "before": "the stochastic Perron's", "after": "our", "start_char_pos": 323, "end_char_pos": 346 }, { "type": "R", "before": "Using a comparison result, we", "after": "We then", "start_char_pos": 450, "end_char_pos": 479 }, { "type": "A", "before": null, "after": "function", "start_char_pos": 503, "end_char_pos": 503 }, { "type": "A", "before": null, "after": "using a comparison result and a byproduct to obtain the dynamic programming principle", "start_char_pos": 548, "end_char_pos": 548 } ]
[ 0, 299, 449 ]
1409.0665
1
In this paper we propose a continuous time stochastic inventory model for a traded commodity whose supply purchase in the spot market is affected by price and demand uncertainty. A firm aims at meeting a random demand of the commodity at a random time by maximizing total expected profits. We model the firm's optimal procurement problem as a singular stochastic control problem in which a nondecreasing control policy represents the cumulative investment made by the firm in the spot market ( that is, a so-called stochastic "monotone follower problem"). We assume a general exponential L\'evy process for the commodity's spot price, contrary to the common use of a Brownian setting, and we model the holding cost by a general convex function . We obtain sufficient and necessary first order conditions for optimality and we provide the optimal procurement policy in terms of a "base inventory" process; that is, a minimal time-dependent desirable inventory level that the firm's manager must reach at any time. In the case of linear holding costs and exponentially distributed random demand, we are able to provide an explicit analytic solution . The paper is completed by some computer drawings showing the behaviour of the optimal inventory for spot prices given by a geometric Brownian motion , an exponential jump-diffusion , or an exponential Ornstein-Uhlenbeck process .
In this paper we study a continuous time stochastic inventory model for a commodity traded in the spot market and whose supply purchase is affected by price and demand uncertainty. A firm aims at meeting a random demand of the commodity at a random time by maximizing total expected profits. We model the firm's optimal procurement problem as a singular stochastic control problem in which controls are nondecreasing processes and represent the cumulative investment made by the firm in the spot market ( a so-called stochastic "monotone follower problem"). We assume a general exponential L\'evy process for the commodity's spot price, rather than the commonly used geometric Brownian motion, and general convex holding costs . We obtain necessary and sufficient first order conditions for optimality and we provide the optimal procurement policy in terms of a "base inventory" process; that is, a minimal time-dependent desirable inventory level that the firm's manager must reach at any time. In particular, in the case of linear holding costs and exponentially distributed demand, we are also able to obtain the explicit analytic form of the optimal policy and a probabilistic representation of the optimal revenue . The paper is completed by some computer drawings of the optimal inventory when spot prices are given by a geometric Brownian motion and by an exponential jump-diffusion process. In the first case we also make a numerical comparison between the value function and the revenue associated to the classical static "newsvendor" strategy .
[ { "type": "R", "before": "propose", "after": "study", "start_char_pos": 17, "end_char_pos": 24 }, { "type": "R", "before": "traded commodity whose supply purchase", "after": "commodity traded", "start_char_pos": 76, "end_char_pos": 114 }, { "type": "A", "before": null, "after": "and whose supply purchase", "start_char_pos": 134, "end_char_pos": 134 }, { "type": "R", "before": "a nondecreasing control policy represents", "after": "controls are nondecreasing processes and represent", "start_char_pos": 389, "end_char_pos": 430 }, { "type": "D", "before": "that is,", "after": null, "start_char_pos": 495, "end_char_pos": 503 }, { "type": "R", "before": "contrary to the common use of a Brownian setting, and we model the holding cost by a general convex function", "after": "rather than the commonly used geometric Brownian motion, and general convex holding costs", "start_char_pos": 636, "end_char_pos": 744 }, { "type": "R", "before": "sufficient and necessary", "after": "necessary and sufficient", "start_char_pos": 757, "end_char_pos": 781 }, { "type": "A", "before": null, "after": "particular, in", "start_char_pos": 1017, "end_char_pos": 1017 }, { "type": "D", "before": "random", "after": null, "start_char_pos": 1081, "end_char_pos": 1087 }, { "type": "R", "before": "able to provide an explicit analytic solution", "after": "also able to obtain the explicit analytic form of the optimal policy and a probabilistic representation of the optimal revenue", "start_char_pos": 1103, "end_char_pos": 1148 }, { "type": "D", "before": "showing the behaviour", "after": null, "start_char_pos": 1200, "end_char_pos": 1221 }, { "type": "R", "before": "for spot prices", "after": "when spot prices are", "start_char_pos": 1247, "end_char_pos": 1262 }, { "type": "R", "before": ",", "after": "and by", "start_char_pos": 1300, "end_char_pos": 1301 }, { "type": "R", "before": ", or an exponential Ornstein-Uhlenbeck process", "after": "process. In the first case we also make a numerical comparison between the value function and the revenue associated to the classical static \"newsvendor\" strategy", "start_char_pos": 1332, "end_char_pos": 1378 } ]
[ 0, 179, 290, 556, 746, 905, 1013, 1150 ]
1409.1071
1
A model of contagion propagation in the Russian interbank market based on the real data is developed .
Systemic risks of default contagion in the Russian interbank market are investigated. The analysis is based on considering the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion explicitly taking into account the empirical bow-tie structure reflecting functionality of the corresponding nodes (borrowers, lenders, borrowers and lenders simultaneously), degree distributions and disassortativity of the interbank network under consideration based on empirical data is developed . The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests .
[ { "type": "R", "before": "A model of contagion propagation", "after": "Systemic risks of default contagion", "start_char_pos": 0, "end_char_pos": 32 }, { "type": "R", "before": "based on the real", "after": "are investigated. The analysis is based on considering the bow-tie structure of the weighted oriented graph describing the structure of the interbank loans. A probabilistic model of interbank contagion explicitly taking into account the empirical bow-tie structure reflecting functionality of the corresponding nodes (borrowers, lenders, borrowers and lenders simultaneously), degree distributions and disassortativity of the interbank network under consideration based on empirical", "start_char_pos": 65, "end_char_pos": 82 }, { "type": "A", "before": null, "after": ". The characteristics of contagion-related systemic risk calculated with this model are shown to be in agreement with those of explicit stress tests", "start_char_pos": 101, "end_char_pos": 101 } ]
[ 0 ]
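For readers who want a feel for the contagion dynamics studied in the 1409.1071 abstracts above, here is a generic threshold-cascade sketch on a random directed exposure matrix. It is not the paper's probabilistic bow-tie model: the exposure matrix, capital buffers, recovery rate, and initial shock are all hypothetical choices made only for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy exposure matrix: L[i, j] = amount bank i is owed by bank j (hypothetical data).
n = 50
L = rng.pareto(2.0, size=(n, n)) * (rng.random((n, n)) < 0.08)
np.fill_diagonal(L, 0.0)

capital = 0.25 * L.sum(axis=1) + 1.0            # hypothetical capital buffers
recovery = 0.4                                  # assumed recovery rate on defaulted claims
defaulted = np.zeros(n, dtype=bool)
defaulted[rng.choice(n, size=2, replace=False)] = True   # initial exogenous defaults

# Iterate until no further bank's credit losses exceed its capital.
changed = True
while changed:
    losses = (1.0 - recovery) * L[:, defaulted].sum(axis=1)
    newly = (losses > capital) & ~defaulted
    changed = bool(newly.any())
    defaulted |= newly

print(f"defaulted banks after the cascade: {int(defaulted.sum())} of {n}")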
1409.1819
1
In this paper we study the structure of three types of biochemical networks: protein , metabolic, and gene expression networks, together with simulated archetypical networks acting as probes. We consider both classical topological descriptors, such as the modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the well-known discrete heat kernel. A principal component analysis shows high discrimination among the network types, either by considering the topological and heat kernel based characterizations. Furthermore, a canonical correlation analysis demonstrates the strong agreement among the two characterizations, providing an important justification in terms of interpretability for the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel provides us a way to yield insights on the fact that proteins have to satisfy specific structural design constraints that the other considered biochemical networks do not need to obey .
In this paper , we study the structure of three types of biochemical networks: protein contact networks, metabolic networks , and gene regulatory networks, together with simulated archetypal models acting as probes. We consider both classical topological descriptors, such as the modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the well-known discrete heat kernel. A principal component analysis shows high discrimination among the network types, either by considering the topological and heat kernel based characterizations. Furthermore, a canonical correlation analysis demonstrates the strong agreement among those two characterizations, providing thus an important justification in terms of interpretability for the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel provides a way to yield insights on the fact that proteins have to satisfy specific structural design constraints that the other considered biochemical networks do not need to obey . Notably, the heat trace decay of the protein ensemble denotes subdiffusion, a peculiar property of proteins .
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 14, "end_char_pos": 14 }, { "type": "A", "before": null, "after": "contact networks, metabolic networks", "start_char_pos": 86, "end_char_pos": 86 }, { "type": "D", "before": "metabolic,", "after": null, "start_char_pos": 89, "end_char_pos": 99 }, { "type": "R", "before": "expression", "after": "regulatory", "start_char_pos": 109, "end_char_pos": 119 }, { "type": "R", "before": "archetypical networks", "after": "archetypal models", "start_char_pos": 154, "end_char_pos": 175 }, { "type": "R", "before": "the", "after": "those", "start_char_pos": 655, "end_char_pos": 658 }, { "type": "A", "before": null, "after": "thus", "start_char_pos": 692, "end_char_pos": 692 }, { "type": "D", "before": "us", "after": null, "start_char_pos": 850, "end_char_pos": 852 }, { "type": "A", "before": null, "after": ". Notably, the heat trace decay of the protein ensemble denotes subdiffusion, a peculiar property of proteins", "start_char_pos": 1025, "end_char_pos": 1025 } ]
[ 0, 193, 407, 568, 769 ]
1409.1819
2
In this paper, we study the structure of three types of biochemical networks: protein contact networks , metabolic networks, and gene regulatory networks, together with simulated archetypal models acting as probes. We consider both classical topological descriptors, such as the modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the well-known discrete heat kernel . A principal component analysis shows high discrimination among the network types, either by considering the topological and heat kernel based characterizations. Furthermore, a canonical correlation analysis demonstrates the strong agreement among those two characterizations, providing thus an important justification in terms of interpretability for the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel provides a way to yield insights on the fact that proteins have to satisfy specific structural design constraints that the other considered biochemical networks do not need to obey. Notably, the heat trace decay of the protein ensemble denotes subdiffusion, a peculiar property of proteins.
In this paper, we study the structure and dynamical properties of protein contact networks with respect to other biological networks, together with simulated archetypal models acting as probes. We consider both classical topological descriptors, such as the modularity and statistics of the shortest paths, and different interpretations in terms of diffusion provided by the discrete heat kernel , which is elaborated from the normalized graph Laplacians . A principal component analysis shows high discrimination among the network types, either by considering the topological and heat kernel based vector characterizations. Furthermore, a canonical correlation analysis demonstrates the strong agreement among those two characterizations, providing thus an important justification in terms of interpretability for the heat kernel. Finally, and most importantly, the focused analysis of the heat kernel provides a way to yield insights on the fact that proteins have to satisfy specific structural design constraints that the other considered networks do not need to obey. Notably, the heat trace decay of an ensemble of varying-size proteins denotes subdiffusion, a peculiar property of proteins.
[ { "type": "R", "before": "of three types of biochemical networks:", "after": "and dynamical properties of", "start_char_pos": 38, "end_char_pos": 77 }, { "type": "R", "before": ", metabolic networks, and gene regulatory networks,", "after": "with respect to other biological networks,", "start_char_pos": 103, "end_char_pos": 154 }, { "type": "D", "before": "well-known", "after": null, "start_char_pos": 396, "end_char_pos": 406 }, { "type": "A", "before": null, "after": ", which is elaborated from the normalized graph Laplacians", "start_char_pos": 428, "end_char_pos": 428 }, { "type": "A", "before": null, "after": "vector", "start_char_pos": 573, "end_char_pos": 573 }, { "type": "D", "before": "biochemical", "after": null, "start_char_pos": 1011, "end_char_pos": 1022 }, { "type": "R", "before": "the protein ensemble", "after": "an ensemble of varying-size proteins", "start_char_pos": 1086, "end_char_pos": 1106 } ]
[ 0, 214, 592, 799, 1052 ]
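The two 1409.1819 abstracts above rely on the discrete heat kernel exp(-t L) built from the normalized graph Laplacian L. The short Python sketch below computes the heat-kernel trace over a grid of times and reads off an effective decay exponent, the kind of quantity behind the subdiffusion remark. The test graph and the fitting window are illustrative assumptions, not the paper's protein contact networks.

import numpy as np
import networkx as nx

def heat_trace(G, times):
    # Tr exp(-t L_norm) computed from the eigenvalues of the normalized Laplacian.
    L = nx.normalized_laplacian_matrix(G).toarray()
    eig = np.linalg.eigvalsh(L)
    return np.array([np.exp(-t * eig).sum() for t in times])

# Stand-in graph; the paper uses protein contact networks and other biological networks.
G = nx.watts_strogatz_graph(300, 6, 0.1, seed=0)
times = np.logspace(-1.0, 2.0, 20)
Z = heat_trace(G, times)

# Effective decay exponent of the normalized heat trace at intermediate times;
# its value characterizes how diffusively or subdiffusively heat spreads on the graph.
window = slice(5, 15)
slope = np.polyfit(np.log(times[window]), np.log(Z[window] / G.number_of_nodes()), 1)[0]
print("effective heat-trace decay exponent:", round(float(slope), 3))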
1409.2205
1
How to induce differentiated cells into pluripotent cells has elicited researchers' interests for a long time since pluripotent stem cells are able to offer remarkable potential in numerous subfields of biological research. However, the nature of cell reprogramming, especially the mechanisms still remain elusive for the sake of most protocols of inducing pluripotent stem cells were discovered by screening but not from the knowledge of gene regulation networks. Generally there are two hypotheses to elucidate the mechanism termed as elite model and stochastic model which regard reprogramming process a deterministic process or a stochastic process, respectively. However, the difference between these two models cannot yet be discriminated experimentally. Here we use a general mathematical model to elucidate the nature of cell reprogramming which can fit both hypotheses. We investigate this process from a novel perspective, the timing. We calculate the time of reprogramming in a general way and find that noise would play a significant role if the stochastic hypothesis holds. Thus the two hypotheses may be discriminated experimentally by counting the time of reprogramming in different magnitudes of noise. Because our approach is general, our results should facilitate broad studies of rational design of cell reprogramming protocols.
How to induce differentiated cells into pluripotent cells has elicited researchers' interests for a long time since pluripotent stem cells are able to offer remarkable potential in numerous subfields of biological research. However, the nature of cell reprogramming, especially the mechanisms still remain elusive for the sake of most protocols of inducing pluripotent stem cells were discovered by screening but not from the knowledge of gene regulation networks. Generally there are two hypotheses to elucidate the mechanism termed as elite model and stochastic model which regard reprogramming process a deterministic process or a stochastic process, respectively. However, the difference between these two models cannot yet be discriminated experimentally. Here we used a general mathematical model to elucidate the nature of cell reprogramming which can fit both hypotheses. We investigated this process from a novel perspective, the timing. We calculated the time of reprogramming in a general way and find that noise would play a significant role if the stochastic hypothesis holds. Thus the two hypotheses may be discriminated experimentally by counting the time of reprogramming in different magnitudes of noise. Because our approach is general, our results should facilitate broad studies of rational design of cell reprogramming protocols.
[ { "type": "R", "before": "use", "after": "used", "start_char_pos": 769, "end_char_pos": 772 }, { "type": "R", "before": "investigate", "after": "investigated", "start_char_pos": 882, "end_char_pos": 893 }, { "type": "R", "before": "calculate", "after": "calculated", "start_char_pos": 948, "end_char_pos": 957 } ]
[ 0, 223, 464, 667, 760, 878, 944, 1086, 1218 ]
1409.2625
1
We investigate the credit risk model previously introduced by Hatchett and K\"uhn under more general assumptions . We show that the model is exactly solvable in the N\rightarrow \infty limit and that the exact solution is described by a message-passing approach outlined by Karrer and Newman, generalized to include heterogeneous agents and couplings. We provide comparisons with simulations in the case of a scale-free graph .
We investigate the credit risk model defined in Hatchett and K\"{u}hn under more general assumptions , in particular using a general degree distribution for sparse graphs. Expanding upon earlier results, we show that the model is exactly solvable in the N\rightarrow \infty limit and demonstrate that the exact solution is described by the message-passing approach outlined by Karrer and Newman, generalized to include heterogeneous agents and couplings. We provide comparisons with simulations of graph ensembles with power-law degree distributions .
[ { "type": "R", "before": "previously introduced by Hatchett and K\\\"uhn", "after": "defined in Hatchett", "start_char_pos": 37, "end_char_pos": 81 }, { "type": "A", "before": null, "after": "K\\\"{u", "start_char_pos": 82, "end_char_pos": 82 }, { "type": "R", "before": ". We", "after": ", in particular using a general degree distribution for sparse graphs. Expanding upon earlier results, we", "start_char_pos": 114, "end_char_pos": 118 }, { "type": "A", "before": null, "after": "demonstrate", "start_char_pos": 196, "end_char_pos": 196 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 237, "end_char_pos": 238 }, { "type": "R", "before": "in the case of a scale-free graph", "after": "of graph ensembles with power-law degree distributions", "start_char_pos": 394, "end_char_pos": 427 } ]
[ 0, 115, 353 ]
1409.4595
1
The whole body of experimental and empirical observations on cell metabolism cannot be understood without their integration into a consistent systematic framework. However, the characterization of metabolic flux phenotypes under a certain environmental condition is typically reduced to the study of a single optimal state, like maximum biomass yield that is by far the most common assumption. Contrarily, here we confront optimal growth solutions to the whole set of feasible flux phenotypes (FFP), which provides a reference map that helps us to assess the likelihood of extreme and high-growth states and their extent of conformity with experimental results. In addition, FFP maps are able to uncover metabolic behaviors, such as aerobic glycolysis in high-growth minimal medium with unlimited oxygen uptake , that are unreachable using models based on optimality principles. The information content of the full FFP space of metabolic states provides us with an entire map to explore and evaluate metabolic behavior and capabilities, and so it opens new avenues for biotechnological and biomedical applications.
Experimental and empirical observations on cell metabolism cannot be understood as a whole without their integration into a consistent systematic framework. However, the characterization of metabolic flux phenotypes is typically reduced to the study of a single optimal state, like maximum biomass yield that is by far the most common assumption. Here we confront optimal growth solutions to the whole set of feasible flux phenotypes (FFP), which provides a benchmark to assess the likelihood of optimal and high-growth states and their agreement with experimental results. In addition, FFP maps are able to uncover metabolic behaviors, such as aerobic fermentation accompanying exponential growth on sugars at nutrient excess conditions , that are unreachable using standard models based on optimality principles. The information content of the full FFP space provides us with a map to explore and evaluate metabolic behavior and capabilities, and so it opens new avenues for biotechnological and biomedical applications.
[ { "type": "R", "before": "The whole body of experimental", "after": "Experimental", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "A", "before": null, "after": "as a whole", "start_char_pos": 98, "end_char_pos": 98 }, { "type": "D", "before": "under a certain environmental condition", "after": null, "start_char_pos": 224, "end_char_pos": 263 }, { "type": "R", "before": "Contrarily, here", "after": "Here", "start_char_pos": 395, "end_char_pos": 411 }, { "type": "R", "before": "reference map that helps us", "after": "bechmark", "start_char_pos": 518, "end_char_pos": 545 }, { "type": "R", "before": "extreme", "after": "optimal", "start_char_pos": 574, "end_char_pos": 581 }, { "type": "R", "before": "extent of conformity", "after": "agreement", "start_char_pos": 615, "end_char_pos": 635 }, { "type": "R", "before": "glycolysis in high-growth minimal medium with unlimited oxygen uptake", "after": "fermentation accompanying exponential growth on sugars at nutrient excess conditions", "start_char_pos": 742, "end_char_pos": 811 }, { "type": "A", "before": null, "after": "standard", "start_char_pos": 841, "end_char_pos": 841 }, { "type": "D", "before": "of metabolic states", "after": null, "start_char_pos": 927, "end_char_pos": 946 }, { "type": "R", "before": "an entire", "after": "a", "start_char_pos": 964, "end_char_pos": 973 } ]
[ 0, 164, 394, 662, 880 ]
1409.6193
1
A fundamental problem in studying and modeling economic and financial systems is represented by privacy issues, which put severe limitations on the amount of accessible information. Here we investigate a novel method to reconstruct the structural properties of complex weighted networks using only partial information: the total number of nodes and links, and the values of the strength for all nodes. The latter are used first as fitness to estimate the unknown node degrees through a standard configuration model ; then, degrees and strengths are employed to calibrate an enhanced configuration model in order to generate ensembles of networks intended to represent the real system. The method, which is tested on the World Trade Web , while drastically reducing the amount of information needed to infer network properties, turns out to be remarkably effective-thus representing a valuable tool for gaining insights on privacy-protected socioeconomic networks .
A fundamental problem in studying and modeling economic and financial systems is represented by privacy issues, which put severe limitations on the amount of accessible information. Here we introduce a novel, highly nontrivial method to reconstruct the structural properties of complex weighted networks of this kind using only partial information: the total number of nodes and links, and the values of the strength for all nodes. The latter are used as fitness to estimate the unknown node degrees through a standard configuration model . Then, these estimated degrees and the strengths are used to calibrate an enhanced configuration model in order to generate ensembles of networks intended to represent the real system. The method, which is tested on real economic and financial networks , while drastically reducing the amount of information needed to infer network properties, turns out to be remarkably effective-thus representing a valuable tool for gaining insights on privacy-protected socioeconomic systems .
[ { "type": "R", "before": "investigate a novel", "after": "introduce a novel, highly nontrivial", "start_char_pos": 190, "end_char_pos": 209 }, { "type": "A", "before": null, "after": "of this kind", "start_char_pos": 287, "end_char_pos": 287 }, { "type": "D", "before": "first", "after": null, "start_char_pos": 423, "end_char_pos": 428 }, { "type": "R", "before": "; then, degrees and strengths are employed", "after": ". Then, these estimated degrees and the strengths are used", "start_char_pos": 516, "end_char_pos": 558 }, { "type": "R", "before": "the World Trade Web", "after": "real economic and financial networks", "start_char_pos": 717, "end_char_pos": 736 }, { "type": "R", "before": "networks", "after": "systems", "start_char_pos": 955, "end_char_pos": 963 } ]
[ 0, 181, 402, 517, 685 ]
1409.6444
1
We focus on emergence of the power-law cross-correlations from processes with both short and long term memory properties. In the case of correlated error-terms, the power-law decay of the cross-correlation function comes automatically with the characteristics of the separate processes. The bivariate Hurst exponent is then equal to an average of the separate Hurst exponents of the analysed processes. Strength of the short term memory has no effect on these asymptotic properties .
We focus on emergence of the power-law cross-correlations from processes with both short and long term memory properties. In the case of correlated error-terms, the power-law decay of the cross-correlation function comes automatically with the characteristics of separate processes. Bivariate Hurst exponent is then equal to an average of separate Hurst exponents of the analyzed processes. Strength of short term memory has no effect on these asymptotic properties . Implications of these findings for the power-law cross-correlations concept are further discussed .
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 263, "end_char_pos": 266 }, { "type": "R", "before": "The bivariate", "after": "Bivariate", "start_char_pos": 287, "end_char_pos": 300 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 347, "end_char_pos": 350 }, { "type": "R", "before": "analysed", "after": "analyzed", "start_char_pos": 383, "end_char_pos": 391 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 415, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ". Implications of these findings for the power-law cross-correlations concept are further discussed", "start_char_pos": 482, "end_char_pos": 482 } ]
[ 0, 121, 286, 402 ]
1409.6775
1
This paper presents a queueing-network approach to the analysis and control of mobility-on-demand (MoD) systems for urban personal transportation. A MoD system consists of a fleet of vehicles providing one-way carsharing service and a team of drivers to rebalance such vehicles. The drivers then rebalance themselves by driving select customers similar to a taxi service. We model the MoD system as two coupled closed Jackson networks with passenger loss. We show that the system can be approximately balanced by solving two decoupled linear programs and exactly balanced through nonlinear optimization. The rebalancing techniques are applied to a fleet sizing example using taxi data in three neighborhoods of Manhattan, which suggests that the optimal vehicle-to-driver ratio in a MoD system is between 3 and 5. Lastly, we formulate a real-time closed loop rebalancing policy for drivers and demonstrate its stability (in terms of customer waiting times) for typical system loads.
This paper presents a queueing network approach to the analysis and control of mobility-on-demand (MoD) systems for urban personal transportation. A MoD system consists of a fleet of vehicles providing one-way car sharing service and a team of drivers to rebalance such vehicles. The drivers then rebalance themselves by driving select customers similar to a taxi service. We model the MoD system as two coupled closed Jackson networks with passenger loss. We show that the system can be approximately balanced by solving two decoupled linear programs and exactly balanced through nonlinear optimization. The rebalancing techniques are applied to a system sizing example using taxi data in three neighborhoods of Manhattan, which suggests that the optimal vehicle-to-driver ratio in a MoD system is between 3 and 5. Lastly, we formulate a real-time closed-loop rebalancing policy for drivers and demonstrate its stability (in terms of customer wait times) for typical system loads.
[ { "type": "R", "before": "queueing-network", "after": "queueing network", "start_char_pos": 22, "end_char_pos": 38 }, { "type": "R", "before": "carsharing", "after": "car sharing", "start_char_pos": 210, "end_char_pos": 220 }, { "type": "R", "before": "fleet", "after": "system", "start_char_pos": 648, "end_char_pos": 653 }, { "type": "R", "before": "closed loop", "after": "closed-loop", "start_char_pos": 847, "end_char_pos": 858 }, { "type": "R", "before": "waiting", "after": "wait", "start_char_pos": 942, "end_char_pos": 949 } ]
[ 0, 146, 278, 371, 455, 603 ]
1409.6789
1
Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy of protein complexes. In addition to increasing the detective quantum efficiency with which images can be recorded, acquisition of DDD movies during exposures allows for correction of movement of the specimen, due both to instabilities in the specimen stage of the microscope and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement of entire frames or large regions of frames at exposures of 2-3 e^{-}/pixel/frame. Other algorithms allow individual particles in small regions of frames to be aligned, but require rolling averages to be calculated from frames and fit linear trajectories for particles. Here we describe an algorithm that allows for individual < 1 MDa particle images to be aligned without frame averaging when imaged with 2.5 e^{-}/pixel/frame and without requiring particle trajectories in movies to be linear. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. Two additional measures are proposed to smooth estimates of particle trajectories. First, rapid changes in particle positions between frames are penalized. Second, weighted averaging of nearby trajectories ensures local correlation in trajectories. DDD movies of the Saccharomyces cerevisiae V-ATPase are used to demonstrate that the algorithm is able to produce physically reasonable trajectories for a 900 kDa membrane protein complex.
Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM) of protein complexes. In addition to increasing the detective quantum efficiency with which images can be recorded, acquisition of DDD movies during exposures allows for correction of movement of the specimen, due both to instabilities in the specimen stage of the microscope and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement of entire frames or large regions of frames at exposures of 2-3 e^{-}/pixel/frame. Other algorithms allow individual particles in small regions of frames to be aligned, but require rolling averages to be calculated from frames and fit linear trajectories for particles. Here we describe an algorithm that allows for individual < 1 MDa particle images to be aligned without frame averaging when imaged with 2.5 e^{-}/pixel/frame and without requiring particle trajectories in movies to be linear. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. Two additional measures are implemented to smooth estimates of particle trajectories. First, rapid changes in particle positions between frames are penalized. Second, weighted averaging of nearby trajectories ensures local correlation in trajectories. DDD movies of the Saccharomyces cerevisiae V-ATPase are used to demonstrate that the algorithm is able to produce physically reasonable trajectories for a 900 kDa membrane protein complex.
[ { "type": "A", "before": null, "after": "(cryo-EM)", "start_char_pos": 97, "end_char_pos": 97 }, { "type": "R", "before": "proposed", "after": "implemented", "start_char_pos": 1468, "end_char_pos": 1476 } ]
[ 0, 119, 409, 503, 635, 783, 970, 1196, 1302, 1439, 1522, 1595, 1688 ]
1409.7028
1
In this paper we provide a unified and flexible framework for study of the time consistency of risk and performance measures . The proposed framework integrates existing forms of time consistency as well as various connections between them . In our approach the time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. This makes our framework fairly general. The time consistency is defined in terms of an update rule -- a novel notion introduced in this paper. We design various updates rules that allow to recover several known forms of time consistency, and to study some new forms of time consistency .
In this paper we provide a flexible framework allowing for a unified study of time consistency of risk measures and performance measures , also known as acceptability indices . The proposed framework integrates existing forms of time consistency . In our approach the time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. The time consistency is defined in terms of an update rule -- a novel notion introduced in this paper. As an illustration of the usefulness of our approach, we show how to recover almost all concepts of weak time consistency by means of constructing various update rules .
[ { "type": "R", "before": "unified and flexible framework for study of the", "after": "flexible framework allowing for a unified study of", "start_char_pos": 27, "end_char_pos": 74 }, { "type": "A", "before": null, "after": "measures", "start_char_pos": 100, "end_char_pos": 100 }, { "type": "A", "before": null, "after": ", also known as acceptability indices", "start_char_pos": 126, "end_char_pos": 126 }, { "type": "D", "before": "as well as various connections between them", "after": null, "start_char_pos": 198, "end_char_pos": 241 }, { "type": "D", "before": "This makes our framework fairly general.", "after": null, "start_char_pos": 399, "end_char_pos": 439 }, { "type": "R", "before": "We design various updates rules that allow to recover several known forms of time consistency, and to study some new forms of time consistency", "after": "As an illustration of the usefulness of our approach, we show how to recover almost all concepts of weak time consistency by means of constructing various update rules", "start_char_pos": 543, "end_char_pos": 685 } ]
[ 0, 128, 243, 398, 439, 542 ]
1409.7028
2
In this paper we provide a flexible framework allowing for a unified study of time consistency of risk measures and performance measures , also known as acceptability indices . The proposed framework integrates existing forms of time consistency . In ] our approach the time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. The time consistency is defined in terms of an update rule -- a novel notion introduced in this paper. As an illustration of the usefulness of our approach, we show how to recover almost all concepts of weak time consistency by means of constructing various update rules.
In this paper we provide a flexible framework allowing for a unified study of time consistency of risk measures and performance measures ( also known as acceptability indices ) . The proposed framework not only integrates existing forms of time consistency , but also provides a comprehensive toolbox for analysis and synthesis of the concept of time consistency in decision making. In particular, it allows for in-depth comparative analysis of (most of) the existing types of time consistency -- a feat that has not been possible before and which is done in the companion paper [BCP2016] to this one. In our approach the time consistency is studied for a large class of maps that are postulated to satisfy only two properties -- monotonicity and locality. The time consistency is defined in terms of an update rule . The form of the update rule introduced here is novel, and is perfectly suited for developing the unifying framework that is worked out in this paper. As an illustration of the applicability of our approach, we show how to recover almost all concepts of weak time consistency by means of constructing appropriate update rules.
[ { "type": "R", "before": ",", "after": "(", "start_char_pos": 137, "end_char_pos": 138 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 175, "end_char_pos": 175 }, { "type": "A", "before": null, "after": "not only", "start_char_pos": 201, "end_char_pos": 201 }, { "type": "R", "before": ". In", "after": ", but also provides a comprehensive toolbox for analysis and synthesis of the concept of time consistency in decision making. In particular, it allows for in depth comparative analysis of (most of) the existing types of time consistency -- a feat that has not be possible before and which is done in the companion paper", "start_char_pos": 248, "end_char_pos": 252 }, { "type": "A", "before": null, "after": "BCP2016", "start_char_pos": 253, "end_char_pos": 253 }, { "type": "A", "before": null, "after": "to this one. In", "start_char_pos": 255, "end_char_pos": 255 }, { "type": "R", "before": "-- a novel notion introduced", "after": ". The form of the update rule introduced here is novel, and is perfectly suited for developing the unifying framework that is worked out", "start_char_pos": 467, "end_char_pos": 495 }, { "type": "R", "before": "usefulness", "after": "applicability", "start_char_pos": 537, "end_char_pos": 547 }, { "type": "R", "before": "various", "after": "appropriate", "start_char_pos": 658, "end_char_pos": 665 } ]
[ 0, 177, 249, 407, 510 ]
1409.7720
1
We present extensive evidence that "risk premium" is strongly correlated with tail-risk skewness but very little with volatility. We introduce a new, intuitive definition of skewness and elicit a linear relation between the Sharpe ratio of various risk premium strategies (Equity, Fama-French, FX Carry, Short Vol, Bonds, Credit) and their negative skewness. We find a clear exception to this rule: trend following (and perhaps the Fama-French "High minus Low"), that has positive skewness and positive excess returns , suggesting that some strategies are not risk premia but genuine market anomalies. Based on our results, we propose an objective criterion to assess the quality of a risk-premium portfolio.
We present extensive evidence that ``risk premium'' is strongly correlated with tail-risk skewness but very little with volatility. We introduce a new, intuitive definition of skewness and elicit an approximately linear relation between the Sharpe ratio of various risk premium strategies (Equity, Fama-French, FX Carry, Short Vol, Bonds, Credit) and their negative skewness. We find a clear exception to this rule: trend following has both positive skewness and positive excess returns . This is also true, albeit less markedly, of the Fama-French ``Value'' factor and of the ``Low Volatility'' strategy. This suggests that some strategies are not risk premia but genuine market anomalies. Based on our results, we propose an objective criterion to assess the quality of a risk-premium portfolio.
[ { "type": "R", "before": "\"risk premium\"", "after": "``risk premium''", "start_char_pos": 35, "end_char_pos": 49 }, { "type": "R", "before": "a", "after": "an approximately", "start_char_pos": 194, "end_char_pos": 195 }, { "type": "R", "before": "(and perhaps the Fama-French \"High minus Low\"), that has", "after": "has both", "start_char_pos": 415, "end_char_pos": 471 }, { "type": "R", "before": ", suggesting", "after": ". This is also true, albeit less markedly, of the Fama-French ``Value'' factor and of the ``Low Volatility'' strategy. This suggests", "start_char_pos": 518, "end_char_pos": 530 } ]
[ 0, 129, 358, 601 ]
1409.7960
1
For \alpha\in (1,2), we present a generalized central limit theorem for \alpha-stable random variables under sublinear expectation. The foundation of our proof is an interior regularity estimate for partial integro-differential equations (PIDEs). A classical generalized central limit theorem is recovered as a special case, provided a mild but natural additional condition holds. Our approach contrasts with previous arguments for the result in the linear setting which have typically relied upon tools that are nonexistent in the sublinear framework, e.g. , characteristic functions.
For \alpha\in (1,2), we present a generalized central limit theorem for \alpha-stable random variables under sublinear expectation. The foundation of our proof is an interior regularity estimate for partial integro-differential equations (PIDEs). A classical generalized central limit theorem is recovered as a special case, provided a mild but natural additional condition holds. Our approach contrasts with previous arguments for the result in the linear setting which have typically relied upon tools that are non-existent in the sublinear framework, for example , characteristic functions.
[ { "type": "R", "before": "nonexistent", "after": "non-existent", "start_char_pos": 513, "end_char_pos": 524 }, { "type": "R", "before": "e.g.", "after": "for example", "start_char_pos": 553, "end_char_pos": 557 } ]
[ 0, 131, 246, 380 ]
1410.0384
1
We study utility indifference prices and optimal purchasing quantities for a non-traded contingent claim in an incomplete semi-martingale market with vanishing hedging errors, making connections with the theory of large deviations. We concentrate on sequences of semi-complete markets where for each n the claim h _n admits the decomposition h _n = D_n+Y_n where D_n is replicable and Y_n is completely unhedgeable in that the indifference price of Y_n for an exponential investor is its certainty equivalent. Under broad conditions, we may assume that Y_n vanishes in accordance with a large deviations principle as n grows. In this setting, we identify limiting indifference prices as the position size becomes large, and show the prices typically are not the unique arbitrage free price in the limiting market. Furthermore, we show that optimal purchase quantities occur at the large deviations scaling, and hence large positions endogenously arise in this setting.
We study utility indifference prices and optimal purchasing quantities for a non-traded contingent claim in an incomplete semi-martingale market with vanishing hedging errors, making connections with the theory of large deviations. We concentrate on sequences of semi-complete markets where for each n the claim B _n admits the decomposition B _n = D_n+Y_n where D_n is replicable and Y_n is completely unhedgeable in that the indifference price of Y_n for an exponential investor is its certainty equivalent. Under broad conditions, we may assume that Y_n vanishes in accordance with a large deviations principle as n grows. In this setting, we identify limiting indifference prices as the position size becomes large, and show the prices typically are not the unique arbitrage free price in the limiting market. Furthermore, we show that optimal purchase quantities occur at the large deviations scaling, and hence large positions endogenously arise in this setting.
[ { "type": "R", "before": "h", "after": "B", "start_char_pos": 312, "end_char_pos": 313 }, { "type": "R", "before": "h", "after": "B", "start_char_pos": 342, "end_char_pos": 343 } ]
[ 0, 231, 509, 625, 813 ]
1410.0384
2
We study utility indifference prices and optimal purchasing quantities for a non-traded contingent claim in an incomplete semi-martingale market with vanishing hedging errors , making connections with the theory of large deviations. We concentrate on sequences of semi-complete markets where for each n the claim B_n admits the decomposition B_n = D_n+Y_n where D_n is replicable and Y_n is completely unhedgeable in that the indifference price of Y_n for an exponential investor is its certainty equivalent . Under broad conditions, we may assume that Y_n vanishes in accordance with a large deviations principle as n grows. In this setting, we identify limiting indifference prices as the position size becomes large, and show the prices typically are not the unique arbitrage free price in the limiting market . Furthermore, we show that optimal purchase quantities occur at the large deviations scaling, and hence large positions endogenously arise in this setting.
We study utility indifference prices and optimal purchasing quantities for a non-traded contingent claim in an incomplete semi-martingale market with vanishing hedging errors . We make connections with the theory of large deviations. We concentrate on sequences of semi-complete markets where in the n^{th} market the claim B_n admits the decomposition B_n = D_n+Y_n . Here, D_n is replicable by trading in the underlying assets S_n, but Y_n is independent of S_n . Under broad conditions, we may assume that Y_n vanishes in accordance with a large deviations principle as n grows. In this setting, for an exponential investor, we identify the limit of the average indifference price p_n(q_n), for q_n units of B_n, as n\rightarrow \infty. We show that if |q_n|\rightarrow\infty, the limiting price typically differs from the price obtained by assuming bounded positions \sup_n|q_n|<\infty, and the difference is explicitly identifiable using large deviations theory . Furthermore, we show that optimal purchase quantities occur at the large deviations scaling, and hence large positions arise endogenously in this setting.
[ { "type": "R", "before": ", making", "after": ". We make", "start_char_pos": 175, "end_char_pos": 183 }, { "type": "R", "before": "for each n", "after": "in the n^{th", "start_char_pos": 292, "end_char_pos": 302 }, { "type": "R", "before": "where", "after": ". Here,", "start_char_pos": 356, "end_char_pos": 361 }, { "type": "R", "before": "and Y_n is completely unhedgeable in that the indifference price of Y_n for an exponential investor is its certainty equivalent", "after": "by trading in the underlying assets S_n, but Y_n is independent of S_n", "start_char_pos": 380, "end_char_pos": 507 }, { "type": "R", "before": "we identify limiting indifference prices as the position size becomes large, and show", "after": "for an exponential investor, we identify", "start_char_pos": 643, "end_char_pos": 728 }, { "type": "R", "before": "prices typically are not the unique arbitrage free price in the limiting market", "after": "limit of the average indifference price p_n(q_n), for q_n units of B_n, as n\\rightarrow \\infty. We show that if |q_n|\\rightarrow\\infty, the limiting price typically differs from the price obtained by assuming bounded positions \\sup_n|q_n|<\\infty, and the difference is explicitly identifiable using large deviations theory", "start_char_pos": 733, "end_char_pos": 812 }, { "type": "R", "before": "endogenously arise", "after": "arise endogenously", "start_char_pos": 934, "end_char_pos": 952 } ]
[ 0, 232, 509, 625, 814 ]
1410.0628
1
This paper introduces a new global analytical model of the heat dissipation process that occurs in passively-cooled embedded systems , and explicits under what circumstances the folklore assumption that exponential cooling laws apply in such context is valid. Since the power consumption and reliability of microprocessors are highly dependent on temperature, both designers and, later on, run-time temperature management units must be able rely upon accurate heating and cooling modelsto handle heat generation and peak temperature. If exponential cooling models are justified for actively-cooled microprocessors, e.g., by forced air or water cooling, for passively cooled processors however, as frequently found in embedded systemssuch as mobile phones, an exponential law may not be theoretically justified . Here, we analyzed the tractability of the exact cooling law for a passively cooled body, subject to radiative cooling and a modest level of heat loss via convection . Focusing then on embedded microprocessors , we compare the performance difference between our new passive cooling law and the conventionally-used exponential one. We show that the differences between the exact solution and the exponential cooling law are not significant, and even negligible, for small surfaces of the order 10cm^2 . However, for larger surface sizes, the radiative cooling component may become comparable to the convective cooling component. Our results thus suggest that, in the absence of accurate temperature measurements, an exponential cooling law is accurate enough for small-sized SoC systems that require low processing overhead.
A new global analytical model of the heat dissipation process that occurs in passively-cooled embedded systems is introduced, and we explicit under what circumstances the traditional assumption that exponential cooling laws apply in such context is valid. Since the power consumption and reliability of microprocessors are highly dependent on temperature, management units need accurate thermal models. Exponential cooling models are justified for actively-cooled systems . Here, we analyze the tractability of the cooling law for a passively cooled body, subject to radiative and convective cooling, including internal heat generation . Focusing then on embedded system-like objects , we compare the performance difference between our new passive cooling law and the conventionally-used exponential one. We show that , for quasi isothermal cooling surfaces of the order of 1\,dm^2 or greater, the radiative cooling effect may become comparable to the convective cooling one. In other words, radiation becomes non-negligible for systems with a cooling surface larger than about 1\,dm^2. Otherwise for surfaces below 1\,dm^2, we show that the differences between the exact solution and the exponential cooling law becomes negligible. In the absence of accurate temperature measurements, an exponential cooling model is shown to be accurate enough for systems, such as small-sized SoCs, that require low processing overhead.
[ { "type": "R", "before": "This paper introduces a", "after": "A", "start_char_pos": 0, "end_char_pos": 23 }, { "type": "R", "before": ", and explicits", "after": "is introduced, and we explicit", "start_char_pos": 133, "end_char_pos": 148 }, { "type": "R", "before": "folklore", "after": "traditional", "start_char_pos": 178, "end_char_pos": 186 }, { "type": "R", "before": "both designers and, later on, run-time temperature management units must be able rely upon accurate heating and cooling modelsto handle heat generation and peak temperature. If exponential", "after": "management units need accurate thermal models. Exponential", "start_char_pos": 360, "end_char_pos": 548 }, { "type": "R", "before": "microprocessors, e.g., by forced air or water cooling, for passively cooled processors however, as frequently found in embedded systemssuch as mobile phones, an exponential law may not be theoretically justified", "after": "systems", "start_char_pos": 598, "end_char_pos": 809 }, { "type": "R", "before": "analyzed", "after": "analyze", "start_char_pos": 821, "end_char_pos": 829 }, { "type": "D", "before": "exact", "after": null, "start_char_pos": 854, "end_char_pos": 859 }, { "type": "R", "before": "cooling and a modest level of heat loss via convection", "after": "and convective cooling, including internal heat generation", "start_char_pos": 922, "end_char_pos": 976 }, { "type": "R", "before": "microprocessors", "after": "system-like objects", "start_char_pos": 1005, "end_char_pos": 1020 }, { "type": "R", "before": "the differences between the exact solution and the exponential cooling law are not significant, and even negligible, for small", "after": ", for quasi isothermal cooling", "start_char_pos": 1155, "end_char_pos": 1281 }, { "type": "R", "before": "10cm^2 . However, for larger surface sizes,", "after": "of 1\\,dm^2 or greater,", "start_char_pos": 1304, "end_char_pos": 1347 }, { "type": "R", "before": "component", "after": "effect", "start_char_pos": 1370, "end_char_pos": 1379 }, { "type": "R", "before": "component. Our results thus suggest that, in the", "after": "one. In other words, radiation becomes non-negligible for systems with a cooling surface larger than about 1\\,dm^2. Otherwise for surfaces below 1\\,dm^2, we show that the differences between the exact solution and the exponential cooling law becomes negligible. In the", "start_char_pos": 1428, "end_char_pos": 1476 }, { "type": "R", "before": "law is", "after": "model is shown to be", "start_char_pos": 1546, "end_char_pos": 1552 }, { "type": "A", "before": null, "after": "systems, such as", "start_char_pos": 1573, "end_char_pos": 1573 }, { "type": "R", "before": "SoC systems", "after": "SoCs,", "start_char_pos": 1586, "end_char_pos": 1597 } ]
[ 0, 259, 533, 811, 978, 1141, 1438 ]
1410.0946
1
In the framework of an incomplete financial market where the stock price dynamics are modeled by a continuous semimartingale , an explicit first-order expansion formula for the power investor's value function - seen as a function of the underlying market price of risk process - is provided and its second-order error is quantified . Two specific calibrated numerical examples illustrating the accuracy of the method are also given.
In the framework of an incomplete financial market where the stock price dynamics are modeled by a continuous semimartingale (not necessarily Markovian), an explicit second-order expansion formula for the power investor's value function - seen as a function of the underlying market price of risk process - is provided . This allows us to provide first-order approximations of the optimal primal and dual controls . Two specific calibrated numerical examples illustrating the accuracy of the method are also given.
[ { "type": "R", "before": ", an explicit first-order", "after": "(not necessarily Markovian) an explicit second-order", "start_char_pos": 125, "end_char_pos": 150 }, { "type": "R", "before": "and its second-order error is quantified", "after": ". This allows us to provide first-order approximations of the optimal primal and dual controls", "start_char_pos": 291, "end_char_pos": 331 } ]
[ 0, 333 ]
1410.2282
1
Recently, Ross suggested that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. This article extends his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. Many authors have proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is necessary and sufficient to recover. We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. We discuss what type of condition for the reference function is necessary and sufficient for recovery .
Recently, Ross suggested that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. This article extends his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. Many authors have proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is necessary and sufficient to recover. We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. A reference function will be used for empirical purposes when X is transient under the objective measure .
[ { "type": "R", "before": "We discuss what type of condition for the reference function is necessary and sufficient for recovery", "after": "A reference function will be used for empirical purposes when X is transient under the objective measure", "start_char_pos": 939, "end_char_pos": 1040 } ]
[ 0, 105, 211, 433, 521, 625, 763, 938 ]
1410.2282
2
Recently, Ross suggested that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. This article extends his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. Many authors have proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is necessary and sufficient to recover. We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. A reference function will be used for empirical purposes when X is transient under the objective measure.
Recently, Ross showed that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. Many authors extended his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. It was proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is necessary and sufficient to recover. We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. A reference function will be used for empirical purposes when X is transient under the objective measure.
[ { "type": "R", "before": "suggested", "after": "showed", "start_char_pos": 15, "end_char_pos": 24 }, { "type": "R", "before": "This article extends", "after": "Many authors extended", "start_char_pos": 212, "end_char_pos": 232 }, { "type": "R", "before": "Many authors have", "after": "It was", "start_char_pos": 522, "end_char_pos": 539 } ]
[ 0, 105, 211, 433, 521, 625, 763, 938 ]
1410.2282
3
Recently, Ross showed that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. Many authors extended his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. It was proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is necessary and sufficient to recover . We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. A reference function will be used for empirical purposes when X is transient under the objective measure .
Recently, Ross showed that it is possible to recover an objective measure from a risk-neutral measure. His model assumes that there is a finite-state Markov process X that drives the economy in discrete time. Many authors extended his model to a continuous-time setting with a Markov diffusion process X with state space R. Unfortunately, the continuous-time model fails to recover an objective measure from a risk-neutral measure. We determine under which information recovery is possible in the continuous-time model. It was proven that if X is recurrent under the objective measure, then recovery is possible. In this article, when X is transient under the objective measure, we investigate what information is sufficient to recover .
[ { "type": "D", "before": "necessary and", "after": null, "start_char_pos": 714, "end_char_pos": 727 }, { "type": "D", "before": ". We also introduce the notion of a reference function, which contains the information near the area where the process X lies with high probability under the objective measure. A reference function will be used for empirical purposes when X is transient under the objective measure", "after": null, "start_char_pos": 750, "end_char_pos": 1031 } ]
[ 0, 102, 208, 431, 519, 612, 751, 926 ]
1410.2494
1
Here we present ComPPI, a cellular compartment-specific database of proteins and their interactions enabling an extensive, compartmentalized protein-protein interaction network analysis (URL : URL ComPPI enables the user to filter biologically unlikely interactions, where the two interacting proteins have no common subcellular localizations and to predict novel properties, such as compartment-specific biological functions. ComPPI is an integrated database covering four species (S. cerevisiae, C. elegans, D. melanogaster and H. sapiens). The compilation of 9 protein-protein interaction and 8 subcellular localization datasets had 4 curation steps including a manually built, comprehensive hierarchical structure of more than 1,600 subcellular localizations. ComPPI provides confidence scores for protein subcellular localizations and protein-protein interactions. ComPPI has user-friendly search options for individual proteins giving their subcellular localization, their interactions and the likelihood of their interactions considering the subcellular localization of their interacting partners. Download options of search results, URLanelle-specific interactomes , and subcellular localization data are available on its website. Due to its novel features, ComPPI is useful for the analysis of experimental results in biochemistry and molecular biology, as well as for proteome-wide studies in bioinformatics and network science helping cellular biology, medicine and drug desig
Here we present ComPPI, a cellular compartment specific database of proteins and their interactions enabling an extensive, compartmentalized protein-protein interaction network analysis URL ComPPI enables the user to filter biologically unlikely interactions, where the two interacting proteins have no common subcellular localizations and to predict novel properties, such as compartment-specific biological functions. ComPPI is an integrated database covering four species (S. cerevisiae, C. elegans, D. melanogaster and H. sapiens). The compilation of nine protein-protein interaction and eight subcellular localization data sets had four curation steps including a manually built, comprehensive hierarchical structure of more than 1600 subcellular localizations. ComPPI provides confidence scores for protein subcellular localizations and protein-protein interactions. ComPPI has user-friendly search options for individual proteins giving their subcellular localization, their interactions and the likelihood of their interactions considering the subcellular localization of their interacting partners. Download options of search results, whole organelle-specific interactomes and subcellular localization data are available on its website. Due to its novel features, ComPPI is useful for the analysis of experimental results in biochemistry and molecular biology, as well as for proteome-wide studies in bioinformatics and network science helping cellular biology, medicine and drug design.
[ { "type": "R", "before": "compartment-specific", "after": "compartment specific", "start_char_pos": 35, "end_char_pos": 55 }, { "type": "R", "before": "(URL : URL", "after": "URL", "start_char_pos": 186, "end_char_pos": 196 }, { "type": "R", "before": "9", "after": "nine", "start_char_pos": 562, "end_char_pos": 563 }, { "type": "R", "before": "8 subcellular localization datasets had 4", "after": "eight subcellular localization data sets had four", "start_char_pos": 596, "end_char_pos": 637 }, { "type": "R", "before": "1,600", "after": "1600", "start_char_pos": 731, "end_char_pos": 736 }, { "type": "A", "before": null, "after": "whole", "start_char_pos": 1141, "end_char_pos": 1141 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1174, "end_char_pos": 1175 }, { "type": "R", "before": "desig", "after": "design.", "start_char_pos": 1483, "end_char_pos": 1488 } ]
[ 0, 426, 542, 763, 869, 1104, 1239 ]
1410.2570
1
This paper studies the problem of optimally allocating a cash injection into a financial system in distress. Given a one-period borrower-lender network in which all debts are due at the same time and have the same seniority, we address the problem of allocating a fixed amount of cash among the nodes to minimize the weighted sum of unpaid liabilities. Assuming all the loan amounts and asset values are fixed and that there are no bankruptcy costs, we show that this problem is equivalent to a linear program. We develop a duality-based distributed algorithm to solve it which is useful for applications where it is desirable to avoid centralized data gathering and computation. Since some applications require forecasting and planning for a wide variety of different contingencies, we also consider the problem of minimizing the expectation of the weighted sum of unpaid liabilities under the assumption that the net external asset holdings of all institutions are stochastic. We show that this problem is a two-stage stochastic linear program. To solve it, we develop two algorithms based on Monte Carlo sampling : Benders decomposition algorithm and projected stochastic gradient descent. We show that if the defaulting nodes never pay anything, the deterministic optimal cash injection allocation problem is an NP-hard mixed-integer linear program. However, modern optimization software enables the computation of very accurate solutions to this problem on a personal computer in a few seconds for network sizes comparable with the size of the US banking system. In addition, we address the problem of allocating the cash injection amount so as to minimize the number of nodes in default. For this problem, we develop a heuristic algorithm which uses reweighted l1 minimization . We show through numerical simulations that the solutions calculated by our algorithm are close to optimal .
This paper studies the problem of optimally allocating a cash injection into a financial system in distress. Given a one-period borrower-lender network in which all debts are due at the same time and have the same seniority, we address the problem of allocating a fixed amount of cash among the nodes to minimize the weighted sum of unpaid liabilities. Assuming all the loan amounts and asset values are fixed and that there are no bankruptcy costs, we show that this problem is equivalent to a linear program. We develop a duality-based distributed algorithm to solve it which is useful for applications where it is desirable to avoid centralized data gathering and computation. We also consider the problem of minimizing the expectation of the weighted sum of unpaid liabilities under the assumption that the net external asset holdings of all institutions are stochastic. We show that this problem is a two-stage stochastic linear program. To solve it, we develop two algorithms based on : Benders decomposition algorithm and projected stochastic gradient descent. We show that if the defaulting nodes never pay anything, the deterministic optimal cash injection allocation problem is an NP-hard mixed-integer linear program. However, modern optimization software enables the computation of very accurate solutions to this problem on a personal computer in a few seconds for network sizes comparable with the size of the US banking system. In addition, we address the problem of allocating the cash injection amount so as to minimize the number of nodes in default. For this problem, we develop two heuristic algorithms: a reweighted l1 minimization algorithm and a greedy algorithm. We illustrate these two algorithms using three synthetic network structures for which the optimal solution can be calculated exactly. We also compare these two algorithms on three types of random networks which are more complex .
[ { "type": "R", "before": "Since some applications require forecasting and planning for a wide variety of different contingencies, we", "after": "We", "start_char_pos": 680, "end_char_pos": 786 }, { "type": "D", "before": "Monte Carlo sampling", "after": null, "start_char_pos": 1095, "end_char_pos": 1115 }, { "type": "R", "before": "a heuristic algorithm which uses", "after": "two heuristic algorithms: a", "start_char_pos": 1723, "end_char_pos": 1755 }, { "type": "R", "before": ". We show through numerical simulations that the solutions calculated by our algorithm are close to optimal", "after": "algorithm and a greedy algorithm. We illustrate these two algorithms using three synthetic network structures for which the optimal solution can be calculated exactly. We also compare these two algorithms on three types random networks which are more complex", "start_char_pos": 1783, "end_char_pos": 1890 } ]
[ 0, 108, 352, 510, 679, 978, 1046, 1192, 1353, 1567, 1693, 1784 ]
1410.2803
1
CRDTs are distributed data types that make eventual consistency of a distributed object possible and non ad-hoc. Specifically, state-based CRDTs achieve this by sharing local state changes through shipping the entire state, that is then merged to other replicas with an idempotent, associative, and commutative join operation, ensuring convergence.This imposes a large communication overhead as the state size becomes larger. We introduce Delta State Conflict-Free Replicated Datatypes ({\delta}-CRDT) , which make use of \delta%DIFDELCMD < }%%% -mutators, defined in such a way to return a delta-state, typically, with a much smaller size than the full state. Delta-states are joined to the local state as well as to the remote states (after being shipped). This can achieve the best of both worlds: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. } We introduce the {\delta}-CRDT framework, and we explain it through establishing a correspondence to current state- based CRDTs. In addition, we present two anti-entropy algorithms: a basic one that provides eventual convergence, and another one that ensures both convergence and causal consistency. We also introduce two {\delta}-CRDT specifications of well-known replicated datatypes.
CRDTs are distributed data types that make eventual consistency of a distributed object possible and non ad-hoc. Specifically, state-based CRDTs ensure convergence through disseminating the entire state, that may be large, and merging it to other replicas ; whereas operation-based CRDTs disseminate operations (i.e., small states) assuming an exactly-once reliable dissemination layer. We introduce Delta State Conflict-Free Replicated Datatypes ({\delta}-CRDT) that can achieve the best of both worlds: small messages with an incremental nature, as in operation-based CRDTs, disseminated over unreliable communication channels, as in traditional state-based CRDTs. This is achieved by defining {\delta}-mutators to return a delta-state, typically with a much smaller size than the full state, that is joined to both: local and remote states. We introduce the {\delta}-CRDT framework, and we explain it through establishing a correspondence to current state-based CRDTs. In addition, we present an anti-entropy algorithm that ensures causal consistency, and we introduce two {\delta}-CRDT specifications of well-known replicated datatypes.
[ { "type": "R", "before": "achieve this by sharing local state changes through shipping the entire", "after": "ensure convergence through disseminating the en- tire", "start_char_pos": 145, "end_char_pos": 216 }, { "type": "R", "before": "is then merged", "after": "may be large, and merging it", "start_char_pos": 229, "end_char_pos": 243 }, { "type": "R", "before": "with an idempotent, associative, and commutative join operation, ensuring convergence.This imposes a large communication overhead as the state size becomes larger.", "after": "; whereas operation-based CRDTs disseminate operations (i.e., small states) assuming an exactly-once reliable dissemination layer.", "start_char_pos": 262, "end_char_pos": 425 }, { "type": "D", "before": ", which make use of", "after": null, "start_char_pos": 502, "end_char_pos": 521 }, { "type": "D", "before": "\\delta", "after": null, "start_char_pos": 522, "end_char_pos": 528 }, { "type": "R", "before": "-mutators, defined in such a way to return a delta-state, typically, with a much smaller size than the full state. Delta-states are joined to the local state as well as to the remote states (after being shipped). This", "after": "that", "start_char_pos": 546, "end_char_pos": 763 }, { "type": "A", "before": null, "after": "This is achieved by defining", "start_char_pos": 963, "end_char_pos": 963 }, { "type": "A", "before": null, "after": "\\delta", "start_char_pos": 964, "end_char_pos": 964 }, { "type": "A", "before": null, "after": "-mutators to return a delta-state, typically with a much smaller size than the full state, that is joined to both: local and remote states.", "start_char_pos": 965, "end_char_pos": 965 }, { "type": "R", "before": "state- based", "after": "state-based", "start_char_pos": 1075, "end_char_pos": 1087 }, { "type": "R", "before": "two", "after": "an", "start_char_pos": 1119, "end_char_pos": 1122 }, { "type": "R", "before": "algorithms: a basic one that provides eventual convergence, and another one that ensures both convergence and causal consistency. We also", "after": "algorithm that ensures causal consistency, and we", "start_char_pos": 1136, "end_char_pos": 1273 } ]
[ 0, 112, 348, 425, 660, 758, 1094, 1265 ]
1410.3793
1
We consider the classical optimal dividend payments problem under the Cram\'er-Lundberg model with exponential claim sizes subject to a constraint on the time of ruin (P1). We use the Lagrangian dual function which leads to an auxiliary problem (P2). For this problem, given a multiplier \Lambda, we prove the uniqueness of the optimal barrier strategy and we also obtain its value function. Finally, we prove that the optimal value function of (P1) is obtained as the point-wise infimum over \Lambda of all value functions of problems (P2) . We also present a series of numerical examples.
We consider the classical optimal dividends problem under the Cram\'er-Lundberg model with exponential claim sizes subject to a constraint on the time of ruin . We introduce the dual problem and show that the complementary slackness conditions are satisfied, thus there is no duality gap. Therefore the optimal value function can be obtained as the point-wise infimum of auxiliary value functions indexed by Lagrange multipliers . We also present a series of numerical examples.
[ { "type": "R", "before": "dividend payments", "after": "dividends", "start_char_pos": 34, "end_char_pos": 51 }, { "type": "R", "before": "(P1). We use the Lagrangian dual function which leads to an auxiliary problem (P2). For this problem, given a multiplier \\Lambda, we prove the uniqueness of the optimal barrier strategy and we also obtain its value function. Finally, we prove that the", "after": ". We introduce the dual problem and show that the complementary slackness conditions are satisfied, thus there is no duality gap. Therefore the", "start_char_pos": 167, "end_char_pos": 418 }, { "type": "R", "before": "of (P1) is", "after": "can be", "start_char_pos": 442, "end_char_pos": 452 }, { "type": "R", "before": "over \\Lambda of all value functions of problems (P2)", "after": "of auxiliary value functions indexed by Lagrange multipliers", "start_char_pos": 488, "end_char_pos": 540 } ]
[ 0, 172, 250, 391, 542 ]
1410.3851
1
We extend the exploration regarding dynamic approach of macroeconomic variables by tackling systematically expenditure using Statistical Physics models (for the first time to the best of our knowledge). Also, using polynomial distribution which characterizes the behavior of dynamic systems in certain situations, we extend also our analysis to mean income data from the UK that span for a time interval of 35 years. We find that most of the values for coefficient of determination obtained from fitting the data from consecutive years analysis to be above 80\%. We used for our analysis first degree polynomial, but higher degree polynomials and longer time intervals between the years considered can dramatically increase goodness of the fit. As this methodology was applied successfully to income and wealth, we can conclude that macroeconomic systems can be treated similarly to dynamic systems from Physics. Subsequently, the analysis could be extended to other macroeconomic indicators.
We extend the exploration regarding dynamical approach of macroeconomic variables by tackling systematically expenditure using Statistical Physics models (for the first time to the best of our knowledge). Also, using polynomial distribution which characterizes the behavior of dynamical systems in certain situations, we extend also our analysis to mean income data from the UK that span for a time interval of 35 years. We find that most of the values for coefficient of determination obtained from fitting the data from consecutive years analysis to be above 80\%. We used for our analysis first degree polynomial, but higher degree polynomials and longer time intervals between the years considered can dramatically increase goodness of the fit. As this methodology was applied successfully to income and wealth, we can conclude that macroeconomic systems can be treated similarly to dynamic systems from Physics. Subsequently, the analysis could be extended to other macroeconomic indicators.
[ { "type": "R", "before": "dynamic", "after": "dynamical", "start_char_pos": 36, "end_char_pos": 43 }, { "type": "R", "before": "dynamic", "after": "dynamical", "start_char_pos": 275, "end_char_pos": 282 } ]
[ 0, 202, 416, 562, 744, 912 ]
1410.4054
1
We revisit the implementation of iterative solvers on discrete graphics processing units and demonstrate the benefit of implementations using extensive kernel fusion for pipelined formulations over conventional implementations of classical formulations. The proposed implementations with both CUDA and OpenCL are freely available in ViennaCL and achieve up to three-fold performance gains when compared to other solver packages for graphics processing units. Highest performance gains are obtained for small to medium-sized systems, while our implementations remain competitive with vendor-tuned implementations for very large systems. Our results are especially beneficial for transient problems, where many small to medium-sized systems instead of a single big system need to be solved.
We revisit the implementation of iterative solvers on discrete graphics processing units and demonstrate the benefit of implementations using extensive kernel fusion for pipelined formulations over conventional implementations of classical formulations. The proposed implementations with both CUDA and OpenCL are freely available in ViennaCL and are shown to be competitive with or even superior to other solver packages for graphics processing units. Highest performance gains are obtained for small to medium-sized systems, while our implementations are on par with vendor-tuned implementations for very large systems. Our results are especially beneficial for transient problems, where many small to medium-sized systems instead of a single big system need to be solved.
[ { "type": "R", "before": "achieve up to three-fold performance gains when compared", "after": "are shown to be competitive with or even superior", "start_char_pos": 346, "end_char_pos": 402 }, { "type": "R", "before": "remain competitive", "after": "are on par", "start_char_pos": 559, "end_char_pos": 577 } ]
[ 0, 253, 458, 635 ]
1410.4382
1
This paper concerns the computation of risk measures for financial data and asks how, given a risk measurement procedure, we can tell whether the answers it produces are correct . We draw the distinction between `external' and `internal' risk measures and concentrate on the latter, where we observe data in real time, make predictions and observe outcomes. It is argued that evaluation of such procedures is best addressed from the point of view of probability forecasting or Dawid's theory of `prequential statistics' [Dawid, JRSS(A)1984]. We introduce a concept of ` consistency ' of a risk measure , which is close to Dawid's `strong prequential principle' , and examine its application to quantile forecasting (VaR -- value at risk) and to mean estimation (applicable to CVaR -- expected shortfall). ] We show in particular that VaR has special properties not shared by any other risk measure . In a final section we show that a simple data-driven feedback algorithm can produce VaR estimates on financial data that easily pass both the consistency test and a further newly-introduced statistical test for independence of a binary sequence.
This paper concerns sequential computation of risk measures for financial data and asks how, given a risk measurement procedure, we can tell whether the answers it produces are `correct' . We draw the distinction between `external' and `internal' risk measures and concentrate on the latter, where we observe data in real time, make predictions and observe outcomes. It is argued that evaluation of such procedures is best addressed from the point of view of probability forecasting or Dawid's theory of `prequential statistics' [Dawid, JRSS(A)1984]. We introduce a concept of ` calibration ' of a risk measure in a dynamic setting, following the precepts of Dawid's weak and strong prequential principles , and examine its application to quantile forecasting (VaR -- value at risk) and to mean estimation (applicable to CVaR -- expected shortfall). The relationship between these ideas and `elicitability' Gneiting, JASA 2011] is examined. We show in particular that VaR has special properties not shared by any other risk measure . Turning to CVaR we argue that its main deficiency is the unquantifiable tail dependence of estimators . In a final section we show that a simple data-driven feedback algorithm can produce VaR estimates on financial data that easily pass both the consistency test and a further newly-introduced statistical test for independence of a binary sequence.
[ { "type": "R", "before": "the", "after": "sequential", "start_char_pos": 20, "end_char_pos": 23 }, { "type": "R", "before": "correct", "after": "`correct'", "start_char_pos": 170, "end_char_pos": 177 }, { "type": "R", "before": "consistency", "after": "calibration", "start_char_pos": 570, "end_char_pos": 581 }, { "type": "R", "before": ", which is close to", "after": "in a dynamic setting, following the precepts of", "start_char_pos": 602, "end_char_pos": 621 }, { "type": "R", "before": "`strong prequential principle'", "after": "weak and strong prequential principles", "start_char_pos": 630, "end_char_pos": 660 }, { "type": "A", "before": null, "after": "The relationship between these ideas and `elicitability'", "start_char_pos": 805, "end_char_pos": 805 }, { "type": "A", "before": null, "after": "Gneiting, JASA 2011", "start_char_pos": 806, "end_char_pos": 806 }, { "type": "A", "before": null, "after": "is examined.", "start_char_pos": 808, "end_char_pos": 808 }, { "type": "A", "before": null, "after": ". Turning to CVaR we argue that its main deficiency is the unquantifiable tail dependence of estimators", "start_char_pos": 900, "end_char_pos": 900 } ]
[ 0, 179, 357, 541, 804, 902 ]
1410.4807
1
The article presents a description of geometry of Banach structures forming mathematical base of the 'Fundamental Theorem of asset Pricing' type phenomena. In this connection we uncover the role of plasterable cones and reflexive subspaces .
The article presents a description of geometry of Banach structures forming mathematical base of markets arbitrage absence type phenomena. In this connection the role of reflexive subspaces (replacing classically considered finite-dimensional subspaces) and plasterable cones is uncovered .
[ { "type": "R", "before": "the 'Fundamental Theorem of asset Pricing'", "after": "markets arbitrage absence", "start_char_pos": 97, "end_char_pos": 139 }, { "type": "D", "before": "we uncover", "after": null, "start_char_pos": 175, "end_char_pos": 185 }, { "type": "R", "before": "plasterable cones and reflexive subspaces", "after": "reflexive subspaces (replacing classically considered finite-dimensional subspaces) and plasterable cones is uncovered", "start_char_pos": 198, "end_char_pos": 239 } ]
[ 0, 155 ]
1410.4820
1
We consider the relationship between stationary distributions for stochastic models of chemical reaction systems and Lyapunov functions for their deterministic counterparts. Specifically, we derive the well known Lyapunov function of chemical reaction network theory as a scaling limit of the non-equilibrium potential of the stationary distribution of stochastically modeled complex balanced systems. We extend this result to general birth-death models and demonstrate via example that similar scaling limits can yield Lyapunov functions even for models that are not complex or detailed balanced, and may even have multiple equilibria.
We consider the relationship between stationary distributions for stochastic models of reaction systems and Lyapunov functions for their deterministic counterparts. Specifically, we derive the well known Lyapunov function of reaction network theory as a scaling limit of the non-equilibrium potential of the stationary distribution of stochastically modeled complex balanced systems. We extend this result to general birth-death models and demonstrate via example that similar scaling limits can yield Lyapunov functions even for models that are not complex or detailed balanced, and may even have multiple equilibria.
[ { "type": "D", "before": "chemical", "after": null, "start_char_pos": 87, "end_char_pos": 95 }, { "type": "D", "before": "chemical", "after": null, "start_char_pos": 234, "end_char_pos": 242 } ]
[ 0, 173, 401 ]
1410.5328
1
We propose an iterative gradient-based algorithm to efficiently solve the portfolio selection problem with multiple spectral risk constraints. Since the conditional value at risk (CVaR) is a special case of the spectral risk measure, our algorithm solves portfolio selection problems with multiple CVaR constraints. In each step, the algorithm solves very simple separable convex quadratic programs; hence, we show that the spectral risk constrained portfolio selection problem can be solved using the technology developed for solving mean-variance problems. The algorithm extends to the case where the objective is a weighted sum of the mean return and either a weighted combination or the maximum of a set of spectral risk measures. We report numerical results that show that our proposed algorithm is very efficient; it is at least two orders of magnitude faster than the state-of-the-art general purpose solver for all practical instances. One can leverage this efficiency to be robust against model risk by including constraints with respect to several different risk models.
We propose an iterative gradient-based algorithm to efficiently solve the portfolio selection problem with multiple spectral risk constraints. Since the conditional value at risk (CVaR) is a special case of the spectral risk measure, our algorithm solves portfolio selection problems with multiple CVaR constraints. In each step, the algorithm solves very simple separable convex quadratic programs; hence, we show that the spectral risk constrained portfolio selection problem can be solved using the technology developed for solving mean-variance problems. The algorithm extends to the case where the objective is a weighted sum of the mean return and either a weighted combination or the maximum of a set of spectral risk measures. We report numerical results that show that our proposed algorithm is very efficient; it is at least one order of magnitude faster than the state-of-the-art general purpose solver for all practical instances. One can leverage this efficiency to be robust against model risk by including constraints with respect to several different risk models.
[ { "type": "R", "before": "two orders", "after": "one order", "start_char_pos": 835, "end_char_pos": 845 } ]
[ 0, 142, 315, 399, 558, 734, 819, 943 ]
1410.6064
1
Homeostasis is a running theme in biology. Often achieved through feedback regulation strategies, homeostasis allows living cells to control their internal environment as a means for surviving changing and unfavourable environments. While many endogenous homeostatic motifs have been studied in living cells, synthetic homeostatic circuits have received far less attention. The tight regulation of the abundance of cellular products and intermediates in the noisy environment of the cell is now recognised as a critical requirement for several biotechnology and therapeutic applications . Here we lay the foundation for a regulation theory at the molecular level that explicitly takes into account the noisy nature of biochemical reactions and provides novel tools for the analysis and design of robust synthetic homeostatic circuits. Using these ideas, we propose a new regulation motif that implements an integral feedbackstrategy which can generically and effectively regulate{\em a wide class of reaction networks. By combining tools from probability and control theory, we show that the proposed control motif preserves the stability of the overall network, steers the population of any regulated species to a desired set point, and achieves robust perfect adaptation -- all without any prior knowledge of reaction rates. Moreover, our proposed control motif can be implemented using a very small number of molecules and hence has a negligible metabolic load. Strikingly, the regulatory motif exploits stochastic noise, leading to enhanced regulation in scenarios where noise-free implementations result in dysregulation. Several examples demonstrate the potential of the approach .
Homeostasis is a running theme in biology. Often achieved through feedback regulation strategies, homeostasis allows living cells to control their internal environment as a means for surviving changing and unfavourable environments. While many endogenous homeostatic motifs have been studied in living cells, some other motifs may remain under-explored or even undiscovered. At the same time, known regulatory motifs have been mostly analyzed at the deterministic level, and the effect of noise on their regulatory function has received low attention . Here we lay the foundation for a regulation theory at the molecular level that explicitly takes into account the noisy nature of biochemical reactions and provides novel tools for the analysis and design of robust homeostatic circuits. Using these ideas, we propose a new regulation motif , which we refer to as{\em antithetic integral feedback, and demonstrate its effectiveness as a strategy for generically regulating a wide class of reaction networks. By combining tools from probability and control theory, we show that the proposed motif preserves the stability of the overall network, steers the population of any regulated species to a desired set point, and achieves robust perfect adaptation -- all with low prior knowledge of reaction rates. Moreover, our proposed regulatory motif can be implemented using a very small number of molecules and hence has a negligible metabolic load. Strikingly, the regulatory motif exploits stochastic noise, leading to enhanced regulation in scenarios where noise-free implementations result in dysregulation. Finally, we discuss the possible manifestation of the proposed antithetic integral feedback motif in endogenous biological circuits and its realization in synthetic circuits .
[ { "type": "R", "before": "synthetic homeostatic circuits have received far less attention. The tight regulation of the abundance of cellular products and intermediates in the noisy environment of the cell is now recognised as a critical requirement for several biotechnology and therapeutic applications", "after": "some other motifs may remain under-explored or even undiscovered. At the same time, known regulatory motifs have been mostly analyzed at the deterministic level, and the effect of noise on their regulatory function has received low attention", "start_char_pos": 309, "end_char_pos": 586 }, { "type": "D", "before": "synthetic", "after": null, "start_char_pos": 803, "end_char_pos": 812 }, { "type": "R", "before": "that implements an integral feedbackstrategy which can generically and effectively regulate", "after": ", which we refer to as", "start_char_pos": 888, "end_char_pos": 979 }, { "type": "A", "before": null, "after": "antithetic integral feedback, and demonstrate its effectiveness as a strategy for generically regulating", "start_char_pos": 984, "end_char_pos": 984 }, { "type": "D", "before": "control", "after": null, "start_char_pos": 1102, "end_char_pos": 1109 }, { "type": "R", "before": "without any", "after": "with low", "start_char_pos": 1281, "end_char_pos": 1292 }, { "type": "R", "before": "control", "after": "regulatory", "start_char_pos": 1351, "end_char_pos": 1358 }, { "type": "R", "before": "Several examples demonstrate the potential of the approach", "after": "Finally, we discuss the possible manifestation of the proposed antithetic integral feedback motif in endogenous biological circuits and its realization in synthetic circuits", "start_char_pos": 1628, "end_char_pos": 1686 } ]
[ 0, 42, 232, 373, 588, 834, 1019, 1327, 1465, 1627 ]
1410.6084
1
Democrats in the US say that taxes can be used to "grease the wheels" of the economy and create wealth enough to recover taxes and thereby increase employment; the Republicans say that taxation discourages investmentand so increases unemployment. These arguments cannot both be correct, but both arguments seem meritorious. Faced with this paradox, one might hope that a rigorous mathematical approach might help determine which is the truth ] .
Democrats in the United States argue that government spending can be used to "grease the wheels" of the economy to create wealth and to increase employment; Republicans contend that government spending is wasteful and discourages investment, thereby increasing unemployment. These arguments cannot both be correct, but both arguments seem meritorious. Faced with this paradox, one might hope that a rigorous mathematical approach might help determine the truth. We address this economic question of fiscal stimulus as a new optimal control problem generalizing the model of Dutta and Radner 1999]. We find that there exists an optimal strategy and provide rigorous verification proof for the optimality. Further, we prove a few interesting mathematical properties of our solution, providing deeper insight into this important politico-economic debate and illustrating how the fiscal stimulus from the government may affect the profit-taking behavior of firms in the private sector .
[ { "type": "R", "before": "US say that taxes", "after": "United States argue that government spending", "start_char_pos": 17, "end_char_pos": 34 }, { "type": "R", "before": "and create wealth enough to recover taxes and thereby", "after": "to create wealth and to", "start_char_pos": 85, "end_char_pos": 138 }, { "type": "R", "before": "the Republicans say that taxation discourages investmentand so increases", "after": "Republicans contend that government spending is wasteful and discourages investment, thereby increasing", "start_char_pos": 160, "end_char_pos": 232 }, { "type": "R", "before": "which is the truth", "after": "the truth. We address this economic question of fiscal stimulus as a new optimal control problem generalizing the model of Dutta and Radner", "start_char_pos": 423, "end_char_pos": 441 }, { "type": "A", "before": null, "after": "1999", "start_char_pos": 442, "end_char_pos": 442 }, { "type": "A", "before": null, "after": ". We find that there exists an optimal strategy and provide rigorous verification proof for the optimality. Further, we prove a few interesting mathematical properties of our solution, providing deeper insight into this important politico-economic debate and illustrating how the fiscal stimulus from the government may affect the profit-taking behavior of firms in the private sector", "start_char_pos": 443, "end_char_pos": 443 } ]
[ 0, 159, 246, 323 ]
1410.6084
2
Democrats in the United States argue that government spending can be used to "grease the wheels" of the economy to create wealth and to increase employment; Republicans contend that government spending is wasteful and discourages investment, thereby increasing unemployment. These arguments cannot both be correct, but both arguments seem meritorious. Faced with this paradox, one might hope that a rigorous mathematical approach might help determine the truth. We address this economic question of fiscal stimulus as a new optimal control problem generalizing the model of Dutta and Radner 1999%DIFDELCMD < ]%%% . We find that there exists an optimal strategy and provide rigorous verification proof for the optimality . Further, we prove a few interesting mathematical properties of our solution, providing deeper insight into this important politico-economic debate and illustrating how the fiscal stimulus from the government may affect the profit-taking behavior of firms in the private sector .
During the Great Recession, Democrats in the United States argued that government spending could be utilized to "grease the wheels" of the economy in order to create wealth and to increase employment; Republicans , on the other hand, contended that government spending is wasteful and discouraged investment, thereby increasing unemployment. Today, in 2020, we find ourselves in the midst of another crisis where government spending and fiscal stimulus is again being considered as a solution. In the present paper, we address this question by formulating an optimal control problem generalizing the model of %DIFDELCMD < ]%%% Radner Shepp (1996). The model allows for the company to borrow continuously from the government. We prove that there exists an optimal strategy ; rigorous verification proofs for its optimality are provided. We proceed to prove that government loans increase the expected net value of a company. We also examine the consequences of different profit-taking behaviors among firms who receive fiscal stimulus .
[ { "type": "A", "before": null, "after": "During the Great Recession,", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "argue", "after": "argued", "start_char_pos": 32, "end_char_pos": 37 }, { "type": "R", "before": "can be used", "after": "could be utilized", "start_char_pos": 63, "end_char_pos": 74 }, { "type": "A", "before": null, "after": "in order", "start_char_pos": 113, "end_char_pos": 113 }, { "type": "R", "before": "contend", "after": ", on the other hand, contended", "start_char_pos": 171, "end_char_pos": 178 }, { "type": "R", "before": "discourages", "after": "discouraged", "start_char_pos": 220, "end_char_pos": 231 }, { "type": "R", "before": "These arguments cannot both be correct, but both arguments seem meritorious. Faced with this paradox, one might hope that a rigorous mathematical approach might help determine the truth. We address this economic question of fiscal stimulus as a new", "after": "Today, in 2020, we find ourselves in the midst of another crisis where government spending and fiscal stimulus is again being considered as a solution. In the present paper, we address this question by formulating an", "start_char_pos": 277, "end_char_pos": 525 }, { "type": "D", "before": "Dutta and Radner", "after": null, "start_char_pos": 576, "end_char_pos": 592 }, { "type": "D", "before": "1999", "after": null, "start_char_pos": 593, "end_char_pos": 597 }, { "type": "R", "before": ". We find", "after": "Radner", "start_char_pos": 615, "end_char_pos": 624 }, { "type": "A", "before": null, "after": "Shepp (1996). The model allows for the company to borrow continuously from the government. We prove", "start_char_pos": 625, "end_char_pos": 625 }, { "type": "R", "before": "and provide rigorous verification proof for the optimality . Further, we prove a few interesting mathematical properties of our solution, providing deeper insight into this important politico-economic debate and illustrating how the fiscal stimulus from the government may affect the", "after": "; rigorous verification proofs for its optimality are provided. We proceed to prove that government loans increase the expected net value of a company. We also examine the consequences of different", "start_char_pos": 664, "end_char_pos": 947 }, { "type": "R", "before": "behavior of firms in the private sector", "after": "behaviors among firms who receive fiscal stimulus", "start_char_pos": 962, "end_char_pos": 1001 } ]
[ 0, 158, 276, 353, 463, 724 ]
1410.7453
1
We construct a binomial model for a guaranteed minimum withdrawal benefit (GMWB) rider to a variable annuity (VA) under optimal policyholder behaviour. The binomial model results in explicitly formulated perfect hedging strategies funded using only periodic fee income. We consider the separate perspectives of the insurer and policyholder and introduce a unifying relationship. Decompositions of the VA and GMWB contract into term-certain payments and options representing the guarantee and early surrender features similar to those presented in Hyndman and Wenger (Insurance Math. Econom. 55:283-290, 2014) are extended to the binomial framework. We incorporate an approximation algorithm for Asian options that significantly improves efficiency of the binomial model while retaining accuracy. Several numerical examples are provided which illustrate both the accuracy and the tractability of the model .
We construct a binomial model for a guaranteed minimum withdrawal benefit (GMWB) rider to a variable annuity (VA) under optimal policyholder behaviour. The binomial model results in explicitly formulated perfect hedging strategies funded using only periodic fee income. We consider the separate perspectives of the insurer and policyholder and introduce a unifying relationship. Decompositions of the VA and GMWB contract into term-certain payments and options representing the guarantee and early surrender features are extended to the binomial framework. We incorporate an approximation algorithm for Asian options that significantly improves efficiency of the binomial model while retaining accuracy. Several numerical examples are provided which illustrate both the accuracy and the tractability of the binomial model. We extend the binomial model to include policy holder mortality and death benefits. Pricing, hedging, and the decompositions of the contract are extended to incorporate mortality risk. We prove limiting results for the hedging strategies and demonstrate mortality risk diversification. Numerical examples are provided which illustrate the effectiveness of hedging and the diversification of mortality risk under capacity constraints with finite pools .
[ { "type": "D", "before": "similar to those presented in Hyndman and Wenger (Insurance Math. Econom. 55:283-290, 2014)", "after": null, "start_char_pos": 517, "end_char_pos": 608 }, { "type": "R", "before": "model", "after": "binomial model. We extend the binomial model to include policy holder mortality and death benefits. Pricing, hedging, and the decompositions of the contract are extended to incorporate mortality risk. We prove limiting results for the hedging strategies and demonstrate mortality risk diversification. Numerical examples are provided which illustrate the effectiveness of hedging and the diversification of mortality risk under capacity constraints with finite pools", "start_char_pos": 899, "end_char_pos": 904 } ]
[ 0, 151, 269, 378, 582, 648, 795 ]
1410.8671
1
We model business relationships exemplified for a (re)insurance market by a bipartite graph which determines the sharing of severe losses . Using Pareto-tailed claims and multivariate regular variation we obtain asymptotic results for the Value-at-Risk and the Conditional Tail Expectation. We show that the dependence on the network structure plays a fundamental role in their asymptotic behaviour. As is well-known , if the Pareto exponent is larger than 1, then for the individual agent ( re-insurance company) diversification is beneficial, whereas when it is less than 1, concentration on a few objects is the better strategy. The situation changes, however, when systemic risk comes into play. The random network structure has a strong influence on diversification effects, which destroys this simple individual agent's diversification rule. It turns out that diversification is always beneficial from a macro-prudential point of view creating a conflicting situation between the incentives of individual agents and the interest of some superior entity to keep overall risk small. We explain the influence of the network structure on diversification effects in different network scenarios.
We model the influence of sharing large exogeneous losses to the reinsurance market by a bipartite graph . Using Pareto-tailed claims and multivariate regular variation we obtain asymptotic results for the Value-at-Risk and the Conditional Tail Expectation. We show that the dependence on the network structure plays a fundamental role in their asymptotic behaviour. As is well-known in a non-network setting , if the Pareto exponent is larger than 1, then for the individual agent ( reinsurance company) diversification is beneficial, whereas when it is less than 1, concentration on a few objects is the better strategy. An additional aspect of this paper is the amount of uninsured losses which have to be convered by society. In the situation of networks of agents, in our setting diversification is never detrimental concerning the amount of uninsured losses. If the Pareto-tailed claims have finite mean, diversification turns out to be never detrimental, both for society and for individual agents. In contrast, if the Pareto-tailed claims have infinite mean, a conflicting situation may arise between the incentives of individual agents and the interest of some regulator to keep risk for society small. We explain the influence of the network structure on diversification effects in different network scenarios.
[ { "type": "R", "before": "business relationships exemplified for a (re)insurance", "after": "the influence of sharing large exogeneous losses to the reinsurance", "start_char_pos": 9, "end_char_pos": 63 }, { "type": "D", "before": "which determines the sharing of severe losses", "after": null, "start_char_pos": 92, "end_char_pos": 137 }, { "type": "A", "before": null, "after": "in a non-network setting", "start_char_pos": 417, "end_char_pos": 417 }, { "type": "R", "before": "re-insurance", "after": "reinsurance", "start_char_pos": 493, "end_char_pos": 505 }, { "type": "R", "before": "The situation changes, however, when systemic risk comes into play. The random network structure has a strong influence on diversification effects, which destroys this simple individual agent's diversification rule. It turns out that diversification is always beneficial from a macro-prudential point of view creating a conflicting situation", "after": "An additional aspect of this paper is the amount of uninsured losses which have to be convered by society. In the situation of networks of agents, in our setting diversification is never detrimental concerning the amount of uninsured losses. If the Pareto-tailed claims have finite mean, diversification turns out to be never detrimental, both for society and for individual agents. In contrast, if the Pareto-tailed claims have infinite mean, a conflicting situation may arise", "start_char_pos": 633, "end_char_pos": 974 }, { "type": "R", "before": "superior entity to keep overall risk", "after": "regulator to keep risk for society", "start_char_pos": 1044, "end_char_pos": 1080 } ]
[ 0, 290, 399, 632, 700, 848, 1087 ]
1411.0496
1
We propose a novel framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and it is designed to estimate regression parameters at different scales and under potential non-stationarity and power-law correlations. Selected examples from physics, finance and environmental sciences illustrate usefulness of the framework .
We propose a framework combining detrended fluctuation analysis with standard regression methodology. The method is built on detrended variances and covariances and it is designed to estimate regression parameters at different scales and under potential non-stationarity and power-law correlations. The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter ones make the method a significant improvement over the standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied on selected examples from physics, finance , environmental science and epidemiology. For most of the studied cases, the relationship between variables of interest varies strongly across scales .
[ { "type": "D", "before": "novel", "after": null, "start_char_pos": 13, "end_char_pos": 18 }, { "type": "R", "before": "Selected", "after": "The former feature allows for distinguishing between effects for a pair of variables from different temporal perspectives. The latter ones make the method a significant improvement over the standard least squares estimation. Theoretical claims are supported by Monte Carlo simulations. The method is then applied on selected", "start_char_pos": 305, "end_char_pos": 313 }, { "type": "R", "before": "and environmental sciences illustrate usefulness of the framework", "after": ", environmental science and epidemiology. For most of the studied cases, the relationship between variables of interest varies strongly across scales", "start_char_pos": 345, "end_char_pos": 410 } ]
[ 0, 107, 304 ]
1411.0782
1
The emerging fields of genetic engineering, synthetic biology, DNA computing, DNA nanotechnology, and molecular programming herald the birth of a new information technology that acquires information by directly sensing molecules within a chemical environment, stores information in molecules such as DNA, RNA, and proteins, processes that information by means of chemical and biochemical transformations, and uses that information to direct the manipulation of matter at the nanometer scale. To scale up beyond current proof-of-principle demonstrations, new methods for managing the complexity of designed molecular systems will need to be developed. Here we focus on the challenge of verifying the correctness of molecular implementations of abstract chemical reaction networks, where operation in a well-mixed "soup" of molecules is stochastic, asynchronous, concurrent, and often involves multiple intermediate steps in the implementation, parallel pathways, and side reactions. This problem relates to the verification of Petri Nets , but existing approaches are not sufficient for certain situations that commonly arise in molecular implementations, such as what we call "delayed choice. " We formulate a new theory of pathway decomposition that provides an elegant formal basis for comparing chemical reaction network implementations, and we present an algorithm that computes this basis. We further show how pathway decomposition can be combined with weak bisimulation to handle a wider class that includes all currently known enzyme-free DNA implementation techniques. We anticipate that our notion of logical equivalence between chemical reaction network implementations will be valuable for other molecular implementations such as biochemical enzyme systems, and perhaps even more broadly in concurrency theory.
Here we focus on the challenge of verifying the correctness of molecular implementations of abstract chemical reaction networks, where operation in a well-mixed "soup" of molecules is stochastic, asynchronous, concurrent, and often involves multiple intermediate steps in the implementation, parallel pathways, and side reactions. This problem relates to the verification of Petri nets , but existing approaches are not sufficient for providing a single guarantee covering an infinite set of possible initial states (molecule counts) and an infinite state space potentially explored by the system given any initial state. We address these issues by formulating a new theory of pathway decomposition that provides an elegant formal basis for comparing chemical reaction network implementations, and we present an algorithm that computes this basis. Our theory naturally handles certain situations that commonly arise in molecular implementations, such as what we call "delayed choice," that are not easily accommodated by other approaches. We further show how pathway decomposition can be combined with weak bisimulation to handle a wider class that includes most currently known enzyme-free DNA implementation techniques. We anticipate that our notion of logical equivalence between chemical reaction network implementations will be valuable for other molecular implementations such as biochemical enzyme systems, and perhaps even more broadly in concurrency theory.
[ { "type": "D", "before": "The emerging fields of genetic engineering, synthetic biology, DNA computing, DNA nanotechnology, and molecular programming herald the birth of a new information technology that acquires information by directly sensing molecules within a chemical environment, stores information in molecules such as DNA, RNA, and proteins, processes that information by means of chemical and biochemical transformations, and uses that information to direct the manipulation of matter at the nanometer scale. To scale up beyond current proof-of-principle demonstrations, new methods for managing the complexity of designed molecular systems will need to be developed.", "after": null, "start_char_pos": 0, "end_char_pos": 650 }, { "type": "R", "before": "Nets", "after": "nets", "start_char_pos": 1032, "end_char_pos": 1036 }, { "type": "R", "before": "certain situations that commonly arise in molecular implementations, such as what we call \"delayed choice. \" We formulate", "after": "providing a single guarantee covering an infinite set of possible initial states (molecule counts) and an infinite state space potentially explored by the system given any initial state. We address these issues by formulating", "start_char_pos": 1086, "end_char_pos": 1207 }, { "type": "A", "before": null, "after": "Our theory naturally handles certain situations that commonly arise in molecular implementations, such as what we call \"delayed choice,\" that are not easily accommodated by other approaches.", "start_char_pos": 1395, "end_char_pos": 1395 }, { "type": "R", "before": "all", "after": "most", "start_char_pos": 1515, "end_char_pos": 1518 } ]
[ 0, 491, 650, 981, 1192, 1394, 1577 ]
1411.1103
1
We explore martingale and convex duality techniques to study optimal investment strategies that maximize expected risk-averse utility from consumption and terminal wealth in a pure-jump model driven by (multivariate) marked point processes and in presence of margin requirements such as different interest rates for borrowing and lending and risk premiums for short positions . Margin requirements are modelled by adding in a margin payment function to the investor's wealth equation which is nonlinear with respect to the portfolio proportion process . We give sufficient conditions for existence of optimal policies and find closed-form solutions for the optimal value function in the case of pure-jump models with jump-size distributions modulated by a two-state Markov chain and agents with logarithmic and fractional power utility .
We explore martingale and convex duality techniques to study optimal investment strategies that maximize expected risk-averse utility from consumption and terminal wealth . We consider a market model with jumps driven by (multivariate) marked point processes and so-called non-linear wealth dynamics which allows to take account of relaxed assumptions such as differential borrowing and lending interest rates or short positions with cash collateral and negative rebate rates . We give suffcient conditions for existence of optimal policies for agents with logarithmic and CRRA power utility. We find closed-form solutions for the optimal value function in the case of pure-jump models with jump-size distributions modulated by a two-state Markov chain .
[ { "type": "R", "before": "in a pure-jump model", "after": ". We consider a market model with jumps", "start_char_pos": 171, "end_char_pos": 191 }, { "type": "R", "before": "in presence of margin requirements such as different interest rates for", "after": "so-called non-linear wealth dynamics which allows to take account of relaxed assumptions such as differential", "start_char_pos": 244, "end_char_pos": 315 }, { "type": "R", "before": "and risk premiums for short positions . Margin requirements are modelled by adding in a margin payment function to the investor's wealth equation which is nonlinear with respect to the portfolio proportion process", "after": "interest rates or short positions with cash collateral and negative rebate rates", "start_char_pos": 338, "end_char_pos": 551 }, { "type": "R", "before": "sufficient", "after": "suffcient", "start_char_pos": 562, "end_char_pos": 572 }, { "type": "R", "before": "and", "after": "for agents with logarithmic and CRRA power utility. We", "start_char_pos": 618, "end_char_pos": 621 }, { "type": "D", "before": "and agents with logarithmic and fractional power utility", "after": null, "start_char_pos": 779, "end_char_pos": 835 } ]
[ 0, 216, 553 ]
1411.1229
1
We study super-replication of contingent claims in an illiquid market with model uncertainty. Illiquidity is captured by nonlinear transaction costs in discrete time and model uncertainty arises as our only assumption on stock price returns is that they are in a range specified by fixed volatility bounds. We provide a dual characterization of super-replication prices as a supremum of penalized expectations for the contingent claim's payoff. We also describe the scaling limit of this dual representation when the number of trading periods increases to infinity. Hence, this paper complements the results in [ 8 ] and [ 16 ] for the case of model uncertainty.
We study super-replication of contingent claims in an illiquid market with model uncertainty. Illiquidity is captured by nonlinear transaction costs in discrete time and model uncertainty arises as our only assumption on stock price returns is that they are in a range specified by fixed volatility bounds. We provide a dual characterization of super-replication prices as a supremum of penalized expectations for the contingent claim's payoff. We also describe the scaling limit of this dual representation when the number of trading periods increases to infinity. Hence, this paper complements the results in [ 11 ] and [ 19 ] for the case of model uncertainty.
[ { "type": "R", "before": "8", "after": "11", "start_char_pos": 613, "end_char_pos": 614 }, { "type": "R", "before": "16", "after": "19", "start_char_pos": 623, "end_char_pos": 625 } ]
[ 0, 93, 306, 444, 565 ]
1411.1624
1
We provide explicit conditions on the distribution of risk-neutral log-returns which yield sharp asymptotic estimates on the implied volatility smile. Our results extend previous work of Benaim and Friz [Math. Finance 19 (2009), 1-12] and are valid in great generality, both for extreme strike (with arbitrary bounded maturity, possibly varying with the strike) and for small maturity (with arbitrary strike, possibly varying with the maturity) .
We provide explicit conditions on the distribution of risk-neutral log-returns which yield sharp asymptotic estimates on the implied volatility smile. We allow for a variety of asymptotic regimes, including both small maturity (with arbitrary strike) and extreme strike (with arbitrary bounded maturity), extending previous work of Benaim and Friz [Math. Finance 19 (2009), 1-12] . We present applications to popular models, including Carr-Wu finite moment logstable model, Merton's jump diffusion model and Heston's model .
[ { "type": "R", "before": "Our results extend", "after": "We allow for a variety of asymptotic regimes, including both small maturity (with arbitrary strike) and extreme strike (with arbitrary bounded maturity), extending", "start_char_pos": 151, "end_char_pos": 169 }, { "type": "R", "before": "and are valid in great generality, both for extreme strike (with arbitrary bounded maturity, possibly varying with the strike) and for small maturity (with arbitrary strike, possibly varying with the maturity)", "after": ". We present applications to popular models, including Carr-Wu finite moment logstable model, Merton's jump diffusion model and Heston's model", "start_char_pos": 235, "end_char_pos": 444 } ]
[ 0, 150, 209 ]
1411.1650
1
We model the ligand-receptor molecular communication channel with a discrete-time Markov model, and show how to obtain the capacity of this channel . We show that the capacity-achieving input distribution is IID. Further , unusually for a channel with memory , we show that feedback does not increase the capacity of this channel. We show how the capacity of the discrete-time channel approaches the capacity of Kabanov's Poisson channel, in the limit of short time steps and rapid ligand release.
We model biochemical signal transduction, based on a ligand-receptor binding mechanism, as a discrete-time finite-state Markov channel, which we call the BIND channel. We show how to obtain the capacity of this channel , for the case of binary output, binary channel state, and arbitrary finite input alphabets . We show that the capacity-achieving input distribution is IID. Further , we show that feedback does not increase the capacity of this channel. We show how the capacity of the discrete-time channel approaches the capacity of Kabanov's Poisson channel, in the limit of short time steps and rapid ligand release.
[ { "type": "R", "before": "the", "after": "biochemical signal transduction, based on a", "start_char_pos": 9, "end_char_pos": 12 }, { "type": "R", "before": "molecular communication channel with", "after": "binding mechanism, as", "start_char_pos": 29, "end_char_pos": 65 }, { "type": "R", "before": "Markov model, and", "after": "finite-state Markov channel, which we call the BIND channel. We", "start_char_pos": 82, "end_char_pos": 99 }, { "type": "A", "before": null, "after": ", for the case of binary output, binary channel state, and arbitrary finite input alphabets", "start_char_pos": 148, "end_char_pos": 148 }, { "type": "D", "before": ", unusually for a channel with memory", "after": null, "start_char_pos": 222, "end_char_pos": 259 } ]
[ 0, 150, 213, 331 ]
1411.2222
1
Design space exploration of multiprocessor systems involves the optimization of cost/performance functions over a large number of design parameters, most of which are discrete-valued . This optimization is non-trivial because the evaluation of cost/performance functions is computationally expensive , typically involving simulation of long benchmark programs on a cycle-accurate model of the system . Further, algorithms for optimization over discrete parameters do not scale well with the number of parameters. We describe a new approach to this optimization problem, based on embedding the discrete parameter space into an extended continuous space . Optimization is then carried out over the extended continuous space using standard descent based continuous optimization schemes. The embedding is performed using a novel simulation-based ergodic interpolation method that produces the interpolated value in a single simulation run . The post-embedding performance function is continuous, and observed to be piecewise smooth. We demonstrate the approach by considering a multiprocessor design exploration problem with 31 discrete parameterswhere the objective function is a weighted sum of cost and performance metrics, and cost-performance tradeoff curves are obtained by varying the weights. We use the COBYLA implementation from the Python SciPy library to perform the optimization on the extended continuous space . Near optimal solutions are obtained within three hundred simulation runs, and we observe improvements in the objective function ranging from 1.3X to 12.2X (for randomly chosen initial parameter values). Cost-performance trade-off curves generated from these optimization runs provide clear indicators for the optimal system configuration. Thus, continuous embeddings of discrete parameter optimization problemsoffer an effective mechanism for the design space exploration of multiprocessor systems .
Modern multi-core systems have a large number of design parameters, most of which are discrete-valued , and this number is likely to keep increasing as chip complexity rises. Further, the accurate evaluation of a potential design choice is computationally expensive because it requires detailed cycle-accurate system simulation. If the discrete parameter space can be embedded into a larger continuous parameter space, then continuous space techniques can, in principle, be applied to the system optimization problem. Such continuous space techniques often scale well with the number of parameters. We propose a novel technique for embedding the discrete parameter space into an extended continuous space so that continuous space techniques can be applied to the embedded problem using cycle accurate simulation for evaluating the objective function. This embedding is implemented using simulation-based ergodic interpolation , which, unlike spatial interpolation, produces the interpolated value within a single simulation run irrespective of the number of parameters. We have implemented this interpolation scheme in a cycle-based system simulator. In a characterization study, we observe that the interpolated performance curves are continuous, piece-wise smooth, and have low statistical error. We use the ergodic interpolation-based approach to solve a large multi-core design optimization problem with 31 design parameters. Our results indicate that continuous space optimization using ergodic interpolation-based embedding can be a viable approach for large multi-core design optimization problems .
[ { "type": "R", "before": "Design space exploration of multiprocessor systems involves the optimization of cost/performance functions over", "after": "Modern multi-core systems have", "start_char_pos": 0, "end_char_pos": 111 }, { "type": "R", "before": ". This optimization is non-trivial because the evaluation of cost/performance functions", "after": ", and this number is likely to keep increasing as chip complexity rises. Further, the accurate evaluation of a potential design choice", "start_char_pos": 183, "end_char_pos": 270 }, { "type": "R", "before": ", typically involving simulation of long benchmark programs on a", "after": "because it requires detailed", "start_char_pos": 300, "end_char_pos": 364 }, { "type": "R", "before": "model of the system . Further, algorithms for optimization over discrete parameters do not", "after": "system simulation. If the discrete parameter space can be embedded into a larger continuous parameter space, then continuous space techniques can, in principle, be applied to the system optimization problem. Such continuous space techniques often", "start_char_pos": 380, "end_char_pos": 470 }, { "type": "R", "before": "describe a new approach to this optimization problem, based on", "after": "propose a novel technique for", "start_char_pos": 516, "end_char_pos": 578 }, { "type": "R", "before": ". Optimization is then carried out over the extended continuous space using standard descent based continuous optimization schemes. The embedding is performed using a novel", "after": "so that continuous space techniques can be applied to the embedded problem using cycle accurate simulation for evaluating the objective function. This embedding is implemented using", "start_char_pos": 652, "end_char_pos": 824 }, { "type": "R", "before": "method that", "after": ", which, unlike spatial interpolation,", "start_char_pos": 864, "end_char_pos": 875 }, { "type": "R", "before": "in", "after": "within", "start_char_pos": 908, "end_char_pos": 910 }, { "type": "R", "before": ". The post-embedding performance function is continuous, and observed to be piecewise smooth. We demonstrate the approach by considering a multiprocessor design exploration", "after": "irrespective of the number of parameters. We have implemented this interpolation scheme in a cycle-based system simulator. In a characterization study, we observe that the interpolated performance curves are continuous, piece-wise smooth, and have low statistical error. We use the ergodic interpolation-based approach to solve a large multi-core design optimization", "start_char_pos": 935, "end_char_pos": 1107 }, { "type": "R", "before": "discrete parameterswhere the objective function is a weighted sum of cost and performance metrics, and cost-performance tradeoff curves are obtained by varying the weights. We use the COBYLA implementation from the Python SciPy library to perform the optimization on the extended continuous space . Near optimal solutions are obtained within three hundred simulation runs, and we observe improvements in the objective function ranging from 1.3X to 12.2X (for randomly chosen initial parameter values). Cost-performance trade-off curves generated from these optimization runs provide clear indicators for the optimal system configuration. Thus, continuous embeddings of discrete parameter optimization problemsoffer an effective mechanism for the design space exploration of multiprocessor systems", "after": "design parameters. 
Our results indicate that continuous space optimization using ergodic interpolation-based embedding can be a viable approach for large multi-core design optimization problems", "start_char_pos": 1124, "end_char_pos": 1920 } ]
[ 0, 184, 401, 512, 653, 783, 936, 1028, 1296, 1625, 1761 ]
1411.2675
1
For controlled discrete-time stochastic processes we introduce a new class of dynamic risk measures, which we call process-based. Their main features are that they measure risk of processes that are functions of the history of the base process. We introduce a new concept of conditional stochastic time consistency and we derive the structure of process-based risk measures enjoying this property. We show that they can be equivalently represented by a collection of static law-invariant risk measures on the space of functions of the state of the base process. We apply this result to controlled Markov processes and we derive dynamic programming equations. Next, we consider partially observable processes and we derive the structure of stochastically conditionally time-consistent risk measures in this case. We prove that they can be represented by a sequence of law invariant risk measures on the space of function of the observable part of the state. We also prove corresponding dynamic programming equations.
For controlled discrete-time stochastic processes we introduce a new class of dynamic risk measures, which we call process-based. Their main features are that they measure risk of processes that are functions of the history of the base process. We introduce a new concept of conditional stochastic time consistency and we derive the structure of process-based risk measures enjoying this property. We show that they can be equivalently represented by a collection of static law-invariant risk measures on the space of functions of the state of the base process. We apply this result to controlled Markov processes and we derive dynamic programming equations. Next, we consider partially observable processes and we derive the structure of stochastically conditionally time-consistent risk measures in this case. We establish equivalence of two approaches to such problems: history-dependent, and based on belief states, and we prove that the dynamic risk measures can be represented by a sequence of law invariant risk measures on the space of function of the observable part of the state. We also prove corresponding dynamic programming equations.
[ { "type": "R", "before": "prove that they", "after": "establish equivalence of two approaches to such problems: history-dependent, and based on belief states, and we prove that the dynamic risk measures", "start_char_pos": 815, "end_char_pos": 830 } ]
[ 0, 129, 244, 397, 561, 658, 811, 956 ]
1411.2675
2
For controlled discrete-time stochastic processes we introduce a new class of dynamic risk measures, which we call process-based. Their main features are that they measure risk of processes that are functions of the history of the base process. We introduce a new concept of conditional stochastic time consistency and we derive the structure of process-based risk measures enjoying this property. We show that they can be equivalently represented by a collection of static law-invariant risk measures on the space of functions of the state of the base process. We apply this result to controlled Markov processes and we derive dynamic programming equations. Next, we consider partially observable processes and we derive the structure of stochastically conditionally time-consistent risk measures in this case. We establish equivalence of two approaches to such problems: history-dependent, and based on belief states, and we prove that the dynamic risk measures can be represented by a sequence of law invariant risk measures on the space of function of the observable part of the state. We also prove corresponding dynamic programming equations.
For controlled discrete-time stochastic processes we introduce a new class of dynamic risk measures, which we call process-based. Their main features are that they measure risk of processes that are functions of the history of a base process. We introduce a new concept of conditional stochastic time consistency and we derive the structure of process-based risk measures enjoying this property. We show that they can be equivalently represented by a collection of static law-invariant risk measures on the space of functions of the state of the base process. We apply this result to controlled Markov processes and we derive dynamic programming equations. Next, we consider partially observable Markov processes and we derive the structure of stochastically conditionally time-consistent risk measures in this case. We prove that the dynamic risk measures can be represented by a sequence of law invariant risk measures on the space of function of the observable part of the state. We also derive the corresponding dynamic programming equations.
[ { "type": "R", "before": "the", "after": "a", "start_char_pos": 227, "end_char_pos": 230 }, { "type": "A", "before": null, "after": "Markov", "start_char_pos": 698, "end_char_pos": 698 }, { "type": "D", "before": "establish equivalence of two approaches to such problems: history-dependent, and based on belief states, and we", "after": null, "start_char_pos": 816, "end_char_pos": 927 }, { "type": "R", "before": "prove", "after": "derive the", "start_char_pos": 1099, "end_char_pos": 1104 } ]
[ 0, 129, 244, 397, 561, 658, 812, 1090 ]
1411.2782
1
We formulate and characterize a model to describe dynamics of semiflexible polymers in the presence of activity due to motor proteins attached irreversibly to a substrate, and a transverse pulling force acting on one end of the filament. The stochastic binding-unbinding of the motor proteins and their ability to move along the polymer, generates active forces. As the pulling force reaches a threshold value, the polymer eventually desorbs from the substrate. We present a mean field theory that predicts increase in desorption force with polymer bending rigidity, active velocity and processivity of the motor proteins. Performing molecular dynamics simulations of the polymer in presence of a Langevin heat bath, and stochastic motor activity we obtain desorption phase diagrams that show good agreement with theory. With increase in pulling force, the polymer undergoes a first order phase transition from mostly adsorbed to fully desorbed state via a regime of coexistence where the steady state dynamics of the polymer switches between large fraction of adsorbed and desorbed lengths .
We formulate and characterize a model to describe the dynamics of semiflexible polymers in the presence of activity due to motor proteins attached irreversibly to a substrate, and a transverse pulling force acting on one end of the filament. The stochastic binding-unbinding of the motor proteins and their ability to move along the polymer, generates active forces. As the pulling force reaches a threshold value, the polymer eventually desorbs from the substrate. Performing molecular dynamics simulations of the polymer in presence of a Langevin heat bath, and stochastic motor activity , we obtain desorption phase diagrams . The correlation time for fluctuations in desorbed fraction increases as one approaches complete desorption, captured quantitatively by a power law spectral density. We present theoretical analysis of the phase diagram using mean field approximations in the weakly bending limit of the polymer and performing linear stability analysis. This predicts increase in the desorption force with the polymer bending rigidity, active velocity and processivity of the motor proteins to capture the main features of the simulation results .
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 50, "end_char_pos": 50 }, { "type": "D", "before": "We present a mean field theory that predicts increase in desorption force with polymer bending rigidity, active velocity and processivity of the motor proteins.", "after": null, "start_char_pos": 463, "end_char_pos": 623 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 748, "end_char_pos": 748 }, { "type": "R", "before": "that show good agreement with theory. With increase in pulling force, the polymer undergoes a first order phase transition from mostly adsorbed to fully desorbed state via a regime of coexistence where the steady state dynamics", "after": ". The correlation time for fluctuations in desorbed fraction increases as one approaches complete desorption, captured quantitatively by a power law spectral density. We present theoretical analysis of the phase diagram using mean field approximations in the weakly bending limit", "start_char_pos": 785, "end_char_pos": 1012 }, { "type": "R", "before": "switches between large fraction of adsorbed and desorbed lengths", "after": "and performing linear stability analysis. This predicts increase in the desorption force with the polymer bending rigidity, active velocity and processivity of the motor proteins to capture the main features of the simulation results", "start_char_pos": 1028, "end_char_pos": 1092 } ]
[ 0, 238, 363, 462, 623, 822 ]
1411.2835
1
In a unified framework we study equilibrium in the presence of an insider having information on the signal of the firm value, which is naturally connected to the fundamental price of the firm related asset. The fundamental value itself is announced at a future random (stopping) time. We consider the two casesin which this release time of information is known and not known, respectively, to the insider . Allowing for very general dynamics, we study the structure of the insider's optimal strategies in equilibrium and we discuss market efficiency. With respect to market efficiency , we show that in the case the insider knows the release time of information , the market is fully efficient. In the case the insider does not know this random time, we see that there is no full efficiency, but there is nevertheless an equilibrium where the sensitivity of prices is decreasing in time according with the probability that the announcement time is greater than the current time. In other words, the prices become more and more stable as the announcement approaches.
In a unified framework we study equilibrium in the presence of an insider having information on the signal of the firm value, which is naturally connected to the fundamental price of the firm related asset. The fundamental value itself is announced at a future random (stopping) time. We consider two cases. First when the release time of information is known to the insider and then when it is unknown also to her . Allowing for very general dynamics, we study the structure of the insider's optimal strategies in equilibrium and we discuss market efficiency. In particular , we show that in the case the insider knows the information release time , the market is fully efficient. In the case the insider does not know this random time, we see that there is an equilibrium with no full efficiency, but where the sensitivity of prices is decreasing in time according with the probability that the announcement time is greater than the current time. In other words, the prices become more and more stable as the announcement approaches.
[ { "type": "R", "before": "the two casesin which this", "after": "two cases. First when the", "start_char_pos": 297, "end_char_pos": 323 }, { "type": "D", "before": "and not known, respectively,", "after": null, "start_char_pos": 361, "end_char_pos": 389 }, { "type": "A", "before": null, "after": "and then when it is unknown also to her", "start_char_pos": 405, "end_char_pos": 405 }, { "type": "R", "before": "With respect to market efficiency", "after": "In particular", "start_char_pos": 552, "end_char_pos": 585 }, { "type": "R", "before": "release time of information", "after": "information release time", "start_char_pos": 635, "end_char_pos": 662 }, { "type": "A", "before": null, "after": "an equilibrium with", "start_char_pos": 773, "end_char_pos": 773 }, { "type": "D", "before": "there is nevertheless an equilibrium", "after": null, "start_char_pos": 798, "end_char_pos": 834 } ]
[ 0, 206, 284, 551, 695, 980 ]
1411.3383
1
{\alpha}-synuclein ({\alpha}-syn) is the intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of {\alpha}-syn is induced not only by increases in {\alpha}-syn concentration but also by macromolecular crowding. We focused on the relation between the intrinsic disorder of {\alpha}-syn and macromolecular crowding, and constructed a simplified model of {\alpha}-syn including crowding agents based on statistical mechanics. The main assumption was that {\alpha}-syn can be expressed as coarse-grained particles with internal states coupled with effective volume; and disordered states were modeled by larger particles with larger internal entropy than other states. It was found that the crowding effect is taken into account as the effective internal entropy; and the crowding effect reduces the effective internal entropy of disordered states. From a Monte Carlo simulation, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of {\alpha}-syn, and suggest that macromolecular crowding is the key to resolving the controversy.
{\alpha}-synuclein ({\alpha}-syn) is an intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of {\alpha}-syn is induced not only by increases in {\alpha}-syn concentration but also by macromolecular crowding. We focused on the relation between the intrinsic disorder of {\alpha}-syn and macromolecular crowding, and constructed a lattice gas model of {\alpha}-syn including crowding agents based on statistical mechanics. The main assumption was that {\alpha}-syn can be expressed as coarse-grained particles with internal states coupled with effective volume; and disordered states were modeled by larger particles with larger internal entropy than other states. It was found that the crowding effect is taken into account as the effective internal entropy; and the crowding effect reduces the effective internal entropy of disordered states. Based on Monte Carlo simulation, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of {\alpha}-syn, and suggest that macromolecular crowding is the key to resolving the controversy.
[ { "type": "R", "before": "the", "after": "an", "start_char_pos": 37, "end_char_pos": 40 }, { "type": "R", "before": "simplified", "after": "lattice gas", "start_char_pos": 475, "end_char_pos": 485 }, { "type": "R", "before": "From a", "after": "Based on", "start_char_pos": 988, "end_char_pos": 994 } ]
[ 0, 141, 216, 353, 565, 704, 807, 902, 987, 1077 ]
1411.3383
2
{\alpha}-synuclein ({\alpha}-syn) is an intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of {\alpha}-syn is induced not only by increases in {\alpha}-syn concentration but also by macromolecular crowding. We focused on the relation between the intrinsic disorder of {\alpha}-syn and macromolecular crowding, and constructed a lattice gas model of {\alpha}-syn including crowding agents based on statistical mechanics. The main assumption was that {\alpha}-syn can be expressed as coarse-grained particles with internal states coupled with effective volume; and disordered states were modeled by larger particles with larger internal entropy than other states. It was found that the crowding effect is taken into account as the effective internal entropy ; and the crowding effect reduces the effective internal entropyof disordered states . Based on Monte Carlo simulation, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of {\alpha}-syn, and suggest that macromolecular crowding is the key to resolving the controversy.
{\alpha}-synuclein ({\alpha}-syn) is an intrinsically disordered protein which is considered to be one of the causes of Parkinson's disease. This protein forms amyloid fibrils when in a highly concentrated solution. The fibril formation of {\alpha}-syn is induced not only by increases in {\alpha}-syn concentration but also by macromolecular crowding. In order to investigate the coupled effect of the intrinsic disorder of {\alpha}-syn and macromolecular crowding, we construct a lattice gas model of {\alpha}-syn in contact with a crowding agent reservoir based on statistical mechanics. The main assumption is that {\alpha}-syn can be expressed as coarse-grained particles with internal states coupled with effective volume; and disordered states are modeled by larger particles with larger internal entropy than other states. Thanks to the simplicity of the model, we can exactly calculate the number of conformations of crowding agents, and this enables us to prove that the original grand canonical ensemble with a crowding agent reservoir is mathematically equivalent to a canonical ensemble without crowding agents. In this expression, the effect of macromolecular crowding is absorbed in the internal entropy of disordered states; it is clearly shown that the crowding effect reduces the internal entropy . Based on Monte Carlo simulation, we provide scenarios of crowding-induced fibril formation. We also discuss the recent controversy over the existence of helically folded tetramers of {\alpha}-syn, and suggest that macromolecular crowding is the key to resolving the controversy.
[ { "type": "R", "before": "We focused on the relation between", "after": "In order to investigate the coupled effect of", "start_char_pos": 353, "end_char_pos": 387 }, { "type": "R", "before": "and constructed", "after": "we construct", "start_char_pos": 456, "end_char_pos": 471 }, { "type": "R", "before": "including crowding agents", "after": "in contact with a crowding agent reservoir", "start_char_pos": 508, "end_char_pos": 533 }, { "type": "R", "before": "was", "after": "is", "start_char_pos": 586, "end_char_pos": 589 }, { "type": "R", "before": "were", "after": "are", "start_char_pos": 727, "end_char_pos": 731 }, { "type": "R", "before": "It was found that the crowding effect is taken into account as the effective internal entropy ; and", "after": "Thanks to the simplicity of the model, we can exactly calculate the number of conformations of crowding agents, and this enables us to prove that the original grand canonical ensemble with a crowding agent reservoir is mathematically equivalent to a canonical ensemble without crowding agents. In this expression, the effect of macromolecular crowding is absorbed in the internal entropy of disordered states; it is clearly shown that", "start_char_pos": 808, "end_char_pos": 907 }, { "type": "R", "before": "effective internal entropyof disordered states", "after": "internal entropy", "start_char_pos": 940, "end_char_pos": 986 } ]
[ 0, 140, 215, 352, 565, 704, 807, 903, 1080 ]
1411.3491
1
DNA nanotubes are tubular structures composed of DNA crossover molecules. We present a bottom up approach for construction and characterization of these structures. Various possible topologies of nanotubes are constructed such as 6-helix, 8-helix and tri-tubes with different sequences and lengths. We have used fully atomistic molecular dynamics simulations to study the structure, stability and elasticity of these structures. Several nanosecond long MD simulations give the microscopic details about DNA nanotubes. Based on the structural analysis of simulation data, we show that 6-helix nanotubes are stable and maintain their tubular structure; while 8-helix nanotubes are flattened to stabilize themselves. We also are also comment on the sequence dependence and effect of overhangs. These structures are approximately four times more rigid having stretch modulus of ~4000 pN compared to the stretch modulus of 1000 pN of DNA double helix molecule of same length and sequence. The stretch moduli of these nanotubes are also three times larger than those of PX/JX crossover DNA molecules which have stretch modulus in the range of 1500-2000 pN. The calculated persistence length is in range of few microns which is close to the reported experimental results on certain class of DNA nanotubes.
DNA nanotubes are tubular structures composed of DNA crossover molecules. We present a bottom up approach for construction and characterization of these structures. Various possible topologies of nanotubes are constructed such as 6-helix, 8-helix and tri-tubes with different sequences and lengths. We have used fully atomistic molecular dynamics simulations to study the structure, stability and elasticity of these structures. Several nanosecond long MD simulations give the microscopic details about DNA nanotubes. Based on the structural analysis of simulation data, we show that 6-helix nanotubes are stable and maintain their tubular structure; while 8-helix nanotubes are flattened to stabilize themselves. We also comment on the sequence dependence and effect of overhangs. These structures are approximately four times more rigid having stretch modulus of ~4000 pN compared to the stretch modulus of 1000 pN of DNA double helix molecule of same length and sequence. The stretch moduli of these nanotubes are also three times larger than those of PX/JX crossover DNA molecules which have stretch modulus in the range of 1500-2000 pN. The calculated persistence length is in the range of few microns which is close to the reported experimental results on certain class of the DNA nanotubes.
[ { "type": "D", "before": "are also", "after": null, "start_char_pos": 722, "end_char_pos": 730 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1191, "end_char_pos": 1191 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1285, "end_char_pos": 1285 } ]
[ 0, 73, 164, 298, 428, 517, 650, 713, 790, 983, 1150 ]
1411.3947
1
We examine the possibility of incorporating information or views of market movements during the holding period of a portfolio, in the hedging of European options with respect to the underlying. Given a holding period interval that is bounded below, we explore whether it is possible to adjust the number of shares needed to effectively hedge our position to account for views on market dynamics from present until liquidation , to account for the time-dependence of the options' sensitivity to the underlying. We derive a preliminary analytical expression for the number of shares needed by adjusting the standard Black-Scholes-Merton \Delta quantity and present numerical results.
We examine the possibility of incorporating information or views of market movements during the holding period of a portfolio, in the hedging of European options with respect to the underlying. Given a holding period interval that is bounded below, we explore whether it is possible to adjust the number of shares needed to effectively hedge our position to account for views on market dynamics from present until the end of our interval , to account for the time-dependence of the options' sensitivity to the underlying. We derive an analytical expression for the number of shares needed by adjusting the standard Black-Scholes-Merton \Delta quantity , in the case of an arbitrary process for implied volatility, and we present numerical results.
[ { "type": "R", "before": "liquidation", "after": "the end of our interval", "start_char_pos": 414, "end_char_pos": 425 }, { "type": "R", "before": "a preliminary", "after": "an", "start_char_pos": 520, "end_char_pos": 533 }, { "type": "R", "before": "and", "after": ", in the case of an arbitrary process for implied volatility, and we", "start_char_pos": 651, "end_char_pos": 654 } ]
[ 0, 193, 509 ]
1411.3977
1
We present a HJM approach to the projection of multiple yield curves developed to capture the volatility content of historical term structures for risk management purposes. Since we observe the empirical data at daily frequency and only for a finite number of time to maturity buckets, we propose a modelling framework which is inherently discrete. In particular, we show how to approximate the HJM continuous time description of the multi-curve dynamics by a Vector Autoregressive process of order one. The resulting dynamics lends itself to a feasible estimation of the model volatility-correlation structure . Then, resorting to the Principal Component Analysis we further simplify the dynamics reducing the number of covariance components. Applying the constant volatility version of our model on a sample of curves from the Euro area, we demonstrate its forecasting ability through an out-of-sample test.
We present a HJM approach to the projection of multiple yield curves developed to capture the volatility content of historical term structures for risk management purposes. Since we observe the empirical data at daily frequency and only for a finite number of time-to-maturity buckets, we propose a modelling framework which is inherently discrete. In particular, we show how to approximate the HJM continuous time description of the multi-curve dynamics by a Vector Autoregressive process of order one. The resulting dynamics lends itself to a feasible estimation of the model volatility-correlation structure and market risk-premia . Then, resorting to the Principal Component Analysis we further simplify the dynamics reducing the number of covariance components. Applying the constant volatility version of our model on a sample of curves from the Euro area, we demonstrate its forecasting ability through an out-of-sample test.
[ { "type": "R", "before": "time to maturity", "after": "time-to-maturity", "start_char_pos": 260, "end_char_pos": 276 }, { "type": "A", "before": null, "after": "and market risk-premia", "start_char_pos": 611, "end_char_pos": 611 } ]
[ 0, 172, 348, 503, 613, 744 ]
1411.4067
1
The concept of a nested canalizing Boolean function has been studied over the last decade in the context of understanding the regulatory logic of molecular interaction networks, such as gene regulatory networks . Such networks are predominantly governed by nested canalizing functions . Derrida values are frequently used to analyze the robustness of a Boolean network to perturbations. This paper introduces closed formulas for the calculation of Derrida values of networks governed by Boolean nested canalizing functions, which previously required extensive simulations. Recently, the concept of nested canalizing functions has been generalized to include multistate functions, and a recursive formula has been derived for their number, as a function of the number of variables. This paper contains a detailed analysis of the class of nested canalizing functions over an arbitrary finite field. In addition, the concept of nested canalization is further generalized and closed formulas for the number of such generalized functions , as well as for the number of equivalence classes under permutation of variables , are derived. The latter is motivated by the fact that two nested canalizing functions that differ only by a permutation of the variables share many important properties .
This paper provides a collection of mathematical and computational tools for the study of robustness in nonlinear gene regulatory networks , represented by time- and state-discrete dynamical systems taking on multiple states. The focus is on networks governed by nested canalizing functions (NCFs), first introduced in the Boolean context by S. Kauffman. After giving a general definition of NCFs we analyze the class of such functions. We derive a formula for the normalized average c-sensitivities of multistate NCFs, which enables the calculation of the Derrida plot, a popular measure of network stability. We also provide a unique canonical parametrized polynomial form of NCFs. This form has several consequences. We can easily generate NCFs for varying parameter choices, and derive a closed form formula for the number of such functions in a given number of variables , as well as an asymptotic formula. Finally, we compute the number of equivalence classes of NCFs under permutation of variables . Together, the results of the paper represent a useful mathematical framework for the study of NCFs and their dynamic networks .
[ { "type": "R", "before": "The concept of a nested canalizing Boolean function has been studied over the last decade in the context of understanding the regulatory logic of molecular interaction networks, such as", "after": "This paper provides a collection of mathematical and computational tools for the study of robustness in nonlinear", "start_char_pos": 0, "end_char_pos": 185 }, { "type": "R", "before": ". Such networks are predominantly", "after": ", represented by time- and state-discrete dynamical systems taking on multiple states. The focus is on networks", "start_char_pos": 211, "end_char_pos": 244 }, { "type": "R", "before": ". Derrida values are frequently used to analyze the robustness of a Boolean network to perturbations. This paper introduces closed formulas for the", "after": "(NCFs), first introduced in the Boolean context by S. Kauffman. After giving a general definition of NCFs we analyze the class of such functions. We derive a formula for the normalized average c-sensitivities of multistate NCFs, which enables the", "start_char_pos": 285, "end_char_pos": 432 }, { "type": "R", "before": "Derrida values of networks governed by Boolean nested canalizing functions, which previously required extensive simulations. Recently, the concept of nested canalizing functions has been generalized to include multistate functions, and a recursive formula has been derived for their number, as a function of the number of variables. This paper contains a detailed analysis of the class of nested canalizing functions over an arbitrary finite field. In addition, the concept of nested canalization is further generalized and closed formulas", "after": "the Derrida plot, a popular measure of network stability. We also provide a unique canonical parametrized polynomial form of NCFs. This form has several consequences. We can easily generate NCFs for varying parameter choices, and derive a closed form formula", "start_char_pos": 448, "end_char_pos": 987 }, { "type": "R", "before": "generalized functions", "after": "functions in a given number of variables", "start_char_pos": 1011, "end_char_pos": 1032 }, { "type": "R", "before": "for", "after": "an asymptotic formula. Finally, we compute", "start_char_pos": 1046, "end_char_pos": 1049 }, { "type": "A", "before": null, "after": "of NCFs", "start_char_pos": 1084, "end_char_pos": 1084 }, { "type": "R", "before": ", are derived. The latter is motivated by the fact that two nested canalizing functions that differ only by a permutation of the variables share many important properties", "after": ". Together, the results of the paper represent a useful mathematical framework for the study of NCFs and their dynamic networks", "start_char_pos": 1116, "end_char_pos": 1286 } ]
[ 0, 212, 386, 572, 780, 896, 1130 ]
1411.4265
2
After the release of the final accounting standards for impairment in July 2014 by the IASB, banks will face the next significant methodological challenge after Basel 2. The presented work shares some first methodological thoughts and proposes ways how to approach underlying questions . It starts with a detailed discussion of the structural conservatism in the final standard. The exposure value as outlined in the IFRS 9 exposure draft (ED 2009) will be interpreted as an economically justified value under amortized cost accounting and provides the main methodological benchmark . Consequently, the ED 2009 can be used to quantify conservatism (ie hidden reserves) in the actual implementation of the final standard and to separate operational side-effects caused by the local implementation from actual credit risk impacts. The second part continues with a quantification of expected credit losses based on Impact of Risk instead of traditional cost of risk measures. An objective framework is suggested which allows for improved testing of forward looking credit risk estimates during credit cycles. This framework will prove useful to mitigate overly pro-cyclical provisioning and to reduce earnings volatility. Finally, an LGD monitoring and backtesting approach, applicable under regulatory requirements and accounting standards as well, is proposed. On basis of the NPL Backtest , part of the Impact of Risk framework, specific key risk indicators are introduced that allow for a detailed assessment of collections performance versus LGD in in NPL portfolio (bucket 3).
After the release of the final accounting standards for impairment in July 2014 by the IASB, banks will face the next significant methodological challenge after Basel 2. In this paper, first methodological thoughts are presented, and ways how to approach underlying questions are proposed . It starts with a detailed discussion of the structural conservatism in the final standard. The exposure value iACV(c) (idealized Amortized Cost Value), as originally introduced in the Exposure Draft 2009 (ED 2009) , will be interpreted as economic value under amortized cost accounting and provides the valuation benchmark under IFRS 9. Consequently, iACV(c) can be used to quantify conservatism (ie potential hidden reserves) in the actual implementation of the final standard and to separate operational side-effects caused by the local implementation from actual credit risk impacts. The second part continues with a quantification of expected credit losses based on Impact of Risk (c) instead of traditional cost of risk measures. An objective framework is suggested which allows for improved testing of forward looking credit risk estimates during credit cycles. This framework will prove useful to mitigate overly pro-cyclical provisioning and to reduce earnings volatility. Finally, an LGD monitoring and backtesting approach, applicable under regulatory requirements and accounting standards as well, is proposed. On basis of the NPL Dashboard , part of the Impact of Risk (c) framework, specific key risk indicators are introduced that allow for a detailed assessment of collections performance versus LGD in in NPL portfolio (bucket 3).
[ { "type": "R", "before": "The presented work shares some", "after": "In this paper,", "start_char_pos": 170, "end_char_pos": 200 }, { "type": "R", "before": "and proposes", "after": "are presented, and", "start_char_pos": 231, "end_char_pos": 243 }, { "type": "A", "before": null, "after": "are proposed", "start_char_pos": 286, "end_char_pos": 286 }, { "type": "R", "before": "as outlined in the IFRS 9 exposure draft", "after": "iACV(c) (idealized Amortized Cost Value), as originally introduced in the Exposure Draft 2009", "start_char_pos": 399, "end_char_pos": 439 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "R", "before": "an economically justified", "after": "economic", "start_char_pos": 474, "end_char_pos": 499 }, { "type": "R", "before": "main methodological benchmark . Consequently, the ED 2009", "after": "valuation benchmark under IFRS 9. Consequently, iACV(c)", "start_char_pos": 555, "end_char_pos": 612 }, { "type": "A", "before": null, "after": "potential", "start_char_pos": 654, "end_char_pos": 654 }, { "type": "A", "before": null, "after": "(c)", "start_char_pos": 930, "end_char_pos": 930 }, { "type": "R", "before": "Backtest", "after": "Dashboard", "start_char_pos": 1384, "end_char_pos": 1392 }, { "type": "A", "before": null, "after": "(c)", "start_char_pos": 1422, "end_char_pos": 1422 } ]
[ 0, 169, 288, 379, 831, 976, 1109, 1222, 1363 ]
1411.4438
1
This paper studies the connection between Dynkin games and optimal switching in continuous time and on a finite horizon. An auxiliary two-mode optimal switching problem is formulated which enables the derivation of the game's value under very mild assumptions. Under slightly stronger assumptions, the optimal switching formulation is used to prove the existence of a saddle point and a connection is made to the classical "Mokobodski's hypothesis". Results are illustrated by comparison to numerical solutions of three specific Dynkin games which have appeared in recent papers, including an example of a game option with payoff dependent on a jump-diffusion process .
This paper uses recent results on continuous-time finite-horizon optimal switching problems with negative switching costs to prove the existence of a saddle point in an optimal stopping (Dynkin) game. Sufficient conditions for the game's value to be continuous with respect to the time horizon are obtained using recent results on norm estimates for doubly reflected backward stochastic differential equations. This theory is then demonstrated numerically for the special cases of cancellable call and put options in a Black-Scholes market .
[ { "type": "R", "before": "studies the connection between Dynkin games and optimal switching in continuous time and on a finite horizon. An auxiliary two-mode optimal switching problem is formulated which enables the derivation of the game's value under very mild assumptions. Under slightly stronger assumptions, the optimal switching formulation is used", "after": "uses recent results on continuous-time finite-horizon optimal switching problems with negative switching costs", "start_char_pos": 11, "end_char_pos": 339 }, { "type": "R", "before": "and a connection is made to the classical \"Mokobodski's hypothesis\". Results are illustrated by comparison to numerical solutions of three specific Dynkin games which have appeared in recent papers, including an example of a game option with payoff dependent on a jump-diffusion process", "after": "in an optimal stopping (Dynkin) game. Sufficient conditions for the game's value to be continuous with respect to the time horizon are obtained using recent results on norm estimates for doubly reflected backward stochastic differential equations. This theory is then demonstrated numerically for the special cases of cancellable call and put options in a Black-Scholes market", "start_char_pos": 381, "end_char_pos": 667 } ]
[ 0, 120, 260, 449 ]
1411.4759
1
In this paper we develop an analytical framework, based on the Che approximation \mbox{%DIFAUXCMD che Least Recently Used (LRU) caches operating under the Shot Noise requests Model (SNM). The SNM was recently proposed in \mbox{%DIFAUXCMD nostroCCR to better capture the main characteristics of today Video on Demand ( Vod ) traffic. In this context, the Che approximation is derived as the application of a mean field principle to the cache eviction time. We investigate the validity of this approximation through an asymptotic analysis of the cache eviction time. Particularly , we provide a large deviation principle and a central limit theorem for the cache eviction time, as the cache size grows large. Furthermore, we obtain a non-asymptotic analytical upper bound on the error entailed by Che's approximation of the hit probability .
In this paper we analyze Least Recently Used (LRU) caches operating under the Shot Noise requests Model (SNM). The SNM was recently proposed to better capture the main characteristics of today Video on Demand ( VoD ) traffic. We investigate the validity of Che's approximation through an asymptotic analysis of the cache eviction time. In particular , we provide a large deviation principle , a law of large numbers and a central limit theorem for the cache eviction time, as the cache size grows large. Finally, we derive upper and lower bounds for the "hit" probability in tandem networks of caches under Che's approximation .
[ { "type": "R", "before": "develop an analytical framework, based on the Che approximation \\mbox{%DIFAUXCMD che", "after": "analyze", "start_char_pos": 17, "end_char_pos": 101 }, { "type": "D", "before": "in \\mbox{%DIFAUXCMD nostroCCR", "after": null, "start_char_pos": 218, "end_char_pos": 247 }, { "type": "R", "before": "Vod", "after": "VoD", "start_char_pos": 318, "end_char_pos": 321 }, { "type": "D", "before": "In this context, the Che approximation is derived as the application of a mean field principle to the cache eviction time.", "after": null, "start_char_pos": 333, "end_char_pos": 455 }, { "type": "R", "before": "this", "after": "Che's", "start_char_pos": 487, "end_char_pos": 491 }, { "type": "R", "before": "Particularly", "after": "In particular", "start_char_pos": 565, "end_char_pos": 577 }, { "type": "A", "before": null, "after": ", a law of large numbers", "start_char_pos": 619, "end_char_pos": 619 }, { "type": "R", "before": "Furthermore, we obtain a non-asymptotic analytical upper bound on the error entailed by", "after": "Finally, we derive upper and lower bounds for the \"hit\" probability in tandem networks of caches under", "start_char_pos": 708, "end_char_pos": 795 }, { "type": "D", "before": "of the hit probability", "after": null, "start_char_pos": 816, "end_char_pos": 838 } ]
[ 0, 187, 332, 455, 564, 707 ]
1411.6256
1
The purpose of this paper is to establish a robust representation theorem for conditional risk measures by using a module-based convex analysis, where risk measures are defined on a L^\infty-type module. We define and study a Fatou property for this kind of risk measures, which is a generalization of the already known Fatou property for static risk measures. In order to prove this robust representation theorem we provide a modular version of Krein-Smulian theorem .
The theory of theory of locally L^0-convex modules was introduced as the analytic basis for conditional L^0-convex risk measures. In this paper we first give some preliminaries of this theory and discuss about two kinds of countable concatenation properties. Second we extend to this framework some results from classical convex analysis, namely we provide randomized versions of Mazur lemma and Krein-\v{S .
[ { "type": "R", "before": "purpose of this paper is to establish a robust representation theorem for conditional risk measures by using a module-based convex analysis, where risk measures are defined on a L^\\infty-type module. We define and study a Fatou property for this kind of risk measures, which is a generalization of the already known Fatou property for static risk measures. In order to prove this robust representation theorem we provide a modular version of Krein-Smulian theorem", "after": "theory of theory of locally L^0-convex modules was introduced as the analytic basis for conditional L^0-convex risk measures. In this paper we first give some preliminaries of this theory and discuss about two kinds of countable concatenation properties. Second we extend to this framework some results from classical convex analysis, namely we provide randomized versions of Mazur lemma and Krein-\\v{S", "start_char_pos": 4, "end_char_pos": 467 } ]
[ 0, 203, 360 ]
1411.6256
2
The theory of theory of locally L^0-convex modules was introduced as the analytic basis for conditional L^0-convex risk measures. In this paper we first give some preliminaries of this theory and discuss about two kinds of countable concatenation properties. Second we extend to this framework some results from classical convex analysis , namely we provide randomized versions of Mazur lemma and Krein-\v{S .
We extend to the framework of locally L^0-convex modules some results from classical convex analysis . Namely, randomized versions of Mazur lemma and Krein-Smulian theorem under mild stability properties are provided .
[ { "type": "R", "before": "The theory of theory of", "after": "We extend to the framework of", "start_char_pos": 0, "end_char_pos": 23 }, { "type": "R", "before": "was introduced as the analytic basis for conditional L^0-convex risk measures. In this paper we first give some preliminaries of this theory and discuss about two kinds of countable concatenation properties. Second we extend to this framework some", "after": "some", "start_char_pos": 51, "end_char_pos": 298 }, { "type": "R", "before": ", namely we provide", "after": ". Namely,", "start_char_pos": 338, "end_char_pos": 357 }, { "type": "R", "before": "Krein-\\v{S", "after": "Krein-Smulian theorem under mild stability properties are provided", "start_char_pos": 397, "end_char_pos": 407 } ]
[ 0, 129, 258 ]
1411.6657
1
We consider the risk minimization problem , with capital at risk as the coherent measure, under the Black-Scholes setting. The problem is studied , when there exists additional correlation constraint between the desired portfolio and another financial index , and the closed form solution for the optimal portfolio is obtained . We also mention to variance reduction and getting better diversified portfolio as the applications of correlation condition in this paper .
We consider the problem of minimizing capital at risk in the Black-Scholes setting. The portfolio problem is studied given the possibility that a correlation constraint between the portfolio and a financial index is imposed. The optimal portfolio is obtained in closed form. The effects of the correlation constraint are explored; it turns out that this portfolio constraint leads to a more diversified portfolio .
[ { "type": "R", "before": "risk minimization problem , with", "after": "problem of minimizing", "start_char_pos": 16, "end_char_pos": 48 }, { "type": "R", "before": "as the coherent measure, under the", "after": "in the", "start_char_pos": 65, "end_char_pos": 99 }, { "type": "A", "before": null, "after": "portfolio", "start_char_pos": 127, "end_char_pos": 127 }, { "type": "R", "before": ", when there exists additional", "after": "given the possibility that a", "start_char_pos": 147, "end_char_pos": 177 }, { "type": "R", "before": "desired portfolio and another financial index , and the closed form solution for the", "after": "portfolio and a financial index is imposed. The", "start_char_pos": 213, "end_char_pos": 297 }, { "type": "R", "before": ". We also mention to variance reduction and getting better diversified portfolio as the applications of correlation condition in this paper", "after": "in closed form. The effects of the correlation constraint are explored; it turns out that this portfolio constraint leads to a more diversified portfolio", "start_char_pos": 328, "end_char_pos": 467 } ]
[ 0, 122, 329 ]
1411.6907
1
Given pervasive games that maintain a virtual spatiotemporal model of the physical world, game designers must contend with space and time in the virtual and physical . Previous works on pervasive games have partially contended with these representations , but an integrated conceptual model is lacking. Because they both make use of the Earth's geography, the problem domains of GIS and pervasive games overlap. The goal here is twofold: (1) help designers contend with the spatiotemporal representations and the analysis thereof; and (2) show that Peuquet's Triad Representational Framework from the domain of GIS is applicable to specifically the sub-domain of pervasive games that maintain a virtual model of physical space and time. By borrowing the Triad framework, space and time can be conceptualized in an integrated model as the WHAT, WHEN and WHERE, allowing for spatiotemporal analysis. The framework is evaluated and validated by applying it to the pervasive game called , Codename: Heroes .
Given pervasive games that maintain a virtual spatiotemporal model of the physical world, game designers must contend with space and time in the virtual and physical , but an integrated conceptual model is lacking. Because the problem domains of GIS and Pervasive Games overlap, Peuquet's Triad Representational Framework is exapted, from the domain of GIS , and applied to Pervasive Games. Using Dix et al.'s three types of space and Langran's notion of time, virtual time and space are then be mapped to the physical world and vice versa. The approach is evaluated using the pervasive game called Codename: Heroes , as case study .
[ { "type": "D", "before": ". Previous works on pervasive games have partially contended with these representations", "after": null, "start_char_pos": 166, "end_char_pos": 253 }, { "type": "R", "before": "they both make use of the Earth's geography, the", "after": "the", "start_char_pos": 311, "end_char_pos": 359 }, { "type": "R", "before": "pervasive games overlap. The goal here is twofold: (1) help designers contend with the spatiotemporal representations and the analysis thereof; and (2) show that", "after": "Pervasive Games overlap,", "start_char_pos": 387, "end_char_pos": 548 }, { "type": "A", "before": null, "after": "is exapted,", "start_char_pos": 592, "end_char_pos": 592 }, { "type": "R", "before": "is applicable to specifically the sub-domain of pervasive games that maintain a virtual model of physical space and time. By borrowing the Triad framework, space and time can be conceptualized in an integrated model as the WHAT, WHEN and WHERE, allowing for spatiotemporal analysis. The framework is evaluated and validated by applying it to", "after": ", and applied to Pervasive Games. Using Dix et al.'s three types of space and Langran's notion of time, virtual time and space are then be mapped to the physical world and vice versa. The approach is evaluated using", "start_char_pos": 616, "end_char_pos": 957 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 984, "end_char_pos": 985 }, { "type": "A", "before": null, "after": ", as case study", "start_char_pos": 1003, "end_char_pos": 1003 } ]
[ 0, 302, 411, 530, 737, 898 ]
1411.6938
1
We introduce a simple stochastic volatility model, which takes into account hitting times of the asset price, and study the optimal stopping problem corresponding to a put option whose time horizon (after the asset price hits a certain level) is exponentially distributed. We obtain explicit optimal stopping rules in various cases one of which is interestingly complex because of an unexpectedly disconnected continuation region. Finally, we discuss in detail how these stopping rules could be used for trading an American put when the trader expects a market drop in the near future.
We introduce a simple stochastic volatility model, whose novelty consists in taking into account hitting times of the asset price, and study the optimal stopping problem corresponding to a put option whose time horizon (after the asset price hits a certain level) is exponentially distributed. We obtain explicit optimal stopping rules in various cases one of which is interestingly complex because of an unexpected disconnected continuation region. Finally, we discuss in detail how these stopping rules could be used for trading an American put when the trader expects a market drop in the near future.
[ { "type": "R", "before": "which takes", "after": "whose novelty consists in taking", "start_char_pos": 51, "end_char_pos": 62 }, { "type": "R", "before": "unexpectedly", "after": "unexpected", "start_char_pos": 384, "end_char_pos": 396 } ]
[ 0, 272, 430 ]
1411.7653
1
We consider here the fractional version of the Heston model originally proposed by Comte, Coutin and Renault. Inspired by some recent ground-breaking work by Gatheral, Jaisson and Rosenbaum, who showed that fractional Brownian motion with short memory allows for a better calibration of the volatility surface (as opposed to the classical econometric approach of long memory of volatility) , we provide a characterisation of the short- and long-maturity asymptotics of the implied volatility smile. Our analysis reveals that the short-memory property precisely provides a jump-type behaviour of the smile for short maturities, thereby fixing the well-known standard inability of classical stochastic volatility models to fit the short-end of the volatility skew .
We consider the fractional Heston model originally proposed by Comte, Coutin and Renault. Inspired by recent ground-breaking work on rough volatility, which showed that models with volatility driven by fractional Brownian motion with short memory allows for better calibration of the volatility surface and more robust estimation of time series of historical volatility , we provide a characterisation of the short- and long-maturity asymptotics of the implied volatility smile. Our analysis reveals that the short-memory property precisely provides a jump-type behaviour of the smile for short maturities, thereby fixing the well-known standard inability of classical stochastic volatility models to fit the short-end of the volatility smile .
[ { "type": "R", "before": "here the fractional version of the", "after": "the fractional", "start_char_pos": 12, "end_char_pos": 46 }, { "type": "D", "before": "some", "after": null, "start_char_pos": 122, "end_char_pos": 126 }, { "type": "R", "before": "by Gatheral, Jaisson and Rosenbaum, who showed that", "after": "on rough volatility, which showed that models with volatility driven by", "start_char_pos": 155, "end_char_pos": 206 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 263, "end_char_pos": 264 }, { "type": "R", "before": "(as opposed to the classical econometric approach of long memory of volatility)", "after": "and more robust estimation of time series of historical volatility", "start_char_pos": 310, "end_char_pos": 389 }, { "type": "R", "before": "skew", "after": "smile", "start_char_pos": 757, "end_char_pos": 761 } ]
[ 0, 109, 498 ]
1411.7991
1
We introduce and study three classes of over-the-counter markets specified by systems of Ordinary Differential Equations (ODE's), in the spirit of Duffie-G\^{a rleanu-Pedersen , Over-the-Counter markets, Econometrica, 73 (2005) . The key innovation is allowing for multiple assets. We compute the steady states for these ODE's.
We introduce and study a class of over-the-counter market models specified by systems of Ordinary Differential Equations (ODE's), in the spirit of Duffie- G^a rleanu-Pedersen 6 . The key innovation is allowing for multiple assets. We show the existence and uniqueness of a steady state for these ODE's.
[ { "type": "R", "before": "three classes", "after": "a class", "start_char_pos": 23, "end_char_pos": 36 }, { "type": "R", "before": "markets", "after": "market models", "start_char_pos": 57, "end_char_pos": 64 }, { "type": "R", "before": "Duffie-G\\^{a", "after": "Duffie- G^a", "start_char_pos": 147, "end_char_pos": 159 }, { "type": "R", "before": ", Over-the-Counter markets, Econometrica, 73 (2005)", "after": "6", "start_char_pos": 176, "end_char_pos": 227 }, { "type": "R", "before": "compute the steady states", "after": "show the existence and uniqueness of a steady state", "start_char_pos": 285, "end_char_pos": 310 } ]
[ 0, 229, 281 ]
1412.0042
1
Asset prices contain information about the probability distribution of future states and the stochastic discounting of these states. Without additional assumptions, probabilities and stochastic discounting cannot be separately identified. To understand this identification challenge, we extract a positive martingale component from the stochastic discount factor process using Perron-Frobenius theory . When this martingale is degenerate, probabilities that govern investor beliefs are recovered from the prices of Arrow securities. When the martingale component is not trivial, using this same approach recovers a probability measure, but not the one that is used by investors. We refer to this outcome as "misspecified recovery." We show that the resulting misspecified probability measure absorbs long-term risk adjustments. Many structural models of asset prices have stochastic discount factors with martingale components. Also empirical evidence on asset prices suggests that the recovered measure differs from the actual probability distribution. Even though this probability measure may fail to capture investor beliefs, we conclude that it is valuable as a tool for characterizing long-term risk pricing .
Asset prices contain information about the probability distribution of future states and the stochastic discounting of these states. Without additional assumptions, probabilities and stochastic discounting cannot be separately identified. To understand this identification challenge, we extract a positive martingale component from the stochastic discount factor process using Perron-Frobenius Theory . When this martingale is degenerate, probabilities that govern investor beliefs are recovered from the prices of Arrow securities. When the martingale component is not trivial, using this same approach recovers a probability measure, but not the one that is used by investors. We refer to this outcome as "misspecified recovery." We show that the resulting misspecified probability measure absorbs long-term risk adjustments. Many structural models of asset prices have stochastic discount factors with martingale components. Also empirical evidence on asset prices suggests that the recovered measure differs from the actual probability distribution. While this probability measure is of substantive interest, interpreting it as the true probability distribution may bias our inference about investor aversion to risk as reflected in risk-return tradeoffs .
[ { "type": "R", "before": "theory", "after": "Theory", "start_char_pos": 394, "end_char_pos": 400 }, { "type": "R", "before": "Even though", "after": "While", "start_char_pos": 1054, "end_char_pos": 1065 }, { "type": "R", "before": "may fail to capture investor beliefs, we conclude that it is valuable as a tool for characterizing long-term risk pricing", "after": "is of substantive interest, interpreting it as the true probability distribution may bias our inference about investor aversion to risk as reflected in risk-return tradeoffs", "start_char_pos": 1091, "end_char_pos": 1212 } ]
[ 0, 132, 238, 402, 532, 678, 731, 827, 927, 1053 ]
1412.0042
2
Asset prices contain information about the probability distribution of future states and the stochastic discounting of these states . Without additional assumptions, probabilities and stochastic discounting cannot be separately identified. To understand this identification challenge , we extract a positive martingale component from the stochastic discount factor process using Perron-Frobenius Theory. When this martingale is degenerate, probabilities that govern investor beliefs are recovered from the prices of Arrow securities . When the martingale component is not trivial, using this same approach recovers a probability measure, but not the one that is used by investors. We refer to this outcome as "misspecified recovery." We show that the resulting misspecified probability measure absorbs long-term risk adjustments. Many structural models of asset prices have stochastic discount factors with martingale components. Also empirical evidence on asset prices suggests that the recovered measure differs from the actual probability distribution. While this probability measure is of substantive interest, interpreting it as the true probability distribution may bias our inference about investor aversion to risk as reflected in risk-return tradeoffs .
Asset prices contain information about the probability distribution of future states and the stochastic discounting of those states as used by investors. To better understand the challenge in distinguishing investors' beliefs from risk-adjusted discounting, we use Perron-Frobenius Theory to isolate a positive martingale component of the stochastic discount factor process . This component recovers a probability measure that absorbs long-term risk adjustments. When the martingale is not degenerate, surmising that this recovered probability captures investors' beliefs distorts inference about risk-return tradeoffs. Stochastic discount factors in many structural models of asset prices have empirically relevant martingale components .
[ { "type": "R", "before": "these states . Without additional assumptions, probabilities and stochastic discounting cannot be separately identified. To understand this identification challenge , we extract", "after": "those states as used by investors. To better understand the challenge in distinguishing investors' beliefs from risk-adjusted discounting, we use Perron-Frobenius Theory to isolate", "start_char_pos": 119, "end_char_pos": 296 }, { "type": "R", "before": "from", "after": "of", "start_char_pos": 329, "end_char_pos": 333 }, { "type": "D", "before": "using Perron-Frobenius Theory. When this martingale is degenerate, probabilities that govern investor beliefs are recovered from the prices of Arrow securities", "after": null, "start_char_pos": 373, "end_char_pos": 532 }, { "type": "A", "before": null, "after": "This component recovers a probability measure that absorbs long-term risk adjustments.", "start_char_pos": 535, "end_char_pos": 535 }, { "type": "R", "before": "component is not trivial, using this same approach recovers a probability measure, but not the one that is used by investors. We refer to this outcome as \"misspecified recovery.\" We show that the resulting misspecified probability measure absorbs long-term risk adjustments. Many", "after": "is not degenerate, surmising that this recovered probability captures investors' beliefs distorts inference about risk-return tradeoffs. Stochastic discount factors in many", "start_char_pos": 556, "end_char_pos": 835 }, { "type": "R", "before": "stochastic discount factors with martingale components. Also empirical evidence on asset prices suggests that the recovered measure differs from the actual probability distribution. While this probability measure is of substantive interest, interpreting it as the true probability distribution may bias our inference about investor aversion to risk as reflected in risk-return tradeoffs", "after": "empirically relevant martingale components", "start_char_pos": 875, "end_char_pos": 1261 } ]
[ 0, 133, 239, 403, 534, 681, 734, 830, 930, 1056 ]
1412.0709
1
The analysis of dynamical systems that attempts to model chemical reaction , gene-regulatory, population, and ecosystem networks all rely on models having interacting components. When the details of these interactions are unknown for biological systems of interest, one effective approach is to study the dynamical properties of an ensemble of models determined by evolutionary constraints that may apply to all such systems. One such constraint is that of dynamical robustness. Despite previous investigations, the relationship between dynamical robustness-an important functional characteristic of many biological systems-and network structure is poorly understood . Here we analyze the stability and robustness of a large class of dynamical systems and demonstrate that the most hierarchical network structures, those equivalent to the total ordering, are the most robust. In particular, we determine the probability distribution of robustness over system connectivity and show that robustness is maximized by maximizing the number of links between strongly connected components of the graph representing the underlying system connectivity. We demonstrate that this can be understood in terms of the fact that permutation of strongly connected components is a fundamental symmetry of dynamical robustness , which applies to networks of any number of components and is independent of the distribution from which the strengths of interconnection among components are sampled. The classification of dynamical robustness based upon a purely topological property provides a URLanizing principle that can be used in the context of experimental validation to select among models that break or preserve network hierarchy. This result contributes to an explanation for the observation of hierarchical modularity in biological networks at all scales .
The relationship between network topology and system dynamics has significant implications for unifying our understanding of the interplay among metabolic , gene-regulatory, and ecosystem network architecures . Here we analyze the stability and robustness of a large class of dynamics on such networks. We determine the probability distribution of robustness as a function of network topology and show that robustness is classified by the number of links between modules of the network. We also demonstrate that permutation of these modules is a fundamental symmetry of dynamical robustness . Analysis of these findings leads to the conclusion that the most robust systems have the most hierarchical structure. This relationship provides a means by which evolutionary selection for a purely dynamical phenomenon may shape network architectures across scales of the biological hierarchy .
[ { "type": "R", "before": "analysis of dynamical systems that attempts to model chemical reaction", "after": "relationship between network topology and system dynamics has significant implications for unifying our understanding of the interplay among metabolic", "start_char_pos": 4, "end_char_pos": 74 }, { "type": "R", "before": "population, and ecosystem networks all rely on models having interacting components. When the details of these interactions are unknown for biological systems of interest, one effective approach is to study the dynamical properties of an ensemble of models determined by evolutionary constraints that may apply to all such systems. One such constraint is that of dynamical robustness. Despite previous investigations, the relationship between dynamical robustness-an important functional characteristic of many biological systems-and network structure is poorly understood", "after": "and ecosystem network architecures", "start_char_pos": 94, "end_char_pos": 666 }, { "type": "R", "before": "dynamical systems and demonstrate that the most hierarchical network structures, those equivalent to the total ordering, are the most robust. In particular, we", "after": "dynamics on such networks. We", "start_char_pos": 734, "end_char_pos": 893 }, { "type": "R", "before": "over system connectivity", "after": "as a function of network topology", "start_char_pos": 947, "end_char_pos": 971 }, { "type": "R", "before": "maximized by maximizing", "after": "classified by", "start_char_pos": 1000, "end_char_pos": 1023 }, { "type": "R", "before": "strongly connected components of the graph representing the underlying system connectivity. We demonstrate that this can be understood in terms of the fact that permutation of strongly connected components", "after": "modules of the network. We also demonstrate that permutation of these modules", "start_char_pos": 1052, "end_char_pos": 1257 }, { "type": "R", "before": ", which applies to networks of any number of components and is independent of the distribution from which the strengths of interconnection among components are sampled. The classification of dynamical robustness based upon a purely topological property provides a URLanizing principle that can be used in the context of experimental validation to select among models that break or preserve network hierarchy. This result contributes to an explanation for the observation of hierarchical modularity in biological networks at all scales", "after": ". Analysis of these findings leads to the conclusion that the most robust systems have the most hierarchical structure. This relationship provides a means by which evolutionary selection for a purely dynamical phenomenon may shape network architectures across scales of the biological hierarchy", "start_char_pos": 1308, "end_char_pos": 1842 } ]
[ 0, 178, 425, 478, 668, 875, 1143, 1476, 1716 ]
1412.1325
1
In this work we study the price-hedge issue for general defaultable contracts characterized by the presence of a contingent CSA of switching type. This is a contingent risk mitigation mechanism that allow the counterparties of a defaultable contract to switch from zero to full/perfect collateralization and switch back whenever until maturity T paying some instantaneous switching costs , taking in account in the picture CVA, collateralization and the funding problem. We have been lead to the study of this theoretical pricing/hedging problem, by the economic significance of this type of mechanism which allows a better management of all the defaultable contract risks respect to the standard mitigation mechanisms . In particular, our approach through hedging strategy decomposition of the claim and its solution representation through system of nonlinear reflected BSDE (theorem 3.2.4) are the main contribution of the work.
In this work we study the price-hedge issue for general defaultable contracts characterized by the presence of a contingent CSA of switching type. This is a contingent risk mitigation mechanism that allow the counterparties of a defaultable contract to switch from zero to full/perfect collateralization and switch back whenever until maturity T paying some instantaneous switching costs , taking in account in the picture CVA, collateralization and the funding problem. We have been lead to the study of this theoretical pricing/hedging problem, by the economic significance of this type of mechanism which allows a greater flexibility in managing all the defaultable contract risks with respect to the "standard" non contingent mitigation mechanisms (as full or partial collateralization) . In particular, our approach through hedging strategy decomposition of the claim (proposition 2.2.5) and its price-hedge representation through system of nonlinear reflected BSDE (theorem 3.2.4) are the main contribution of the work.
[ { "type": "R", "before": "better management of", "after": "greater flexibility in managing", "start_char_pos": 617, "end_char_pos": 637 }, { "type": "A", "before": null, "after": "with", "start_char_pos": 673, "end_char_pos": 673 }, { "type": "R", "before": "standard mitigation mechanisms", "after": "\"standard\" non contingent mitigation mechanisms (as full or partial collateralization)", "start_char_pos": 689, "end_char_pos": 719 }, { "type": "R", "before": "and its solution", "after": "(proposition 2.2.5) and its price-hedge", "start_char_pos": 802, "end_char_pos": 818 } ]
[ 0, 146, 470, 721 ]
1412.1394
1
Persistent homology captures the evolution of topological features of a model as a parameter changes. The two standard summary statistics of persistent homology are the barcode and the persistence diagram. A third summary statistic, the persistence landscape, was recently introduced by Bubenik. It is a functional summary, so it is easy to calculate sample means and variances, and it is straightforward to construct various test statistics. Implementing a permutation test we detect conformational changes between closed and open forms of the maltose-binding protein, a large biomolecule consisting of 370 amino acid residues . Moreover, because our approach captures dynamical properties of the protein our results may help in identifying residues susceptible to ligand binding; we show that the majority of active site residues and allosteric pathway residues are located in the vicinity of the most persistent loop in the corresponding filtered Vietoris-Rips complex. This finding was not observed in the classical anisotropic network model.
Persistent homology captures the evolution of topological features of a model as a parameter changes. The most commonly used summary statistics of persistent homology are the barcode and the persistence diagram. Another summary statistic, the persistence landscape, was recently introduced by Bubenik. It is a functional summary, so it is easy to calculate sample means and variances, and it is straightforward to construct various test statistics. Implementing a permutation test we detect conformational changes between closed and open forms of the maltose-binding protein, a large biomolecule consisting of 370 amino acid residues . Furthermore, persistence landscapes can be applied to machine learning methods. A hyperplane from a support vector machine shows the clear separation between the closed and open proteins conformations . Moreover, because our approach captures dynamical properties of the protein our results may help in identifying residues susceptible to ligand binding; we show that the majority of active site residues and allosteric pathway residues are located in the vicinity of the most persistent loop in the corresponding filtered Vietoris-Rips complex. This finding was not observed in the classical anisotropic network model.
[ { "type": "R", "before": "two standard", "after": "most commonly used", "start_char_pos": 106, "end_char_pos": 118 }, { "type": "R", "before": "A third", "after": "Another", "start_char_pos": 206, "end_char_pos": 213 }, { "type": "A", "before": null, "after": ". Furthermore, persistence landscapes can be applied to machine learning methods. A hyperplane from a support vector machine shows the clear separation between the closed and open proteins conformations", "start_char_pos": 628, "end_char_pos": 628 } ]
[ 0, 101, 205, 295, 442, 630, 782, 973 ]
1412.2155
1
We propose the use of local search to investigate protein residue networks ( PRN) . In a local search , clustering is inversely related to average path length. The opposite holds for a global search such as the commonly used breadth first strategy (BFS). The inverse relationship better fits the notion that amino acids get closer to each other as a protein becomes more compact. We use a greedy local search algorithm (EDS) that is Euclidean distance based and allows backtracking. While there are preferable differences between BFS and EDS paths in terms of variation in path length , search cost and link usage, there are also similarities in terms of centrality and hierarchy . EDS identifies a set of short-cut edges for each PRN . Short-cut edges are enriched with short-range links, and the short-cut network (SCN) they form spans most of a PRN's nodes, is adjacent to most of a PRN's edges and is strongly transitive. The short-cuts influence average EDS path length by reducing the difference in length or stretch between path-pairs. A consequence of this role is short-cut sets are more volatile, i.e. they undergo significantly more additions and deletions from step to step in a molecular dynamics (MD ) simulation than non-short-cut edge sets. Despite their volatility, about 79\% of SCN edge deletions have replacements (a deleted short-cut is replaced when an added short-cut is found in the edge cut-set of the deleted short-cut), and about 88\% of added short-cuts are replacement edges. This high edge replacement rate helps with the growth of the largest connected component of a SCN . More work is needed to understand the structure and formation of SCNs, in particular to identify the conditions under which short-cuts get deleted and how their replacements are selected .
We examined protein residue networks ( PRNs) from a local search perspective to understand why PRNs are highly clustered when having short paths is important for protein functionality. We found that by adopting a local search perspective, this conflict between form and function is resolved as increased clustering actually helps to reduce path length in PRNs. Further, the paths found via our EDS local search algorithm are more congruent with the characteristics of intra-protein communication . EDS identifies a subset of PRN edges called short-cuts that are distinct, have high usage, impacts EDS path length, diversity and stretch, and are dominated by short-range contacts. The short-cuts form a network (SCN) that increases in size and transitivity as a protein folds. The structure of a SCN supports its function and formation, and the function of a SCN influences its formation. Several significant differences in terms of SCN structure, function and formation is found between successful and unsuccessful MD trajectories. We hypothesize that strong SCN transitivity is a hallmark of well-formed SCNs, and suggest the possibility of using SCN transitivity as a folding coordinate for proteins whose native state is not known a priori. By connecting the static and the dynamic aspects of PRNs, the protein folding process becomes a problem of graph formation with the purpose of forming suitable pathways within proteins .
[ { "type": "R", "before": "propose the use of local search to investigate", "after": "examined", "start_char_pos": 3, "end_char_pos": 49 }, { "type": "R", "before": "PRN) . In", "after": "PRNs) from", "start_char_pos": 77, "end_char_pos": 86 }, { "type": "R", "before": ", clustering is inversely related to average path length. The opposite holds for a global search such as the commonly used breadth first strategy (BFS). The inverse relationship better fits the notion that amino acids get closer to each other as a protein becomes more compact. We use a greedy local search algorithm (EDS) that is Euclidean distance based and allows backtracking. While there are preferable differences between BFS and EDS paths in terms of variation in path length ,", "after": "perspective to understand why PRNs are highly clustered when having short paths is important for protein functionality. We found that by adopting a local", "start_char_pos": 102, "end_char_pos": 586 }, { "type": "R", "before": "cost and link usage, there are also similarities in terms of centrality and hierarchy", "after": "perspective, this conflict between form and function is resolved as increased clustering actually helps to reduce path length in PRNs. Further, the paths found via our EDS local search algorithm are more congruent with the characteristics of intra-protein communication", "start_char_pos": 594, "end_char_pos": 679 }, { "type": "R", "before": "set of short-cut edges for each PRN . Short-cut edges are enriched with", "after": "subset of PRN edges called short-cuts that are distinct, have high usage, impacts EDS path length, diversity and stretch, and are dominated by", "start_char_pos": 699, "end_char_pos": 770 }, { "type": "R", "before": "links, and the short-cut", "after": "contacts. The short-cuts form a", "start_char_pos": 783, "end_char_pos": 807 }, { "type": "R", "before": "they form spans most of a PRN's nodes, is adjacent to most of a PRN's edges and is strongly transitive. The short-cuts influence average EDS path length by reducing the difference in length or stretch between path-pairs. A consequence of this role is short-cut sets are more volatile, i.e. they undergo significantly more additions and deletions from step to step in a molecular dynamics (MD ) simulation than non-short-cut edge sets. Despite their volatility, about 79\\% of SCN edge deletions have replacements (a deleted short-cut is replaced when an added short-cut is found in the edge cut-set of the deleted short-cut), and about 88\\% of added short-cuts are replacement edges. This high edge replacement rate helps with the growth of the largest connected component of a SCN . More work is needed to understand the structure and formation of SCNs, in particular to identify the conditions under which short-cuts get deleted and how their replacements are selected", "after": "that increases in size and transitivity as a protein folds. The structure of a SCN supports its function and formation, and the function of a SCN influences its formation. Several significant differences in terms of SCN structure, function and formation is found between successful and unsuccessful MD trajectories. We hypothesize that strong SCN transitivity is a hallmark of well-formed SCNs, and suggest the possibility of using SCN transitivity as a folding coordinate for proteins whose native state is not known a priori. 
By connecting the static and the dynamic aspects of PRNs, the protein folding process becomes a problem of graph formation with the purpose of forming suitable pathways within proteins", "start_char_pos": 822, "end_char_pos": 1791 } ]
[ 0, 159, 254, 379, 482, 925, 1042, 1256, 1504 ]
1412.2262
1
We determine the optimal strategies for purchasing term life insurance and for investing in a risky financial market in order to maximize the probability of reaching a bequest goal . We extend our previous work, Bayraktar et al. 2014a, Section 3.1, in two important ways: (1) we assume that the individual consumes from her investment account, and (2) we add a risky asset to the financial market. We learn that if the rate of consumption is largeenough, then the individual will purchase term life insurance at any level of wealth, a surprising result. We also determine when the individual optimally invests more in the risky asset than her current wealth, so-called leveraging .
We determine the optimal strategies for purchasing term life insurance and for investing in a risky financial market in order to maximize the probability of reaching a bequest goal while consuming from an investment account . We extend Bayraktar and Young (2015) by allowing the individual to purchase term life insurance to reach her bequest goal. The premium rate for life insurance, h, serves as a parameter to connect two seemingly unrelated problems. As the premium rate approaches 0, covering the bequest goal becomes costless, so the individual simply wants to avoid ruin that might result from her consumption. Thus, as h approaches 0, the problem in this paper becomes equivalent to minimizing the probability of lifetime ruin, which is solved in Young (2004). On the other hand, as the premium rate becomes arbitrarily large, the individual will not buy life insurance to reach her bequest goal. Thus, as h approaches infinity, the problem in this paper becomes equivalent to maximizing the probability of reaching the bequest goal when life insurance is not available in the market, which is solved in Bayraktar and Young (2015) .
[ { "type": "A", "before": null, "after": "while consuming from an investment account", "start_char_pos": 181, "end_char_pos": 181 }, { "type": "R", "before": "our previous work, Bayraktar et al. 2014a, Section 3.1, in two important ways: (1) we assume that the individual consumes from her investment account, and (2) we add a risky asset to", "after": "Bayraktar and Young (2015) by allowing the individual to purchase term life insurance to reach her bequest goal. The premium rate for life insurance, h, serves as a parameter to connect two seemingly unrelated problems. As the premium rate approaches 0, covering the bequest goal becomes costless, so the individual simply wants to avoid ruin that might result from her consumption. Thus, as h approaches 0, the problem in this paper becomes equivalent to minimizing the probability of lifetime ruin, which is solved in Young (2004). On the other hand, as the premium rate becomes arbitrarily large,", "start_char_pos": 194, "end_char_pos": 376 }, { "type": "D", "before": "financial market. We learn that if the rate of consumption is largeenough, then the", "after": null, "start_char_pos": 381, "end_char_pos": 464 }, { "type": "R", "before": "purchase term life insurance at any level of wealth, a surprising result. We also determine when the individual optimally invests more in the risky asset than her current wealth, so-called leveraging", "after": "not buy life insurance to reach her bequest goal. Thus, as h approaches infinity, the problem in this paper becomes equivalent to maximizing the probability of reaching the bequest goal when life insurance is not available in the market, which is solved in Bayraktar and Young (2015)", "start_char_pos": 481, "end_char_pos": 680 } ]
[ 0, 183, 398, 554 ]
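For readers who want to process these records programmatically, the short Python sketch below shows one possible way to replay an edit_actions list against a before_revision string. It is a minimal illustration under stated assumptions, not part of the dataset specification: the field names ("type", "before", "after", "start_char_pos", "end_char_pos") follow the records above, but the span convention (end offset treated as exclusive), the handling of null fields, and the helper name apply_edit_actions are assumptions introduced here.

def apply_edit_actions(before_text, edit_actions):
    """Replay a list of edit actions against a 'before_revision' string.

    Each action is a dict with keys "type" ("R" replace, "A" add, "D" delete),
    "before", "after", "start_char_pos" and "end_char_pos", mirroring the
    records above. Actions are applied right-to-left so that earlier character
    offsets remain valid while later spans are rewritten.
    """
    text = before_text
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        replacement = act["after"] or ""  # "D" actions carry after == null
        text = text[:start] + replacement + text[end:]
    return text

# Tiny illustrative call (hypothetical data, not taken from the records above):
# apply_edit_actions("an old sentence", [
#     {"type": "R", "before": "old", "after": "updated",
#      "start_char_pos": 3, "end_char_pos": 6},
# ])
# -> "an updated sentence"

Whether the reconstruction matches after_revision exactly depends on how whitespace around each span was recorded, so any exact-match check against the after_revision field should be treated as a sanity test rather than a guarantee.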