Dataset schema (per record):
  doc_id           string (2-10 chars)
  revision_depth   string (5 classes)
  before_revision  string (3-309k chars)
  after_revision   string (5-309k chars)
  edit_actions     list
  sents_char_pos   list
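Judging from the records that follow, each edit_actions entry encodes a span edit against before_revision: type "R" replaces the span [start_char_pos, end_char_pos) with the "after" text, "D" deletes it ("after" is null), and "A" inserts "after" at start_char_pos (start equals end); sents_char_pos appears to list sentence-boundary offsets into before_revision. The helper below is a minimal sketch under those inferred semantics, not part of any dataset tooling; it applies the actions right-to-left so earlier offsets stay valid, and recovers after_revision up to stray whitespace around deleted or inserted spans.

def apply_edit_actions(before: str, actions: list) -> str:
    """Reconstruct an approximation of after_revision from before_revision.

    Assumed semantics, inferred from the records (not documented here):
      "R" -- replace before[start_char_pos:end_char_pos] with action["after"]
      "D" -- delete that span (action["after"] is null)
      "A" -- insert action["after"] at start_char_pos (start == end)
    All offsets index the original before_revision string, so actions are
    applied in descending start order to keep earlier offsets valid.
    """
    text = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act.get("after") or ""  # null for deletions
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

# Example against the first record below (doc 1604.08224), where `record`
# is one parsed row of this dataset:
#   rebuilt = apply_edit_actions(record["before_revision"], record["edit_actions"])
#   assert rebuilt.split() == record["after_revision"].split()  # modulo spacing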
1604.08224
1
In this paper we study the utility maximization problem on the terminal wealth with proportional transaction costs and random endowment. Under the assumption of the existence of consistent price systems, which makes the duality approach possible, we consider the duality between the primal utility maximization problem and the dual one, which is set up on the domain of finitely additive measures. In particular, we prove duality results for utility functions supporting possibly negative values. Moreover, we construct the shadow market by the dual optimal process and exhibit the utility based pricing for the random endowment.
In this paper we study the problem of maximizing expected utility from the terminal wealth with proportional transaction costs and random endowment. In the context of the existence of consistent price systems, we consider the duality between the primal utility maximization problem and the dual one, which is set up on the domain of finitely additive measures. In particular, we prove duality results for utility functions supporting possibly negative values. Moreover, we construct the shadow market by the dual optimal process and consider the utility based pricing for random endowment.
[ { "type": "R", "before": "utility maximization problem on", "after": "problem of maximizing expected utility from", "start_char_pos": 27, "end_char_pos": 58 }, { "type": "R", "before": "Under the assumption", "after": "In the context", "start_char_pos": 137, "end_char_pos": 157 }, { "type": "D", "before": "which makes the duality approach possible,", "after": null, "start_char_pos": 204, "end_char_pos": 246 }, { "type": "R", "before": "exhibit", "after": "consider", "start_char_pos": 570, "end_char_pos": 577 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 608, "end_char_pos": 611 } ]
[ 0, 136, 397, 496 ]
1604.08278
2
Because of the potential link between -1 programmed ribosomal frameshifting (PRF) and response of a pseudoknot (PK) RNA to force, a number of single molecule pulling experiments have been performed on PKs to decipher the mechanism of PRF . Motivated in part by these experiments, we performed simulations using a coarse-grained model of RNA to describe the response of a PK over a range of mechanical forces (fs) and monovalent salt concentrations (Cs). The coarse-grained simulations quantitatively reproduce the multistep thermal melting observed in experiments, thus validating our model. The free energy changes obtained in simulations are in excellent agreement with experiments. By varying f and C, we calculated the phase diagram that shows a sequence of structural transitions, populating distinct intermediate states. As f and C are changed, the stem-loop tertiary interactions rupture first followed by unfolding of the 3^{\prime}-end hairpin (I\rightleftharpoonsF). Finally, the 5^{\prime}-end hairpin unravels producing a extended state (E\rightleftharpoonsI). A theoretical analysis of the phase boundaries shows that the critical force for rupture scales as \left(\log C_{m}\right)^{\alpha} with \alpha=1\,(0.5) for E\rightleftharpoonsI (I\rightleftharpoonsF) transition. This relation is used to obtain the preferential ion-RNA interaction coefficient, which can be quantitatively measured in single molecule experiments, as done previously for DNA hairpins. A by product of our work is the suggestion that the frameshift efficiency is likely determined by the stability of the 5^{\prime}-end hairpin that the ribosome first encounters during translation.
Because of the potential link between -1 programmed ribosomal frameshifting and response of a pseudoknot (PK) RNA to force, a number of single molecule pulling experiments have been performed on PKs to decipher the mechanism of programmed ribosomal frameshifting . Motivated in part by these experiments, we performed simulations using a coarse-grained model of RNA to describe the response of a PK over a range of mechanical forces (fs) and monovalent salt concentrations (Cs). The coarse-grained simulations quantitatively reproduce the multistep thermal melting observed in experiments, thus validating our model. The free energy changes obtained in simulations are in excellent agreement with experiments. By varying f and C, we calculated the phase diagram that shows a sequence of structural transitions, populating distinct intermediate states. As f and C are changed, the stem-loop tertiary interactions rupture first , followed by unfolding of the 3^{\prime}-end hairpin (I\rightleftharpoonsF). Finally, the 5^{\prime}-end hairpin unravels , producing an extended state (E\rightleftharpoonsI). A theoretical analysis of the phase boundaries shows that the critical force for rupture scales as \left(\log C_{m}\right)^{\alpha} with \alpha=1\,(0.5) for E\rightleftharpoonsI (I\rightleftharpoonsF) transition. This relation is used to obtain the preferential ion-RNA interaction coefficient, which can be quantitatively measured in single-molecule experiments, as done previously for DNA hairpins. A by-product of our work is the suggestion that the frameshift efficiency is likely determined by the stability of the 5^{\prime}-end hairpin that the ribosome first encounters during translation.
[ { "type": "D", "before": "(PRF)", "after": null, "start_char_pos": 76, "end_char_pos": 81 }, { "type": "R", "before": "PRF", "after": "programmed ribosomal frameshifting", "start_char_pos": 234, "end_char_pos": 237 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 901, "end_char_pos": 901 }, { "type": "R", "before": "producing a", "after": ", producing an", "start_char_pos": 1023, "end_char_pos": 1034 }, { "type": "R", "before": "single molecule", "after": "single-molecule", "start_char_pos": 1409, "end_char_pos": 1424 }, { "type": "R", "before": "by product", "after": "by-product", "start_char_pos": 1477, "end_char_pos": 1487 } ]
[ 0, 453, 591, 684, 826, 977, 1073, 1286, 1474 ]
1604.08354
1
Specific protein-protein interactions are crucial in the cell, both to ensure the formation and stability of multi-protein complexes, and to enable signal transduction in various pathways. Functional interactions between proteins result in coevolution between the interaction partners . Hence, the sequences of interacting partners are correlated. Here we exploit these correlations to accurately identify which proteins are specific interaction partners from sequence data alone. Our general approach, which employs a pairwise maximum entropy model to infer direct couplings between residues, has been successfully used to predict the three-dimensional structures of proteins from sequences. Building on this approach , we introduce an iterative algorithm to predict specific interaction partners from among the members of two protein families . We assess the algorithm's performance on histidine kinases and response regulators from bacterial two-component signaling systems. The algorithm proves successful without any a priori knowledge of interaction partners, yielding a striking 0.93 true positive fraction on our complete dataset , and we uncover the origin of this surprising success. Finally, we discuss how our method could be used to predict novel protein-protein interactions .
Specific protein-protein interactions are crucial in the cell, both to ensure the formation and stability of multi-protein complexes, and to enable signal transduction in various pathways. Functional interactions between proteins result in coevolution between the interaction partners , causing their sequences to be correlated. Here we exploit these correlations to accurately identify which proteins are specific interaction partners from sequence data alone. Our general approach, which employs a pairwise maximum entropy model to infer couplings between residues, has been successfully used to predict the three-dimensional structures of proteins from sequences. Thus inspired , we introduce an iterative algorithm to predict specific interaction partners from two protein families whose members are known to interact. We first assess the algorithm's performance on histidine kinases and response regulators from bacterial two-component signaling systems. We obtain a striking 0.93 true positive fraction on our complete dataset without any a priori knowledge of interaction partners , and we uncover the origin of this success. We then apply the algorithm to proteins from ATP-binding cassette (ABC) transporter complexes, and obtain accurate predictions in these systems as well. Finally, we present two metrics that accurately distinguish interacting protein families from non-interacting ones, using only sequence data .
[ { "type": "R", "before": ". Hence, the sequences of interacting partners are", "after": ", causing their sequences to be", "start_char_pos": 285, "end_char_pos": 335 }, { "type": "D", "before": "direct", "after": null, "start_char_pos": 559, "end_char_pos": 565 }, { "type": "R", "before": "Building on this approach", "after": "Thus inspired", "start_char_pos": 693, "end_char_pos": 718 }, { "type": "D", "before": "among the members of", "after": null, "start_char_pos": 803, "end_char_pos": 823 }, { "type": "R", "before": ". We", "after": "whose members are known to interact. We first", "start_char_pos": 845, "end_char_pos": 849 }, { "type": "R", "before": "The algorithm proves successful without any a priori knowledge of interaction partners, yielding a", "after": "We obtain a", "start_char_pos": 978, "end_char_pos": 1076 }, { "type": "A", "before": null, "after": "without any a priori knowledge of interaction partners", "start_char_pos": 1138, "end_char_pos": 1138 }, { "type": "R", "before": "surprising success.", "after": "success. We then apply the algorithm to proteins from ATP-binding cassette (ABC) transporter complexes, and obtain accurate predictions in these systems as well.", "start_char_pos": 1175, "end_char_pos": 1194 }, { "type": "R", "before": "discuss how our method could be used to predict novel protein-protein interactions", "after": "present two metrics that accurately distinguish interacting protein families from non-interacting ones, using only sequence data", "start_char_pos": 1207, "end_char_pos": 1289 } ]
[ 0, 188, 286, 347, 480, 692, 846, 977, 1194 ]
1605.00080
1
Generally accepted depreciation methods do not factor in the Time Value of Money, despite the concept being a core principle of financial asset valuation . By applying the concept to depreciation, Depreciable Asset Value Models can be formulated , that allow depreciation to be calculated in a manner consistent with financial theory. While the Basic Depreciable Asset Value Model formulated withinhas its limitations, more complex models , which factor in a greater number of variables, can be formulated using its logic .
Generally accepted depreciation methods do not compute the intrinsic value of an asset, as they do not factor for the Time Value of Money, a key principle within financial theory. This is disadvantageous, as knowing the intrinsic value of an asset can assist with making effective purchase and sale decisions . By applying the Time Value of Money principle to deprecation and book valuation, methods can be formulated to approximate the intrinsic valuation of a depreciable asset, which improves the capacity for buyers and sellers of assets to make rational decisions. A deprecation method is formulated within, which aims to better match book value with intrinsic value. While this method makes many assumptions and thus has limitations, more complex formulas , which factor for a greater number of variables, can be created using a similar approach, to produce better approximations for intrinsic value .
[ { "type": "R", "before": "factor in", "after": "compute the intrinsic value of an asset, as they do not factor for", "start_char_pos": 47, "end_char_pos": 56 }, { "type": "R", "before": "despite the concept being a core principle of financial asset valuation", "after": "a key principle within financial theory. This is disadvantageous, as knowing the intrinsic value of an asset can assist with making effective purchase and sale decisions", "start_char_pos": 82, "end_char_pos": 153 }, { "type": "R", "before": "concept to depreciation, Depreciable Asset Value Models", "after": "Time Value of Money principle to deprecation and book valuation, methods", "start_char_pos": 172, "end_char_pos": 227 }, { "type": "R", "before": ", that allow depreciation to be calculated in a manner consistent with financial theory. While the Basic Depreciable Asset Value Model formulated withinhas its", "after": "to approximate the intrinsic valuation of a depreciable asset, which improves the capacity for buyers and sellers of assets to make rational decisions. A deprecation method is formulated within, which aims to better match book value with intrinsic value. While this method makes many assumptions and thus has", "start_char_pos": 246, "end_char_pos": 405 }, { "type": "R", "before": "models", "after": "formulas", "start_char_pos": 432, "end_char_pos": 438 }, { "type": "R", "before": "in", "after": "for", "start_char_pos": 454, "end_char_pos": 456 }, { "type": "R", "before": "formulated using its logic", "after": "created using a similar approach, to produce better approximations for intrinsic value", "start_char_pos": 495, "end_char_pos": 521 } ]
[ 0, 155, 334 ]
1605.00230
1
We compare the predictive ability of several volatility models for a long series of weekly log-returns of the Dow Jones Industrial Average Index from 1902 to 2016. Our focus is particularly on predicting one and multi-step ahead conditional and aggregated conditional densities. Our set of competing models includes: Well-known GARCH specifications, Markov switching GARCH, sempiparametric GARCH, Generalised Autoregressive Score (GAS), the plain stochastic volatility (SV) as well as its more flexible extensions such as SV with leverage, in-mean effects and Student-t distributed errors . We find that : (i) SV models generally outperform the GARCH specifications, (ii): The SV model with leverage effect provides very strong out-of-sample performance in terms of one and multi-steps ahead density prediction, (iii) Differences in terms of Value-at-Risk (VaR) predictions accuracy are less evident. Thus, our results have an important implication: the best performing model depends on the evaluation criterion
The leverage effect refers to the well-established relationship between returns and volatility. When returns fall, volatility increases. We examine the role of the leverage effect with regards to generating density forecasts of equity returns using well-known observation and parameter-driven volatility models. These models differ in their assumptions regarding: The parametric specification, the evolution of the conditional volatility process and how the leverage effect is accounted for. The ability of a model to generate accurate density forecasts when the leverage effect is incorporated or not as well as a comparison between different model-types is carried out using a large number of financial time-series . We find that , models with the leverage effect generally generate more accurate density forecasts compared to their no-leverage counterparts. Moreover, we also find that our choice with regards to how to model the leverage effect and the conditional log-volatility process is important in generating accurate density forecasts
[ { "type": "R", "before": "We compare the predictive ability of several volatility models for a long series of weekly log-returns of the Dow Jones Industrial Average Index from 1902 to 2016. Our focus is particularly on predicting one and multi-step ahead conditional and aggregated conditional densities. Our set of competing models includes: Well-known GARCH specifications, Markov switching GARCH, sempiparametric GARCH, Generalised Autoregressive Score (GAS), the plain stochastic volatility (SV)", "after": "The leverage effect refers to the well-established relationship between returns and volatility. When returns fall, volatility increases. We examine the role of the leverage effect with regards to generating density forecasts of equity returns using well-known observation and parameter-driven volatility models. These models differ in their assumptions regarding: The parametric specification, the evolution of the conditional volatility process and how the leverage effect is accounted for. The ability of a model to generate accurate density forecasts when the leverage effect is incorporated or not", "start_char_pos": 0, "end_char_pos": 473 }, { "type": "R", "before": "its more flexible extensions such as SV with leverage, in-mean effects and Student-t distributed errors", "after": "a comparison between different model-types is carried out using a large number of financial time-series", "start_char_pos": 485, "end_char_pos": 588 }, { "type": "R", "before": ": (i) SV models generally outperform the GARCH specifications, (ii): The SV model with leverage effect provides very strong out-of-sample performance in terms of one and multi-steps ahead density prediction, (iii) Differences in terms of Value-at-Risk (VaR) predictions accuracy are less evident. Thus, our results have an important implication: the best performing model depends on the evaluation criterion", "after": ", models with the leverage effect generally generate more accurate density forecasts compared to their no-leverage counterparts. Moreover, we also find that our choice with regards to how to model the leverage effect and the conditional log-volatility process is important in generating accurate density forecasts", "start_char_pos": 604, "end_char_pos": 1011 } ]
[ 0, 163, 278, 590, 900 ]
1605.00748
1
The macromolecules that encode and translate information in living systems exhibit distinctive structural asymmetries, including homochirality or mirror image asymmetry and 3'-5' directionality, that are invariant across all life forms. The evolutionary dynamics that led to these broken symmetries remain unknown. Using a computational model of hypothetical self-replicating autocatalytic heteropolymers, we identify a fundamental symmetry-breaking mechanism that significantly increases the rate of replication of asymmetric heteropolymers, compared to their symmetric counterparts . This broken-symmetry property, called asymmetric cooperativity, arises when the catalytic influence of inter-strand bonds on their left and right neighbors is unequal. We provide experimental evidence suggestive of its presence in DNA. Asymmetric cooperativity is used to explain, apart from the broken symmetries mentioned above, a number of other properties of DNA that includes four nucleotide alphabet, three nucleotide codons, helicity, anti-parallel double-strand orientation, heteromolecular base-pairing, asymmetric base compositions, and palindromic instability .
The macromolecules that encode and translate information in living systems , DNA and RNA, exhibit distinctive structural asymmetries, including homochirality or mirror image asymmetry and 3'-5' directionality, that are invariant across all life forms. The evolutionary advantages of these broken symmetries remain unknown. Here we construct a simple model of hypothetical self-replicating polymers to show that asymmetric autocatalytic polymers are more successful in self-replication compared to their symmetric counterparts in the Darwinian competition for space and common substrates . This broken-symmetry property, called asymmetric cooperativity, arises when the catalytic influence of inter-strand bonds on their left and right neighbors is unequal. Asymmetric cooperativity also leads to simple evolution-based explanations for a number of other properties of DNA that include four nucleotide alphabet, three nucleotide codons, circular genomes, helicity, anti-parallel double-strand orientation, heteromolecular base-pairing, asymmetric base compositions, and palindromic instability , apart from the structural asymmetries mentioned above. Our model results and explanations are consistent with multiple lines of experimental evidence, which include evidence for the presence of asymmetric cooperativity in DNA .
[ { "type": "A", "before": null, "after": ", DNA and RNA,", "start_char_pos": 75, "end_char_pos": 75 }, { "type": "R", "before": "dynamics that led to", "after": "advantages of", "start_char_pos": 255, "end_char_pos": 275 }, { "type": "R", "before": "Using a computational", "after": "Here we construct a simple", "start_char_pos": 316, "end_char_pos": 337 }, { "type": "R", "before": "autocatalytic heteropolymers, we identify a fundamental symmetry-breaking mechanism that significantly increases the rate of replication of asymmetric heteropolymers,", "after": "polymers to show that asymmetric autocatalytic polymers are more successful in self-replication", "start_char_pos": 377, "end_char_pos": 543 }, { "type": "A", "before": null, "after": "in the Darwinian competition for space and common substrates", "start_char_pos": 585, "end_char_pos": 585 }, { "type": "R", "before": "We provide experimental evidence suggestive of its presence in DNA. Asymmetric cooperativity is used to explain, apart from the broken symmetries mentioned above,", "after": "Asymmetric cooperativity also leads to simple evolution-based explanations for", "start_char_pos": 756, "end_char_pos": 918 }, { "type": "R", "before": "includes", "after": "include", "start_char_pos": 960, "end_char_pos": 968 }, { "type": "A", "before": null, "after": "circular genomes,", "start_char_pos": 1020, "end_char_pos": 1020 }, { "type": "A", "before": null, "after": ", apart from the structural asymmetries mentioned above. Our model results and explanations are consistent with multiple lines of experimental evidence, which include evidence for the presence of asymmetric cooperativity in DNA", "start_char_pos": 1160, "end_char_pos": 1160 } ]
[ 0, 237, 315, 587, 755 ]
1605.00748
2
The macromolecules that encode and translate information in living systems, DNA and RNA, exhibit distinctive structural asymmetries, including homochirality or mirror image asymmetry and 3' -5 ' directionality, that are invariant across all life forms. The evolutionary advantages of these broken symmetries remain unknown. Here we construct a simple model of hypothetical self-replicating polymers to show that asymmetric autocatalytic polymers are more successful in self-replication compared to their symmetric counterparts in the Darwinian competition for space and common substrates. This broken-symmetry property, called asymmetric cooperativity, arises when the catalytic influence of inter-strand bonds on their left and right neighbors is unequal. Asymmetric cooperativity also leads to simple evolution-based explanations for a number of other properties of DNA that include four nucleotide alphabet, three nucleotide codons, circular genomes, helicity, anti-parallel double-strand orientation, heteromolecular base-pairing, asymmetric base compositions, and palindromic instability, apart from the structural asymmetries mentioned above. Our model results and explanations are consistent with multiple lines of experimental evidence, which include evidence for the presence of asymmetric cooperativity in DNA.
The macromolecules that encode and translate information in living systems, DNA and RNA, exhibit distinctive structural asymmetries, including homochirality or mirror image asymmetry and 3' - 5 ' directionality, that are invariant across all life forms. The evolutionary advantages of these broken symmetries remain unknown. Here we utilize a very simple model of hypothetical self-replicating polymers to show that asymmetric autocatalytic polymers are more successful in self-replication compared to their symmetric counterparts in the Darwinian competition for space and common substrates. This broken-symmetry property, called asymmetric cooperativity, arises with the maximization of a replication potential, where the catalytic influence of inter-strand bonds on their left and right neighbors is unequal. Asymmetric cooperativity also leads to tentative, qualitative and simple evolution-based explanations for a number of other properties of DNA that include four nucleotide alphabet, three nucleotide codons, circular genomes, helicity, anti-parallel double-strand orientation, heteromolecular base-pairing, asymmetric base compositions, and palindromic instability, apart from the structural asymmetries mentioned above. Our model results and tentative explanations are consistent with multiple lines of experimental evidence, which include evidence for the presence of asymmetric cooperativity in DNA.
[ { "type": "R", "before": "-5", "after": "- 5", "start_char_pos": 190, "end_char_pos": 192 }, { "type": "R", "before": "construct a", "after": "utilize a very", "start_char_pos": 332, "end_char_pos": 343 }, { "type": "R", "before": "when the", "after": "with the maximization of a replication potential, where the", "start_char_pos": 660, "end_char_pos": 668 }, { "type": "A", "before": null, "after": "tentative, qualitative and", "start_char_pos": 796, "end_char_pos": 796 }, { "type": "A", "before": null, "after": "tentative", "start_char_pos": 1172, "end_char_pos": 1172 } ]
[ 0, 252, 323, 588, 756, 1149 ]
1605.01150
1
A method is proposed to identify target states that optimize a metastability index amongst a set of trial states and use these target states as milestones to build Markov State Models . If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM in the sense that the transitions between the target milestones is indeed approximately Markovian. The method is simple to implement and use, it does not require that the dynamics on the trial milestones be Markovian, and it also offers the possibility to partition the system's state-space by assigning every trial milestone to the target milestones it is most likely to visit next and to identify transition state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it shown to correctly identify the known metastable states in the dihedral angle space of the molecule without a priori information about these states. It is also applied to analyze the folding landscape of the Beta3s min-protein , where it is shown to identify the folded basin as a connecting hub between a high entropy helix-rich region and a beta-rich kinetic trap region .
A method is proposed to identify target states that optimize a metastability index amongst a set of trial states and use these target states as milestones (or core sets) to build Markov State Models (MSMs) . If the optimized metastability index is small, this automatically guarantees the accuracy of the MSM , in the sense that the transitions between the target milestones is indeed approximately Markovian. The method is simple to implement and use, it does not require that the dynamics on the trial milestones be Markovian, and it also offers the possibility to partition the system's state-space by assigning every trial milestone to the target milestones it is most likely to visit next and to identify transition state regions. Here the method is tested on the Gly-Ala-Gly peptide, where it shown to correctly identify the expected metastable states in the dihedral angle space of the molecule without a~priori information about these states. It is also applied to analyze the folding landscape of the Beta3s mini-protein , where it is shown to identify the folded basin as a connecting hub between an helix-rich region , which is entropically stabilized, and a beta-rich region, which is energetically stabilized and acts as a kinetic trap .
[ { "type": "A", "before": null, "after": "(or core sets)", "start_char_pos": 155, "end_char_pos": 155 }, { "type": "A", "before": null, "after": "(MSMs)", "start_char_pos": 185, "end_char_pos": 185 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 289, "end_char_pos": 289 }, { "type": "R", "before": "known", "after": "expected", "start_char_pos": 810, "end_char_pos": 815 }, { "type": "R", "before": "a priori", "after": "a~priori", "start_char_pos": 886, "end_char_pos": 894 }, { "type": "R", "before": "min-protein", "after": "mini-protein", "start_char_pos": 993, "end_char_pos": 1004 }, { "type": "R", "before": "a high entropy", "after": "an", "start_char_pos": 1082, "end_char_pos": 1096 }, { "type": "A", "before": null, "after": ", which is entropically stabilized,", "start_char_pos": 1115, "end_char_pos": 1115 }, { "type": "R", "before": "kinetic trap region", "after": "region, which is energetically stabilized and acts as a kinetic trap", "start_char_pos": 1132, "end_char_pos": 1151 } ]
[ 0, 187, 388, 714, 926 ]
1605.01219
1
Transcription factors (TFs) exert their regulatory action by binding to DNA with specific sequence preferences. However, different TFs can partially share their binding sequences . This "redundancy" of binding defines a way URLanizing TFs in "motif families" that goes beyond the usual classification based on protein structural similarities. Since the TF binding preferences ultimately define the target genes, the motif URLanization entails information about the structure of transcriptional regulation as it has been shaped by evolution. Focusing on the human lineage , we show that a one-parameter evolutionary model of the Birth-Death-Innovation type can explain the empirical repartition of TFs in motif families, thus identifying the relevant evolutionary forces at its origin . More importantly , the model allows to pinpoint few deviations in human from the neutral scenario it assumes: three over-expanded families corresponding to HOX and FOX type genes , a set of "singleton" TFs for which duplication seems to be selected against, and an higher-than-average rate of diversification of the binding preferences of TFs with a Zinc Finger DNA binding domain. Finally, a comparison of the TF motif URLanization in different eukaryotic species suggests an increase of redundancy of binding URLanism complexity.
Transcription factors (TFs) exert their regulatory action by binding to DNA with specific sequence preferences. However, different TFs can partially share their binding sequences due to their common evolutionary origin. This `redundancy' of binding defines a way URLanizing TFs in `motif families' by grouping TFs with similar binding preferences. Since these ultimately define the TF target genes, the motif URLanization entails information about the structure of transcriptional regulation as it has been shaped by evolution. Focusing on the human TF repertoire , we show that a one-parameter evolutionary model of the Birth-Death-Innovation type can explain the TF empirical ripartition in motif families, and allows to highlight the relevant evolutionary forces at the origin of URLanization. Moreover , the model allows to pinpoint few deviations from the neutral scenario it assumes: three over-expanded families (including HOX and FOX genes) , a set of `singleton' TFs for which duplication seems to be selected against, and a higher-than-average rate of diversification of the binding preferences of TFs with a Zinc Finger DNA binding domain. Finally, a comparison of the TF motif URLanization in different eukaryotic species suggests an increase of redundancy of binding URLanism complexity.
[ { "type": "R", "before": ". This \"redundancy\"", "after": "due to their common evolutionary origin. This `redundancy'", "start_char_pos": 179, "end_char_pos": 198 }, { "type": "R", "before": "\"motif families\" that goes beyond the usual classification based on protein structural similarities. Since the TF binding preferences", "after": "`motif families' by grouping TFs with similar binding preferences. Since these", "start_char_pos": 242, "end_char_pos": 375 }, { "type": "A", "before": null, "after": "TF", "start_char_pos": 398, "end_char_pos": 398 }, { "type": "R", "before": "lineage", "after": "TF repertoire", "start_char_pos": 564, "end_char_pos": 571 }, { "type": "R", "before": "empirical repartition of TFs", "after": "TF empirical ripartition", "start_char_pos": 673, "end_char_pos": 701 }, { "type": "R", "before": "thus identifying", "after": "and allows to highlight", "start_char_pos": 721, "end_char_pos": 737 }, { "type": "R", "before": "its origin . More importantly", "after": "the origin of URLanization. Moreover", "start_char_pos": 774, "end_char_pos": 803 }, { "type": "D", "before": "in human", "after": null, "start_char_pos": 850, "end_char_pos": 858 }, { "type": "R", "before": "corresponding to", "after": "(including", "start_char_pos": 926, "end_char_pos": 942 }, { "type": "R", "before": "type genes", "after": "genes)", "start_char_pos": 955, "end_char_pos": 965 }, { "type": "R", "before": "\"singleton\"", "after": "`singleton'", "start_char_pos": 977, "end_char_pos": 988 }, { "type": "R", "before": "an", "after": "a", "start_char_pos": 1049, "end_char_pos": 1051 } ]
[ 0, 111, 180, 342, 541, 786, 1168 ]
1605.01327
1
We generalize the fundamental theorem of asset pricing (FTAP) and hedging dualities in \mbox{%DIFAUXCMD ZZ8 the case where the investor can short American options. Following arXiv:1502.06681, we assume that the longed American options are divisible. As for the shorted American options, we show that the divisibility plays no role regarding arbitrage property and hedging prices. Then using the method of enlarging probability spaces proposed in arXiv:1604.05517, we convert the shorted American options to European options, and establish the FTAP and sub- and super-hedging dualities in the enlarged space both with and without model uncertainty.
Since most of the traded options on individual stocks is of American type it is of interest to generalize the results obtained in semi-static trading to the case when one is allowed to statically trade American options. However, this problem has proved to be elusive so far because of the asymmetric nature of the positions of holding versus shorting such options. Here we provide a unified framework and generalize the fundamental theorem of asset pricing (FTAP) and hedging dualities in arXiv:1502.06681 (to appear in Annals of Applied Probability) to the case where the investor can also short American options. Following arXiv:1502.06681, we assume that the longed American options are divisible. As for the shorted American options, we show that the divisibility plays no role regarding arbitrage property and hedging prices. Then using the method of enlarging probability spaces proposed in arXiv:1604.05517, we convert the shorted American options to European options, and establish the FTAP and sub- and super-hedging dualities in the enlarged space both with and without model uncertainty.
[ { "type": "R", "before": "We generalize the", "after": "Since most of the traded options on individual stocks is of American type it is of interest to generalize the results obtained in semi-static trading to the case when one is allowed to statically trade American options. However, this problem has proved to be elusive so far because of the asymmetric nature of the positions of holding versus shorting such options. Here we provide a unified framework and generalize the", "start_char_pos": 0, "end_char_pos": 17 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD ZZ8", "after": "arXiv:1502.06681 (to appear in Annals of Applied Probability) to", "start_char_pos": 87, "end_char_pos": 107 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 140, "end_char_pos": 140 } ]
[ 0, 164, 250, 380 ]
1605.01621
1
We present numerical simulations of active fluid droplets immersed in an external fluid in 2-dimensions . We use an Immersed Boundary method to simulate the fluid droplet interface as a Lagrangian mesh. We present results from two example systems, firstly a droplet filled with an active polar fluid with polar anchoring at the droplet interface. Secondly, an active isotropic fluid consisting of particles that can bind and unbind from the interface and generate surface tension gradients through active contractility } . These two systems demonstrate spontaneous symmetry breaking and steady state dynamics resembling cell motility and division and show complex feedback mechanisms with minimal degrees of freedom. The simulations outlined here will be useful for quantifying the wide range of dynamics observable in these active systems and modelling the effects of confinement in a consistent and adaptable way.
We present numerical simulations of active fluid droplets immersed in an external fluid in 2-dimensions using an Immersed Boundary method to simulate the fluid droplet interface as a Lagrangian mesh. We present results from two example systems, firstly an active isotropic fluid boundary consisting of particles that can bind and unbind from the interface and generate surface tension gradients through active contractility . Secondly, a droplet filled with an active polar fluid with homeotropic} anchoring at the droplet interface . These two systems demonstrate spontaneous symmetry breaking and steady state dynamics resembling cell motility and division and show complex feedback mechanisms with minimal degrees of freedom. The simulations outlined here will be useful for quantifying the wide range of dynamics observable in these active systems and modelling the effects of confinement in a consistent and adaptable way.
[ { "type": "R", "before": ". We use", "after": "using", "start_char_pos": 104, "end_char_pos": 112 }, { "type": "R", "before": "a droplet filled with an active polar fluid with polar anchoring at the droplet interface. Secondly, an active isotropic fluid", "after": "an active isotropic fluid boundary", "start_char_pos": 256, "end_char_pos": 382 }, { "type": "A", "before": null, "after": ". Secondly, a droplet filled with an active polar fluid with", "start_char_pos": 519, "end_char_pos": 519 }, { "type": "A", "before": null, "after": "homeotropic", "start_char_pos": 520, "end_char_pos": 520 }, { "type": "A", "before": null, "after": "anchoring at the droplet interface", "start_char_pos": 522, "end_char_pos": 522 } ]
[ 0, 105, 202, 346, 718 ]
1605.01639
1
Cells adapt their metabolism to survive changes in their environment. We present a framework for the construction and analysis of metabolic reaction networksthat can be tailored to reflect different environmental conditions . Using context-dependent flux distributions from Flux Balance Analysis (FBA), we produce directed networks with weighted links representing the amount of metabolite flowing from a source reaction to a target reaction per unit time. Such networks are analyzed with tools from network theory to reveal salient features of metabolite flows in each biological context. We illustrate our approach with the directed network of the central carbon metabolism of Escherichia coli , and study its properties in four relevant biological scenarios. Our results show that both flow and network structure depend drastically on the environment: networks produced from the same metabolic model in different contexts have different edges, components, and flow communities, capturing the biological re-routing of metabolic flows inside the cell . By integrating FBA-based analysis with tools from network science, our results provide a framework to interrogate cellular metabolism beyond standard pathway descriptions that are blind to the environmental context .
Cells adapt their metabolic fluxes in response to changes in the environment. We present a systematic flux-based framework for the construction of graphs to URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes as probabilities, or tailored to different environmental conditions by incorporating flux distributions computed from constraint-based modelling such as Flux Balance Analysis . We illustrate our approach on the central carbon metabolism of Escherichia coli and study the derived graphs under various growth conditions. The results reveal drastic changes in the topological and community structure of the metabolic graphs, which capture the re-routing of metabolic fluxes under each growth condition . By integrating constraint-based models and tools from network science, our framework allows for the interrogation of environment-specific metabolic responses beyond fixed, standard pathway descriptions .
[ { "type": "R", "before": "metabolism to survive changes in their", "after": "metabolic fluxes in response to changes in the", "start_char_pos": 18, "end_char_pos": 56 }, { "type": "A", "before": null, "after": "systematic flux-based", "start_char_pos": 83, "end_char_pos": 83 }, { "type": "R", "before": "and analysis of metabolic reaction networksthat can be tailored to reflect", "after": "of graphs to URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes as probabilities, or tailored to", "start_char_pos": 115, "end_char_pos": 189 }, { "type": "R", "before": ". Using context-dependent flux distributions from", "after": "by incorporating flux distributions computed from constraint-based modelling such as", "start_char_pos": 225, "end_char_pos": 274 }, { "type": "R", "before": "(FBA), we produce directed networks with weighted links representing the amount of metabolite flowing from a source reaction to a target reaction per unit time. Such networks are analyzed with tools from network theory to reveal salient features of metabolite flows in each biological context.", "after": ".", "start_char_pos": 297, "end_char_pos": 590 }, { "type": "R", "before": "with the directed network of the", "after": "on the", "start_char_pos": 618, "end_char_pos": 650 }, { "type": "R", "before": ", and study its properties in four relevant biological scenarios. Our results show that both flow and network structure depend drastically on the environment: networks produced from the same metabolic model in different contexts have different edges, components, and flow communities, capturing the biological", "after": "and study the derived graphs under various growth conditions. The results reveal drastic changes in the topological and community structure of the metabolic graphs, which capture the", "start_char_pos": 697, "end_char_pos": 1006 }, { "type": "R", "before": "flows inside the cell", "after": "fluxes under each growth condition", "start_char_pos": 1031, "end_char_pos": 1052 }, { "type": "R", "before": "FBA-based analysis with", "after": "constraint-based models and", "start_char_pos": 1070, "end_char_pos": 1093 }, { "type": "R", "before": "results provide a framework to interrogate cellular metabolism beyond", "after": "framework allows for the interrogation of environment-specific metabolic responses beyond fixed,", "start_char_pos": 1126, "end_char_pos": 1195 }, { "type": "D", "before": "that are blind to the environmental context", "after": null, "start_char_pos": 1226, "end_char_pos": 1269 } ]
[ 0, 69, 457, 590, 762, 1054 ]
1605.01639
2
Cells adapt their metabolic fluxes in response to changes in the environment. We present a systematic flux-based framework for the construction of graphs to URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes as probabilities, or tailored to different environmental conditions by incorporating flux distributions computed from constraint-based modelling such as Flux Balance Analysis. We illustrate our approach on the central carbon metabolism of Escherichia coli and study the derived graphs under various growth conditions . The results reveal drastic changes in the topological and community structure of the metabolic graphs, which capture the re-routing of metabolic fluxes under each growth condition. By integrating constraint-based models and tools from network science, our framework allows for the interrogation of environment-specific metabolic responses beyond fixed, standard pathway descriptions.
Cells adapt their metabolic fluxes in response to changes in the environment. We present a systematic flux-based framework for the construction of graphs to URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes probabilistically, or can be tailored to different environmental conditions by incorporating flux distributions computed from constraint-based modelling such as Flux Balance Analysis. We illustrate our approach on the central carbon metabolism of Escherichia coli , and on a larger metabolic model of human hepatocytes, and study the proposed graphs under various environmental conditions and genetic perturbations . The results reveal drastic changes in the topological and community structure of the metabolic graphs, which capture the re-routing of metabolic fluxes under each growth and genetic condition. By integrating constraint-based models and tools from network science, our framework allows for the interrogation of context-specific metabolic responses beyond fixed, standard pathway descriptions.
[ { "type": "R", "before": "as probabilities, or", "after": "probabilistically, or can be", "start_char_pos": 429, "end_char_pos": 449 }, { "type": "R", "before": "and study the derived", "after": ", and on a larger metabolic model of human hepatocytes, and study the proposed", "start_char_pos": 685, "end_char_pos": 706 }, { "type": "R", "before": "growth conditions", "after": "environmental conditions and genetic perturbations", "start_char_pos": 728, "end_char_pos": 745 }, { "type": "A", "before": null, "after": "and genetic", "start_char_pos": 918, "end_char_pos": 918 }, { "type": "R", "before": "environment-specific", "after": "context-specific", "start_char_pos": 1047, "end_char_pos": 1067 } ]
[ 0, 77, 190, 329, 604, 747, 929 ]
1605.01639
3
Cells adapt their metabolic fluxes in response to changes in the environment. We present a systematic flux-based framework for the construction of graphs to URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via links that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes probabilistically, or can be tailored to different environmental conditions by incorporating flux distributions computed from constraint-based modelling such as Flux Balance Analysis. We illustrate our approach on the central carbon metabolism of Escherichia coli , and on a larger metabolic model of human hepatocytes , and study the proposed graphs under various environmental conditions and genetic perturbations . The results reveal drastic changes in the topological and community structure of the metabolic graphs , which capture the re-routing of metabolic fluxes under each growth and genetic condition . By integrating constraint-based models and tools from network science, our framework allows for the interrogation of context-specific metabolic responses beyond fixed, standard pathway descriptions.
Cells adapt their metabolic fluxes in response to changes in the environment. We present a framework for the systematic construction of flux-based graphs derived URLanism-wide metabolic networks. Our graphs encode the directionality of metabolic fluxes via edges that represent the flow of metabolites from source to target reactions. The methodology can be applied in the absence of a specific biological context by modelling fluxes probabilistically, or can be tailored to different environmental conditions by incorporating flux distributions computed through constraint-based approaches such as Flux Balance Analysis. We illustrate our approach on the central carbon metabolism of Escherichia coli and on a metabolic model of human hepatocytes . The flux-dependent graphs under various environmental conditions and genetic perturbations exhibit systemic changes in their topological and community structure , which capture the re-routing of metabolic fluxes and the varying importance of specific reactions and pathways . By integrating constraint-based models and tools from network science, our framework allows the study of context-specific metabolic responses at a system level beyond standard pathway descriptions.
[ { "type": "D", "before": "systematic flux-based", "after": null, "start_char_pos": 91, "end_char_pos": 112 }, { "type": "R", "before": "construction of graphs to", "after": "systematic construction of flux-based graphs derived", "start_char_pos": 131, "end_char_pos": 156 }, { "type": "R", "before": "links", "after": "edges", "start_char_pos": 252, "end_char_pos": 257 }, { "type": "R", "before": "from", "after": "through", "start_char_pos": 550, "end_char_pos": 554 }, { "type": "R", "before": "modelling", "after": "approaches", "start_char_pos": 572, "end_char_pos": 581 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 693, "end_char_pos": 694 }, { "type": "D", "before": "larger", "after": null, "start_char_pos": 704, "end_char_pos": 710 }, { "type": "R", "before": ", and study the proposed", "after": ". The flux-dependent", "start_char_pos": 748, "end_char_pos": 772 }, { "type": "R", "before": ". The results reveal drastic changes in the", "after": "exhibit systemic changes in their", "start_char_pos": 845, "end_char_pos": 888 }, { "type": "D", "before": "of the metabolic graphs", "after": null, "start_char_pos": 925, "end_char_pos": 948 }, { "type": "R", "before": "under each growth and genetic condition", "after": "and the varying importance of specific reactions and pathways", "start_char_pos": 1000, "end_char_pos": 1039 }, { "type": "R", "before": "for the interrogation", "after": "the study", "start_char_pos": 1134, "end_char_pos": 1155 }, { "type": "R", "before": "beyond fixed,", "after": "at a system level beyond", "start_char_pos": 1196, "end_char_pos": 1209 } ]
[ 0, 77, 190, 329, 612, 846, 1041 ]
1605.02539
1
We pursue the robust approach to pricing and hedging of financial derivatives. We investigate when the pricing--hedging duality for a regular agent, who only observes the stock prices , extends to agents with some additional information . We introduce a general framework to express the superhedging and market model prices for an informed agent. Our key insight is that an informed agent can be seen as a regular agent who can restrict her attention to a certain subset of possible paths. We use results of Hou \& Ob\l\'oj \mbox{%DIFAUXCMD ho_beliefs on robust approach with beliefs to establish the pricing--hedging duality for an informed agent. Our results cover number of scenarios, including information arriving before trading starts, arriving after static position in European options is formed but before dynamic trading starts or arriving at some point before the maturity. For the latter we show that the superhedging value satisfies a suitable dynamic programming principle, which is of independent interest.
We investigate asymmetry of information in the context of robust approach to pricing and hedging of financial derivatives. We consider two agents, one who only observes the stock prices and another with some additional information , and investigate when the pricing--hedging duality for the former extends to the latter . We introduce a general framework to express the superhedging and market model prices for an informed agent. Our key insight is that an informed agent can be seen as a regular agent who can restrict her attention to a certain subset of possible paths. We use results of Hou Ob\l\'oj on robust approach with beliefs to establish the pricing--hedging duality for an informed agent. Our results cover number of scenarios, including information arriving before trading starts, arriving after static position in European options is formed but before dynamic trading starts or arriving at some point before the maturity. For the latter we show that the superhedging value satisfies a suitable dynamic programming principle, which is of independent interest.
[ { "type": "R", "before": "pursue the", "after": "investigate asymmetry of information in the context of", "start_char_pos": 3, "end_char_pos": 13 }, { "type": "R", "before": "investigate when the pricing--hedging duality for a regular agent,", "after": "consider two agents, one", "start_char_pos": 82, "end_char_pos": 148 }, { "type": "R", "before": ", extends to agents", "after": "and another", "start_char_pos": 184, "end_char_pos": 203 }, { "type": "A", "before": null, "after": ", and investigate when the pricing--hedging duality for the former extends to the latter", "start_char_pos": 237, "end_char_pos": 237 }, { "type": "D", "before": "\\&", "after": null, "start_char_pos": 513, "end_char_pos": 515 }, { "type": "D", "before": "\\mbox{%DIFAUXCMD ho_beliefs", "after": null, "start_char_pos": 525, "end_char_pos": 552 } ]
[ 0, 78, 239, 347, 490, 649, 884 ]
1605.02977
1
The ability to navigate environmental gradients is often critical for survival. When gradients are shallow or noisy, URLanisms move by alternating straight motion (runs) with random reorientations (tumbles) . Navigation is achieved by transiently reducing the probability to tumble when attractant signal increases. One drawback of this strategy is that occasional runs in the wrong direction reduce progress up the gradient. Here we discovered a positive feedback regime inherent in this strategy that strongly mitigates this problem. In an attractant field, motion up the gradient reduces tumble probability, which further boosts drift up the gradient. This positive feedback can drive large fluctuations in the internal state of URLanism away from its mean, resulting in long runs in favorable directions but short ones otherwise. In this new regime URLanism achieves a "ratchet-like" gradient climbing behavior unexpected from mean field theory, and drift speeds much faster than previously believed possible .
URLanisms navigate gradients by alternating straight motions (runs) with random reorientations (tumbles) , transiently suppressing tumbles whenever attractant signal increases. This induces a functional coupling between movement and sensation, since tumbling probability is controlled by the internal state of URLanism which, in turn, depends on previous signal levels. Although a negative feedback tends to maintain this internal state close to adapted levels, positive feedback can arise when motion up the gradient reduces tumbling probability, further boosting drift up the gradient. Importantly, such positive feedback can drive large fluctuations in the internal state , complicating analytical approaches. Previous studies focused on what happens when the negative feedback dominates the dynamics. By contrast, we show here that there is a large portion of physiologically-relevant parameter space where the positive feedback can dominate, even when gradients are relatively shallow. We demonstrate how large transients emerge because of non-normal dynamics (non-orthogonal eigenvectors near a stable fixed point) inherent in the positive feedback, and further identify a fundamental nonlinearity that strongly amplifies their effect. Most importantly, this amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a "ratchet-like" gradient climbing behavior with drift speeds that can approach half the maximum run speed of URLanism. Our results thus show that the classical drawback of run-and-tumble navigation --- wasteful runs in the wrong direction --- can be mitigated by exploiting the non-normal dynamics implicit in the run-and-tumble strategy .
[ { "type": "R", "before": "The ability to navigate environmental gradients is often critical for survival. When gradients are shallow or noisy, URLanisms move", "after": "URLanisms navigate gradients", "start_char_pos": 0, "end_char_pos": 131 }, { "type": "R", "before": "motion", "after": "motions", "start_char_pos": 156, "end_char_pos": 162 }, { "type": "R", "before": ". Navigation is achieved by transiently reducing the probability to tumble when", "after": ", transiently suppressing tumbles whenever", "start_char_pos": 207, "end_char_pos": 286 }, { "type": "R", "before": "One drawback of this strategy is that occasional runs in the wrong direction reduce progress up the gradient. Here we discovered a positive feedback regime inherent in this strategy that strongly mitigates this problem. In an attractant field,", "after": "This induces a functional coupling between movement and sensation, since tumbling probability is controlled by the internal state of URLanism which, in turn, depends on previous signal levels. Although a negative feedback tends to maintain this internal state close to adapted levels, positive feedback can arise when", "start_char_pos": 316, "end_char_pos": 559 }, { "type": "R", "before": "tumble probability, which further boosts", "after": "tumbling probability, further boosting", "start_char_pos": 591, "end_char_pos": 631 }, { "type": "R", "before": "This", "after": "Importantly, such", "start_char_pos": 655, "end_char_pos": 659 }, { "type": "R", "before": "of URLanism away from its mean, resulting in long", "after": ", complicating analytical approaches. Previous studies focused on what happens when the negative feedback dominates the dynamics. By contrast, we show here that there is a large portion of physiologically-relevant parameter space where the positive feedback can dominate, even when gradients are relatively shallow. We demonstrate how large transients emerge because of non-normal dynamics (non-orthogonal eigenvectors near a stable fixed point) inherent in the positive feedback, and further identify a fundamental nonlinearity that strongly amplifies their effect. Most importantly, this amplification is asymmetric, elongating", "start_char_pos": 729, "end_char_pos": 778 }, { "type": "R", "before": "but short ones otherwise. In this new regime URLanism achieves", "after": "and abbreviating others. The result is", "start_char_pos": 808, "end_char_pos": 870 }, { "type": "R", "before": "unexpected from mean field theory, and drift speeds much faster than previously believed possible", "after": "with drift speeds that can approach half the maximum run speed of URLanism. Our results thus show that the classical drawback of run-and-tumble navigation --- wasteful runs in the wrong direction --- can be mitigated by exploiting the non-normal dynamics implicit in the run-and-tumble strategy", "start_char_pos": 915, "end_char_pos": 1012 } ]
[ 0, 79, 208, 315, 425, 535, 654, 833 ]
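The record above describes run-and-tumble navigation with tumble suppression driven by signal increases. A minimal one-dimensional sketch of that scheme (not the paper's model; the linear attractant field and all parameter values are invented for illustration):

import random

# Illustrative 1D run-and-tumble walker in a linear attractant field c(x) = g * x.
# Tumbling is suppressed whenever the perceived signal increased since the last
# step, which is the movement-sensation coupling described in the record above.
def simulate(steps=100_000, speed=1.0, dt=0.1, base_tumble=0.1, suppressed=0.02, g=0.05):
    x, direction = 0.0, 1
    prev_signal = g * x
    for _ in range(steps):
        x += direction * speed * dt
        signal = g * x
        p_tumble = suppressed if signal > prev_signal else base_tumble
        if random.random() < p_tumble:
            direction = random.choice((-1, 1))  # a tumble picks a fresh direction
        prev_signal = signal
    return x / (steps * dt)  # mean drift velocity up the gradient

print("drift velocity:", simulate())

Even this crude walker drifts up the gradient, because runs in the favorable direction last longer on average than runs in the unfavorable one.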
1605.04943
1
Linear stochastic models and discretized kinetic theory are two complementary analytical techniques used for the investigation of complex systems of economic interactions. The former employ Langevin equations, with an emphasis on trade; the latter is based on systems of ordinary differential equations and is better suited for the description of binary interactions, taxation and welfare redistribution. We propose a new framework which establishes a connection between the two approaches by introducing stochastic effects into the kinetic model based on Langevin and Fokker-Planck formalisms. Numerical simulations of the Langevin model indicate positive correlations between the Gini index and the total wealth, that suggests a growing inequality with increasing income. Complementary analysis shows a simultaneous decrease in inequality as social mobility increases in presence of a conserved total wealth, in conformity with economic expectations .
Linear stochastic models and discretized kinetic theory are two complementary analytical techniques used for the investigation of complex systems of economic interactions. The former employ Langevin equations, with an emphasis on stock trade; the latter is based on systems of ordinary differential equations and is better suited for the description of binary interactions, taxation and welfare redistribution. We propose a new framework which establishes a connection between the two approaches by introducing stochastic effects into the kinetic model based on Langevin and Fokker-Planck formalisms. Numerical simulations of the resulting model indicate positive correlations between the Gini index and the total wealth, that suggests a growing inequality with increasing income. Further analysis shows a simultaneous decrease in inequality as social mobility increases in presence of a conserved total wealth, in conformity with economic data .
[ { "type": "A", "before": null, "after": "stock", "start_char_pos": 230, "end_char_pos": 230 }, { "type": "R", "before": "Langevin", "after": "resulting", "start_char_pos": 625, "end_char_pos": 633 }, { "type": "R", "before": "Complementary", "after": "Further", "start_char_pos": 775, "end_char_pos": 788 }, { "type": "R", "before": "expectations", "after": "data", "start_char_pos": 940, "end_char_pos": 952 } ]
[ 0, 171, 237, 405, 595, 774 ]
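The record above relates a Gini index to total wealth under Langevin-type dynamics. As background, here is how a Gini index is typically computed, driven by a toy multiplicative (geometric Brownian) wealth process; the dynamics and parameters are placeholders, not the kinetic model of the record:

import numpy as np

rng = np.random.default_rng(0)

def gini(w):
    """Gini index of a 1-D array of non-negative wealths (0 = perfect equality)."""
    w = np.sort(w)
    n = w.size
    cum = np.cumsum(w)
    # Order-statistics form of the Gini coefficient.
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

w = np.ones(1000)                                     # 1000 agents, equal start
for _ in range(2000):
    w *= np.exp(0.01 * rng.standard_normal(w.size))   # multiplicative shocks

print("Gini index:", round(gini(w), 3))
print("total wealth:", round(float(w.sum()), 1))

Multiplicative noise alone already produces a Gini index that grows over time, which is why the sign of the inequality-wealth correlation is a meaningful model output.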
1605.05819
1
A function is exponentially concave if its exponential is concave. We consider exponentially concave functions on the unit simplex. It is known that gradient maps of exponentially concave functions are solutions of a Monge-Kantorovich optimal transport problem and allow for a better gradient approximation than those of ordinary concave functions. The approximation error, called L-divergence, is different from the usual Bregman divergence. Using the tools of information geometry and optimal transport, we show that L-divergence induces a new information geometry on the simplex . This geometric structure consists of a Riemannian metric and a pair of dually coupled affine connections defining two kinds of geodesics. We show that the induced geometry is dually projectively flat but not flat. Nevertheless, we prove an analogue of the celebrated generalized Pythagorean theorem from classical information geometry. On the other hand, we consider displacement interpolation under a Lagrangian integral action that is consistent with the optimal transport problem and show that the action minimizing curves are dual geodesics. The Pythagorean theorem is also shown to have a remarkable application of determining the optimal frequency of rebalancing in stochastic portfolio theory.
A function is exponentially concave if its exponential is concave. We consider exponentially concave functions on the unit simplex. In a previous paper we showed that gradient maps of exponentially concave functions provide solutions to a Monge-Kantorovich optimal transport problem and give a better gradient approximation than those of ordinary concave functions. The approximation error, called L-divergence, is different from the usual Bregman divergence. Using the tools of information geometry and optimal transport, we show that L-divergence induces a new information geometry on the simplex consisting of a Riemannian metric and a pair of dually coupled affine connections which defines two kinds of geodesics. We show that the induced geometry is dually projectively flat but not flat. Nevertheless, we prove an analogue of the celebrated generalized Pythagorean theorem from classical information geometry. On the other hand, we consider displacement interpolation under a Lagrangian integral action that is consistent with the optimal transport problem and show that the action minimizing curves are dual geodesics. The Pythagorean theorem is also shown to have an interesting application of determining the optimal trading frequency in stochastic portfolio theory.
[ { "type": "R", "before": "It is known", "after": "In a previous paper we showed", "start_char_pos": 132, "end_char_pos": 143 }, { "type": "R", "before": "are solutions of", "after": "provide solutions to", "start_char_pos": 198, "end_char_pos": 214 }, { "type": "R", "before": "allow for", "after": "give", "start_char_pos": 265, "end_char_pos": 274 }, { "type": "R", "before": ". This geometric structure consists", "after": "consisting", "start_char_pos": 582, "end_char_pos": 617 }, { "type": "R", "before": "defining", "after": "which defines", "start_char_pos": 689, "end_char_pos": 697 }, { "type": "R", "before": "a remarkable", "after": "an interesting", "start_char_pos": 1176, "end_char_pos": 1188 }, { "type": "R", "before": "frequency of rebalancing", "after": "trading frequency", "start_char_pos": 1228, "end_char_pos": 1252 } ]
[ 0, 66, 131, 348, 442, 583, 721, 797, 919, 1129 ]
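For concreteness, a textbook example of exponential concavity on the unit simplex (general background, not a result of the record above) is the average of the log coordinates:

\varphi(p) \;=\; \frac{1}{n}\sum_{i=1}^{n}\log p_i,
\qquad
e^{\varphi(p)} \;=\; \Big(\prod_{i=1}^{n} p_i\Big)^{1/n},

which is exponentially concave because its exponential, the geometric mean, is a concave function of p on the simplex.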
1605.05853
1
We previously proposed the existence of DNA to DNA transcription in eukaryotic cells, but the mechanism by which single-stranded DNA (ssDNA) transcript is produced and released from the genome remains unknown. We previously speculated that the mechanism of DNA to DNA transcription might be similar to that of DNA to RNA transcription, but there is another mechanism that we think is most likely to be true. The mechanism is called endonuclease dependent transcript cutout , in which a copy of ssDNA fragment (transcript) between two nicks produced by nicking endonuclease is released from double-stranded DNA region in the genome by an unknown ssDNA fragment-releasing enzyme. The gap in the double-stranded DNA will be filled through DNA repair mechanism. Occasionally, multiple copies of ssDNA transcripts might be produced through multiple rounds of cutout-repair-cutout cycle.
We previously proposed the existence of DNA to DNA transcription in eukaryotic cells, but the mechanism by which single-stranded DNA (ssDNA) transcript is produced and released from the genome remains unknown. We once speculated that the mechanism of DNA to DNA transcription might be similar to that of DNA to RNA transcription, but now we propose that endonuclease dependent transcript cutout may be a possible mechanism of DNA to DNA transcription , in which a copy of ssDNA fragment (transcript) between two nicks produced by nicking endonuclease is released from double-stranded DNA region in the genome by an unknown ssDNA fragment releasing enzyme. The gap in the double-stranded DNA will be filled through DNA repair mechanism. Occasionally, multiple copies of ssDNA transcripts could be produced through multiple rounds of cutout-repair-cutout cycle.
[ { "type": "R", "before": "previously", "after": "once", "start_char_pos": 213, "end_char_pos": 223 }, { "type": "R", "before": "there is another mechanism that we think is most likely to be true. The mechanism is called", "after": "now we propose that", "start_char_pos": 340, "end_char_pos": 431 }, { "type": "A", "before": null, "after": "may be a possible mechanism of DNA to DNA transcription", "start_char_pos": 473, "end_char_pos": 473 }, { "type": "R", "before": "fragment-releasing", "after": "fragment releasing", "start_char_pos": 652, "end_char_pos": 670 }, { "type": "R", "before": "might", "after": "could", "start_char_pos": 810, "end_char_pos": 815 } ]
[ 0, 209, 407, 678, 758 ]
1605.05853
2
We previously proposed the existence of DNA to DNA transcription in eukaryotic cells, but the mechanism by which single-stranded DNA (ssDNA) transcript is produced and released from the genome remains unknown. We once speculated that the mechanism of DNA to DNA transcription might be similar to that of DNA to RNA transcription, but now we propose that endonuclease dependent transcript cutout may be a possible mechanism of DNA to DNA transcription, in which a copy of ssDNA fragment (transcript) between two nicks produced by nicking endonuclease is released from double-stranded DNA region in the genome by an unknown ssDNA fragment releasing enzyme. The gap in the double-stranded DNA will be filled through DNA repair mechanism. Occasionally, multiple copies of ssDNA transcripts could be produced through multiple rounds of cutout-repair-cutout cycle.
We previously proposed the existence of DNA to DNA transcription in eukaryotic cells, but the mechanism by which single-stranded DNA (ssDNA) transcript is produced and released from the genome remains unknown. We once speculated that the mechanism of DNA to DNA transcription might be similar to that of DNA to RNA transcription, but now we propose that endonuclease dependent transcript cutout may be a possible mechanism of DNA to DNA transcription, in which a copy of ssDNA fragment (transcript) between two nicks produced by nicking endonuclease is released from double-stranded DNA (dsDNA) region in the genome by an unknown ssDNA fragment releasing enzyme. The gap in the dsDNA will be filled through DNA repair mechanism. Occasionally, multiple copies of ssDNA transcripts could be produced through multiple rounds of cutout-repair-cutout cycle.
[ { "type": "A", "before": null, "after": "(dsDNA)", "start_char_pos": 587, "end_char_pos": 587 }, { "type": "R", "before": "double-stranded DNA", "after": "dsDNA", "start_char_pos": 671, "end_char_pos": 690 } ]
[ 0, 209, 655, 735 ]
1605.06482
1
We discuss generalizing the correlation between an asset's return and its volatility-- the leverage effect-- in the context of stochastic volatility. While it is a long standing consensus that leverage effects exist, empirical evidence paradoxically show that most individual stocks do not exhibit this correlation . We extend the standard linear correlation to a nonlinear generalization in order to capture the complex nature of this effect using a newly developed Bayesian sequential computation method . Examining 615 stocks that comprise the S&P500 and Nikkei 225, we find nearly all of the stocks to exhibit leverage effects, of which most would have been lost under the standard linear assumption. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect .
We discuss generalizing the correlation between an asset's return and its volatility-- the leverage effect-- in the context of stochastic volatility. While it is a long standing consensus that leverage effects exist, empirical evidence paradoxically show that most individual stocks either do not exhibit this correlation or are very weak. This, at least partially, is due to an assumption that the correlation is linear, i.e., the effect of large shocks and small fluctuations changes linearly. We relax this standard assumption of linear correlation to a nonlinear generalization in order to capture the complex nature of this effect using a newly developed Bayesian sequential computation method that is fast, efficient, and on-line . Examining 615 stocks that comprise the S&P500 and Nikkei 225, we find nearly all of the stocks to exhibit leverage effects, of which many would have been lost under the standard linear assumption. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect and find that different countries and certain sectors to have tendencies for more complex leverage effects .
[ { "type": "A", "before": null, "after": "either", "start_char_pos": 283, "end_char_pos": 283 }, { "type": "R", "before": ". We extend the standard", "after": "or are very weak. This, at least partially, is due to an assumption that the correlation is linear, i.e., the effect of large shocks and small fluctuations changes linearly. We relax this standard assumption of", "start_char_pos": 316, "end_char_pos": 340 }, { "type": "A", "before": null, "after": "that is fast, efficient, and on-line", "start_char_pos": 507, "end_char_pos": 507 }, { "type": "R", "before": "most", "after": "many", "start_char_pos": 643, "end_char_pos": 647 }, { "type": "A", "before": null, "after": "and find that different countries and certain sectors to have tendencies for more complex leverage effects", "start_char_pos": 831, "end_char_pos": 831 } ]
[ 0, 149, 317, 706 ]
1605.06482
2
We discuss generalizing the correlation between an asset's return and its volatility-- the leverage effect-- in the context of stochastic volatility. While it is a long standing consensus that leverage effects exist, empirical evidence paradoxically show that most individual stocks either do not exhibit this correlation or are very weak. This, at least partially, is due to an assumption that the correlation is linear, i.e., the effect of large shocks and small fluctuations changes linearly. We relax this standard assumption of linear correlation to a nonlinear generalization in order to capture the complex nature of this effect using a newly developed Bayesian sequential computation method that is fast, efficient, and on-line . Examining 615 stocks that comprise the S P500 and Nikkei 225, we find nearly all of the stocks to exhibit leverage effects, of which many would have been lost under the standard linear assumption. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect and find that different countries and certain sectors to have tendencies for more complex leverage effects.
While it is a long standing consensus that leverage effects exist, empirical evidence paradoxically show that most individual stocks do not exhibit this correlation . We examine this paradox by questioning the validity of the assumption of linearity in the correlation. Nonlinear generalizations of the leverage effect are proposed within the stochastic volatility framework in order to capture flexible correlation structures. Efficient Bayesian sequential computation is implemented to estimate this effect in a practical, on-line manner . Examining 615 stocks that comprise the S \& P500 and Nikkei 225, we find that 89\\% of all stocks exhibit general leverage effects. In contrast, under the linear assumption we find nearly half of the stocks to have no leverage effects. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect. We find that there are country and some industry effects that help explain leverage complexity.
[ { "type": "D", "before": "We discuss generalizing the correlation between an asset's return and its volatility-- the leverage effect-- in the context of stochastic volatility.", "after": null, "start_char_pos": 0, "end_char_pos": 149 }, { "type": "D", "before": "either", "after": null, "start_char_pos": 283, "end_char_pos": 289 }, { "type": "R", "before": "or are very weak. This, at least partially, is due to an assumption that the correlationis linear, i. e., the effect of large shocks and small fluctuations changes linearly. We relax this standard assumption of linear correlation to a nonlinear generalization", "after": ". We examine this paradox by questioning the validity of the assumption of linearity in the correlation. Nonlinear generalizations of the leverage effect are proposed within the stochastic volatility framework", "start_char_pos": 322, "end_char_pos": 581 }, { "type": "R", "before": "the complex nature of this effect using a newly developed", "after": "flexible correlation structures. Efficient", "start_char_pos": 602, "end_char_pos": 659 }, { "type": "R", "before": "method that is fast, efficient, and", "after": "is implemented to estimate this effect in a practical,", "start_char_pos": 692, "end_char_pos": 727 }, { "type": "A", "before": null, "after": "manner", "start_char_pos": 736, "end_char_pos": 736 }, { "type": "A", "before": null, "after": "\\&", "start_char_pos": 780, "end_char_pos": 780 }, { "type": "R", "before": "nearly all of the stocks to exhibit leverage effects, of which many would have been lost under the standard linear assumption. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect and find that different countries and certain sectors to have tendencies for more complex leverage effects.", "after": "that 89", "start_char_pos": 810, "end_char_pos": 1168 } ]
[ 0, 149, 339, 495, 936, 1168, 1224, 1328, 1453 ]
1605.06482
3
While it is a long standing consensus that leverage effects exist , empirical evidence paradoxically show that most individual stocks do not exhibit this correlation . We examine this paradox by questioning the validity of the assumption of linearity in the correlation . Nonlinear generalizations of the leverage effect are proposed within the stochastic volatility framework in order to capture flexible correlation structures . Efficient Bayesian sequential computation is implemented to estimate this effect in a practical, on-line manner. Examining 615 stocks that comprise the S\&P500 and Nikkei 225, we find that 89\\% of all stocks exhibit general leverage effects. In contrast, under the linear assumption we find nearly half of the stocks to have no leverage effects. We further the analysis by exploring whether there are clear traits that characterize the complexity of the leverage effect. We find that there are country and some industry effects that help explain leverage complexity.
The leverage effect-- the correlation between an asset's return and its volatility-- has played a key role in forecasting and understanding volatility and risk. While it is a long standing consensus that leverage effects exist and improve forecasts , empirical evidence paradoxically do not show that most individual stocks exhibit this phenomena, mischaracterizing risk and therefore leading to poor predictive performance . We examine this paradox , with the goal to improve density forecasts, by relaxing the assumption of linearity in the leverage effect . Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures, where small fluctuations in prices have a different effect from large shocks . Efficient Bayesian sequential computation is developed and implemented to estimate this effect in a practical, on-line manner. Examining 615 stocks that comprise the S\&P500 and Nikkei 225, we find that relaxing the linear assumption to our proposed nonlinear leverage effect function improves predictive performances for 89\\% of all stocks compared to the conventional model assumption.
[ { "type": "A", "before": null, "after": "The leverage effect-- the correlation between an asset's return and its volatility-- has played a key role in forecasting and understanding volatility and risk.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "and improve forecasts", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "A", "before": null, "after": "do not", "start_char_pos": 103, "end_char_pos": 103 }, { "type": "R", "before": "do not exhibit this correlation", "after": "exhibit this phenomena, mischaracterizing risk and therefore leading to poor predictive performance", "start_char_pos": 137, "end_char_pos": 168 }, { "type": "R", "before": "by questioning the validity of the", "after": ", with the goal to improve density forecasts, by relaxing the", "start_char_pos": 195, "end_char_pos": 229 }, { "type": "R", "before": "correlation", "after": "leverage effect", "start_char_pos": 261, "end_char_pos": 272 }, { "type": "A", "before": null, "after": "Bayesian", "start_char_pos": 348, "end_char_pos": 348 }, { "type": "R", "before": "correlation structures", "after": "leverage structures, where small fluctuations in prices have a different effect from large shocks", "start_char_pos": 410, "end_char_pos": 432 }, { "type": "A", "before": null, "after": "developed and", "start_char_pos": 480, "end_char_pos": 480 }, { "type": "A", "before": null, "after": "relaxing the linear assumption to our proposed nonlinear leverage effect function improves predictive performances for", "start_char_pos": 625, "end_char_pos": 625 } ]
[ 0, 170, 548, 684, 788, 913 ]
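The three revisions above concern the correlation between returns and volatility. A minimal discrete-time stochastic volatility simulation with a linear leverage effect (a generic SV sketch with invented parameters, not the nonlinear model the records propose) shows how the effect appears in simulated data:

import numpy as np

# Log-variance h follows an AR(1); its shock is correlated (rho) with the
# return shock, which is the standard linear leverage mechanism.
rng = np.random.default_rng(1)
T, rho, phi, sigma_eta = 20_000, -0.6, 0.98, 0.15
h = np.zeros(T)
r = np.zeros(T)
for t in range(T - 1):
    z = rng.standard_normal()
    eta = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal()
    r[t] = np.exp(h[t] / 2) * z
    h[t + 1] = phi * h[t] + sigma_eta * eta
r[-1] = np.exp(h[-1] / 2) * rng.standard_normal()

# Sample leverage: correlation between today's return and the next
# log-variance innovation; its sign matches rho.
print(np.corrcoef(r[:-1], np.diff(h))[0, 1])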
1605.06488
1
Molecular motor proteins serve as an essential component of intracellular transport by generating forces to haul cargoes along cytoskeletal filaments. In some circumstances, two species of motors that are directed oppositely (e.g. kinesin, dynein) can be attached to the same cargo . The resulting net motion is known to be bidirectional , but the mechanism of switching remains unclear. In this work , we propose a mean-field mathematical model of the mechanical interactions of two populations of molecular motors with diffusion of the cargo (thermal fluctuations ) as the fundamental noise source. By studying a simplified model, the delayed response of motors to rapid fluctuations in the cargo is quantified, allowing for the reduction of the full model to two "characteristic positions " of each of the motor populations . The system is then found to be "metastable", switching between two distinct directional transport states , or bidirectional motion . The time to switch between these states is then investigated using WKB analysisin the weak-noise limit .
Molecular motor proteins serve as an essential component of intracellular transport by generating forces to haul cargoes along cytoskeletal filaments. Two species of motors that are directed oppositely (e.g. kinesin, dynein) can be attached to the same cargo , which is known to produce bidirectional net motion. However, the mechanism of switching remains subtle. Although previous work focuses on the motor number as the driving noise source for switching, this work proposes an alternative possibility: cargo diffusion. A mean-field mathematical model of mechanical interactions of two populations of molecular motors with cargo thermal fluctuations (diffusion) is presented to study this phenomenon. The delayed response of a motor to fluctuations in the cargo velocity is quantified, allowing for the reduction of the full model a single "characteristic position " , a proxy for the net force on the cargo . The system is then found to be metastable, with switching exclusively due to cargo diffusion between two distinct directional transport states . The time to switch between these states is then investigated using a mean first passage time analysis. The switching time is found to be non-monotonic in the drag of the cargo, providing an experimental prediction for verification .
[ { "type": "R", "before": "In some circumstances, two", "after": "Two", "start_char_pos": 151, "end_char_pos": 177 }, { "type": "R", "before": ". The resulting net motion", "after": ", which", "start_char_pos": 282, "end_char_pos": 308 }, { "type": "R", "before": "be bidirectional , but", "after": "produce bidirectional net motion. However,", "start_char_pos": 321, "end_char_pos": 343 }, { "type": "R", "before": "unclear. In this work , we propose a", "after": "subtle. Although previous work focuses on the motor number as the driving noise source for switching, this work proposes an alternative possibility: cargo diffusion. A", "start_char_pos": 379, "end_char_pos": 415 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 449, "end_char_pos": 452 }, { "type": "R", "before": "diffusion of the cargo (thermal fluctuations ) as the fundamental noise source. By studying a simplified model, the", "after": "cargo thermal fluctuations (diffusion) is presented to study this phenomenon. The", "start_char_pos": 521, "end_char_pos": 636 }, { "type": "R", "before": "motors to rapid", "after": "a motor to", "start_char_pos": 657, "end_char_pos": 672 }, { "type": "A", "before": null, "after": "velocity", "start_char_pos": 699, "end_char_pos": 699 }, { "type": "R", "before": "to two", "after": "a single", "start_char_pos": 760, "end_char_pos": 766 }, { "type": "R", "before": "positions", "after": "position", "start_char_pos": 783, "end_char_pos": 792 }, { "type": "R", "before": "of each of the motor populations", "after": ", a proxy for the net force on the cargo", "start_char_pos": 795, "end_char_pos": 827 }, { "type": "R", "before": "\"metastable\", switching", "after": "metastable, with switching exclusively due to cargo diffusion", "start_char_pos": 861, "end_char_pos": 884 }, { "type": "D", "before": ", or bidirectional motion", "after": null, "start_char_pos": 935, "end_char_pos": 960 }, { "type": "R", "before": "WKB analysisin the weak-noise limit", "after": "a mean first passage time analysis. The switching time is found to be non-monotonic in the drag of the cargo, providing an experimental prediction for verification", "start_char_pos": 1030, "end_char_pos": 1065 } ]
[ 0, 150, 283, 387, 600, 829, 962 ]
1605.06488
2
Molecular motor proteins serve as an essential component of intracellular transport by generating forces to haul cargoes along cytoskeletal filaments. Two species of motors that are directed oppositely (e.g. kinesin, dynein) can be attached to the same cargo, which is known to produce bidirectional net motion. However, the mechanism of switching remains subtle. Although previous work focuses on the motor number as the driving noise source for switching, this work proposes an alternative possibility : cargo diffusion. A mean-field mathematical model of mechanical interactions of two populations of molecular motors with cargo thermal fluctuations (diffusion) is presented to study this phenomenon. The delayed response of a motor to fluctuations in the cargo velocity is quantified, allowing for the reduction of the full model a single "characteristic position ", a proxy for the net force on the cargo. The system is then found to be metastable, with switching exclusively due to cargo diffusion between two distinct directional transport states. The time to switch between these states is then investigated using a mean first passage time analysis. The switching time is found to be non-monotonic in the drag of the cargo, providing an experimental prediction for verification .
Molecular motor proteins serve as an essential component of intracellular transport by generating forces to haul cargoes along cytoskeletal filaments. Two species of motors that are directed oppositely (e.g. kinesin, dynein) can be attached to the same cargo, which is known to produce bidirectional net motion. Although previous work focuses on the motor number as the driving noise source for switching, we propose an alternative mechanism : cargo diffusion. A mean-field mathematical model of mechanical interactions of two populations of molecular motors with cargo thermal fluctuations (diffusion) is presented to study this phenomenon. The delayed response of a motor to fluctuations in the cargo velocity is quantified, allowing for the reduction of the full model a single "characteristic distance ", a proxy for the net force on the cargo. The system is then found to be metastable, with switching exclusively due to cargo diffusion between distinct directional transport states. The time to switch between these states is then investigated using a mean first passage time analysis. The switching time is found to be non-monotonic in the drag of the cargo, providing an experimental test of the theory .
[ { "type": "D", "before": "However, the mechanism of switching remains subtle.", "after": null, "start_char_pos": 312, "end_char_pos": 363 }, { "type": "R", "before": "this work proposes an alternative possibility", "after": "we propose an alternative mechanism", "start_char_pos": 458, "end_char_pos": 503 }, { "type": "R", "before": "position", "after": "distance", "start_char_pos": 859, "end_char_pos": 867 }, { "type": "D", "before": "two", "after": null, "start_char_pos": 1012, "end_char_pos": 1015 }, { "type": "R", "before": "prediction for verification", "after": "test of the theory", "start_char_pos": 1258, "end_char_pos": 1285 } ]
[ 0, 150, 311, 363, 522, 703, 910, 1054, 1157 ]
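Both revisions above rely on a mean first passage time (MFPT) analysis. A generic Monte Carlo MFPT estimate for an overdamped particle escaping a double-well potential (a standard toy setting, not the motor-cargo model itself; the potential, noise level and time step are arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)

def first_passage_time(D=0.1, dt=1e-3, x0=-1.0, target=1.0, max_steps=10**7):
    """Euler-Maruyama escape time from x0 to target in U(x) = (x^2 - 1)^2 / 4."""
    x = x0
    for step in range(max_steps):
        x += -x * (x**2 - 1) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        if x >= target:
            return (step + 1) * dt
    return np.nan   # did not escape within the step budget

times = [first_passage_time() for _ in range(20)]
print("estimated MFPT:", np.nanmean(times))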
1605.06849
1
In this paper, we consider the problem of maximizing the expected discounted utility of dividend payments for an insurance company that controls risk exposure by purchasing proportional reinsurance. We assume the preference of the insurer is of CRRA form. By solving the corresponding Hamilton-Jacobi-Bellman equation, we identify the value function and the corresponding optimal strategy. We also analyze the asymptotics of the value function for large initial reserves .
In this paper, we consider the problem of maximizing the expected discounted utility of dividend payments for an insurance company that controls risk exposure by purchasing proportional reinsurance. We assume the preference of the insurer is of CRRA form. By solving the corresponding Hamilton-Jacobi-Bellman equation, we identify the value function and the corresponding optimal strategy. We also analyze the asymptotic behavior of the value function for large initial reserves . Finally, we provide some numerical examples to illustrate the results and analyze the sensitivity of the parameters .
[ { "type": "R", "before": "asymptotics", "after": "asymptotic behavior", "start_char_pos": 410, "end_char_pos": 421 }, { "type": "A", "before": null, "after": ". Finally, we provide some numerical examples to illustrate the results and analyze the sensitivity of the parameters", "start_char_pos": 471, "end_char_pos": 471 } ]
[ 0, 198, 255, 389 ]
1605.07099
1
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the finite-dimensional distributions of the log price process admit a Gram--Charlier A expansion in closed-form. We use this to derive closed-form series representations for option prices whose payoff is a function of the underlying asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and forward start options on the underlying return. We derive sharp analytical and numerical bounds on the truncation errors. We illustrate the performance by numerical examples, which show that our approach offers a viable alternative to Fourier transform techniques.
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the the joint distribution of any finite sequence of log returns admits a Gram-Charlier A expansion in closed-form. We use this to derive closed-form series representations for option prices whose payoff is a function of the underlying asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and forward start options on the underlying return. We derive sharp analytical and numerical bounds on the series truncation errors. We illustrate the performance by numerical examples, which show that our approach offers a viable alternative to Fourier transform techniques.
[ { "type": "R", "before": "finite-dimensional distributions of the log price process admit a Gram--Charlier", "after": "the joint distribution of any finite sequence of log returns admits a Gram-Charlier", "start_char_pos": 187, "end_char_pos": 267 }, { "type": "A", "before": null, "after": "series", "start_char_pos": 654, "end_char_pos": 654 } ]
[ 0, 123, 169, 295, 468, 598, 673 ]
1605.07099
2
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the the joint distribution of any finite sequence of log returns admits a Gram-Charlier A expansion in closed-form . We use this to derive closed-form series representations for option prices whose payoff is a function of the underlying asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and forward start options on the underlying return. We derive sharp analytical and numerical bounds on the series truncation errors. We illustrate the performance by numerical examples, which show that our approach offers a viable alternative to Fourier transform techniques .
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the joint density of any finite sequence of log returns admits a Gram-Charlier A expansion with closed-form coefficients. We derive closed-form series representations for option prices whose discounted payoffs are functions of the asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and can be applied to discretely monitored Asian options. In a numerical analysis we find that the price approximations become accurate within short CPU time .
[ { "type": "R", "before": "the joint distribution", "after": "joint density", "start_char_pos": 187, "end_char_pos": 209 }, { "type": "R", "before": "in", "after": "with", "start_char_pos": 283, "end_char_pos": 285 }, { "type": "R", "before": ". We use this to", "after": "coefficients. We", "start_char_pos": 298, "end_char_pos": 314 }, { "type": "R", "before": "payoff is a function of the underlying", "after": "discounted payoffs are functions of the", "start_char_pos": 381, "end_char_pos": 419 }, { "type": "R", "before": "forward start optionson the underlying return. We derive sharp analytical and numerical bounds on the series truncation errors. We illustrate the performance by numerical examples, which show that our approach offers a viable alternative to Fourier transform techniques", "after": "can be applied to discretely monitored Asian options. In a numerical analysis we find that the price approximations become accurate within short CPU time", "start_char_pos": 555, "end_char_pos": 824 } ]
[ 0, 123, 169, 299, 472, 601, 682 ]
1605.07099
3
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the joint density of any finite sequence of log returns admits a Gram-Charlier A expansion with closed-form coefficients. We derive closed-form series representations for option prices whose discounted payoffs are functions of the asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and can be applied to discretely monitored Asian options. In a numerical analysis we find that the price approximations become accurate within short CPU time .
We introduce a novel stochastic volatility model where the squared volatility of the asset return follows a Jacobi process. It contains the Heston model as a limit case. We show that the joint density of any finite sequence of log returns admits a Gram-Charlier A expansion with closed-form coefficients. We derive closed-form series representations for option prices whose discounted payoffs are functions of the asset price trajectory at finitely many time points. This includes European call, put, and digital options, forward start options, and can be applied to discretely monitored Asian options. In a numerical analysis we show that option prices can be accurately and efficiently approximated by truncating their series representations .
[ { "type": "R", "before": "find that the price approximations become accurate within short CPU time", "after": "show that option prices can be accurately and efficiently approximated by truncating their series representations", "start_char_pos": 630, "end_char_pos": 702 } ]
[ 0, 123, 169, 304, 466, 602 ]
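All three revisions above build on a Jacobi process for the squared volatility. A generic Euler scheme for a Jacobi-type diffusion on an interval (the parametrization below is a common textbook form and may differ from the paper's; all parameter values are invented):

import numpy as np

# dv = kappa * (theta - v) dt + sigma * sqrt((v - v_min) * (v_max - v)) dW.
# The diffusion term vanishes at both endpoints, keeping v inside [v_min, v_max].
rng = np.random.default_rng(3)
v_min, v_max, kappa, theta, sigma = 0.01, 0.5, 2.0, 0.1, 0.8
dt, n = 1e-3, 100_000
v = np.empty(n)
v[0] = theta
for t in range(n - 1):
    diff = sigma * np.sqrt(max((v[t] - v_min) * (v_max - v[t]), 0.0))
    v[t + 1] = v[t] + kappa * (theta - v[t]) * dt + diff * np.sqrt(dt) * rng.standard_normal()
    v[t + 1] = min(max(v[t + 1], v_min), v_max)   # clip Euler overshoots

print("time-averaged squared volatility:", v.mean())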
1605.07124
1
Gene transcription is a highly stochastic and dynamic process. As a result, the mRNA copy number of a given gene is heterogeneous both between cells and across time. We present a framework to model gene transcription in populations of cells with time-varying (stochastic or deterministic) transcription and degradation rates. Such rates can be understood as upstream cellular drives representing the effect of different aspects of the cellular environment. We show that the full solution of the master equation contains two components: a model-specific, upstream effective drive, which encapsulates the effect of the cellular drives (e.g., entrainment, periodicity or promoter randomness), and a downstream transcriptional Poissonian part, which is common to all models. Our analytical framework allows us to treat cell-to-cell and dynamic variability consistently, unifying several approaches in the literature. We apply the obtained solution to characterize several gene transcription models of experimental relevance, and to explain the influence on gene transcription of synchrony, stationarity, ergodicity, as well as the effect of time-scales and other dynamic characteristics of drives. We also show how the solution can be applied to the analysis of single-cell data, and to reduce the computational cost of sampling solutions via stochastic simulation .
Gene transcription is a highly stochastic and dynamic process. As a result, the mRNA copy number of a given gene is heterogeneous both between cells and across time. We present a framework to model gene transcription in populations of cells with time-varying (stochastic or deterministic) transcription and degradation rates. Such rates can be understood as upstream cellular drives representing the effect of different aspects of the cellular environment. We show that the full solution of the master equation contains two components: a model-specific, upstream effective drive, which encapsulates the effect of cellular drives (e.g., entrainment, periodicity or promoter randomness), and a downstream transcriptional Poissonian part, which is common to all models. Our analytical framework treats cell-to-cell and dynamic variability consistently, unifying several approaches in the literature. We apply the obtained solution to characterise different models of experimental relevance, and to explain the influence on gene transcription of synchrony, stationarity, ergodicity, as well as the effect of time-scales and other dynamic characteristics of drives. We also show how the solution can be applied to the analysis of noise sources in single-cell data, and to reduce the computational cost of stochastic simulations .
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 613, "end_char_pos": 616 }, { "type": "R", "before": "allows us to treat", "after": "treats", "start_char_pos": 796, "end_char_pos": 814 }, { "type": "R", "before": "characterize several gene transcription", "after": "characterise different", "start_char_pos": 947, "end_char_pos": 986 }, { "type": "A", "before": null, "after": "noise sources in", "start_char_pos": 1258, "end_char_pos": 1258 }, { "type": "R", "before": "sampling solutions via stochastic simulation", "after": "stochastic simulations", "start_char_pos": 1317, "end_char_pos": 1361 } ]
[ 0, 62, 165, 325, 456, 770, 912, 1193 ]
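The record above separates a model-specific upstream drive from a downstream Poissonian part. Sampling event times from a Poisson process with a time-varying rate is the computational core of such models; a minimal Lewis-Shedler thinning sketch (the sinusoidal drive is an arbitrary illustration, not a fitted model):

import numpy as np

rng = np.random.default_rng(4)

def rate(t):
    return 5.0 + 4.0 * np.sin(2 * np.pi * t / 10.0)   # events per unit time

def sample_events(t_end, rate_max=9.0):
    """Thinning: propose from a homogeneous process at rate_max, accept with prob rate(t)/rate_max."""
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_end:
            return np.array(events)
        if rng.random() < rate(t) / rate_max:
            events.append(t)

ev = sample_events(100.0)
print(len(ev), "events; empirical mean rate:", len(ev) / 100.0)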
1605.07171
1
This article outlines some techniques for the use of bitwise operations in programming languages C /C ++ and Java. As an example, we describe an algorithm for receiving a Latin square of arbitrary order .
The main thrust of the article is to provide interesting example, useful for students of using bitwise operations in the programming languages C ++ and Java. As an example, we describe an algorithm for obtaining a Latin square of arbitrary order . We will outline some techniques for the use of bitwise operations .
[ { "type": "R", "before": "This article outlines some techniques for the use of", "after": "The main thrust of the article is to provide interesting example, useful for students of using", "start_char_pos": 0, "end_char_pos": 52 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 75, "end_char_pos": 75 }, { "type": "D", "before": "/C", "after": null, "start_char_pos": 100, "end_char_pos": 102 }, { "type": "R", "before": "receiving", "after": "obtaining", "start_char_pos": 160, "end_char_pos": 169 }, { "type": "A", "before": null, "after": ". We will outline some techniques for the use of bitwise operations", "start_char_pos": 204, "end_char_pos": 204 } ]
[ 0, 115 ]
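The record above mentions constructing Latin squares with bitwise techniques. A minimal sketch of the generic idea (the classical cyclic construction plus bitmask validation; this is standard material, not the paper's algorithm, and it is written in Python rather than the C++/Java of the record):

def latin_square(n):
    # Cyclic rule: row i is the sequence 0..n-1 shifted left by i.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    n = len(square)
    full = (1 << n) - 1                 # mask with the n lowest bits set
    rows_and_cols = square + [list(col) for col in zip(*square)]
    for line in rows_and_cols:
        mask = 0
        for x in line:
            mask |= 1 << x              # mark symbol x as seen
        if mask != full:                # some symbol missing or repeated
            return False
    return True

sq = latin_square(5)
print(*sq, sep="\n")
print("valid Latin square:", is_latin(sq))

Using one integer per row or column instead of a set makes the membership bookkeeping a single OR and a single comparison.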
1605.07247
1
Motivation: Early approaches for protein (structural ) classification were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly. Instead, 3D structures can first be modeled as protein structure networks (PSNs). Then, network approaches can be used to classify the PSNs. Network approaches may improve upon traditional 3D contact approaches. We cannot use existing PSN approaches to test this, because: 1) They rely on naive measures of network topology that cannot capture the complexity of PSNs . 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could help because the different data types capture complementary biological knowledge. Results: We address these limitations by: 1) exploiting well-established graphlet measures via a new network approach for protein classification , 2) introducing novel normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary ideas of PSN data and sequence data. We classify both synthetic networks and real-world PSNs more accurately and faster than existing network, 3D contact, or sequence approaches. Our approach finds PSN patterns that may be biochemically interesting.
Initial protein structural comparisons were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly. Instead, 3D structures can be modeled as protein structure networks (PSNs). Then, network approaches can compare proteins by comparing their PSNs. Network approaches may improve upon traditional 3D contact approaches. We cannot use existing PSN approaches to test this, because: 1) They rely on naive measures of network topology . 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could help because the different data types capture complementary biological knowledge. We address these limitations by: 1) exploiting well-established graphlet measures via a new network approach , 2) introducing normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary PSN data and sequence data. We compare both synthetic networks and real-world PSNs more accurately and faster than existing network, 3D contact, or sequence approaches. Our approach finds PSN patterns that may be biochemically interesting.
[ { "type": "R", "before": "Motivation: Early approaches for protein (structural ) classification", "after": "Initial protein structural comparisons", "start_char_pos": 0, "end_char_pos": 69 }, { "type": "D", "before": "first", "after": null, "start_char_pos": 341, "end_char_pos": 346 }, { "type": "R", "before": "be used to classify the", "after": "compare proteins by comparing their", "start_char_pos": 425, "end_char_pos": 448 }, { "type": "D", "before": "that cannot capture the complexity of PSNs", "after": null, "start_char_pos": 638, "end_char_pos": 680 }, { "type": "D", "before": "Results:", "after": null, "start_char_pos": 903, "end_char_pos": 911 }, { "type": "D", "before": "for protein classification", "after": null, "start_char_pos": 1021, "end_char_pos": 1047 }, { "type": "D", "before": "novel", "after": null, "start_char_pos": 1065, "end_char_pos": 1070 }, { "type": "D", "before": "ideas of", "after": null, "start_char_pos": 1243, "end_char_pos": 1251 }, { "type": "R", "before": "classify", "after": "compare", "start_char_pos": 1283, "end_char_pos": 1291 } ]
[ 0, 90, 249, 313, 395, 525, 718, 902, 1279, 1421 ]
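The record above builds on protein structure networks. A minimal sketch of turning residue coordinates into a PSN via a distance cutoff (the random coordinates and the 7.5 Angstrom cutoff are placeholder assumptions; real input would be C-alpha positions parsed from a PDB file):

import numpy as np

rng = np.random.default_rng(5)
coords = rng.uniform(0.0, 30.0, size=(60, 3))     # fake C-alpha positions
cutoff = 7.5

# Pairwise distances, then a boolean contact-map adjacency matrix.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
adj = (d < cutoff) & ~np.eye(len(coords), dtype=bool)

print("nodes:", len(coords), "edges:", int(adj.sum()) // 2)

Graphlet counting and the ordered-graphlet extension would then operate on this adjacency structure.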
1605.07353
1
The recent research effort towards defining new communication solutions to guarantee high availability level with limited cabling costs and complexity has renewed the interest in ring-based networks. This topology has been recently used for industrial and embedded applications , with the implementation of many Real Time Ethernet (RTE) profiles. A relevant issue for such networks is handling cyclic dependencies to prove timing predictability, a key requirement for safety-critical systems , e.g., avionics and automotive. To deal with the performance evaluation of such networks, most relevant existing techniques are based on the Network Calculus framework, and consists in analyzing locally the delay upper bound in each crossed node, resulting in pessimistic end-to-end delay bounds. To overcome this limitation, an enhanced global timing analysis , accounting the flow serialization phenomena along the flow path , is proposed in this paper to improve the delay bounds tightness. The main contribution consists in defining and proving a closed form formula of the guaranteed end-to-end service curve of any flow of interest crossing a FIFO ring-based network. An extensive analysis of such a proposal has been conducted regarding the tightness of delay bounds and its impact on the system performance, in terms of system scalability and resource-efficiency . Results highlight the proposed approach efficiency to compute tight delay bounds , in comparison with conventional timing analysis and in reference with a worst-case delay lower bound .
The recent research effort towards defining new communication solutions for cyber-physical systems (CPS), to guarantee high availability level with limited cabling costs and complexity , has renewed the interest in ring-based networks. This topology has been recently used for various networked cyber-physical systems (Net-CPS), e.g., avionics and automotive , with the implementation of many Real Time Ethernet (RTE) profiles. A relevant issue for such networks is to prove timing predictability, a key requirement for safety-critical systems . We are interested in this paper in event-triggered ring-based networks, which guarantee high resource utilization efficiency and (re)configuration flexibility, at the cost of increasing the timing analysis complexity. The implementation of such a communication scheme on top of a ring topology actually induces cyclic dependencies, in comparison to time-triggered solutions. To cope with this arising issue of cyclic dependencies, only few techniques have been proposed in the literature, mainly based on Network Calculus framework, and consist in analyzing locally the delay upper bound in each crossed node, resulting in pessimistic end-to-end delay bounds. Hence, the main contribution in this paper is enhancing the delay bounds tightness of such networks, through an innovative global analysis based on Network Calculus , accounting the flow serialization phenomena along the flow path . An extensive analysis of such a proposal is conducted herein regarding the accuracy of delay bounds and its impact on the system performance, i.e., scalability and resource-efficiency ; and the results highlight its outperformance , in comparison to conventional methods .
[ { "type": "A", "before": null, "after": "for cyber-physical systems (CPS),", "start_char_pos": 72, "end_char_pos": 72 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 152, "end_char_pos": 152 }, { "type": "R", "before": "industrial and embedded applications", "after": "various networked cyber-physical systems (Net-CPS), e.g., avionics and automotive", "start_char_pos": 243, "end_char_pos": 279 }, { "type": "D", "before": "handling cyclic dependencies", "after": null, "start_char_pos": 387, "end_char_pos": 415 }, { "type": "R", "before": ", e. g., avionics and automotive. To deal with the performance evaluation of such networks, most relevant existing techniques are based on the", "after": ". We are interested in this paper in event-triggered ring-based networks, which guarantee high resource utilization efficiency and (re)configuration flexibility, at the cost of increasing the timing analysis complexity. The implementation of such a communication scheme on top of a ring topology actually induces cyclic dependencies, in comparison to time-triggered solutions. To cope with this arising issue of cyclic dependencies, only few techniques have been proposed in the literature, mainly based on", "start_char_pos": 494, "end_char_pos": 636 }, { "type": "R", "before": "consists", "after": "consist", "start_char_pos": 669, "end_char_pos": 677 }, { "type": "R", "before": "To overcome this limitation, an enhanced global timing analysis", "after": "Hence, the main contribution in this paper is enhancing the delay bounds tightness of such networks, through an innovative global analysis based on Network Calculus", "start_char_pos": 793, "end_char_pos": 856 }, { "type": "R", "before": ", is proposed in this paper to improve the delay bounds tightness. The main contribution consists in defining and proving a closed form formula of the guaranteed end-to-end service curve of any flow of interest crossing a FIFO ring-based network.", "after": ".", "start_char_pos": 923, "end_char_pos": 1169 }, { "type": "R", "before": "has been conducted regarding the tightness", "after": "is conducted herein regarding the accuracy", "start_char_pos": 1211, "end_char_pos": 1253 }, { "type": "R", "before": "in terms of system", "after": "i.e.,", "start_char_pos": 1312, "end_char_pos": 1330 }, { "type": "R", "before": ". Results highlight the proposed approach efficiency to compute tight delay bounds", "after": "; and the results highlight its outperformance", "start_char_pos": 1367, "end_char_pos": 1449 }, { "type": "R", "before": "with conventional timing analysis and in reference with a worst-case delay lower bound", "after": "to conventional methods", "start_char_pos": 1466, "end_char_pos": 1552 } ]
[ 0, 201, 348, 527, 792, 989, 1169, 1368 ]
1605.07353
2
The recent research effort towards defining new communication solutions for cyber-physical systems (CPS), to guarantee high availability level with limited cabling costs and complexity, has renewed the interest in ring-based networks. This topology has been recently used for various networked cyber-physical systems (Net-CPS), e.g., avionics and automotive, with the implementation of many Real Time Ethernet (RTE) profiles. A relevant issue for such networks is to prove timing predictability, a key requirement for safety-critical systems. We are interested in this paper in event-triggered ring-based networks, which guarantee high resource utilization efficiency and (re)configuration flexibility, at the cost of increasing the timing analysis complexity. The implementation of such a communication scheme on top of a ring topology actually induces cyclic dependencies, in comparison to time-triggered solutions. To cope with this arising issue of cyclic dependencies, only few techniques have been proposed in the literature, mainly based on Network Calculus framework, and consist in analyzing locally the delay upper bound in each crossed node, resulting in pessimistic end-to-end delay bounds. Hence, the main contribution in this paper is enhancing the delay bounds tightness of such networks, through an innovative global analysis based on Network Calculus, accounting the flow serialization phenomena along the flow path . An extensive analysis of such a proposal is conducted herein regarding the accuracy of delay bounds and its impact on the system performance, i.e., scalability and resource-efficiency; and the results highlight its outperformance, in comparison to conventional methods .
Tightening performance bounds of ring networks with cyclic dependencies is still an open problem in the literature. In this paper, we tackle such a challenging issue based on Network Calculus . First, we review the conventional timing approaches in the area and identify their main limitations, in terms of delay bounds pessimism. Afterwards, we have introduced a new concept called Pay Multiplexing Only at Convergence points (PMOC) to overcome such limitations. PMOC considers the flow serialization phenomena along the flow path , by paying the bursts of interfering flows only at the convergence points. The guaranteed end-to-end service curves under such a concept have been defined and proved for mono-ring and multiple-ring networks, as well as under Arbitrary and Fixed Priority multiplexing. A sensitivity analysis of the computed delay bounds for mono and multiple-ring networks is conducted with respect to various flow and network parameters, and their tightness is assessed in comparison with an achievable worst-case delay. A noticeable enhancement of the delay bounds, thus network resource efficiency and scalability, is highlighted under our proposal with reference to conventional approaches. Finally, the efficiency of the PMOC approach to provide timing guarantees is confirmed in the case of a realistic avionics application .
[ { "type": "R", "before": "The recent research effort towards defining new communication solutions for cyber-physical systems (CPS), to guarantee high availability level with limited cabling costs and complexity, has renewed the interest in ring-based networks. This topology has been recently used for various networked cyber-physical systems (Net-CPS), e.g., avionics and automotive, with the implementation of many Real Time Ethernet (RTE) profiles. A relevant issue for such networks is to prove timing predictability, a key requirement for safety-critical systems. We are interested in this paper in event-triggered ring-based networks, which guarantee high resource utilization efficiency and (re)configuration flexibility, at the cost of increasing the timing analysis complexity. The implementation of such a communication scheme on top of a ring topology actually induces cyclic dependencies, in comparison to time-triggered solutions. To cope with this arising issue of cyclic dependencies, only few techniques have been proposed in the literature, mainly", "after": "Tightening performance bounds of ring networks with cyclic dependencies is still an open problem in the literature. In this paper, we tackle such a challenging issue", "start_char_pos": 0, "end_char_pos": 1038 }, { "type": "R", "before": "framework, and consist in analyzing locally the delay upper bound in each crossed node, resulting in pessimistic end-to-end delay bounds . Hence, the main contribution in this paper is enhancing the delay bounds tightness of such networks, through an innovative global analysis based on Network Calculus, accounting", "after": ". First, we review the conventional timing approaches in the area and identify their main limitations, in terms of delay bounds pessimism. Afterwards, we have introduced a new concept called Pay Multiplexing Only at Convergence points (PMOC) to overcome such limitations. PMOC considers", "start_char_pos": 1065, "end_char_pos": 1380 }, { "type": "R", "before": ". An extensive analysis of such a proposal is conducted herein regarding the accuracy of delay bounds", "after": ", by paying the bursts of interfering flows only at the convergence points. The guaranteed endto- end service curves under such a concept have been defined and proved for mono-ring", "start_char_pos": 1434, "end_char_pos": 1535 }, { "type": "R", "before": "its impact on the system performance, i. e., scalability and resource-efficiency; and the results highlight its outperformance, in comparison to conventional methods", "after": "multiple-ring networks, as well as under Arbitrary and Fixed Priority multiplexing. A sensitivity analysis of the computed delay bounds for mono and multiple-ring networks is conducted with respect to various flow and network parameters, and their tightness is assessed in comparison with an achievable worst-case delay. A noticeable enhancement of the delay bounds, thus network resource efficiency and scalability, is highlighted under our proposal with reference to conventional approaches. Finally, the efficiency of the PMOC approach to provide timing guarantees is confirmed in the case of a realistic avionics application", "start_char_pos": 1540, "end_char_pos": 1705 } ]
[ 0, 234, 425, 542, 760, 917, 1203, 1435, 1621 ]
1605.07419
1
We introduce a novel class of credit risk models in which the drift of the survival process of a firm is a linear function of the factors. These models outperform the standard affine default intensity models in terms of analytical tractability. The prices of defaultable bonds and credit default swaps (CDS) are linear in the factors. The price of a CDS option can be uniformly approximated by polynomials in the factors. An empirical study illustrates the versatility of these models by fitting CDS spread time series .
We introduce a novel class of credit risk models in which the drift of the survival process of a firm is a linear function of the factors. The prices of defaultable bonds and credit default swaps (CDS) are linear-rational in the factors. The price of a CDS option can be uniformly approximated by polynomials in the factors. Multi-name models can produce simultaneous defaults, generate positively as well as negatively correlated default intensities, and accommodate stochastic interest rates. A calibration study illustrates the versatility of these models by fitting CDS spread time series . A numerical analysis validates the efficiency of the option price approximation method .
[ { "type": "D", "before": "These models outperform the standard affine default intensity models in terms of analytical tractability.", "after": null, "start_char_pos": 139, "end_char_pos": 244 }, { "type": "R", "before": "linear", "after": "linear-rational", "start_char_pos": 312, "end_char_pos": 318 }, { "type": "R", "before": "An empirical", "after": "Multi-name models can produce simultaneous defaults, generate positively as well as negatively correlated default intensities, and accommodate stochastic interest rates. A calibration", "start_char_pos": 422, "end_char_pos": 434 }, { "type": "A", "before": null, "after": ". A numerical analysis validates the efficiency of the option price approximation method", "start_char_pos": 519, "end_char_pos": 519 } ]
[ 0, 138, 244, 334, 421 ]
1605.07673
1
The purpose of this document is to specify the basic data types required for storing electrophysiology and optical imaging data to facilitate computer-based neuroscience studies and data sharing. These requirements were developed within a working group of the Electrophysiology Task Force in the INCF Program on Standards for Data Sharing .
The purpose of this document is to specify the basic data types required for storing electrophysiology and optical imaging data to facilitate computer-based neuroscience studies and data sharing. These requirements are being developed within a working group of the Electrophysiology Task Force in the International Neuroinformatics Coordinating Facility (INCF) Program on Standards for Data Sharing . While this document describes the requirements of the standard independent of the actual storage technology, the Task Force has recommended basing a standard on HDF5. This is in line with a number of groups who are already using HDF5 to store electrophysiology data, although currently without being based on a standard .
[ { "type": "R", "before": "were", "after": "are being", "start_char_pos": 215, "end_char_pos": 219 }, { "type": "R", "before": "INCF", "after": "International Neuroinformatics Coordinating Facility (INCF)", "start_char_pos": 296, "end_char_pos": 300 }, { "type": "A", "before": null, "after": ". While this document describes the requirements of the standard independent of the actual storage technology, the Task Force has recommended basing a standard on HDF5. This is in line with a number of groups who are already using HDF5 to store electrophysiology data, although currently without being based on a standard", "start_char_pos": 339, "end_char_pos": 339 } ]
[ 0, 195 ]
1605.07884
1
The classical discrete time model of transaction costs relies on the assumption that the increments of the feasible portfolio process belong to the solvency set at each step. We extend this setting by assuming that any such increment belongs to the sum of an element of the solvency set and the family of acceptable positions, e.g. with respect to a dynamic risk measure. We describe the sets of superhedging prices, formulate several no risk arbitrage conditions and explore connections between them. If the acceptance sets consist of non-negative random vectors, that is the underlying dynamic risk measure is the conditional essential infimum, we extend many classical no arbitrage conditions in markets with transaction costs and provide their natural geometric interpretation . The mathematical technique relies on results for unbounded and possibly non-closed random sets in the Euclidean space.
The classical discrete time model of proportional transaction costs relies on the assumption that a feasible portfolio process has solvent increments at each step. We extend this setting in two directions, allowing for convex transaction costs and assuming that increments of the portfolio process belong to the sum of a solvency set and a family of multivariate acceptable positions, e.g. with respect to a dynamic risk measure. We describe the sets of superhedging prices, formulate several no (risk) arbitrage conditions and explore connections between them. In the special case when multivariate positions are converted into a single fixed asset, our framework turns into the no good deals setting. However, in general, the possibilities of assessing the risk with respect to any asset or a basket of the assets lead to a decrease of superhedging prices and the no arbitrage conditions become stronger . The mathematical technique relies on results for unbounded and possibly non-closed random sets in Euclidean space.
[ { "type": "A", "before": null, "after": "proportional", "start_char_pos": 37, "end_char_pos": 37 }, { "type": "R", "before": "the increments of the", "after": "a", "start_char_pos": 86, "end_char_pos": 107 }, { "type": "R", "before": "belong to the solvency set", "after": "has solvent increments", "start_char_pos": 135, "end_char_pos": 161 }, { "type": "R", "before": "by assuming that any such increment belongs", "after": "in two directions, allowing for convex transaction costs and assuming that increments of the portfolio process belong", "start_char_pos": 199, "end_char_pos": 242 }, { "type": "R", "before": "an element of the", "after": "a", "start_char_pos": 257, "end_char_pos": 274 }, { "type": "R", "before": "the family of", "after": "a family of multivariate", "start_char_pos": 292, "end_char_pos": 305 }, { "type": "R", "before": "risk", "after": "(risk)", "start_char_pos": 439, "end_char_pos": 443 }, { "type": "R", "before": "If the acceptance sets consist of non-negative random vectors, that is the underlying dynamic risk measure is the conditional essential infimum, we extend many classical", "after": "In the special case when multivariate positions are converted into a single fixed asset, our framework turns into the no good deals setting. However, in general, the possibilities of assessing the risk with respect to any asset or a basket of the assets lead to a decrease of superhedging prices and the", "start_char_pos": 503, "end_char_pos": 672 }, { "type": "R", "before": "in markets with transaction costs and provide their natural geometric interpretation", "after": "become stronger", "start_char_pos": 697, "end_char_pos": 781 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 882, "end_char_pos": 885 } ]
[ 0, 175, 372, 502, 783 ]
1605.08415
1
Given the network of interactions underlying a complex system, what can we learn about controlling such a system solely from its structure? Over a century of research in control theory has given us tools to answer this question, which were widely applied in science and engineering. Yet the current tools do not always consider the inherently nonlinear dynamics of real systems and the naturally occurring system states in their definition of "control", a term whose interpretation varies across disciplines. Here we use a new mathematical framework for structure-based control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This framework provides realizable node overrides that steer a system towards any of its natural long term dynamic behaviors and which are guaranteed to be effective regardless of the dynamic details and parameters of the underlying system . We use this framework on several real networks, compare its predictions to those of classical control theory, and identify the topological characteristics that underlie the commonalities and differencesbetween these frameworks . Finally, we illustrate the applicability of this new frameworkin the field of dynamic models by demonstrating its success in two models of a gene regulatory network and identifying the nodes whose override is necessary for control in the general case, but not in specific model instances.
What can we learn about controlling a system solely from its underlying network structure? Here we use a framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This feedback-based framework provides realizable node overrides that steer a system towards any of its natural long term dynamic behaviors , regardless of the dynamic details and system parameters . We use this framework on several real networks, compare its predictions to those of classical structural control theory, and identify the topological characteristics that underlie the observed differences . Finally, we demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case, but not in specific model instances.
[ { "type": "R", "before": "Given the network of interactions underlying a complex system, what", "after": "What", "start_char_pos": 0, "end_char_pos": 67 }, { "type": "D", "before": "such", "after": null, "start_char_pos": 99, "end_char_pos": 103 }, { "type": "R", "before": "structure? Over a century of research in control theory has given us tools to answer this question, which were widely applied in science and engineering. Yet the current tools do not always consider the inherently nonlinear dynamics of real systems and the naturally occurring system states in their definition of \"control\", a term whose interpretation varies across disciplines.", "after": "underlying network structure?", "start_char_pos": 129, "end_char_pos": 508 }, { "type": "R", "before": "new mathematical framework for structure-based", "after": "framework for", "start_char_pos": 523, "end_char_pos": 569 }, { "type": "A", "before": null, "after": "feedback-based", "start_char_pos": 734, "end_char_pos": 734 }, { "type": "R", "before": "and which are guaranteed to be effective", "after": ",", "start_char_pos": 855, "end_char_pos": 895 }, { "type": "R", "before": "parameters of the underlying system", "after": "system parameters", "start_char_pos": 934, "end_char_pos": 969 }, { "type": "A", "before": null, "after": "structural", "start_char_pos": 1066, "end_char_pos": 1066 }, { "type": "R", "before": "commonalities and differencesbetween these frameworks", "after": "observed differences", "start_char_pos": 1146, "end_char_pos": 1199 }, { "type": "R", "before": "illustrate the applicability of this new frameworkin the field of dynamic models by demonstrating its success in two models of a gene regulatory network and identifying the", "after": "demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify", "start_char_pos": 1214, "end_char_pos": 1386 } ]
[ 0, 139, 282, 508, 728, 971, 1201 ]
1605.08415
2
What can we learn about controlling a system solely from its underlying network structure? Here we use a framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This feedback-based framework provides realizable node overrides that steer a system towards any of its natural long term dynamic behaviors, regardless of the dynamic details and system parameters. We use this framework on several real networks, compare its predictions to those of classical structural control theory, and identify the topological characteristics that underlie the observed differences . Finally, we demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case, but not in specific model instances.
What can we learn about controlling a system solely from its underlying network structure? Here we adapt a recently developed framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This feedback-based framework provides realizable node overrides that steer a system towards any of its natural long term dynamic behaviors, regardless of the specific functional forms and system parameters. We use this framework on several real networks, identify the topological characteristics that underlie the predicted node overrides, and compare its predictions to those of structural controllability in control theory . Finally, we demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case, but not in specific model instances.
[ { "type": "R", "before": "use a", "after": "adapt a recently developed", "start_char_pos": 99, "end_char_pos": 104 }, { "type": "R", "before": "dynamic details", "after": "specific functional forms", "start_char_pos": 437, "end_char_pos": 452 }, { "type": "A", "before": null, "after": "identify the topological characteristics that underlie the predicted node overrides, and", "start_char_pos": 524, "end_char_pos": 524 }, { "type": "R", "before": "classical structural control theory, and identify the topological characteristics that underlie the observed differences", "after": "structural controllability in control theory", "start_char_pos": 561, "end_char_pos": 681 } ]
[ 0, 90, 277, 475, 683 ]
1605.08944
1
Application of pulling force, under force-clamp conditions, to kinetochore-microtubule attachments {\it in-vitro} revealed a catch-bond-like behavior. In an earlier paper (Sharma et al. Phys. Biol. (2014)) the physical origin of this apparently counter-intuitive phenomenon was traced to the nature of the force-dependence of the (de-)polymerization kinetics of the microtubules. In this brief communication that work is extended to situations where the external forced is ramped up till the attachment gets ruptured . In spite of the fundamental differences in the underlying mechanisms, the trend of variation of the rupture force distribution observed in our model kinetochore-microtubule attachment with the increasing loading rate is qualitatively similar to that displayed by the catch bonds formed in some other ligand-receptor systems. Our theoretical predictions can be tested experimentally by a straightforward modification of the protocol for controlling the force in the optical trap set up that was used in the original experiments under force-clamp conditions .
Measurement of the life time of attachments formed by a single microtubule (MT) with a single kinetochore (kt) {\it in-vitro} under force-clamp conditions revealed a catch-bond-like behavior. In the past the physical origin of this apparently counter-intuitive phenomenon was traced to the nature of the force-dependence of the (de-)polymerization kinetics of the microtubules. Here first the same model kt-MT attachment is subjected to external tension that is ramped up till the attachment gets ruptured ; the trend of variation of the rupture force distribution with increasing loading rate is consistent with that displayed by the catch bonds formed in some other ligand-receptor systems. We then extend the formalism to model an attachment of a bundle of multiple parallel microtubules with a single kt under force-clamp and force-ramp conditions. From numerical studies of the model we predict the trends of variation of the mean life time and mean rupture force with the increasing number of MTs in the bundle. Both the mean life time and the mean rupture force display nontrivial nonlinear dependence on the maximum number of MTs that can attach simultaneously to the same kt .
[ { "type": "R", "before": "Application of pulling force, under force-clamp conditions, to kinetochore-microtubule attachments", "after": "Measurement of the life time of attachments formed by a single microtubule (MT) with a single kinetochore (kt)", "start_char_pos": 0, "end_char_pos": 98 }, { "type": "A", "before": null, "after": "under force-clamp conditions", "start_char_pos": 114, "end_char_pos": 114 }, { "type": "D", "before": "an earlier paper (", "after": null, "start_char_pos": 155, "end_char_pos": 173 }, { "type": "D", "before": "Sharma et al. Phys. Biol. (2014)", "after": null, "start_char_pos": 195, "end_char_pos": 227 }, { "type": "R", "before": "the", "after": "the past the", "start_char_pos": 228, "end_char_pos": 231 }, { "type": "R", "before": "In this brief communication that work is extended to situations where the external forced", "after": "Here first the same model kt-MT attachment is subjected to external tension that", "start_char_pos": 402, "end_char_pos": 491 }, { "type": "R", "before": ". In spite of the fundamental differences in the underlying mechanisms, the", "after": "; the", "start_char_pos": 539, "end_char_pos": 614 }, { "type": "R", "before": "observed in our model kinetochore-microtubule attachment with the", "after": "with", "start_char_pos": 668, "end_char_pos": 733 }, { "type": "R", "before": "qualitatively similar to", "after": "consistent with", "start_char_pos": 761, "end_char_pos": 785 }, { "type": "R", "before": "Our theoretical predictions can be tested experimentally by a straightforward modification of the protocol for controlling the force in the optical trap set up that was used in the original experiments under", "after": "We then extend the formalism to model an attachment of a bundle of multiple parallel microtubules with a single kt under", "start_char_pos": 866, "end_char_pos": 1073 }, { "type": "R", "before": "conditions", "after": "and force-ramp conditions. From numerical studies of the model we predict the trends of variation of the mean life time and mean rupture force with the increasing number of MTs in the bundle. Both the mean life time and the mean rupture force display nontrivial nonlinear dependence on the maximum number of MTs that can attach simultaneously to the same kt", "start_char_pos": 1086, "end_char_pos": 1096 } ]
[ 0, 151, 208, 401, 540, 865 ]
1605.08944
2
Measurement of the life time of attachments formed by a single microtubule (MT) with a single kinetochore (kt) {\it in-vitro} under force-clamp conditions revealed a catch-bond-like behavior. In the past the physical origin of this apparently counter-intuitive phenomenon was traced to the nature of the force-dependence of the (de-)polymerization kinetics of the microtubules . Here first the same model kt-MT attachment is subjected to external tension that is ramped up till the attachment gets ruptured; the trend of variation of the rupture force distribution with increasing loading rate is consistent with that displayed by the catch bonds formed in some other ligand-receptor systems. We then extend the formalism to model an attachment of a bundle of multiple parallel microtubules with a single kt under force-clamp and force-ramp conditions. From numerical studies of the model we predict the trends of variation of the mean life time and mean rupture force with the increasing number of MTs in the bundle. Both the mean life time and the mean rupture force display nontrivial nonlinear dependence on the maximum number of MTs that can attach simultaneously to the same kt.
Measurement of the life time of attachments formed by a single microtubule (MT) with a single kinetochore (kt) {\it in-vitro} under force-clamp conditions had earlier revealed a catch-bond-like behavior. In the past the physical origin of this apparently counter-intuitive phenomenon was traced to the nature of the force-dependence of the (de-)polymerization kinetics of the MTs . Here first the same model MT-kt attachment is subjected to external tension that increases linearly with time until rupture occurs. In our {\it force-ramp} experiments {\it in-silico}, the model displays the well known `mechanical signatures' of a catch-bond probed by molecular force spectroscopy. Exploiting this new evidence, we have further strengthened the analogy between MT-kt attachments and common ligand-receptor bonds in spite of the crucial differences in their underlying physical mechanisms . We then extend the formalism to model the stochastic kinetics of an attachment formed by a bundle of multiple parallel microtubules with a single kt considering the effect of rebinding under force-clamp and force-ramp conditions. From numerical studies of the model we predict the trends of variation of the mean life time and mean rupture force with the increasing number of MTs in the bundle. Both the mean life time and the mean rupture force display nontrivial nonlinear dependence on the maximum number of MTs that can attach simultaneously to the same kt.
[ { "type": "A", "before": null, "after": "had earlier", "start_char_pos": 155, "end_char_pos": 155 }, { "type": "R", "before": "microtubules", "after": "MTs", "start_char_pos": 365, "end_char_pos": 377 }, { "type": "R", "before": "kt-MT", "after": "MT-kt", "start_char_pos": 406, "end_char_pos": 411 }, { "type": "R", "before": "is ramped up till the attachment gets ruptured; the trend of variation of the rupture force distribution with increasing loading rate is consistent with that displayed by the catch bonds formed in some other ligand-receptor systems", "after": "increases linearly with time until rupture occurs. In our", "start_char_pos": 461, "end_char_pos": 692 }, { "type": "A", "before": null, "after": "force-ramp", "start_char_pos": 697, "end_char_pos": 697 }, { "type": "A", "before": null, "after": "experiments", "start_char_pos": 698, "end_char_pos": 698 }, { "type": "A", "before": null, "after": "in-silico", "start_char_pos": 703, "end_char_pos": 703 }, { "type": "A", "before": null, "after": ", the model displays the well known `mechanical signatures' of a catch-bond probed by molecular force spectroscopy. Exploiting this new evidence, we have further strengthened the analogy between MT-kt attachments and common ligand-receptor bonds in spite of the crucial differences in their underlying physical mechanisms", "start_char_pos": 704, "end_char_pos": 704 }, { "type": "R", "before": "an attachment of", "after": "the stochastic kinetics of an attachment formed by", "start_char_pos": 745, "end_char_pos": 761 }, { "type": "A", "before": null, "after": "considering the effect of rebinding", "start_char_pos": 822, "end_char_pos": 822 } ]
[ 0, 192, 379, 508, 706, 867, 1032 ]
1605.09181
1
The cumulant analysis plays an important role in non Gaussian distributed data analysis. The shares' prices returns are good example of such data. The purpose of this research is to develop the cumulant based algorithm and use it to determine eigenvectors that represent "respectively safe" investment portfolios with low variability. Such algorithm is based on the Alternating Least Square method and involves the simultaneous minimisation 2'nd -- 6'th cumulants of the multidimensional random variable (percentage shares' returns of many companies). Then the algorithm was examined for daily shares' returns of companies traded on the Warsaw Stock Exchange. It was shown that the algorithm gives the investment portfolios that are on average better than portfolios achieved by other methods, as well as than the proposed benchmark . Remark that the algorithm of is based on cumulant tensors up to the 6'th order , what is the novel idea. It can be expected that the algorithm would be useful in the financial data analysis on the world wide scale as well as in the analysis of other types of non Gaussian distributed data.
The cumulant analysis plays an important role in non Gaussian distributed data analysis. The shares' prices returns are good example of such data. The purpose of this research is to develop the cumulant based algorithm and use it to determine eigenvectors that represent investment portfolios with low variability. Such algorithm is based on the Alternating Least Square method and involves the simultaneous minimisation 2'nd -- 6'th cumulants of the multidimensional random variable (percentage shares' returns of many companies). Then the algorithm was tested during the recent crash on the Warsaw Stock Exchange. To determine incoming crash and provide enter and exit signal for the investment strategy the Hurst exponent was calculated using the local DFA. It was shown that introduced algorithm is on average better that benchmark and other portfolio determination methods, but only within examination window determined by low values of the Hurst exponent . Remark that the algorithm of is based on cumulant tensors up to the 6'th order calculated for a multidimensional random variable , what is the novel idea. It can be expected that the algorithm would be useful in the financial data analysis on the world wide scale as well as in the analysis of other types of non Gaussian distributed data.
[ { "type": "D", "before": "\"respectively safe\"", "after": null, "start_char_pos": 271, "end_char_pos": 290 }, { "type": "R", "before": "examined for daily shares' returns of companies traded", "after": "tested during the recent crash", "start_char_pos": 575, "end_char_pos": 629 }, { "type": "A", "before": null, "after": "To determine incoming crash and provide enter and exit signal for the investment strategy the Hurst exponent was calculated using the local DFA.", "start_char_pos": 660, "end_char_pos": 660 }, { "type": "R", "before": "the algorithm gives the investment portfolios that are", "after": "introduced algorithm is", "start_char_pos": 679, "end_char_pos": 733 }, { "type": "R", "before": "than portfolios achieved by other methods, as well as than the proposed benchmark", "after": "that benchmark and other portfolio determination methods, but only within examination window determined by low values of the Hurst exponent", "start_char_pos": 752, "end_char_pos": 833 }, { "type": "A", "before": null, "after": "calculated for a multidimensional random variable", "start_char_pos": 915, "end_char_pos": 915 } ]
[ 0, 88, 146, 334, 551, 659, 941 ]
1606.00054
1
Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization can compute steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. ME models currently have 70,000 constraints and variables and will grow larger , so that exact simplex solvers are not practical . We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that balances efficiency and reliability for ME models. Efficient double-precision optimizers already enabled exponential growth in biological applications of metabolic models. Combined use of Double and Quad solvers now promises extensive use of linear, nonlinear, genome-scale, and multiscale ME models .
Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers are extremely slow and hence not practical for ME models that currently have 70,000 constraints and variables and will grow larger . We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves efficiency and reliability for ME models. DQQ enables extensive use of large, multiscale, linear and nonlinear models in systems biology and many other applications .
[ { "type": "R", "before": "can compute", "after": "computes", "start_char_pos": 206, "end_char_pos": 217 }, { "type": "R", "before": "ME models", "after": "Exact simplex solvers are extremely slow and hence not practical for ME models that", "start_char_pos": 419, "end_char_pos": 428 }, { "type": "D", "before": ", so that exact simplex solvers are not practical", "after": null, "start_char_pos": 498, "end_char_pos": 547 }, { "type": "R", "before": "balances", "after": "achieves", "start_char_pos": 711, "end_char_pos": 719 }, { "type": "R", "before": "Efficient double-precision optimizers already enabled exponential growth in biological applications of metabolic models. Combined use of Double and Quad solvers now promises", "after": "DQQ enables", "start_char_pos": 762, "end_char_pos": 935 }, { "type": "R", "before": "linear, nonlinear, genome-scale, and multiscale ME models", "after": "large, multiscale, linear and nonlinear models in systems biology and many other applications", "start_char_pos": 953, "end_char_pos": 1010 } ]
[ 0, 185, 317, 418, 549, 761, 882 ]
1606.00092
1
The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters (either regression coefficients or elements of the covariance matrix of the errors) occur at a common locations or are separated by some positive fraction of the sample size . Under the alternative hypothesis, the break dates are not the same and also need not be separated by a positive fraction of the sample size across parameters . The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Also of independent interest, we provide results about the rate of convergence when searching over all possible partitions subject only to the requirement that each regime of different parameters contains at least as many observations as some positive fraction of the sample size . Simulation results show that the test has good finite sample properties. We also provide an application to various measures of inflation to illustrate its usefulness.
The issue addressed in this paper is that of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations . Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size whether within or across equations . The test considered is the quasi-likelihood ratio test assuming normal errors, though as usual the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results about the rate of convergence of the estimates when searching over all possible partitions subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size , allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite sample properties. We also provide an application to issues related to level shifts and persistence for various measures of inflation to illustrate its usefulness.
[ { "type": "R", "before": "(either regression coefficients or elements of the covariance matrix of the errors) occur at a common locations or", "after": "occur at common locations and", "start_char_pos": 291, "end_char_pos": 405 }, { "type": "A", "before": null, "after": "unless they occur across different equations", "start_char_pos": 465, "end_char_pos": 465 }, { "type": "A", "before": null, "after": "across parameters", "start_char_pos": 518, "end_char_pos": 518 }, { "type": "R", "before": "across parameters", "after": "whether within or across equations", "start_char_pos": 609, "end_char_pos": 626 }, { "type": "R", "before": "Also of", "after": "Of", "start_char_pos": 797, "end_char_pos": 804 }, { "type": "A", "before": null, "after": "of the estimates", "start_char_pos": 876, "end_char_pos": 876 }, { "type": "D", "before": "of different parameters", "after": null, "start_char_pos": 970, "end_char_pos": 993 }, { "type": "R", "before": ". Simulation results", "after": ", allowing break dates not separated by a positive fraction of the sample size across equations. Simulations", "start_char_pos": 1078, "end_char_pos": 1098 }, { "type": "A", "before": null, "after": "issues related to level shifts and persistence for", "start_char_pos": 1187, "end_char_pos": 1187 } ]
[ 0, 123, 231, 628, 796, 1152 ]
1606.00101
1
The capacity of cells and organisms to respond in a repeatable manner to challenging conditions is limited by a finite number of pre-evolved adaptive responses. Beyond this capacity, exploratory dynamics can provide alternative means to cope with a much broader array of conditions. At the population level, exploration is implemented by mutations and selection over multiple generations. However, within the lifetime of a single cell ,the mechanisms by which exploratory changes can lead to adaptive phenotypes are still poorly understood . Here, we address this question by developing a network model of exploration in gene regulation. The model we propose demonstrates the feasibility of adapting by temporal exploration. Exploration is initiated by failure to comply with a global constraint and is implemented by random sampling of available network configurations. It ceases if and when the system converges to a stable compliance with the constraint. Successful convergence of this process depends crucially on network topology and is most efficient for scale-free connectivity , typical of gene regulatory networks. For such networks, convergence to an adapted phenotype can be achieved without fine tuning of initial conditions or other model parameters, thus making it plausible for biological implementation .
The capacity of cells and organisms to respond in a repeatable manner to challenging conditions is limited by a finite number of pre-evolved adaptive responses. Beyond this capacity, exploratory dynamics can provide alternative means to cope with a much broader array of conditions. At the population level, exploration is implemented by mutations and selection over multiple generations. However, it is not known how exploration can lead to new phenotypes within the lifetime of a single cell . Here, we address this question by developing a network model of exploration in gene regulation. This model demonstrates the feasibility of adapting by temporal exploration. Exploration is initiated by failure to comply with a global constraint and is implemented by random sampling of available network configurations. It ceases if and when the system converges to a stable compliance with the constraint. Successful convergence depends crucially on network topology and is most efficient for scale-free connectivity . Convergence to an adapted phenotype in this class of networks is achieved without fine tuning of initial conditions or other model parameters, thus making it plausible for biological implementation . Experimental results have indeed shown that gene regulatory networks are characterized by this type of topology, suggesting a structural basis for exploratory adaptation .
[ { "type": "A", "before": null, "after": "it is not known how exploration can lead to new phenotypes", "start_char_pos": 394, "end_char_pos": 394 }, { "type": "D", "before": ",the mechanisms by which exploratory changes can lead to adaptive phenotypes are still poorly understood", "after": null, "start_char_pos": 432, "end_char_pos": 536 }, { "type": "R", "before": "The model we propose", "after": "This model", "start_char_pos": 635, "end_char_pos": 655 }, { "type": "D", "before": "of this process", "after": null, "start_char_pos": 978, "end_char_pos": 993 }, { "type": "R", "before": ", typical of gene regulatory networks. For such networks, convergence", "after": ". Convergence", "start_char_pos": 1082, "end_char_pos": 1151 }, { "type": "R", "before": "can be", "after": "in this class of networks is", "start_char_pos": 1176, "end_char_pos": 1182 }, { "type": "A", "before": null, "after": ". Experimental results have indeed shown that gene regulatory networks are characterized by this type of topology, suggesting a structural basis for exploratory adaptation", "start_char_pos": 1316, "end_char_pos": 1316 } ]
[ 0, 156, 278, 384, 538, 634, 721, 867, 954, 1120 ]
1606.00495
1
The cellular adaptive immune response plays a key role in resolving influenza infection. It can provide cross-protection between subtypes of influenza A which share epitopes; thus, the strength of the immune response to a given strain is dependent upon the individual's infection history. We model cross-reactive cellular adaptive immune responses induced by multiple infections, and show how the formation and re-activation of memory T cells explains observed shortening of a second infection when cross-reactivity is present . We include three possible mechanisms which determine the strength of the cross-reactive immune response . Our model of cross-reactivity contributes to understanding how repeated exposures change an individual's immune profile over a lifetime .
The cellular adaptive immune response plays a key role in resolving influenza infection. Experiments where individuals are successively infected with different strains within a short timeframe provide insight into the underlying viral dynamics and the role of a cross-reactive immune response in resolving an acute infection. We construct a mathematical model of within-host influenza viral dynamics including three possible factors which determine the strength of the cross-reactive cellular adaptive immune response: the initial naive T cell number, the avidity of the interaction between T cells and the epitopes presented by infected cells, and the epitope abundance per infected cell. Our model explains the experimentally observed shortening of a second infection when cross-reactivity is present , and shows that memory in the cellular adaptive immune response is necessary to protect against a second infection .
[ { "type": "R", "before": "It can provide cross-protection between subtypes of influenza A which share epitopes; thus, the strength of the immune response to a given strain is dependent upon the individual's infectionhistory. We model", "after": "Experiments where individuals are successively infected with different strains within a short timeframe provide insight into the underlying viral dynamics and the role of a cross-reactive immune response in resolving an acute infection. We construct a mathematical model of within-host influenza viral dynamics including three possible factors which determine the strength of the", "start_char_pos": 89, "end_char_pos": 296 }, { "type": "R", "before": "responses induced by multiple infections, and show how the formation and re-activation of memory T cells explains", "after": "response: the initial naive T cell number, the avidity of the interaction between T cells and the epitopes presented by infected cells, and the epitope abundance per infected cell. Our model explains the experimentally", "start_char_pos": 337, "end_char_pos": 450 }, { "type": "R", "before": ". We include three possible mechanisms which determine the strength of the cross-reactive immune response . Our model of cross-reactivity contributes to understanding how repeated exposures change an individual's immune profile over a lifetime", "after": ", and shows that memory in the cellular adaptive immune response is necessary to protect against a second infection", "start_char_pos": 526, "end_char_pos": 769 } ]
[ 0, 88, 174, 287, 527, 633 ]
1606.00530
1
In this paper, we extend the 3/2-model for VIX studied by Goard and Mazur (2013) and introduce generalized 3/2 and 1/2 classes for volatility . Under these models, we study the pricing of European and American VIX options and, for the latter, we obtain an early exercise premium representation using a free-boundary approach and local time-space calculus. The optimal exercise boundary for the volatility is obtained as the unique solution to an integral equation of Volterra type. We also consider a model mixing these two classes and formulate the cor- responding optimal stopping problem in terms of the observed factor process. The price of an American VIX call is then represented by an early exercise premium formula. We show the existence of a pair of optimal exercise bound- aries for the factor process and characterize them as the unique solution to a system of integral equations.
In this paper, we extend the 3/2-model for VIX studied by Goard and Mazur (2013) and introduce the generalized 3/2 and 1/2 classes of volatility processes . Under these models, we study the pricing of European and American VIX options and, for the latter, we obtain an early exercise premium representation using a free-boundary approach and local time-space calculus. The optimal exercise boundary for the volatility is obtained as the unique solution to an integral equation of Volterra type. We also consider a model mixing these two classes and formulate the corresponding optimal stopping problem in terms of the observed factor process. The price of an American VIX call is then represented by an early exercise premium formula. We show the existence of a pair of optimal exercise boundaries for the factor process and characterize them as the unique solution to a system of integral equations.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 95, "end_char_pos": 95 }, { "type": "R", "before": "for volatility", "after": "of volatility processes", "start_char_pos": 128, "end_char_pos": 142 }, { "type": "R", "before": "cor- responding", "after": "corresponding", "start_char_pos": 551, "end_char_pos": 566 }, { "type": "R", "before": "bound- aries", "after": "boundaries", "start_char_pos": 777, "end_char_pos": 789 } ]
[ 0, 144, 356, 482, 632, 724 ]
1606.01810
1
Is undecidability a requirement for open-ended evolution (OEE)? Using algorithmic complexity theory methods, we propose robust computational definitions for open-ended evolution and adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits to the growth of complexity on computable dynamical systems up to a logarithm of a logarithmic term. Conversely, systems that exhibit open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. We conjecture that, for similar complexity measures that assign low complexity values, decidability imposes comparable limits to the stable growth of complexity and such behaviour is necessary for non-trivial evolutionary systems. Finally, we show that undecidability of adapted states imposes novel and unpredictable behaviour on the individuals or population being modelled. Such behaviour is irreducible .
Is undecidability a requirement for open-ended evolution (OEE)? Using algorithmic complexity theory methods, we propose robust computational definitions for open-ended evolution and adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits to the growth of complexity on computable dynamical systems up to a double logarithmic term. Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. We conjecture that, for similar complexity measures that assign low complexity values, decidability imposes comparable limits to the stable growth of complexity and such behaviour is necessary for non-trivial evolutionary systems. We show that undecidability of adapted states imposes novel and unpredictable behaviour on the individuals or population being modelled. Such behaviour is irreducible . Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE .
[ { "type": "R", "before": "logarithm of a", "after": "double", "start_char_pos": 369, "end_char_pos": 383 }, { "type": "A", "before": null, "after": "(strong)", "start_char_pos": 435, "end_char_pos": 435 }, { "type": "R", "before": "Finally, we", "after": "We", "start_char_pos": 989, "end_char_pos": 1000 }, { "type": "A", "before": null, "after": ". Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE", "start_char_pos": 1165, "end_char_pos": 1165 } ]
[ 0, 63, 227, 401, 540, 660, 757, 988, 1134 ]
1606.01810
2
Is undecidability a requirement for open-ended evolution (OEE)? Using algorithmic complexity theory methods , we propose robust computational definitions for open-ended evolution and adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits to the growth of complexity on computable dynamical systems up to a double logarithmic term . Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. We conjecture that, for similar complexity measures that assign low complexity values, decidability imposes comparable limits to the stable growth of complexity and such behaviour is necessary for non-trivial evolutionary systems. We show that undecidability of adapted states imposes novel and unpredictable behaviour on the individuals or population being modelled. Such behaviour is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
Is undecidability a requirement for open-ended evolution (OEE)? Using methods derived from algorithmic complexity theory , we propose robust computational definitions of open-ended evolution and the adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits to the stable growth of complexity in computable dynamical systems . Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that for similar complexity measures that assign low complexity values, decidability imposes comparable limits to the stable growth of complexity , and that such behaviour is necessary for non-trivial evolutionary systems. We show that the undecidability of adapted states imposes novel and unpredictable behaviour on the individuals or populations being modelled. Such behaviour is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
[ { "type": "A", "before": null, "after": "methods derived from", "start_char_pos": 70, "end_char_pos": 70 }, { "type": "D", "before": "methods", "after": null, "start_char_pos": 101, "end_char_pos": 108 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 155, "end_char_pos": 158 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 184, "end_char_pos": 184 }, { "type": "A", "before": null, "after": "stable", "start_char_pos": 311, "end_char_pos": 311 }, { "type": "R", "before": "on", "after": "in", "start_char_pos": 333, "end_char_pos": 335 }, { "type": "D", "before": "up to a double logarithmic term", "after": null, "start_char_pos": 365, "end_char_pos": 396 }, { "type": "R", "before": "We conjecture that,", "after": "As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that", "start_char_pos": 763, "end_char_pos": 782 }, { "type": "R", "before": "and", "after": ", and that", "start_char_pos": 924, "end_char_pos": 927 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1007, "end_char_pos": 1007 }, { "type": "R", "before": "population", "after": "populations", "start_char_pos": 1105, "end_char_pos": 1115 } ]
[ 0, 63, 230, 545, 665, 762, 993, 1131, 1162 ]
1606.02900
1
We consider the problem of embedding a discrete parameter queueing system into continuous space by using simulation-based interpolation. We first show that a discrete-time Geom/Geom/1 queue with service time T can be exactly simulated by making the service time a Bernoulli random variable which switches between T_1 and T_2 with T_1 < T < T_2 with expected value T. This is a form of simulation-based interpolation which can, in principle, be applied to more complex queueing networks. We show that such an interpolation is possible for queueing networks whose parameters can include deterministic service times, queue capacities, and the number of servers, and empirically demonstrate that the interpolation is well behaved. Unlike spatial interpolation, the interpolated value can be computed using a single simulation run irrespective of the number of parameters in the system. To demonstrate the utility of the interpolation scheme , we solve a discrete parameter queuing network optimization problem by embedding the discrete parameters into continuous space, and then using a continuous space optimization algorithm to find optimal configurations .
In simulation-based optimization of queuing systems, the presence of discrete-valued parameters (such as buffer sizes and the number of servers) makes the optimization difficult. We propose a novel technique for embedding such discrete parameters into a continuous space, so that optimization can be performed efficiently using continuous-space methods. Unlike spatial interpolation, our embedding technique is based on a randomization of the simulation model. The interpolated value can be computed using a single simulation of this randomized model, irrespective of the number of parameters. We first study the theoretical properties of such a randomization scheme applied to M/M/1 and Geom/Geom/1 queues. We prove that the randomization produces valid interpolations of the steady-state performance functions with respect to an integer service-time parameter. We then show that such an embedding is possible for more complex queueing networks whose parameters can include deterministic service times, queue capacities, and the number of servers, and empirically demonstrate that the interpolation is well behaved. To demonstrate the utility of the embedding technique , we solve a 6-parameter queuing network optimization problem by embedding the discrete parameters into a continuous space. The technique produces smooth interpolations of the objective function, and a continuous optimizer applied directly over this embedding converges rapidly, producing good solutions .
[ { "type": "R", "before": "We consider the problem of embedding a discrete parameter queueing system into continuous space by using simulation-based interpolation. We first show that a discrete-time", "after": "In simulation-based optimization of queuing systems, the presence of discrete-valued parameters (such as buffer sizes and the number of servers) makes the optimization difficult. We propose a novel technique for embedding such discrete parameters into a continuous space, so that optimization can be performed efficiently using continuous-space methods. Unlike spatial interpolation, our embedding technique is based on a randomization of the simulation model. The interpolated value can be computed using a single simulation of this randomized model, irrespective of the number of parameters. We first study the theoretical properties of such a randomization scheme applied to M/M/1 and", "start_char_pos": 0, "end_char_pos": 171 }, { "type": "R", "before": "queue with service time T can be exactly simulated by making the service time a Bernoulli random variable which switches between T_1 and T_2 with T_1 < T < T_2 with expected value T. This is a form of simulation-based interpolation which can, in principle, be applied to more complex queueing networks. We", "after": "queues. We prove that the randomization produces valid interpolations of the steady-state performance functions with respect to an integer service-time parameter. We then", "start_char_pos": 184, "end_char_pos": 489 }, { "type": "R", "before": "interpolation", "after": "embedding", "start_char_pos": 508, "end_char_pos": 521 }, { "type": "A", "before": null, "after": "more complex", "start_char_pos": 538, "end_char_pos": 538 }, { "type": "D", "before": "Unlike spatial interpolation, the interpolated value can be computed using a single simulation run irrespective of the number of parameters in the system.", "after": null, "start_char_pos": 728, "end_char_pos": 882 }, { "type": "R", "before": "interpolation scheme", "after": "embedding technique", "start_char_pos": 917, "end_char_pos": 937 }, { "type": "R", "before": "discrete parameter", "after": "6-parameter", "start_char_pos": 951, "end_char_pos": 969 }, { "type": "A", "before": null, "after": "a continuous space. The technique produces smooth interpolations of the objective function, and a", "start_char_pos": 1049, "end_char_pos": 1049 }, { "type": "R", "before": "space, and then using a continuous space optimization algorithm to find optimal configurations", "after": "optimizer applied directly over this embedding converges rapidly, producing good solutions", "start_char_pos": 1061, "end_char_pos": 1155 } ]
[ 0, 136, 366, 486, 727, 882 ]
1606.02900
2
In simulation-based optimization of queuing systems, the presence of discrete-valued parameters (such as buffer sizes and the number of servers) makes the optimization difficult. We propose a novel technique for embedding such discrete parameters into a continuous space, so that optimization can be performed efficiently using continuous-space methods . Unlike spatial interpolation, our embedding technique is based on a randomization of the simulation model . The interpolated value can be computed using a single simulation of this randomized model, irrespective of the number of parameters . We first study the theoretical properties of such a randomization scheme applied to M/M/1 and Geom/Geom/1 queues. We prove that the randomization produces valid interpolations of the steady-state performance functions with respect to an integer service-time parameter. We then show that such an embedding is possible for more complex queueing networks whose parameters can include deterministic service times, queue capacities, and the number of servers , and empirically demonstrate that the interpolation is well behaved. To demonstrate the utility of the embedding technique, we solve a 6-parameter queuing network optimization problem by embedding the discrete parameters into a continuous space . The technique produces smooth interpolations of the objective function , and a continuous optimizer applied directly over this embedding converges rapidly, producing good solutions.
Motivated by the problem of discrete-parameter simulation optimization (DPSO) of queueing systems, we consider the problem of embedding the discrete parameter space into a continuous one so that descent-based continuous-space methods could be directly applied for efficient optimization. We show that a randomization of the simulation model itself can be used to achieve such an embedding when the objective function is a long-run average measure. Unlike spatial interpolation, the computational cost of this embedding is independent of the number of parameters in the system, making the approach ideally suited to high-dimensional problems. We describe in detail the application of this technique to discrete-time queues for embedding queue capacities, number of servers and server-delay parameters into continuous space and empirically show that the technique can produce smooth interpolations of the objective function . Through an optimization case-study of a queueing network with 10^7 design points, we demonstrate that existing continuous optimizers can be effectively applied over such an embedding to find good solutions.
[ { "type": "R", "before": "In simulation-based optimization of queuing systems, the presence of discrete-valued parameters (such as buffer sizes and the number of servers) makes the optimization difficult. We propose a novel technique for embedding such discrete parameters", "after": "Motivated by the problem of discrete-parameter simulation optimization (DPSO) of queueing systems, we consider the problem of embedding the discrete parameter space", "start_char_pos": 0, "end_char_pos": 246 }, { "type": "R", "before": "space, so that optimization can be performed efficiently using", "after": "one so that descent-based", "start_char_pos": 265, "end_char_pos": 327 }, { "type": "R", "before": ". Unlike spatial interpolation, our embedding technique is based on", "after": "could be directly applied for efficient optimization. We show that", "start_char_pos": 353, "end_char_pos": 420 }, { "type": "R", "before": ". The interpolated value can be computed using a single simulation of this randomized model, irrespective", "after": "itself can be used to achieve such an embedding when the objective function is a long-run average measure. Unlike spatial interpolation, the computational cost of this embedding is independent", "start_char_pos": 461, "end_char_pos": 566 }, { "type": "R", "before": ". We first study the theoretical properties of such a randomization scheme applied to M/M/1 and Geom/Geom/1 queues. We prove that the randomization produces valid interpolations of the steady-state performance functions with respect to an integer service-time parameter. We then show that such an embedding is possible for more complex queueing networks whose parameters can include deterministic service times,", "after": "in the system, making the approach ideally suited to high-dimensional problems. We describe in detail the application of this technique to discrete-time queues for embedding", "start_char_pos": 595, "end_char_pos": 1006 }, { "type": "D", "before": "and the", "after": null, "start_char_pos": 1025, "end_char_pos": 1032 }, { "type": "R", "before": ", and empirically demonstrate that the interpolation is well behaved. To demonstrate the utility of the embedding technique, we solve a 6-parameter queuing network optimization problem by embedding the discrete parameters into a continuous space . The technique produces", "after": "and server-delay parameters into continuous space and empirically show that the technique can produce", "start_char_pos": 1051, "end_char_pos": 1321 }, { "type": "R", "before": ", and a continuous optimizer applied directly over this embedding converges rapidly, producing", "after": ". Through an optimization case-study of a queueing network with 10^7 design points, we demonstrate that existing continuous optimizers can be effectively applied over such an embedding to find", "start_char_pos": 1370, "end_char_pos": 1464 } ]
[ 0, 178, 462, 596, 710, 865, 1120, 1298 ]
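A minimal sketch of the randomization idea in 1606.02900 above, assuming a discrete-time single-server queue with Bernoulli arrivals; the queue model, parameter names, and numbers are illustrative, not the authors' code. An integer service-time parameter theta is embedded into continuous space by drawing floor(theta) or ceil(theta) with probabilities chosen so the mean equals theta, and the long-run average queue length then varies smoothly in theta:

import random

def randomized_service(theta, rng):
    # Integer draw with mean theta (assumes theta >= 1 so service
    # times stay positive): floor(theta) w.p. 1 - frac, ceil(theta)
    # w.p. frac, where frac = theta - floor(theta).
    lo = int(theta)
    frac = theta - lo
    return lo + (1 if rng.random() < frac else 0)

def mean_queue_length(theta, arrival_prob=0.1, horizon=200_000, seed=0):
    # Discrete-time single-server queue: one arrival per slot with
    # probability arrival_prob; the server completes one unit of work
    # per slot. Long-run averages of this randomized model interpolate
    # smoothly between integer values of theta.
    rng = random.Random(seed)
    queue, remaining, total = 0, 0, 0
    for _ in range(horizon):
        if rng.random() < arrival_prob:
            queue += 1
        if queue > 0:
            if remaining == 0:
                remaining = randomized_service(theta, rng)
            remaining -= 1
            if remaining == 0:
                queue -= 1
        total += queue
    return total / horizon

print([round(mean_queue_length(t), 3) for t in (3.0, 3.25, 3.5, 3.75, 4.0)])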
1606.03325
1
We use functional pathwise It\^o calculus to prove a strictly pathwise version of the master formula in Fernholz' stochastic portfolio theory. Moreover, in our setting, the portfolio-generating function may depend on the entire history of the asset trajectories and on an additional continuous trajectory of bounded variation . Our results are illustrated by several examples and shown to work on empirical market data.
We use pathwise It\^o calculus to prove strictly pathwise versions of the master formula in Fernholz' stochastic portfolio theory. Our first version is set within the framework of F\"ollmer's pathwise It\^o calculus and works for portfolios generated from functions that may depend on the current states of the market portfolio and an additional path of finite variation. The second version is formulated within the functional pathwise It\^o calculus of Dupire (2009) and Cont \& Fourni\'e (2010) and allows for portfolio-generating functionals that may depend additionally on the entire path of the market portfolio . Our results are illustrated by several examples and shown to work on empirical market data.
[ { "type": "D", "before": "functional", "after": null, "start_char_pos": 7, "end_char_pos": 17 }, { "type": "R", "before": "a strictly pathwise version", "after": "strictly pathwise versions", "start_char_pos": 51, "end_char_pos": 78 }, { "type": "R", "before": "Moreover, in our setting, the portfolio-generating function", "after": "Our first version is set within the framework of F\\\"ollmer's pathwise It\\^o calculus and works for portfolios generated from functions that", "start_char_pos": 143, "end_char_pos": 202 }, { "type": "R", "before": "entire history of the asset trajectories and on an additional continuous trajectory of bounded variation", "after": "current states of the market portfolio and an additional path of finite variation. The second version is formulated within the functional pathwise It\\^o calculus of Dupire (2009) and Cont \\& Fourni\\'e (2010) and allows for portfolio-generating functionals that may depend additionally on the entire path of the market portfolio", "start_char_pos": 221, "end_char_pos": 325 } ]
[ 0, 142, 327 ]
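As a concrete instance of the portfolio-generating functions discussed in 1606.03325, the classic diversity-weighted portfolio generated by G(mu) = (sum_i mu_i^p)^(1/p) has the closed-form weights sketched below; the market weights are hypothetical:

import numpy as np

def diversity_weights(market_weights, p=0.5):
    # Weights of the portfolio generated by G(mu) = (sum_i mu_i^p)^(1/p),
    # a standard example in stochastic portfolio theory:
    # pi_i = mu_i^p / sum_j mu_j^p.
    mp = np.asarray(market_weights) ** p
    return mp / mp.sum()

mu = np.array([0.5, 0.3, 0.2])     # hypothetical market weights
print(diversity_weights(mu))       # tilts the portfolio toward smaller stocks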
1606.03388
1
This paper studies the problem of optimally extracting nonrenewable natural resource in light of various financial and economical restrictions and constraints. Taking into account the fact that the market values of the main natural resources i.e. oil, natural gas, copper,...,etc, fluctuate randomly following global and seasonal macro-economic parameters, these values are modeled using Markov switching L\'evy processes. We formulate this problem as finite-time horizon combined optimal stopping and optimal control problem. We prove that the value function is the unique viscosity of the corresponding Hamilton-Jacobi-Bellman equations. Moreover, we prove the convergence of a finite difference approximation of the value function. Numerical examples are presented to illustrate these results.
This paper studies the problem of optimally extracting nonrenewable natural resource in light of various financial and economic restrictions and constraints. Taking into account the fact that the market values of the main natural resources i.e. oil, natural gas, copper,...,etc, fluctuate randomly following global and seasonal macroeconomic parameters, these values are modeled using Markov switching L\'evy processes. We formulate this problem as finite-time horizon combined optimal stopping and optimal control problem. We prove that the value function is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equations. Moreover, we prove the convergence of a finite difference approximation of the value function. Numerical examples are presented to illustrate these results.
[ { "type": "R", "before": "economical", "after": "economic", "start_char_pos": 119, "end_char_pos": 129 }, { "type": "R", "before": "macro-economic", "after": "macroeconomic", "start_char_pos": 330, "end_char_pos": 344 }, { "type": "A", "before": null, "after": "solution", "start_char_pos": 584, "end_char_pos": 584 } ]
[ 0, 159, 422, 526, 640, 735 ]
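The value function in 1606.03388 solves a Hamilton-Jacobi-Bellman equation with a stop-or-continue structure, approximated by a convergent finite difference scheme. As a toy illustration of the backward stop-or-continue recursion underlying such schemes, the following prices an American put on a binomial tree; all parameters are hypothetical and this is far simpler than the paper's Markov-switching extraction problem:

import numpy as np

S0, K, r, sigma, T, N = 1.0, 1.0, 0.03, 0.3, 1.0, 500
dt = T / N
u = np.exp(sigma * np.sqrt(dt))
d = 1.0 / u
q = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up-probability
disc = np.exp(-r * dt)

# Terminal payoff on the N+1 tree nodes, then backward induction with
# an early-exercise (optimal stopping) comparison at every node.
S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
V = np.maximum(K - S, 0.0)
for n in range(N - 1, -1, -1):
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, disc * (q * V[:-1] + (1 - q) * V[1:]))
print(V[0])    # value of stopping optimally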
1606.03899
1
We introduce a novel method to estimate the discount curve from market quotes based on the Moore-Penrose pseudoinverse such that 1) the market quotes are exactly replicated, 2) the curve has maximal smoothness, 3) no ad hoc interpolation is needed, and 4) no numerical root-finding algorithms are required . We provide a full theoretical framework as well as practical applications for both single-curve and multi-curve estimation .
We present a non-parametric method to estimate the discount curve from market quotes based on the Moore-Penrose pseudoinverse . The discount curve reproduces the market quotes perfectly, has maximal smoothness, and is given in closed-form. The method is easy to implement and requires only basic linear algebra operations . We provide a full theoretical framework as well as several practical applications .
[ { "type": "R", "before": "introduce a novel", "after": "present a non-parametric", "start_char_pos": 3, "end_char_pos": 20 }, { "type": "R", "before": "such that 1)", "after": ". The discount curve reproduces", "start_char_pos": 119, "end_char_pos": 131 }, { "type": "R", "before": "are exactly replicated, 2) the curve", "after": "perfectly,", "start_char_pos": 150, "end_char_pos": 186 }, { "type": "R", "before": "3) no ad hoc interpolation is needed, and 4) no numerical root-finding algorithms are required", "after": "and is given in closed-form. The method is easy to implement and requires only basic linear algebra operations", "start_char_pos": 211, "end_char_pos": 305 }, { "type": "R", "before": "practical applications for both single-curve and multi-curve estimation", "after": "several practical applications", "start_char_pos": 359, "end_char_pos": 430 } ]
[ 0, 307 ]
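A deliberately simplified sketch of the pseudoinverse idea in 1606.03899: stack each instrument's cash flows into a matrix C and its quote into p, then read discount factors off the Moore-Penrose solve. The three instruments below are hypothetical, and the paper's maximal-smoothness ingredient is omitted here:

import numpy as np

# Rows: instruments; columns: payment dates. Hypothetical coupon bonds.
C = np.array([[1.02, 0.00, 0.00],
              [0.03, 1.03, 0.00],
              [0.04, 0.04, 1.04]])
p = np.array([1.000, 0.998, 0.995])   # hypothetical market prices

d = np.linalg.pinv(C) @ p             # discount factors at the three dates
print(d)
print(np.allclose(C @ d, p))          # the quotes are replicated exactly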
1606.04285
1
In this article , we propose a new numerical computation scheme for Markovian backward stochastic differential equations (BSDEs ) by connecting the semi-analytic short-term approximation applied to each time interval, which has a very simple form to implement. We give the error analysis for BSDEs which have generators of quadratic growth with respect to the control variables and bounded terminal conditions. Although the scheme requires higher regularities than the standard method , one can avoid altogether time-consuming Monte Carlo simulation or other numerical integration for estimating conditional expectations at each space-time node. We provide numerical examples of quadratic-growth (qg) BSDEs as well as standard Lipschitz BSDEs to illustrate the proposed scheme and its empirical convergence rate .
This article proposes a new approximation scheme for quadratic-growth BSDEs in a Markovian setting by connecting a series of semi-analytic asymptotic expansions applied to short-time intervals. Although there remains a condition which needs to be checked a posteriori , one can avoid altogether time-consuming Monte Carlo simulation and other numerical integrations for estimating conditional expectations at each space-time node. Numerical examples of quadratic-growth as well as Lipschitz BSDEs suggest that the scheme works well even for large quadratic coefficients, and a fortiori for large Lipschitz constants .
[ { "type": "R", "before": "In this article , we propose a new numerical computation scheme for Markovian backward stochastic differential equations (BSDEs ) by connecting the", "after": "This article proposes a new approximation scheme for quadratic-growth BSDEs in a Markovian setting by connecting a series of", "start_char_pos": 0, "end_char_pos": 147 }, { "type": "R", "before": "short-term approximation applied to each time interval, which has a very simple form to implement. We give the error analysis for BSDEs which have generators of quadratic growth with respect to the control variables and bounded terminal conditions. Although the scheme requires higher regularities than the standard method", "after": "asymptotic expansions applied to short-time intervals. Although there remains a condition which needs to be checked a posteriori", "start_char_pos": 162, "end_char_pos": 484 }, { "type": "R", "before": "or other numerical integration", "after": "and other numerical integrations", "start_char_pos": 550, "end_char_pos": 580 }, { "type": "R", "before": "We provide numerical", "after": "Numerical", "start_char_pos": 646, "end_char_pos": 666 }, { "type": "D", "before": "(qg) BSDEs", "after": null, "start_char_pos": 696, "end_char_pos": 706 }, { "type": "R", "before": "standard Lipschitz BSDEs to illustrate the proposed scheme and its empirical convergence rate", "after": "Lipschitz BSDEs suggest that the scheme works well even for large quadratic coefficients, and a fortiori for large Lipschitz constants", "start_char_pos": 718, "end_char_pos": 811 } ]
[ 0, 260, 410, 645 ]
1606.05079
1
We study the problem of a trader who wants to maximize the expected reward from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process with local characteristics driven by an unobservable finite-state Markov chain and the liquidation rate. This reflects uncertainty about the state of the market and feedback effects from trading. We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes ( in short PDMP ). We apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function and we characterize the value function as unique viscosity solution of the associated dynamic programming equation . The paper concludes with a detailed analysis of specific examples. We present numerical results illustrating the impact of partial information and feedback effects on the value function and on the optimal liquidation rate.
We study the problem of a trader who wants to maximize the expected revenue from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process with local characteristics driven by an unobservable finite-state Markov chain and the liquidation rate. This reflects uncertainty about activity of other traders and feedback effects from trading. We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes ( PDMPs ). We apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function , we characterize the value function as viscosity solution of the associated dynamic programming equation , and we prove a novel comparison result . The paper concludes with a detailed analysis of specific examples. We present numerical results illustrating the impact of partial information and feedback effects on the value function and on the optimal liquidation rate.
[ { "type": "R", "before": "reward", "after": "revenue", "start_char_pos": 68, "end_char_pos": 74 }, { "type": "R", "before": "the state of the market", "after": "activity of other traders", "start_char_pos": 320, "end_char_pos": 343 }, { "type": "R", "before": "in short PDMP", "after": "PDMPs", "start_char_pos": 596, "end_char_pos": 609 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 735, "end_char_pos": 738 }, { "type": "D", "before": "unique", "after": null, "start_char_pos": 777, "end_char_pos": 783 }, { "type": "A", "before": null, "after": ", and we prove a novel comparison result", "start_char_pos": 850, "end_char_pos": 850 } ]
[ 0, 115, 287, 378, 516, 612, 662, 852, 919 ]
1606.05079
2
We study the problem of a trader who wants to maximize the expected revenue from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process with local characteristics driven by an unobservable finite-state Markov chain and the liquidation rate. This reflects uncertainty about activity of other traders and feedback effects from trading. We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes (PDMPs). We apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function, we characterize the value function as viscosity solution of the associated dynamic programming equation, and we prove a novel comparison result. The paper concludes with a detailed analysis of specific examples . We present numerical results illustrating the impact of partial information and feedback effects on the value function and on the optimal liquidation rate.
We study the problem of a trader who wants to maximize the expected revenue from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process with local characteristics driven by an unobservable finite-state Markov chain and by the liquidation rate. This reflects uncertainty about activity of other traders and feedback effects from trading. We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes (PDMPs). We apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function, we characterize the value function as viscosity solution of the associated dynamic programming equation, and we prove a novel comparison result. The paper concludes with a detailed analysis of a specific example . We present numerical results illustrating the impact of partial information and feedback effects on the value function and on the optimal liquidation rate.
[ { "type": "A", "before": null, "after": "by", "start_char_pos": 267, "end_char_pos": 267 }, { "type": "R", "before": "specific examples", "after": "a specific example", "start_char_pos": 923, "end_char_pos": 940 } ]
[ 0, 116, 289, 382, 520, 606, 656, 874, 942 ]
1606.05079
3
We study the problem of a trader who wants to maximize the expected revenue from liquidating a given stock position. We model the stock price dynamics as a geometric pure jump process with local characteristics driven by an unobservable finite-state Markov chain and by the liquidation rate. This reflects uncertainty about activity of other traders and feedback effects from trading . We use stochastic filtering to reduce the optimization problem under partial information to an equivalent one under complete information. This leads to a control problem for piecewise deterministic Markov processes (PDMPs). We apply control theory for PDMPs to our problem. In particular, we derive the optimality equation for the value function, we characterize the value function as viscosity solution of the associated dynamic programming equation, and we prove a novel comparison result. The paper concludes with a detailed analysis of a specific example. We present numerical results illustrating the impact of partial information and feedback effects on the value function and on the optimal liquidation rate.
We study the optimal liquidation problem in a market model where the bid price follows a geometric pure jump process whose local characteristics are driven by an unobservable finite-state Markov chain and by the liquidation rate. This model is consistent with stylized facts of high frequency data such as the discrete nature of tick data and the clustering in the order flow. We include both temporary and permanent effects into our analysis . We use stochastic filtering to reduce the optimal liquidation problem to an equivalent optimization problem under complete information. This leads to a stochastic control problem for piecewise deterministic Markov processes (PDMPs). We carry out a detailed mathematical analysis of this problem. In particular, we derive the optimality equation for the value function, we characterize the value function as continuous viscosity solution of the associated dynamic programming equation, and we prove a novel comparison result. The paper concludes with numerical results illustrating the impact of partial information and price impact on the value function and on the optimal liquidation rate.
[ { "type": "R", "before": "problem of a trader who wants to maximize the expected revenue from liquidating a given stock position. We model the stock price dynamics as", "after": "optimal liquidation problem in a market model where the bid price follows", "start_char_pos": 13, "end_char_pos": 153 }, { "type": "R", "before": "with local characteristics", "after": "whose local characteristics are", "start_char_pos": 184, "end_char_pos": 210 }, { "type": "R", "before": "reflects uncertainty about activity of other traders and feedback effects from trading", "after": "model is consistent with stylized facts of high frequency data such as the discrete nature of tick data and the clustering in the order flow. We include both temporary and permanent effects into our analysis", "start_char_pos": 297, "end_char_pos": 383 }, { "type": "R", "before": "optimization problem under partial information", "after": "optimal liquidation problem", "start_char_pos": 428, "end_char_pos": 474 }, { "type": "R", "before": "one", "after": "optimization problem", "start_char_pos": 492, "end_char_pos": 495 }, { "type": "A", "before": null, "after": "stochastic", "start_char_pos": 540, "end_char_pos": 540 }, { "type": "R", "before": "apply control theory for PDMPs to our", "after": "carry out a detailed mathematical analysis of this", "start_char_pos": 614, "end_char_pos": 651 }, { "type": "A", "before": null, "after": "continuous", "start_char_pos": 772, "end_char_pos": 772 }, { "type": "D", "before": "a detailed analysis of a specific example. We present", "after": null, "start_char_pos": 905, "end_char_pos": 958 }, { "type": "R", "before": "feedback effects", "after": "price impact", "start_char_pos": 1028, "end_char_pos": 1044 } ]
[ 0, 116, 291, 385, 523, 610, 660, 879, 947 ]
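The price model in 1606.05079 is a geometric pure jump process whose jump intensities are modulated by an unobservable finite-state Markov chain. A bare-bones simulation sketch of such a process with two hidden states and tick-sized jumps follows; the intensities are hypothetical, and the filtering and liquidation control are left out (the last event may slightly overshoot the horizon T):

import numpy as np

rng = np.random.default_rng(2)
lam_up = {0: 5.0, 1: 1.0}     # up-jump intensity per hidden state (hypothetical)
lam_dn = {0: 1.0, 1: 5.0}     # down-jump intensity per hidden state
q_switch = 0.5                # switching intensity of the hidden chain
tick = 0.01
t, T, S, Y = 0.0, 1.0, 100.0, 0
path = [(t, S)]
while t < T:
    rates = (lam_up[Y], lam_dn[Y], q_switch)
    t += rng.exponential(1.0 / sum(rates))    # time to the next event
    u = rng.random() * sum(rates)
    if u < rates[0]:
        S *= 1.0 + tick                       # up-jump of one tick
    elif u < rates[0] + rates[1]:
        S *= 1.0 - tick                       # down-jump of one tick
    else:
        Y = 1 - Y                             # hidden state switches
    path.append((t, S))
print(len(path), path[-1])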
1606.05164
1
We introduce a network valuation model (hereafter NEVA) for the ex-ante valuation of claims among financial institutions connected in a network of liabilities. Similar to previous work, the new framework allows to endogenously determine the recovery rate on all claims upon the default of some institutions. In addition, it also allows to account for ex-ante uncertainty on the asset values, in particular the one arising when the valuation is carried out at some time before the maturity of the claims. The framework encompasses as special cases both the ex-post approaches of Eisenberg and Noe and its previous extensions, as well as the ex-ante approaches, in the sense that each of these models can be recovered exactly for special values of the parameters . We characterize the existence and uniqueness of the solutions of the valuation problem under general conditions on how the value of each claim depends on the equity of the counterparty . Further, we define an algorithm to carry out the network valuation and we provide sufficient conditions for convergence to the maximal solution .
We introduce a general model for the balance-sheet consistent valuation of interbank claims within an interconnected financial system. Our model represents an extension of clearing models of interdependent liabilities to account for the presence of uncertainty on banks' external assets. At the same time, it also provides a natural extension of classic structural credit risk models to the case of an interconnected system . We characterize the existence and uniqueness of a valuation that maximises individual and total equity values for all banks. We apply our model to the assessment of systemic risk, and in particular for the case of stress-testing . Further, we provide a fixed-point algorithm to carry out the network valuation and the conditions for its convergence .
[ { "type": "R", "before": "network valuation model (hereafter NEVA) for the ex-ante valuation of claims among financial institutions connected in a network of liabilities. Similar to previous work, the new framework allows to endogenously determine the recovery rate on all claims upon the default of some institutions. In addition, it also allows to account for ex-ante uncertainty on the asset values, in particular the one arising when the valuation is carried out at some time before the maturity of the claims. The framework encompasses as special cases both the ex-post approaches of Eisenberg and Noe and its previous extensions, as well as the ex-ante approaches, in the sense that each of these models can be recovered exactly for special values of the parameters", "after": "general model for the balance-sheet consistent valuation of interbank claims within an interconnected financial system. Our model represents an extension of clearing models of interdependent liabilities to account for the presence of uncertainty on banks' external assets. At the same time, it also provides a natural extension of classic structural credit risk models to the case of an interconnected system", "start_char_pos": 15, "end_char_pos": 760 }, { "type": "R", "before": "the solutions of the valuation problem under general conditions on how the value of each claim depends on the equity of the counterparty", "after": "a valuation that maximises individual and total equity values for all banks. We apply our model to the assessment of systemic risk, and in particular for the case of stress-testing", "start_char_pos": 811, "end_char_pos": 947 }, { "type": "R", "before": "define an", "after": "provide a fixed-point", "start_char_pos": 962, "end_char_pos": 971 }, { "type": "R", "before": "we provide sufficient conditions for convergence to the maximal solution", "after": "the conditions for its convergence", "start_char_pos": 1021, "end_char_pos": 1093 } ]
[ 0, 159, 307, 503, 762, 949 ]
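The model in 1606.05164 extends the classic clearing framework of Eisenberg and Noe; as background, that special case reduces to the fixed point p = min(pbar, e + Pi^T p), computed below on hypothetical balance sheets. This is a sketch of the general idea, not the paper's algorithm:

import numpy as np

L = np.array([[0.0, 2.0, 1.0],     # L[i, j]: nominal liability of bank i to bank j
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.2, 0.5, 2.0])      # hypothetical external asset values

pbar = L.sum(axis=1)               # total nominal obligations per bank
Pi = np.divide(L, pbar[:, None], out=np.zeros_like(L), where=pbar[:, None] > 0)

p = pbar.copy()                    # start from full payment
for _ in range(1000):
    p_next = np.minimum(pbar, e + Pi.T @ p)   # pay all you owe, or all you have
    if np.allclose(p_next, p):
        break
    p = p_next
print(p)                           # clearing payment vector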
1606.06111
1
Identifying universal behavior is a challenging task for far-from-equilibrium complex systems. Here we investigate the collective dynamics of the international currency exchange market and show the existence of a semi-invariant signature masked by the high degree of heterogeneity in this complex system. The cumulative fluctuation distribution in the exchange rates of different currencies possess heavy tails characterized by exponents varying around a median value of 2. The systematic deviation of individual currencies from this putative universal form (the "inverse square law" ) can be partly ascribed to the differences in their economic prosperity and diversity of export products. The distinct nature of the fluctuation dynamics for currencies of developed , emerging and frontier economies are characterized in detail by detrended fluctuation analysis and variance-ratio tests, which shows that less developed economies are associated with sub-diffusive random walk processes . We hierarchically cluster the currencies into similarity groups based on differences between their fluctuation distributions as measured by Jensen-Shannon divergence. These clusters are consistent with the nature of the underlying economies - but also show striking divergences during economic crises. Indeed a temporally resolved analysis of the fluctuations indicates significant disruption during the crisis of 2008-09 underlining its severity .
Identifying behavior that is relatively invariant under different conditions is a challenging task in far-from-equilibrium complex systems. As an example of how the existence of a semi-invariant signature can be masked by the heterogeneity in the properties of the components comprising such systems, we consider the exchange rate dynamics in the international currency market. We show that the exponents characterizing the heavy tails of fluctuation distributions for different currencies systematically diverge from a putative universal form associated with the median value (~2) of the exponents. We relate the degree of deviation of a particular currency from such an "inverse square law" to fundamental macroscopic properties of the corresponding economy, viz., measures of per capita production output and diversity of export products. We also show that in contrast to uncorrelated random walks exhibited by the exchange rate dynamics for currencies belonging to developed economies, those of the less developed economies show characteristics of sub-diffusive processes which we relate to the anti-correlated nature of the corresponding fluctuations. Approaches similar to that presented here may help in identifying invariant features obscured by the heterogeneous nature of components in other complex systems .
[ { "type": "R", "before": "universal behavior is", "after": "behavior that is relatively invariant under different conditions is", "start_char_pos": 12, "end_char_pos": 33 }, { "type": "R", "before": "for", "after": "in", "start_char_pos": 53, "end_char_pos": 56 }, { "type": "R", "before": "Here we investigate the collective dynamics of the international currency exchange market and show the", "after": "As an example of how the", "start_char_pos": 95, "end_char_pos": 197 }, { "type": "A", "before": null, "after": "can be", "start_char_pos": 238, "end_char_pos": 238 }, { "type": "R", "before": "high degree of heterogeneity in this complex system. The cumulative fluctuation distribution in the exchange rates of different currencies possess heavy tails characterized by exponents varying around a median value of 2. The systematic deviation of individual currencies from this", "after": "heterogeneity in the properties of the components comprising such systems, we consider the exchange rate dynamics in the international currency market. We show that the exponents characterizing the heavy tails of fluctuation distributions for different currencies systematically diverge from a", "start_char_pos": 253, "end_char_pos": 534 }, { "type": "R", "before": "(the", "after": "associated with the median value (~2) of the exponents. We relate the degree of deviation of a particular currency from such an", "start_char_pos": 559, "end_char_pos": 563 }, { "type": "R", "before": ") can be partly ascribed to the differences in their economic prosperity", "after": "to fundamental macroscopic properties of the corresponding economy, viz., measures of per capita production output", "start_char_pos": 585, "end_char_pos": 657 }, { "type": "R", "before": "The distinct nature of the fluctuation", "after": "We also show that in contrast to uncorrelated random walks exhibited by the exchange rate", "start_char_pos": 692, "end_char_pos": 730 }, { "type": "R", "before": "of developed , emerging and frontier economiesare characterized in detail by detrended fluctuation analysis and variance-ratio tests, which shows that", "after": "belonging to developed economies, those of the", "start_char_pos": 755, "end_char_pos": 905 }, { "type": "R", "before": "are associated with", "after": "show characteristics of", "start_char_pos": 931, "end_char_pos": 950 }, { "type": "R", "before": "random walk processes . We hierarchically cluster the currencies into similarity groups based on differences between their fluctuation distributions as measured by Jensen-Shannon divergence. These clusters are consistent with the", "after": "processes which we relate to the anti-correlated", "start_char_pos": 965, "end_char_pos": 1194 }, { "type": "R", "before": "underlying economies - but also show striking divergences during economic crises. Indeed a temporally resolved analysis of the fluctuations indicates significant disruption during the crisis of 2008-09 underlining its severity", "after": "corresponding fluctuations. Approaches similar to that presented here may help in identifying invariant features obscured by the heterogeneous nature of components in other complex systems", "start_char_pos": 1209, "end_char_pos": 1435 } ]
[ 0, 94, 305, 474, 691, 988, 1155, 1290 ]
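One way to check the "inverse square law" claim of 1606.06111 on data is a standard Hill estimate of the tail exponent; the Pareto sample below stands in for real exchange-rate fluctuations, and the cutoff k is a well-known tuning issue glossed over here:

import numpy as np

def hill_tail_exponent(x, k=200):
    # Hill estimator from the k largest absolute values:
    # alpha_hat = k / sum_{i<=k} log(x_(i) / x_(k+1)).
    x = np.sort(np.abs(np.asarray(x)))[::-1]
    return 1.0 / np.log(x[:k] / x[k]).mean()

rng = np.random.default_rng(0)
sample = rng.pareto(2.0, 100_000)   # synthetic returns with tail exponent 2
print(hill_tail_exponent(sample))   # should come out close to 2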
1606.06555
1
We propose a stochastic model for gene transcription coupled to DNA supercoiling, where we incorporate the experimental observation that polymerases create supercoiling as they unwind the DNA helix, and that these enzymes bind more favourably to regions where the genome is unwound. Within this model, we show that when the transcriptionally induced flux of supercoiling increases, there is a sharp crossover from a regime where torsional stresses relax quickly and gene transcription is random, to one where gene expression is highly correlated and tightly regulated by supercoiling. In the latter regime, the model displays transcriptional bursts, waves of supercoiling, and up-regulation of divergent or bidirectional genes. It also predicts that topological enzymes which relax twist and writhe should provide a pathway to down-regulate transcription. This article has been accepted for publication in Physical Review Letters, May 2016.
We propose a stochastic model for gene transcription coupled to DNA supercoiling, where we incorporate the experimental observation that polymerases create supercoiling as they unwind the DNA helix, and that these enzymes bind more favourably to regions where the genome is unwound. Within this model, we show that when the transcriptionally induced flux of supercoiling increases, there is a sharp crossover from a regime where torsional stresses relax quickly and gene transcription is random, to one where gene expression is highly correlated and tightly regulated by supercoiling. In the latter regime, the model displays transcriptional bursts, waves of supercoiling, and up-regulation of divergent or bidirectional genes. It also predicts that topological enzymes which relax twist and writhe should provide a pathway to down-regulate transcription. This article has been published in Physical Review Letters, May 2016.
[ { "type": "R", "before": "accepted for publication", "after": "published", "start_char_pos": 878, "end_char_pos": 902 } ]
[ 0, 282, 584, 727, 855 ]
1606.06668
1
We report a new mechanism for allelic dominance in regulatory genetic interactions that we call binding dominance. We investigated a biophysical model of gene regulation, where the fractional occupancy of a transcription factor (TF) on the cis-regulated promoter site it binds to is determined by binding energy (-{\Delta}G) and TF concentration . Transcription and gene expression proceed when the TF is bound to the promoter. In diploids, individuals may be heterozygous at the cis-site, at the TF's coding region, or at the TF's own promoter, which determines allele-specific TF concentration . We find that when the TF's coding region is heterozygous, TF alleles compete for occupancy at the cis sites and the tighter-binding TF is dominant in proportion to the difference in binding strength. When the TF's own promoter is heterozygous, the TF produced at the higher concentration is also dominant. Cis-site heterozygotes have additive and therefore codominant phenotypes. Binding dominance extends to the expression of downstream loci and is sensitive to genetic background . While binding dominance is inevitable at the molecular level, it may be difficult to detect in the phenotype under some biophysical conditions, more so when TF concentration is high and allele-specific binding affinities are similar. A body of empirical research on the biophysics of TF binding demonstrates the plausibility of this mechanism of dominance, but studies of gene expression under competitive binding in heterozygotes in a diversity of genetic backgrounds are needed.
We report a new mechanism for allelic dominance in regulatory genetic interactions that we call binding dominance. We investigated a biophysical model of gene regulation, where the fractional occupancy of a transcription factor (TF) on the cis-regulated promoter site it binds to is determined by binding energy (-{\Delta}G) and TF dosage . Transcription and gene expression proceed when the TF is bound to the promoter. In diploids, individuals may be heterozygous at the cis-site, at the TF's coding region, or at the TF's own promoter, which determines allele-specific dosage . We find that when the TF's coding region is heterozygous, TF alleles compete for occupancy at the cis sites and the tighter-binding TF is dominant in proportion to the difference in binding strength. When the TF's own promoter is heterozygous, the TF produced at the higher dosage is also dominant. Cis-site heterozygotes have additive expression and therefore codominant phenotypes. Binding dominance propagates to affect the expression of downstream loci and it is sensitive in both magnitude and direction to genetic background , but its detectability often attenuates . While binding dominance is inevitable at the molecular level, it is difficult to detect in the phenotype under some biophysical conditions, more so when TF dosage is high and allele-specific binding affinities are similar. A body of empirical research on the biophysics of TF binding demonstrates the plausibility of this mechanism of dominance, but studies of gene expression under competitive binding in heterozygotes in a diversity of genetic backgrounds are needed.
[ { "type": "R", "before": "concentration", "after": "dosage", "start_char_pos": 332, "end_char_pos": 345 }, { "type": "R", "before": "TF concentration", "after": "dosage", "start_char_pos": 579, "end_char_pos": 595 }, { "type": "R", "before": "concentration", "after": "dosage", "start_char_pos": 872, "end_char_pos": 885 }, { "type": "A", "before": null, "after": "expression", "start_char_pos": 941, "end_char_pos": 941 }, { "type": "R", "before": "extends to", "after": "propagates to affect", "start_char_pos": 997, "end_char_pos": 1007 }, { "type": "R", "before": "is sensitive", "after": "it is sensitive in both magnitude and direction", "start_char_pos": 1046, "end_char_pos": 1058 }, { "type": "A", "before": null, "after": ", but its detectability often attenuates", "start_char_pos": 1081, "end_char_pos": 1081 }, { "type": "R", "before": "may be", "after": "is", "start_char_pos": 1149, "end_char_pos": 1155 }, { "type": "R", "before": "concentration", "after": "dosage", "start_char_pos": 1244, "end_char_pos": 1257 } ]
[ 0, 114, 347, 427, 597, 797, 903, 978, 1083, 1317 ]
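A sketch of the competitive-occupancy arithmetic behind "binding dominance" in 1606.06668: two TF alleles compete for one promoter site, each with statistical weight c * exp(-dG/kT), and the tighter binder dominates occupancy in proportion. This is the generic competitive-binding isotherm with hypothetical numbers, not the authors' exact parameterization:

import math

def occupancies(c1, c2, dG1, dG2, kT=0.593):
    # Fractional occupancy of one site by two competing TF alleles with
    # dosages c1, c2 (arbitrary units) and binding free energies dG1, dG2
    # in kcal/mol; kT ~ 0.593 kcal/mol at 25 C.
    w1 = c1 * math.exp(-dG1 / kT)
    w2 = c2 * math.exp(-dG2 / kT)
    z = 1.0 + w1 + w2            # empty + allele 1 bound + allele 2 bound
    return w1 / z, w2 / z

# Equal dosage, 1 kcal/mol tighter binding: allele 1 takes most occupancy.
print(occupancies(1.0, 1.0, -9.0, -8.0))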
1606.07684
1
The spreading of financial distress in capital markets and the resulting systemic risk strongly depend on the detailed structure of financial interconnections . Yet, while financial institutions have to disclose their aggregated balance sheet data, the information on single positions is often unavailable due to privacy issues. The resulting challenge is that of using the aggregate information to statistically reconstruct financial networks and correctly predict their higher-order properties. However, standard approaches generate unrealistically dense networks, which severely underestimate systemic risk. Moreover, reconstruction techniques are generally cast for networks of bilateral exposures between financial institutions (such as the interbank market), whereas, the network of their investment portfolios (i.e. , the stock market) has received much less attention . Here we develop an improved reconstruction method , based on statistical mechanics concepts and tailored for bipartite market networks. Technically, our approach consists in the preliminary estimation of connection probabilities by maximum-entropy inference driven by entities capitalizations and link density, followed by a density-corrected gravity model to assign position weights. Our method is successfully tested on NASDAQ, NYSE and AMEX filing data, by correctly reproducing the network topology and providing reliable estimates of systemic risk over the market .
Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets is strongly affected by the interconnections among financial institutions . Yet, while the aggregate balance sheets of these institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions , thus failing in reproducing bipartite networks of security holdings (e . g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows to reproduce the correct topology of the network. We test the method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed MECAPM both in reproducing the network topology and in estimating systemic risk .
[ { "type": "R", "before": "The spreading", "after": "Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification", "start_char_pos": 0, "end_char_pos": 13 }, { "type": "R", "before": "and the resulting systemic risk strongly depend on the detailed structure of financial interconnections", "after": "is strongly affected by the interconnections among financial institutions", "start_char_pos": 55, "end_char_pos": 158 }, { "type": "R", "before": "financial institutions have to disclose their aggregated balance sheet data, the", "after": "the aggregate balance sheets of these institutions are publicly disclosed,", "start_char_pos": 172, "end_char_pos": 252 }, { "type": "R", "before": "often unavailabledue to privacy issues. The resulting challenge is that of using the aggregate information to statistically reconstruct financial networks and correctly predict their higher-order properties. However, standard approaches generate unrealistically dense networks, which severely underestimate", "after": "mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of", "start_char_pos": 288, "end_char_pos": 594 }, { "type": "R", "before": "cast for", "after": "designed for monopartite", "start_char_pos": 660, "end_char_pos": 668 }, { "type": "D", "before": "(such as the interbank market), whereas, the network of their investment portfolios (i.e.", "after": null, "start_char_pos": 732, "end_char_pos": 821 }, { "type": "R", "before": "the stock market)has received much less attention", "after": "thus failing in reproducing bipartite networks of security holdings (e", "start_char_pos": 824, "end_char_pos": 873 }, { "type": "A", "before": null, "after": "g., investment portfolios).", "start_char_pos": 876, "end_char_pos": 876 }, { "type": "R", "before": "develop an improved reconstruction method , based on statistical mechanics concepts and", "after": "propose a reconstruction method based on constrained entropy maximization,", "start_char_pos": 885, "end_char_pos": 972 }, { "type": "R", "before": "market networks. Technically, our approach consists in the preliminary estimation of connection probabilities by maximum-entropy inference driven by entities capitalizations and link density, followed by a density-corrected gravity model to assign position weights. Our method is successfully tested on NASDAQ, NYSE and AMEX filing data, by correctly", "after": "financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows to reproduce the correct topology of the network. We test the method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed MECAPM both in", "start_char_pos": 996, "end_char_pos": 1346 }, { "type": "R", "before": "providing reliable estimates of systemic riskover the market", "after": "in estimating systemic risk", "start_char_pos": 1384, "end_char_pos": 1444 } ]
[ 0, 160, 327, 495, 609, 875, 1012, 1261 ]
1606.07684
2
Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets is strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of these institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings ( e.g. \eg , investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional {\em capital-asset pricing model (CAPM) and allows to reproduce the correct topology of the network. We test the method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed MECAPM both in reproducing the network topology and in estimating systemic risk .
Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets is strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings ( \eg , investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional {\em capital-asset pricing model (CAPM) and allows to reproduce the correct topology of the network. We test this ECAPM method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed MECAPM both in reproducing the network topology and in estimating systemic risk due to fire-sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model .
[ { "type": "D", "before": "these", "after": null, "start_char_pos": 408, "end_char_pos": 413 }, { "type": "D", "before": "e.g.", "after": null, "start_char_pos": 909, "end_char_pos": 913 }, { "type": "R", "before": "the", "after": "this ECAPM", "start_char_pos": 1221, "end_char_pos": 1224 }, { "type": "A", "before": null, "after": "due to fire-sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model", "start_char_pos": 1542, "end_char_pos": 1542 } ]
[ 0, 152, 207, 364, 532, 699, 943, 1069, 1212, 1389 ]
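The revised abstract of 1606.07684 notes that ECAPM applies to weighted bipartite networks described by the fitness model. A minimal sketch of that model class: link probabilities p_ij = z s_i w_j / (1 + z s_i w_j), with z calibrated so the expected number of links matches a target density; the sizes and target below are hypothetical:

import numpy as np
from scipy.optimize import brentq

s = np.array([5.0, 3.0, 1.0])        # hypothetical bank "fitnesses" (sizes)
w = np.array([2.0, 1.0, 0.5, 0.25])  # hypothetical security sizes
L_target = 6.0                       # target expected number of links

def expected_links(z):
    x = z * np.outer(s, w)
    return (x / (1.0 + x)).sum()

z = brentq(lambda z: expected_links(z) - L_target, 1e-12, 1e6)
P = z * np.outer(s, w) / (1.0 + z * np.outer(s, w))
print(P.round(3))            # link probability matrix
print(P.sum().round(3))      # matches the target density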
1606.08629
1
Recently it has been revealed that the equilibrium hydrogen-bond breathing dynamics of terminal base pairs in short DNA exhibit a power-law relaxation similar to that in the time-resolved Stokes shift experiments with an intercalated coumarin probe. Here a simple theory is proposed that relates the Stokes shift signal to the statistics of Poincar\'e recurrences in the base-pair breathing. This theory can explain the origin of the observed slow non-exponential relaxation in time-resolved Stokes shift data for DNA as well as other complex systems. It turns out that an intercalated coumarin greatly increases the breathing fluctuations in the neighboring base pairs . This motion is qualitatively similar to that in terminal residues, with the same exponent in the power-law relaxation decay. The breathing dynamics is transmitted to the photoprobe by direct contacts between aromatic \pi orbitals of stacked bases .
Anomalous non-exponential relaxation in hydrated biomolecules is commonly attributed to the complexity of the free-energy landscapes, similarly to polymers and glasses. It was found recently that the hydrogen-bond breathing of terminal DNA base pairs exhibits a slow power-law relaxation attributable to weak Hamiltonian chaos, with parameters similar to experimental data. Here, the relationship is studied between this motion and spectroscopic signals measured in DNA with a small molecular photoprobe inserted into the base-pair stack. To this end, the earlier computational approach in combination with an analytical theory is applied to the experimental DNA fragment. It is found that the intensity of breathing dynamics is strongly increased in the internal base pairs that flank the photoprobe, with anomalous relaxation quantitatively close to that in terminal base pairs. A physical mechanism is proposed to explain the coupling between the relaxation of base-pair breathing and the experimental response signal. It is concluded that the algebraic relaxation observed experimentally is very likely a manifestation of weakly chaotic dynamics of hydrogen-bond breathing in the base pairs stacked to the photoprobe , and that the weak nanoscale chaos can represent an ubiquitous hidden source of non-exponential relaxation in ultrafast spectroscopy .
[ { "type": "R", "before": "Recently it has been revealed that the equilibrium", "after": "Anomalous non-exponential relaxation in hydrated biomolecules is commonly attributed to the complexity of the free-energy landscapes, similarly to polymers and glasses. It was found recently that the", "start_char_pos": 0, "end_char_pos": 50 }, { "type": "R", "before": "dynamics of terminal base pairs in short DNA exhibit a", "after": "of terminal DNA base pairs exhibits a slow", "start_char_pos": 75, "end_char_pos": 129 }, { "type": "R", "before": "similar to that in the time-resolved Stokes shift experiments with an intercalated coumarin probe. Herea simple theory is proposed that relates the Stokes shift signal to the statistics of Poincar\\'e recurrences in", "after": "attributable to weak Hamiltonian chaos, with parameters similar to experimental data. Here, the relationship is studied between this motion and spectroscopic signals measured in DNA with a small molecular photoprobe inserted into", "start_char_pos": 151, "end_char_pos": 365 }, { "type": "R", "before": "breathing. This theory can explain the origin of the observed slow non-exponential relaxation in time-resolved Stokes shift data for DNA as well as other complex systems. It turns out that an intercalated coumarin greatly increases the breathing fluctuations in the neighboring base pairs . This motion is qualitatively similar", "after": "stack. To this end, the earlier computational approach in combination with an analytical theory is applied to the experimental DNA fragment. It is found that the intensity of breathing dynamics is strongly increased in the internal base pairs that flank the photoprobe, with anomalous relaxation quantitatively close", "start_char_pos": 380, "end_char_pos": 707 }, { "type": "R", "before": "residues, with the same exponent in the power-law relaxation decay. The breathing dynamics is transmitted", "after": "base pairs. A physical mechanism is proposed to explain the coupling between the relaxation of base-pair breathing and the experimental response signal. It is concluded that the algebraic relaxation observed experimentally is very likely a manifestation of weakly chaotic dynamics of hydrogen-bond breathing in the base pairs stacked", "start_char_pos": 728, "end_char_pos": 833 }, { "type": "R", "before": "by direct contacts between aromatic \\pi orbitals of stacked bases", "after": ", and that the weak nanoscale chaos can represent an ubiquitous hidden source of non-exponential relaxation in ultrafast spectroscopy", "start_char_pos": 852, "end_char_pos": 917 } ]
[ 0, 249, 390, 550, 670, 795 ]
1606.08757
1
The proper sorting of membrane components by regulated exchange between organelles is crucial to organization . This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations considering the relatively small size of organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values leads to fast sorting, but results in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well defined sorted compartments but is exponentially slow. Our results suggests an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components, and highlight the importance of stochastic effects for the organization of intra-cellular compartments.
The proper sorting of membrane components by regulated exchange between organelles is crucial to organisation . This process relies on the budding and fusion of transport vesicles, and should be strongly influenced by stochastic fluctuations considering the relatively small size of organelles. We identify the perfect sorting of two membrane components initially mixed in a single compartment as a first passage process, and we show that the mean sorting time exhibits two distinct regimes as a function of the ratio of vesicle fusion to budding rates. Low ratio values lead to fast sorting, but results in a broad size distribution of sorted compartments dominated by small entities. High ratio values result in two well defined sorted compartments but is exponentially slow. Our results suggest an optimal balance between vesicle budding and fusion for the rapid and efficient sorting of membrane components, and highlight the importance of stochastic effects for the organisation of intra-cellular compartments.
[ { "type": "R", "before": "URLanization", "after": "URLanisation", "start_char_pos": 97, "end_char_pos": 109 }, { "type": "R", "before": "leads", "after": "lead", "start_char_pos": 571, "end_char_pos": 576 }, { "type": "R", "before": "suggests", "after": "suggest", "start_char_pos": 791, "end_char_pos": 799 }, { "type": "R", "before": "URLanization", "after": "URLanisation", "start_char_pos": 973, "end_char_pos": 985 } ]
[ 0, 111, 294, 553, 686, 778 ]
1607.00077
1
By Gyongy's theorem, a local and stochastic volatility model is calibrated to the market prices of all call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to a SDE nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labord\`ere (2011), provide an efficient calibration procedure even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no existence result is available for the SDE nonlinear in the sense of McKean. In the particular case where the local volatility function is equal to the inverse of the root conditional mean square of the stochastic volatility factor multiplied by the spot value given this value and the interest rate is zero, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, we prove existence to the associated Fokker-Planck equation. Thanks to Figalli (2008), we then deduce existence of a new class of fake Brownian motions. We then extend these results to the special case of the LSV model called Regime Switching Local Volatility , where the stochastic volatility factor is a jump process taking finitely many values and with jump intensities depending on the spot level. Under the same condition on the range of its square, we prove existence to the associated Fokker-Planck PDE. We then deduce existence of the calibrated model by extending the results in Figalli (2008).
By Gyongy's theorem, a local and stochastic volatility (LSV) model is calibrated to the market prices of all European call options with positive maturities and strikes if its local volatility function is equal to the ratio of the Dupire local volatility function over the root conditional mean square of the stochastic volatility factor given the spot value. This leads to a SDE nonlinear in the sense of McKean. Particle methods based on a kernel approximation of the conditional expectation, as presented by Guyon and Henry-Labord\`ere (2011), provide an efficient calibration procedure even if some calibration errors may appear when the range of the stochastic volatility factor is very large. But so far, no global existence result is available for the SDE nonlinear in the sense of McKean. In the particular case where the local volatility function is equal to the inverse of the root conditional mean square of the stochastic volatility factor multiplied by the spot value given this value and the interest rate is zero, the solution to the SDE is a fake Brownian motion. When the stochastic volatility factor is a constant (over time) random variable taking finitely many values and the range of its square is not too large, we prove existence to the associated Fokker-Planck equation. Thanks to Figalli (2008), we then deduce existence of a new class of fake Brownian motions. We then extend these results to the special case of the LSV model called regime switching local volatility , where the stochastic volatility factor is a jump process taking finitely many values and with jump intensities depending on the spot level. Under the same condition on the range of its square, we prove existence to the associated Fokker-Planck PDE. Finally, we deduce existence of the calibrated model by extending the results in Figalli (2008).
[ { "type": "A", "before": null, "after": "(LSV)", "start_char_pos": 55, "end_char_pos": 55 }, { "type": "A", "before": null, "after": "European", "start_char_pos": 104, "end_char_pos": 104 }, { "type": "A", "before": null, "after": "global", "start_char_pos": 700, "end_char_pos": 700 }, { "type": "R", "before": "Regime Switching Local Volatility", "after": "regime switching local volatility", "start_char_pos": 1440, "end_char_pos": 1473 }, { "type": "R", "before": "We then", "after": "Finally, we", "start_char_pos": 1725, "end_char_pos": 1732 } ]
[ 0, 345, 399, 684, 776, 1059, 1274, 1366, 1615, 1724 ]
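The calibration procedure summarized in the record above reduces, in practice, to a particle scheme: simulate the particles, estimate the conditional mean square of the stochastic volatility factor given the spot by kernel regression, and plug the resulting leverage function back into the dynamics. The sketch below illustrates the kernel-regression step under stated toy assumptions (flat Dupire local volatility, exponential-OU volatility factor, zero rates, Gaussian kernel); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy inputs (not from the paper): a flat Dupire local volatility
# and an exponential-OU stochastic volatility factor a_t = exp(Y_t).
dupire_lv = lambda s: 0.2 * np.ones_like(s)
kappa, nu = 2.0, 0.5                 # OU mean-reversion speed and vol-of-vol
N, n_steps, h = 20_000, 50, 0.05     # particles, Euler steps, kernel bandwidth
dt = 1.0 / n_steps

S, Y = np.full(N, 1.0), np.zeros(N)  # particle states: spot and log-factor

def cond_mean_sq(S, a2, grid, h):
    # Nadaraya-Watson (Gaussian-kernel) estimate of E[a_t^2 | S_t = s]
    # on a grid of spot values, as in the particle method cited above.
    w = np.exp(-0.5 * ((grid[:, None] - S[None, :]) / h) ** 2)
    return (w * a2).sum(axis=1) / w.sum(axis=1)

for _ in range(n_steps):
    a = np.exp(Y)
    grid = np.linspace(S.min(), S.max(), 60)
    lev = dupire_lv(grid) / np.sqrt(cond_mean_sq(S, a ** 2, grid, h))
    sigma = a * np.interp(S, grid, lev)              # calibrated LSV diffusion
    dW, dB = np.sqrt(dt) * rng.standard_normal((2, N))
    S *= np.exp(sigma * dW - 0.5 * sigma ** 2 * dt)  # log-Euler step, zero rates
    Y += -kappa * Y * dt + nu * dB                   # OU dynamics of the log-factor
```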
1607.00145
1
We present "GEMM-like Tensor-Tensor multiplication" (GETT), a novel approach to tensor contractions that mirrors the design of a high-performance general matrix-matrix multiplication (GEMM). The critical insight behind GETT is the identification of three index sets, involved in the tensor contraction, which enable us to systematically reduce an arbitrary tensor contraction to loops around a highly tuned "macro-kernel". This macro-kernel operates on suitably prepared ("packed") sub-tensors that reside in a specified level of the cache hierarchy. In contrast to previous approaches to tensor contractions, GETT exhibits desirable features such as unit-stride memory accesses, cache-awareness, as well as full vectorization, without requiring auxiliary memory. To compare our technique with other modern tensor contractions, we integrate GETT alongside the so called Transpose-Transpose-GEMM-Transpose and Loops-over-GEMM approaches into an open source "Tensor Contraction Code Generator" (TCCG). The performance results for a wide range of tensor contractions suggest that GETT has the potential of becoming the method of choice: While GETT exhibits excellent performance across the board, its effectiveness for bandwidth-bound tensor contractions is especially impressive, outperforming existing approaches by up to 12.3 \times. More precisely, GETT achieves speedups of up to 1.42 \times over an equivalent-sized GEMM for bandwidth-bound tensor contractions while attaining up to 91.3\\% of peak floating-point performance for compute-bound tensor contractions.
We present "GEMM-like Tensor-Tensor multiplication" (GETT), a novel approach to tensor contractions that mirrors the design of a high-performance general matrix-matrix multiplication (GEMM). The critical insight behind GETT is the identification of three index sets, involved in the tensor contraction, which enable us to systematically reduce an arbitrary tensor contraction to loops around a highly tuned "macro-kernel". This macro-kernel operates on suitably prepared ("packed") sub-tensors that reside in a specified level of the cache hierarchy. In contrast to previous approaches to tensor contractions, GETT exhibits desirable features such as unit-stride memory accesses, cache-awareness, as well as full vectorization, without requiring auxiliary memory. To compare our technique with other modern tensor contractions, we integrate GETT alongside the so called Transpose-Transpose-GEMM-Transpose and Loops-over-GEMM approaches into an open source "Tensor Contraction Code Generator" (TCCG). The performance results for a wide range of tensor contractions suggest that GETT has the potential of becoming the method of choice: While GETT exhibits excellent performance across the board, its effectiveness for bandwidth-bound tensor contractions is especially impressive, outperforming existing approaches by up to 12.4 \times. More precisely, GETT achieves speedups of up to 1.41 \times over an equivalent-sized GEMM for bandwidth-bound tensor contractions while attaining up to 91.3\\% of peak floating-point performance for compute-bound tensor contractions.
[ { "type": "R", "before": "12.3", "after": "12.4", "start_char_pos": 1321, "end_char_pos": 1325 }, { "type": "R", "before": "1.42", "after": "1.41", "start_char_pos": 1382, "end_char_pos": 1386 } ]
[ 0, 190, 422, 550, 763, 999, 1333, 1489 ]
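The Transpose-Transpose-GEMM-Transpose (TTGT) approach named in the record above is easy to state concretely: permute and flatten the tensors so the contraction becomes a single matrix product, then fold the result back. The sketch below shows TTGT for one assumed contraction, C[m,n,p] = sum_k A[m,k,p] B[k,n]; GETT's contribution is precisely to avoid the explicit transposes and extra workspace this baseline incurs, by fusing them into the GEMM's packing step.

```python
import numpy as np

def ttgt(A, B):
    # TTGT baseline for C[m, n, p] = sum_k A[m, k, p] * B[k, n]:
    # explicit transpose + flatten, one large GEMM, then fold back.
    m, k, p = A.shape
    n = B.shape[1]
    A_mat = A.transpose(0, 2, 1).reshape(m * p, k)    # transpose costs memory traffic
    C_mat = A_mat @ B                                 # the single large GEMM
    return C_mat.reshape(m, p, n).transpose(0, 2, 1)  # fold back to C[m, n, p]

A = np.random.rand(8, 5, 7)
B = np.random.rand(5, 6)
assert np.allclose(ttgt(A, B), np.einsum('mkp,kn->mnp', A, B))
```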
1607.00291
1
Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications (SCAs). Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the much more flexible BLIS framework, which allows for reshaping of the tensor to be fused with internal partitioning and packing operations, requiring no explicit reshaping operations or additional workspace. This implementation achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation also supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in SCAs.
Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications (SCAs). Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for reshaping of the tensor to be fused with internal partitioning and packing operations, requiring no explicit reshaping operations or additional workspace. This implementation achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in SCAs.
[ { "type": "D", "before": "much more", "after": null, "start_char_pos": 444, "end_char_pos": 453 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 802, "end_char_pos": 806 } ]
[ 0, 131, 408, 650, 782, 929 ]
1607.00291
2
Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications (SCAs) . Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for reshaping of the tensor to be fused with internal partitioning and packing operations, requiring no explicit reshaping operations or additional workspace. This implementation achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in SCAs .
Tensor computations--in particular tensor contraction (TC)--are important kernels in many scientific computing applications . Due to the fundamental similarity of TC to matrix multiplication (MM) and to the availability of optimized implementations such as the BLAS, tensor operations have traditionally been implemented in terms of BLAS operations, incurring both a performance and a storage overhead. Instead, we implement TC using the flexible BLIS framework, which allows for transposition (reshaping) of the tensor to be fused with internal partitioning and packing operations, requiring no explicit transposition operations or additional workspace. This implementation , TBLIS, achieves performance approaching that of MM, and in some cases considerably higher than that of traditional TC. Our implementation supports multithreading using an approach identical to that used for MM in BLIS, with similar performance characteristics. The complexity of managing tensor-to-matrix transformations is also handled automatically in our approach, greatly simplifying its use in scientific applications .
[ { "type": "D", "before": "(SCAs)", "after": null, "start_char_pos": 124, "end_char_pos": 130 }, { "type": "R", "before": "reshaping", "after": "transposition (reshaping)", "start_char_pos": 487, "end_char_pos": 496 }, { "type": "R", "before": "reshaping", "after": "transposition", "start_char_pos": 596, "end_char_pos": 605 }, { "type": "A", "before": null, "after": ", TBLIS,", "start_char_pos": 662, "end_char_pos": 662 }, { "type": "R", "before": "SCAs", "after": "scientific applications", "start_char_pos": 1055, "end_char_pos": 1059 } ]
[ 0, 132, 409, 641, 774, 916 ]
1607.01414
1
The presence of physical knots has been observed in a small fraction of single-domain proteins and related to their thermodynamic and kinetic properties. The entanglement between different chains in multimeric protein complexes, as captured by their linking number, may as well represent a significant topological feature. The exchanging of identical structural elements typical of domain-swapped proteins make them suitable candidates to validate this possibility . Here we analyze 110 non redundant domain-swapped dimers filtered from the 3Dswap and Proswap databases, by keeping only structures not affected by the presence of holes along the main backbone chain. The linking number G' determined by Gauss integrals on the C_\alpha backbones is shown to be a solid and efficient tool for quantifying the mutual entanglement, also due to its strong correlation with the topological linking averaged over many closures of the chains. Our analysis evidences a quite high fraction of chains with a significant linking , that is with |G'| > 1. We report that Nature promotes configurations with G'<0 and surprisingly, it seems to suppress linking of long proteins. While proteins composed of about 100 residues can be well linked in the swapped dimer form, this is not observed for much longer chains. Upon dissociationof a few dimers via numerical simulations, we observe an exponential decay of the linking number with time, providing an additional useful characterization of the entanglement within swapped dimers. Our results provide a novel and robust topology-based classification of protein-swapped dimers together with some preliminary evidence of its impact on their physical and biological properties.
The presence of knots has been observed in a small fraction of single-domain proteins and related to their thermodynamic and kinetic properties. The exchanging of identical structural elements , typical of domain-swapped proteins , make such dimers suitable candidates to validate the possibility that mutual entanglement between chains may play a similar role for protein complexes. We suggest that such entanglement is captured by the linking number. This represents, for two closed curves, the number of times that each curve winds around the other. We show that closing the curves is not necessary, as a novel parameter G', termed Gaussian entanglement, is strongly correlated with the linking number. Based on 110 non redundant domain-swapped dimers, our analysis evidences a high fraction of chains with a significant intertwining , that is with |G'| > 1. We report that Nature promotes configurations with negative mutual entanglement and surprisingly, it seems to suppress intertwining in long protein dimers. Supported by numerical simulations of dimer dissociation, our results provide a novel topology-based classification of protein-swapped dimers together with some preliminary evidence of its impact on their physical and biological properties.
[ { "type": "D", "before": "physical", "after": null, "start_char_pos": 16, "end_char_pos": 24 }, { "type": "D", "before": "entanglement between different chains in multimeric protein complexes, as captured by their linking number, may as well represent a significant topological feature. The", "after": null, "start_char_pos": 158, "end_char_pos": 326 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 371, "end_char_pos": 371 }, { "type": "R", "before": "make them", "after": ", make such dimers", "start_char_pos": 407, "end_char_pos": 416 }, { "type": "R", "before": "this possibility . Here we analyze", "after": "the possibility that mutual entanglement between chains may play a similar role for protein complexes. We suggest that such entanglement is captured by the linking number. This represents, for two closed curves, the number of times that each curve winds around the other. We show that closing the curves is not necessary, as a novel parameter G', termed Gaussian entanglement, is strongly correlated with the linking number. Based on", "start_char_pos": 449, "end_char_pos": 483 }, { "type": "R", "before": "dimers filtered from the 3Dswap and Proswap databases, by keeping only structures not affected by the presence of holes along the main backbone chain. The linking number G' determined by Gauss integrals on the C_\\alpha backbones is shown to be a solid and efficient tool for quantifying the mutual entanglement, also due to its strong correlation with the topological linking averaged over many closures of the chains. Our", "after": "dimers, our", "start_char_pos": 517, "end_char_pos": 939 }, { "type": "D", "before": "quite", "after": null, "start_char_pos": 961, "end_char_pos": 966 }, { "type": "R", "before": "linking", "after": "intertwining", "start_char_pos": 1010, "end_char_pos": 1017 }, { "type": "R", "before": "G'<0", "after": "negative mutual entanglement", "start_char_pos": 1094, "end_char_pos": 1098 }, { "type": "R", "before": "linking of long proteins. While proteins composed of about 100 residues can be well linked in the swapped dimer form, this is not observed for much longer chains. Upon dissociationof a few dimers via numerical simulations, we observe an exponential decay of the linking number with time, providing an additional useful characterization of the entanglement within swapped dimers. Our", "after": "intertwining in long protein dimers. Supported by numerical simulations of dimer dissociation, our", "start_char_pos": 1138, "end_char_pos": 1520 }, { "type": "D", "before": "and robust", "after": null, "start_char_pos": 1545, "end_char_pos": 1555 } ]
[ 0, 153, 322, 467, 667, 935, 1163, 1300, 1516 ]
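The Gaussian entanglement G' of the record above is the Gauss double integral evaluated over two open backbones. A minimal midpoint discretisation over C-alpha traces is sketched below; the function name and the closed-ring sanity check are illustrative, and an exact polygonal formula (e.g. Klenin and Langowski's) would be preferred in production.

```python
import numpy as np

def gaussian_entanglement(P, Q):
    # Midpoint discretisation of the Gauss double integral between two open
    # chains P, Q given as (n, 3) arrays of C-alpha coordinates.  For closed
    # curves the same integral yields the (integer) linking number.
    dP, dQ = np.diff(P, axis=0), np.diff(Q, axis=0)        # bond vectors
    mP, mQ = 0.5 * (P[:-1] + P[1:]), 0.5 * (Q[:-1] + Q[1:])
    r = mP[:, None, :] - mQ[None, :, :]                    # midpoint separations
    cross = np.cross(dP[:, None, :], dQ[None, :, :])
    integrand = (r * cross).sum(-1) / np.linalg.norm(r, axis=-1) ** 3
    return integrand.sum() / (4.0 * np.pi)

# Sanity check on two interlocked (Hopf-linked) rings: |G'| should be ~1.
t = np.linspace(0.0, 2.0 * np.pi, 200)
ring1 = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
ring2 = np.c_[1.0 + np.cos(t), np.zeros_like(t), np.sin(t)]
print(gaussian_entanglement(ring1, ring2))
```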
1607.02067
1
We study American swaptions in the linear-rational term structure model introduced in [5]. The American swaption pricing problem boils down to an optimal stopping problem that is analytically tractable. It reduces to a free-boundary problem that we tackle by the local time-space calculus of [ 9 ]. We characterize the optimal stopping boundary as the unique solution to a nonlinear integral equation that can be readily solved numerically. We obtain the arbitrage-free price of the American swaption and the optimal exercise strategies in terms of swap rates for both fixed-rate payer and receiver swaps .
We study American swaptions in the linear-rational (LR) term structure model introduced in [5]. The American swaption pricing problem boils down to an optimal stopping problem that is analytically tractable. It reduces to a free-boundary problem that we tackle by the local time-space calculus of [ 7 ]. We characterize the optimal stopping boundary as the unique solution to a nonlinear integral equation that can be readily solved numerically. We obtain the arbitrage-free price of the American swaption and the optimal exercise strategies in terms of swap rates for both fixed-rate payer and receiver swaps . Finally, we show that Bermudan swaptions can be efficiently priced as well .
[ { "type": "A", "before": null, "after": "(LR)", "start_char_pos": 51, "end_char_pos": 51 }, { "type": "R", "before": "9", "after": "7", "start_char_pos": 295, "end_char_pos": 296 }, { "type": "A", "before": null, "after": ". Finally, we show that Bermudan swaptions can be efficiently priced as well", "start_char_pos": 606, "end_char_pos": 606 } ]
[ 0, 91, 203, 299, 441 ]
1607.02470
1
This paper analyzes multi-period mortgage risk at loan and pool levels using an unprecedented dataset of over 120 million prime and subprime mortgages originated across the United States between 1995 and 2014, which includes the individual characteristics of each loan, monthly updates on loan performance over the life of a loan, and a number of time-varying economic variables at the zip code level. We develop, estimate, and test dynamic machine learning models for mortgage prepayment, delinquency, and foreclosure which capture loan-to-loan correlation due to geographic proximity and exposure to common risk factors. The basic building block is a deep neural network which addresses the nonlinear relationship between the explanatory variables and loan performance. Our likelihood estimators, which are based on 3.5 billion borrower-month observations, indicate that mortgage risk is strongly influenced by local economic factors such as zip-code level foreclosure rates. The out-of-sample predictive performance of our deep learning model is a significant improvement over linear models such as logistic regression. Model parameters are estimated using GPU parallel computing due to the computational challenges associated with the large amount of data. The deep learning model's superior accuracy compared to linear models directly translates into improved performance for investors. Portfolios constructed with the deep learning model have lower prepayment and delinquency rates than portfolios chosen with a logistic regression .
We develop a deep learning model of multi-period mortgage risk and use it to analyze an unprecedented dataset of origination and monthly performance records for over 120 million mortgages originated across the US between 1995 and 2014. Our estimators of term structures of conditional probabilities of prepayment, foreclosure and various states of delinquency incorporate the dynamics of a large number of loan-specific as well as macroeconomic variables down to the zip-code level. The estimators uncover the highly nonlinear nature of the relationship between the variables and borrower behavior, especially prepayment. They also highlight the effects of local economic conditions on borrower behavior. State unemployment has the greatest explanatory power among all variables, offering strong evidence of the tight connection between housing finance markets and the macroeconomy. The sensitivity of a borrower to changes in unemployment strongly depends upon current unemployment. It also significantly varies across the entire borrower population, which highlights the interaction of unemployment and many other variables. These findings have important implications for mortgage-backed security investors, rating agencies, and housing finance policymakers .
[ { "type": "R", "before": "This paper analyzes", "after": "We develop a deep learning model of", "start_char_pos": 0, "end_char_pos": 19 }, { "type": "R", "before": "at loan and pool levels using", "after": "and use it to analyze", "start_char_pos": 47, "end_char_pos": 76 }, { "type": "A", "before": null, "after": "origination and monthly performance records for", "start_char_pos": 105, "end_char_pos": 105 }, { "type": "D", "before": "prime and subprime", "after": null, "start_char_pos": 123, "end_char_pos": 141 }, { "type": "R", "before": "United States", "after": "US", "start_char_pos": 174, "end_char_pos": 187 }, { "type": "R", "before": "2014, which includes the individual characteristics of each loan, monthly updates on loan performance over the life of a loan, and a number of time-varying economic variables at the zip code level. We develop, estimate, and test dynamic machine learning models for mortgage prepayment, delinquency, and foreclosure which capture loan-to-loan correlation due to geographic proximity and exposure to common risk factors. The basic building block is a deep neural network which addresses the nonlinear", "after": "2014. Our estimators of term structures of conditional probabilities of prepayment, foreclosure and various states of delinquency incorporate the dynamics of a large number of loan-specific as well as macroeconomic variables down to the zip-code level. The estimators uncover the highly nonlinear nature of the", "start_char_pos": 205, "end_char_pos": 703 }, { "type": "R", "before": "explanatory variables and loan performance. Our likelihood estimators, which are based on 3.5 billion borrower-month observations, indicate that mortgage risk is strongly influenced by local economic factors such as zip-code level foreclosure rates. The out-of-sample predictive performance of our deep learning model is a significant improvement over linear models such as logistic regression. Model parameters are estimated using GPU parallel computing due to the computational challenges associated with the large amount of data. The deep learning model's superior accuracy compared to linear models directly translates into improved performance for investors. Portfolios constructed with the deep learning model have lower prepayment and delinquency rates than portfolios chosen with a logistic regression", "after": "variables and borrower behavior, especially prepayment. They also highlight the effects of local economic conditions on borrower behavior. State unemployment has the greatest explanatory power among all variables, offering strong evidence of the tight connection between housing finance markets and the macroeconomy. The sensitivity of a borrower to changes in unemployment strongly depends upon current unemployment. It also significantly varies across the entire borrower population, which highlights the interaction of unemployment and many other variables. These findings have important implications for mortgage-backed security investors, rating agencies, and housing finance policymakers", "start_char_pos": 729, "end_char_pos": 1538 } ]
[ 0, 402, 623, 772, 978, 1123, 1261, 1392 ]
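The term structures of conditional state probabilities described in the record above can be assembled, once a one-month transition classifier has been fitted, by composing the monthly kernels. The sketch below uses a hypothetical softmax classifier with random stand-in weights and frozen covariates; a real model would be a fitted deep network, would update covariates over time, and would make prepayment and foreclosure absorbing.

```python
import numpy as np

STATES = ['current', '30dd', '60dd', '90+dd', 'foreclosure', 'prepaid']
rng = np.random.default_rng(1)
W = rng.normal(size=(len(STATES), len(STATES)))  # stand-in for fitted weights
V = rng.normal(size=(len(STATES), 2))            # stand-in covariate weights

def p_next(state, x):
    # Hypothetical fitted classifier: softmax over next-month states given
    # the current state and a covariate vector x (e.g. local macro features).
    z = W[STATES.index(state)] + V @ x
    e = np.exp(z - z.max())
    return e / e.sum()

def term_structure(x, horizon):
    # Compose one-month conditional transition kernels into term structures
    # of state probabilities, the objects estimated in the record above.
    dist = np.zeros(len(STATES))
    dist[STATES.index('current')] = 1.0
    path = []
    for _ in range(horizon):
        T = np.stack([p_next(s, x) for s in STATES])  # row s -> next-state probs
        dist = dist @ T
        path.append(dist.copy())
    return np.array(path)

print(term_structure(np.array([0.05, 0.02]), horizon=12)[-1])
```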
1607.02479
1
In cellular biology, reaching a target before being degraded or trapped is ubiquitous. An interesting example is given by the virus journey inside the cell cytoplasm: in order to replicate, most viruses have to reach the nucleus before being trapped or degraded. We present here a general approach to estimate the probability and the conditional mean first passage time for such a viral particle to attain safely the nucleus, covered with many (around two thousands ) small absorbing pores. Due to this large number of small holes, which defines the limiting scale , any Brownian simulationsare very unstable. Our new asymptotic formulas precisely account for this phenomena and allow to quantify the cytoplasmic stage of viral infection . We confirm our analysis with Brownian simulations .
A certain class of viruses replicates inside a cell if they can enter the nucleus through one of many small target pores, before being permanently trapped or degraded. We adopt for viral motion a switching stochastic process model and we estimate here the probability and the conditional mean first passage time for a viral particle to attain alive the nucleus. The cell nucleus is covered with thousands of small absorbing nuclear pores and the minimum distance between them defines the smallest spatial scale that limits the efficiency of stochastic simulations. Using the Neuman-Green's function method to solve the steady-state Fokker-Planck equation, we derive asymptotic formula for the probability and mean arrival time to a small window for various pores' distributions, that agree with stochastic simulations. These formulas reveal how key geometrical parameters defines the cytoplasmic stage of viral infection .
[ { "type": "R", "before": "In cellular biology, reaching a target before being degraded or trapped is ubiquitous. An interesting example is given by the virus journey inside the cell cytoplasm: in order to replicate, most viruses have to reach the nucleus before being", "after": "A certain class of viruses replicates inside a cell if they can enter the nucleus through one of many small target pores, before being permanently", "start_char_pos": 0, "end_char_pos": 241 }, { "type": "R", "before": "present here a general approach to estimate", "after": "adopt for viral motion a switching stochastic process model and we estimate here", "start_char_pos": 266, "end_char_pos": 309 }, { "type": "D", "before": "such", "after": null, "start_char_pos": 374, "end_char_pos": 378 }, { "type": "R", "before": "safely the nucleus, covered with many (around two thousands ) small absorbing pores. Due to this large number of small holes, which defines the limiting scale , any Brownian simulationsare very unstable. Our new asymptotic formulas precisely account for this phenomena and allow to quantify", "after": "alive the nucleus. The cell nucleus is covered with thousands of small absorbing nuclear pores and the minimum distance between them defines the smallest spatial scale that limits the efficiency of stochastic simulations. Using the Neuman-Green's function method to solve the steady-state Fokker-Planck equation, we derive asymptotic formula for the probability and mean arrival time to a small window for various pores' distributions, that agree with stochastic simulations. These formulas reveal how key geometrical parameters defines", "start_char_pos": 406, "end_char_pos": 696 }, { "type": "D", "before": ". We confirm our analysis with Brownian simulations", "after": null, "start_char_pos": 738, "end_char_pos": 789 } ]
[ 0, 86, 262, 490, 609, 739 ]
1607.02481
1
Bipartite networks are currently regarded as providing a major insight into organization of real-world systems, unveiling the mechanisms shaping the interactions occurring between distinct groups of nodes. One of the major problems encountered when dealing with bipartite networks is obtaining a (monopartite) projection over the layer of interest which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks . The criterion adopted to quantify the similarity of nodes rests upon the similarity of their neighborhoods : in order for any two nodes to be linked, a significantly-large number of neighbors must be shared. Naturally, assessing the statistical significance of nodes similarity requires the definition of a proper statistical benchmark: here we consider two recently-proposed null models for bipartite networks, opportunely defined through the exponential random graph formalism. The output of our algorithm thus consists of a matrix of p-values, from which a validated projection can be straightforwardly obtained, upon running a multiple hypothesis testing and linking only the nodes which pass it. We apply our algorithm to social and economic bipartite networks: when projecting the network of countries and exported goods on the countries layer, our method is able to cluster countries with similar a industrialisation; we also analysed a social network of users and positively-rated movieson the films layer: in this caseour approach divides movies in clusters of similar genres or audience .
Bipartite networks are currently regarded as providing a major insight into organization of real-world systems, unveiling the mechanisms shaping the interactions occurring between distinct groups of nodes. One of the major problems encountered when dealing with bipartite networks is obtaining a (monopartite) projection over the layer of interest which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated monopartite projections of bipartite networks , which implements a simple rule : in order for any two nodes to be linked, a significantly-large number of neighbors must be shared. Naturally, assessing the statistical significance of nodes similarity requires the definition of a proper statistical benchmark: here we consider two recently-proposed null models for bipartite networks, opportunely defined through the exponential random graph formalism. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection can be straightforwardly obtained, upon running a multiple hypothesis test and retaining only the statistically significant links. Finally, to test our method we analyze a social network (i.e. the MovieLens dataset, a bipartite network of users and rated movies) and an economic network (i.e. the countries-products World Trade Web representation): while, in the first case, projecting MovieLens on the films layer allows clusters of movies belonging to similar genres to be detected, in the second case, projecting the World Trade Web on the countries layer reveals a modular structure of similarly-industrialized clusters of nations .
[ { "type": "A", "before": null, "after": "monopartite", "start_char_pos": 526, "end_char_pos": 526 }, { "type": "R", "before": ". The criterion adopted to quantify the similarity of nodes rests upon the similarity of their neighborhoods", "after": ", which implements a simple rule", "start_char_pos": 561, "end_char_pos": 669 }, { "type": "R", "before": "The output of our algorithm thus consists of", "after": "Our algorithm outputs", "start_char_pos": 1043, "end_char_pos": 1087 }, { "type": "A", "before": null, "after": "link-specific", "start_char_pos": 1100, "end_char_pos": 1100 }, { "type": "R", "before": "testing and linking only the nodes which pass it. We apply our algorithm to social and economic bipartite networks: when projecting the network of countries and exported goods on the countries layer, our method is able to cluster countries with similar a industrialisation; we also analysed", "after": "test and retaining only the statistically significant links. Finally, to test our method we analyze", "start_char_pos": 1215, "end_char_pos": 1505 }, { "type": "A", "before": null, "after": "(i.e. the MovieLens dataset, a bipartite network", "start_char_pos": 1523, "end_char_pos": 1523 }, { "type": "R", "before": "positively-rated movieson the films layer: in this caseour approach divides movies in clusters of similar genres or audience", "after": "rated movies) and an economic network (i.e. the countries-products World Trade Web representation): while, in the first case, projecting MovieLens on the films layer allows clusters of movies belonging to similar genres to be detected, in the second case, projecting the World Trade Web on the countries layer reveals a modular structure of similarly-industrialized clusters of nations", "start_char_pos": 1537, "end_char_pos": 1661 } ]
[ 0, 205, 446, 562, 770, 1042, 1264, 1488 ]
1607.02481
2
Bipartite networks are currently regarded as providing a major insight into organization of real-world systems, unveiling the mechanisms shaping the interactions occurring between distinct groups of nodes. One of the major problems encountered when dealing with bipartite networks is obtaining a (monopartite) projection over the layer of interest which preserves as much as possible the information encoded into the original bipartite structure . In the present paper we propose an algorithm to obtain statistically-validated monopartite projections of bipartite networks, which implements a simple rule: in order for any two nodes to be linked, a significantly-large number of neighbors must be shared . Naturally, assessing the statistical significance of nodes similarity requires the definition of a proper statistical benchmark: here we consider two recently-proposed null modelsfor bipartite networks, opportunely defined through the exponential random graph formalism . Our algorithm outputs a matrix of link-specific p-values, from which a validated projection can be straightforwardly obtained, upon running a multiple hypothesis test and retaining only the statistically significant links. Finally, to test our method we analyze a social network (i.e. the MovieLens dataset, a bipartite network of users and rated movies)and an economic network (i. e. the countries-products World Trade Web representation): while, in the first case, projecting MovieLens on the films layer allows clusters of movies belonging to similar genres to be detected, in the second case, projecting the World Trade Web on the countries layer reveals a modular structure of similarly-industrialized clusters of nations .
Bipartite networks are currently regarded as providing a major insight into organization of many real-world systems, unveiling the mechanisms driving the interactions which occur between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest , which preserves the information encoded into the original bipartite structure as much as possible . In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, which implements a simple rule: in order for any two nodes to be linked, the number of shared neighbors must be statistically significant . Naturally, assessing the statistical significance of nodes similarity requires the definition of a proper statistical benchmark: here we consider a set of four null models, defined within the Exponential Random Graph framework . Our algorithm outputs a matrix of link-specific p-values, from which a validated projection can be straightforwardly obtained, upon running a multiple hypothesis test and retaining only the statistically-significant links. Finally, in order to test our method , we analyze an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. the MovieLens dataset, collecting the users' ratings of a list of movies). In both cases non-trivial communities are detected. In the first case, while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated .
[ { "type": "A", "before": null, "after": "many", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "R", "before": "shaping the interactions occurring", "after": "driving the interactions which occur", "start_char_pos": 138, "end_char_pos": 172 }, { "type": "R", "before": "major problems encountered when dealing with", "after": "most important issues encountered when modeling", "start_char_pos": 218, "end_char_pos": 262 }, { "type": "R", "before": "obtaining a", "after": "devising a way to obtain a", "start_char_pos": 285, "end_char_pos": 296 }, { "type": "R", "before": "over", "after": "on", "start_char_pos": 322, "end_char_pos": 326 }, { "type": "R", "before": "which preserves as much as possible", "after": ", which preserves", "start_char_pos": 349, "end_char_pos": 384 }, { "type": "A", "before": null, "after": "as much as possible", "start_char_pos": 447, "end_char_pos": 447 }, { "type": "D", "before": "monopartite", "after": null, "start_char_pos": 529, "end_char_pos": 540 }, { "type": "R", "before": "a significantly-large number of", "after": "the number of shared", "start_char_pos": 649, "end_char_pos": 680 }, { "type": "R", "before": "shared", "after": "statistically significant", "start_char_pos": 699, "end_char_pos": 705 }, { "type": "R", "before": "two recently-proposed null modelsfor bipartite networks, opportunely defined through the exponential random graph formalism", "after": "a set of four null models, defined within the Exponential Random Graph framework", "start_char_pos": 854, "end_char_pos": 977 }, { "type": "R", "before": "statistically significant", "after": "statistically-significant", "start_char_pos": 1170, "end_char_pos": 1195 }, { "type": "A", "before": null, "after": "in order", "start_char_pos": 1212, "end_char_pos": 1212 }, { "type": "R", "before": "we analyze", "after": ", we analyze an economic network (i.e. the countries-products World Trade Web representation) and", "start_char_pos": 1232, "end_char_pos": 1242 }, { "type": "R", "before": "a bipartite network of users and rated movies)and an economic network (i. e. the countries-products", "after": "collecting the users' ratings of a list of movies). In both cases non-trivial communities are detected. In the first case, while projecting the", "start_char_pos": 1289, "end_char_pos": 1388 }, { "type": "R", "before": "representation): while, in the first case, projecting MovieLens on the films layer allows clusters of movies belonging to similar genres to be detected,", "after": "on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected;", "start_char_pos": 1405, "end_char_pos": 1557 }, { "type": "R", "before": "the World Trade Web on the countries layer reveals a modular structure of similarly-industrialized clusters of nations", "after": "MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated", "start_char_pos": 1589, "end_char_pos": 1707 } ]
[ 0, 206, 449, 607, 979, 1202, 1421 ]
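The validation rule in the two records above (link two nodes only if their number of shared neighbours is statistically significant under a bipartite null model) can be prototyped compactly. The sketch below substitutes a simpler hypergeometric fixed-degree null and a Bonferroni correction for the papers' Exponential Random Graph (BiCM-type) null models and multiple-test procedure; names and thresholds are illustrative.

```python
import numpy as np
from scipy.stats import hypergeom

def validated_projection(B, alpha=0.01):
    # Monopartite projection of a biadjacency matrix B (rows = layer to
    # project on, columns = the other layer): keep a link (i, j) only if
    # the observed number of shared neighbours is significant under a
    # hypergeometric null with the row degrees fixed.
    n, m = B.shape
    deg = B.sum(axis=1).astype(int)
    shared = B @ B.T
    pvals = {}
    for i in range(n):
        for j in range(i + 1, n):
            # P(shared >= observed) when deg[j] of the m columns are drawn
            # and deg[i] of them are "successes".
            pvals[(i, j)] = hypergeom.sf(shared[i, j] - 1, m, deg[i], deg[j])
    n_tests = max(len(pvals), 1)
    return [e for e, p in pvals.items() if p <= alpha / n_tests]  # Bonferroni

B = (np.random.default_rng(3).random((30, 40)) < 0.2).astype(int)
print(validated_projection(B))
```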
1607.03304
1
We propose a hydrodynamic theory to describe shear flows in developing epithelial tissues. We introduce hydrodynamic fields corresponding to state properties of constituent cells as well as a contribution to overall tissue shear flow due to rearrangements in cell network topology. We then construct a constitutive equation for the shear rate due to topological rearrangements . We identify a novel rheological behaviour resulting from memory effects in the tissue. We show that anisotropic deformation of tissue and cells can arise from two distinct active cellular processes: generation of active stress in the tissue, and actively driven cellular rearrangements. These two active processes result in distinct cellular and tissue shape changes, depending on boundary conditions applied on the tissue. Our findings have consequences for the understanding of tissue morphogenesis during development.
We present a hydrodynamic theory to describe shear flows in developing epithelial tissues. We introduce hydrodynamic fields corresponding to state properties of constituent cells as well as a contribution to overall tissue shear flow due to rearrangements in cell network topology. We then construct a generic linear constitutive equation for the shear rate due to topological rearrangements and we investigate a novel rheological behaviour resulting from memory effects in the tissue. We identify two distinct active cellular processes: generation of active stress in the tissue, and actively driven topological rearrangements. We find that these two active processes can produce distinct cellular and tissue shape changes, depending on boundary conditions applied on the tissue. Our findings have consequences for the understanding of tissue morphogenesis during development.
[ { "type": "R", "before": "propose", "after": "present", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "A", "before": null, "after": "generic linear", "start_char_pos": 302, "end_char_pos": 302 }, { "type": "R", "before": ". We identify", "after": "and we investigate", "start_char_pos": 378, "end_char_pos": 391 }, { "type": "R", "before": "show that anisotropic deformation of tissue and cells can arise from", "after": "identify", "start_char_pos": 470, "end_char_pos": 538 }, { "type": "R", "before": "cellular rearrangements. These", "after": "topological rearrangements. We find that these", "start_char_pos": 642, "end_char_pos": 672 }, { "type": "R", "before": "result in", "after": "can produce", "start_char_pos": 694, "end_char_pos": 703 } ]
[ 0, 90, 281, 379, 466, 666, 803 ]
1607.03430
1
The financial crisis showed the importance of measuring, allocating and regulating systemic risk. Recently, the systemic risk measures that can be decomposed into an aggregation function and a scalar measure of risk, received a lot of attention. In this framework, capital allocations are added after aggregation and can represent bailout costs. More recently, a framework has been introduced, where institutions are supplied with capital allocations before aggregation. This yields an interpretation that is particularly useful for regulatory purposes. In each framework, the set of all feasible capital allocations leads to a multivariate risk measure. In this paper, we present dual representations for scalar systemic risk measures as well as for the corresponding multivariate risk measures concerning capital allocations. Our results cover both frameworks: aggregating after allocating and allocating after aggregation. Economic interpretations of the obtained results are provided. It turns out that the representations in both frameworks are closely related .
The financial crisis showed the importance of measuring, allocating and regulating systemic risk. Recently, the systemic risk measures that can be decomposed into an aggregation function and a scalar measure of risk, received a lot of attention. In this framework, capital allocations are added after aggregation and can represent bailout costs. More recently, a framework has been introduced, where institutions are supplied with capital allocations before aggregation. This yields an interpretation that is particularly useful for regulatory purposes. In each framework, the set of all feasible capital allocations leads to a multivariate risk measure. In this paper, we present dual representations for scalar systemic risk measures as well as for the corresponding multivariate risk measures concerning capital allocations. Our results cover both frameworks: aggregating after allocating and allocating after aggregation. As examples, we consider the aggregation mechanisms of the Eisenberg-Noe model as well as those of the resource allocation and network flow models. Finally, we illustrate how the dual representations developed here can be useful for computational purposes .
[ { "type": "R", "before": "Economic interpretations of the obtained results are provided. It turns out that the representations in both frameworks are closely related", "after": "As examples, we consider the aggregation mechanisms of the Eisenberg-Noe model as well as those of the resource allocation and network flow models. Finally, we illustrate how the dual representations developed here can be useful for computational purposes", "start_char_pos": 926, "end_char_pos": 1065 } ]
[ 0, 97, 245, 345, 470, 553, 654, 827, 925, 988 ]
1607.03957
1
The simultaneous expression of the hunchback gene in the multiple nuclei of the developing fly embryo gives us a unique opportunity to study how transcription is regulated in organisms. A recently developed MS2-MCP technique for imaging transcription in living Drosophila embryos allows us to quantify the dynamics of the developmental transcription process. The initial measurement of the morphogens by the hunchback promoter takes place during very short cell cycles, not only giving each nucleus little time for a precise readout, but also resulting in short time traces . Additionally, the relationship between the measured signal and the promoter state depends on the molecular design of the reporting probe. We develop an analysis approach based on tailor made autocorrelation functions that overcomes the short trace problems and quantifies the dynamics of transcription initiation. Based on life imaging data, we identify signatures of bursty transcription initiation from the hunchback promoter. We show that the precision of the expression of the hunchback gene to measure its position along the anterior-posterior axis is low both at the boundary and in the anterior even at cycle 13, suggesting additional post-translational averaging mechanisms to provide the precision observed in fixed material .
The simultaneous expression of the hunchback gene in the numerous nuclei of the developing fly embryo gives us a unique opportunity to study how transcription is regulated in organisms. A recently developed MS2-MCP technique for imaging nascent messenger RNA in living Drosophila embryos allows us to quantify the dynamics of the developmental transcription process. The initial measurement of the morphogens by the hunchback promoter takes place during very short cell cycles, not only giving each nucleus little time for a precise readout, but also resulting in short time traces of transcription . Additionally, the relationship between the measured signal and the promoter state depends on the molecular design of the reporting probe. We develop an analysis approach based on tailor made autocorrelation functions that overcomes the short trace problems and quantifies the dynamics of transcription initiation. Based on live imaging data, we identify signatures of bursty transcription initiation from the hunchback promoter. We show that the precision of the expression of the hunchback gene to measure its position along the anterior-posterior axis is low both at the boundary and in the anterior even at cycle 13, suggesting additional post-transcriptional averaging mechanisms to provide the precision observed in fixed embryos .
[ { "type": "R", "before": "multiple", "after": "numerous", "start_char_pos": 57, "end_char_pos": 65 }, { "type": "R", "before": "transcription", "after": "nascent messenger RNA", "start_char_pos": 237, "end_char_pos": 250 }, { "type": "A", "before": null, "after": "of transcription", "start_char_pos": 574, "end_char_pos": 574 }, { "type": "R", "before": "life", "after": "live", "start_char_pos": 900, "end_char_pos": 904 }, { "type": "R", "before": "post-translational", "after": "post-transcriptional", "start_char_pos": 1219, "end_char_pos": 1237 }, { "type": "R", "before": "material", "after": "embryos", "start_char_pos": 1302, "end_char_pos": 1310 } ]
[ 0, 185, 358, 576, 714, 890, 1005 ]
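For the record above, the basic object is the autocorrelation of short fluorescence traces pooled across nuclei. A generic estimator is sketched below; it is only the starting point, since the tailor-made functions of the paper additionally correct for the reporter-probe design and the finite trace length.

```python
import numpy as np

def mean_autocorrelation(traces):
    # Average connected autocorrelation over many short MS2 traces
    # (one row per nucleus, columns = time points), normalised at lag 0.
    X = np.asarray(traces, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)        # connected correlation
    n = X.shape[1]
    ac = np.zeros(n)
    for lag in range(n):
        ac[lag] = (X[:, : n - lag] * X[:, lag:]).mean()
    return ac / ac[0]

# Usage on synthetic traces: 50 nuclei, 30 time points each.
traces = np.random.default_rng(5).poisson(10, size=(50, 30))
print(mean_autocorrelation(traces)[:5])
```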
1607.04153
1
Consider the problem of a government that wants to control its debt-to-GDP (gross domestic product) ratio , while taking into consideration the evolution of the inflation rate of the country. The uncontrolled inflation rate follows an Ornstein-Uhlenbeck dynamics and affects the growth rate of the debt ratio. The level of the latter can be reduced by the government through fiscal interventions. The government aims at choosing a debt reduction policy which minimises the total expected cost of having debt, plus the total expected cost of interventions on debt ratio. We model such problem as a two-dimensional singular stochastic control problem over an infinite time-horizon. We show that it is optimal for the government to adopt a policy that keeps the debt-to-GDP ratio under an inflation-dependent ceiling. This curve is the free-boundary of an associated fully two-dimensional optimal stopping problem, and it is shown to be the unique solution of a nonlinear integral equation .
Consider the problem of a government that wants to reduce the debt-to-GDP (gross domestic product) ratio of a country. The government aims at choosing a debt reduction policy which minimises the total expected cost of having debt, plus the total expected cost of interventions on the debt ratio. We model this problem as a singular stochastic control problem over an infinite time-horizon. In a general not necessarily Markovian framework, we first show by probabilistic arguments that the optimal debt reduction policy can be expressed in terms of the optimal stopping rule of an auxiliary optimal stopping problem. We then exploit such link to characterise the optimal control in a two-dimensional Markovian setting in which the state variables are the level of the debt-to-GDP ratio and the current inflation rate of the country. The latter follows uncontrolled Ornstein-Uhlenbeck dynamics and affects the growth rate of the debt ratio. We show that it is optimal for the government to adopt a policy that keeps the debt-to-GDP ratio under an inflation-dependent ceiling. This curve is given in terms of the solution of a nonlinear integral equation arising in the study of a fully two-dimensional optimal stopping problem .
[ { "type": "R", "before": "control its", "after": "reduce the", "start_char_pos": 51, "end_char_pos": 62 }, { "type": "R", "before": ", while taking into consideration the evolution of the inflation rate of the", "after": "of a", "start_char_pos": 106, "end_char_pos": 182 }, { "type": "R", "before": "uncontrolled inflation rate follows an Ornstein-Uhlenbeck dynamics and affects the growth rate of the debt ratio. The level of the latter can be reduced by the government through fiscal interventions. The government", "after": "government", "start_char_pos": 196, "end_char_pos": 411 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 558, "end_char_pos": 558 }, { "type": "R", "before": "such", "after": "this", "start_char_pos": 580, "end_char_pos": 584 }, { "type": "D", "before": "two-dimensional", "after": null, "start_char_pos": 598, "end_char_pos": 613 }, { "type": "R", "before": "We", "after": "In a general not necessarily Markovian framework, we first show by probabilistic arguments that the optimal debt reduction policy can be expressed in terms of the optimal stopping rule of an auxiliary optimal stopping problem. We then exploit such link to characterise the optimal control in a two-dimensional Markovian setting in which the state variables are the level of the debt-to-GDP ratio and the current inflation rate of the country. The latter follows uncontrolled Ornstein-Uhlenbeck dynamics and affects the growth rate of the debt ratio. We", "start_char_pos": 681, "end_char_pos": 683 }, { "type": "A", "before": null, "after": "given in terms of", "start_char_pos": 830, "end_char_pos": 830 }, { "type": "D", "before": "free-boundary of an associated fully two-dimensional optimal stopping problem, and it is shown to be the unique", "after": null, "start_char_pos": 835, "end_char_pos": 946 }, { "type": "A", "before": null, "after": "arising in the study of a fully two-dimensional optimal stopping problem", "start_char_pos": 989, "end_char_pos": 989 } ]
[ 0, 191, 309, 396, 570, 680, 815 ]
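A barrier policy of the type characterised in the record above acts by Skorokhod reflection: the government intervenes minimally, pushing the debt ratio back whenever it would cross the inflation-dependent ceiling. The simulation sketch below assumes an illustrative linear ceiling and growth rate; the true ceiling solves the nonlinear integral equation mentioned in the record.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy dynamics: OU inflation and a debt growth rate linear in it.
theta, mu, sigma = 1.0, 0.02, 0.01   # OU: d pi = theta (mu - pi) dt + sigma dW
g = lambda pi: 0.01 + 2.0 * pi       # debt-ratio growth rate (stand-in)
b = lambda pi: 0.9 + 1.5 * pi        # illustrative ceiling, not the solved boundary

dt, n = 1e-3, 10_000
pi, x, control = 0.02, 1.0, 0.0
for _ in range(n):
    pi += theta * (mu - pi) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    x *= np.exp(g(pi) * dt)          # uncontrolled growth of the debt ratio
    push = max(x - b(pi), 0.0)       # minimal intervention (Skorokhod reflection)
    x -= push
    control += push                  # cumulative debt reduction exerted

print(x, control)
```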
1607.04214
1
This paper is devoted to obtaining a wellposedness result for multidimensional BSDEs with possibly unbounded random time horizon and driven by a general martingale in a filtration only assumed to satisfy the usual hypotheses, which in particular may be stochastically discontinuous. We show that for stochastic Lipschitz generators these equations admit a unique solution in appropriately weighted spaces. Unlike the related results in the literature, we do not have to impose any smallness assumption on the size of the jumps of the predictable bracket of the driving martingale or on the Lipschitz constant of the generator .
This paper is devoted to obtaining a wellposedness result for multidimensional BSDEs with possibly unbounded random time horizon and driven by a general martingale in a filtration only assumed to satisfy the usual hypotheses, i.e. the filtration may be stochastically discontinuous. We show that for stochastic Lipschitz generators and unbounded, possibly infinite, time horizon, these equations admit a unique solution in appropriately weighted spaces. Our result allows in particular to obtain a wellposedness result for BSDEs driven by discrete-time approximations of general martingales .
[ { "type": "R", "before": "which in particular", "after": "i.e. the filtration", "start_char_pos": 226, "end_char_pos": 245 }, { "type": "A", "before": null, "after": "and unbounded, possibly infinite, time horizon,", "start_char_pos": 332, "end_char_pos": 332 }, { "type": "R", "before": "Unlike the related results in the literature, we do not have to impose any smallness assumption on the size of the jumps of the predictable bracket of the driving martingale or on the Lipschitz constant of the generator", "after": "Our result allows in particular to obtain a wellposedness result for BSDEs driven by discrete-time approximations of general martingales", "start_char_pos": 407, "end_char_pos": 626 } ]
[ 0, 282, 406 ]
1607.04298
1
Multi-threaded programming is emerging very fast and memory-hungry programs need more resources so chip multiprocessorsare widely used . Accessing L1 caches beside the cores are the fastest after registers but the size of private caches cannot increase because of design, cost and technology issues . Then split I-cache and D-cache are used with shared LLC (last level cache). For a unified shared LLC, bus interface is not scalable, and it seems that distributed shared LLC (DSLLC) is a better choice , so most of papers assume a distributed shared LLC beside each core in on-chip network. Many works assume that DSLLCs are placed in all cores; however we show that this design ignores the effect of traffic congestion in the on-chip network. In fact the problem is finding optimal placement of cores, DSLLCs and even memory controllers to minimize the expected latency based on traffic load in a mesh on-chip network with fixed number of cores and total cache capacity. We try to do some analytical modeling to derive the intended cost function and then optimize it for minimum mean delay . This work is supposed to be verified using some traffic patterns that are run on CSIM simulator.
Parallel programming is emerging fast and intensive applications need more resources , so there is a huge demand for on-chip multiprocessors . Accessing L1 caches beside the cores are the fastest after registers but the size of private caches cannot increase because of design, cost and technology limits . Then split I-cache and D-cache are used with shared LLC (last level cache). For a unified shared LLC, bus interface is not scalable, and it seems that distributed shared LLC (DSLLC) is a better choice . Most of papers assume a distributed shared LLC beside each core in on-chip network. Many works assume that DSLLCs are placed in all cores; however , we will show that this design ignores the effect of traffic congestion in on-chip network. In fact , our work focuses on optimal placement of cores, DSLLCs and even memory controllers to minimize the expected latency based on traffic load in a mesh on-chip network with fixed number of cores and total cache capacity. We try to do some analytical modeling deriving intended cost function and then optimize the mean delay of the on-chip network communication . This work is supposed to be verified using some traffic patterns that are run on CSIM simulator.
[ { "type": "R", "before": "Multi-threaded", "after": "Parallel", "start_char_pos": 0, "end_char_pos": 14 }, { "type": "R", "before": "very fast and memory-hungry programs", "after": "fast and intensive applications", "start_char_pos": 39, "end_char_pos": 75 }, { "type": "R", "before": "so chip multiprocessorsare widely used", "after": ", so there is a huge demand for on-chip multiprocessors", "start_char_pos": 96, "end_char_pos": 134 }, { "type": "R", "before": "issues", "after": "limits", "start_char_pos": 292, "end_char_pos": 298 }, { "type": "R", "before": ", so most", "after": ". Most", "start_char_pos": 502, "end_char_pos": 511 }, { "type": "R", "before": "we", "after": ", we will", "start_char_pos": 654, "end_char_pos": 656 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 723, "end_char_pos": 726 }, { "type": "R", "before": "the problem is finding", "after": ", our work focuses on", "start_char_pos": 752, "end_char_pos": 774 }, { "type": "R", "before": "to derive the", "after": "deriving", "start_char_pos": 1010, "end_char_pos": 1023 }, { "type": "R", "before": "it for minimum mean delay", "after": "the mean delay of the on-chip network communication", "start_char_pos": 1065, "end_char_pos": 1090 } ]
[ 0, 136, 300, 376, 590, 645, 743, 971, 1092 ]
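The placement problem described in this record can be made concrete with a toy search. The sketch below (Python) brute-forces the assignment of four cache slices to the tiles of a 2x2 mesh and scores each assignment by traffic-weighted Manhattan hop count. The mesh size and traffic matrix are invented for illustration; a faithful model would replace the linear hop cost with the congestion-dependent delay the record's analytical model derives.

```python
import itertools

MESH = 2  # 2x2 mesh: tiles 0..3 with coordinates (x, y)
coord = {t: (t % MESH, t // MESH) for t in range(MESH * MESH)}

def hops(a, b):
    """XY-routing distance between two tiles."""
    (x1, y1), (x2, y2) = coord[a], coord[b]
    return abs(x1 - x2) + abs(y1 - y2)

# traffic[c][s]: request rate from core c to cache slice s (hypothetical numbers)
traffic = [[4, 1, 1, 2],
           [1, 3, 2, 1],
           [2, 2, 5, 1],
           [1, 1, 1, 4]]

cores = list(range(4))  # core c sits on tile c
best = None
for placement in itertools.permutations(range(4)):  # slice s -> tile placement[s]
    cost = sum(traffic[c][s] * hops(c, placement[s])
               for c in cores for s in range(4))
    if best is None or cost < best[0]:
        best = (cost, placement)

print("min traffic-weighted hop cost:", best[0], "placement:", best[1])
```

Brute force is only viable on tiny meshes; the point of the record's analytical cost function is precisely to avoid this enumeration at realistic sizes.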
1607.05831
1
We introduce and show the existence of a continuous time-varying parameter extension model to the self-exciting point process . The kernel shape is assumed to be exponentially decreasing . The quantity of interest is defined as the integrated parameter over time T^{-1}\int_0^T\theta_t^*dt, where \theta_t^* is the time-varying parameter . To estimate it na\"{i}vely, we chop the data into several blocks, compute the maximum likelihood estimator (MLE) on each block, and take the average of the local estimates. Correspondingly, we give conditions on the parameter process and the block length under which we can establish the local central limit theorem, and the boundedness of moments of order 2\kappa of the local estimators, where \kappa > 1. Under those assumptions, the global estimator asymptotic bias explodes asymptotically . As a consequence, we provide a non-na\"{i}ve estimator , which is constructed as the na\"{i}ve one when applying a first-order bias reduction to the local MLE. We derive such first-order bias formula for the self-exciting process, and provide further conditions under which the non-na\"{i .
We introduce and show the existence of a Hawkes self-exciting point process with exponentially-decreasing kernel and where parameters are time-varying . The quantity of interest is defined as the integrated parameter T^{-1}\int_0^T\theta_t^*dt, where \theta_t^* is the time-varying parameter , and we consider the high-frequency asymptotics . To estimate it na\"{i}vely, we chop the data into several blocks, compute the maximum likelihood estimator (MLE) on each block, and take the average of the local estimates. The asymptotic bias explodes asymptotically , thus we provide a non-na\"{i}ve estimator which is constructed as the na\"{i}ve one when applying a first-order bias reduction to the local MLE. We show the associated central limit theorem. Monte Carlo simulations show the importance of the bias correction and that the method performs well in finite sample, whereas the empirical study discusses the implementation in practice and documents the stochastic behavior of the parameters .
[ { "type": "R", "before": "continuous time-varying parameter extension model to the", "after": "Hawkes", "start_char_pos": 41, "end_char_pos": 97 }, { "type": "R", "before": ". The kernel shape is assumed to be exponentially decreasing", "after": "with exponentially-decreasing kernel and where parameters are time-varying", "start_char_pos": 126, "end_char_pos": 186 }, { "type": "D", "before": "over time", "after": null, "start_char_pos": 253, "end_char_pos": 262 }, { "type": "A", "before": null, "after": ", and we consider the high-frequency asymptotics", "start_char_pos": 338, "end_char_pos": 338 }, { "type": "R", "before": "Correspondingly, we give conditions on the parameter process and the block length under which we can establish the local central limit theorem, and the boundedness of moments of order 2\\kappa of the local estimators, where \\kappa > 1. Under those assumptions, the global estimator", "after": "The", "start_char_pos": 514, "end_char_pos": 794 }, { "type": "R", "before": ". As a consequence,", "after": ", thus", "start_char_pos": 835, "end_char_pos": 854 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 892, "end_char_pos": 893 }, { "type": "R", "before": "derive such first-order bias formula for the self-exciting process, and provide further conditions under which the non-na\\\"{i", "after": "show the associated central limit theorem. Monte Carlo simulations show the importance of the bias correction and that the method performs well in finite sample, whereas the empirical study discusses the implementation in practice and documents the stochastic behavior of the parameters", "start_char_pos": 1000, "end_char_pos": 1125 } ]
[ 0, 127, 188, 340, 513, 836, 996 ]
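To make the block-averaging scheme of this record concrete, the sketch below implements a naive version for an exponential-kernel Hawkes process: the exact log-likelihood on each block via the usual O(n) recursion, a local MLE per block, and a plain average of the local estimates. The parametrization (mu, alpha, beta), the starting point and the bounds are illustrative assumptions; the first-order bias correction the record introduces is deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_nll(params, t, T):
    """Negative log-likelihood on [0, T] of a Hawkes process with intensity
    lam(s) = mu + alpha * beta * sum_{t_i < s} exp(-beta * (s - t_i))."""
    mu, alpha, beta = params
    A = np.zeros(len(t))  # A[i] = sum_{j < i} exp(-beta * (t_i - t_j))
    for i in range(1, len(t)):
        A[i] = np.exp(-beta * (t[i] - t[i - 1])) * (A[i - 1] + 1.0)
    loglik = np.sum(np.log(mu + alpha * beta * A))
    loglik -= mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - t)))
    return -loglik

def block_mle(times, T, n_blocks):
    """Chop [0, T] into blocks, fit the MLE on each, average the estimates."""
    edges = np.linspace(0.0, T, n_blocks + 1)
    est = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        tb = times[(times >= lo) & (times < hi)] - lo
        fit = minimize(hawkes_nll, x0=[1.0, 0.5, 2.0], args=(tb, hi - lo),
                       bounds=[(1e-6, None), (1e-6, 0.999), (1e-6, None)],
                       method="L-BFGS-B")
        est.append(fit.x)
    return np.mean(est, axis=0)  # naive estimate of the integrated parameter
```

As the record notes, this naive average is asymptotically biased; each local MLE would need a first-order bias correction before averaging.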
1607.05831
2
We introduce and show the existence of a Hawkes self-exciting point process with exponentially-decreasing kernel and where parameters are time-varying. The quantity of interest is defined as the integrated parameter T^{-1}\int_0^T\theta_t^*dt, where \theta_t^* is the time-varying parameter, and we consider the high-frequency asymptotics. To estimate it na\"{i , we chop the data into several blocks, compute the maximum likelihood estimator (MLE) on each block, and take the average of the local estimates. The asymptotic bias explodes asymptotically, thus we provide a non-na\"{i estimator which is constructed as the na\"{i one when applying a first-order bias reduction to the local MLE. We show the associated central limit theorem. Monte Carlo simulations show the importance of the bias correction and that the method performs well in finite sample, whereas the empirical study discusses the implementation in practice and documents the stochastic behavior of the parameters.
We introduce and show the existence of a Hawkes self-exciting point process with exponentially-decreasing kernel and where parameters are time-varying. The quantity of interest is defined as the integrated parameter T^{-1}\int_0^T\theta_t^*dt, where \theta_t^* is the time-varying parameter, and we consider the high-frequency asymptotics. To estimate it na\"ively , we chop the data into several blocks, compute the maximum likelihood estimator (MLE) on each block, and take the average of the local estimates. The asymptotic bias explodes asymptotically, thus we provide a non-na\"ive estimator which is constructed as the na\"ive one when applying a first-order bias reduction to the local MLE. We show the associated central limit theorem. Monte Carlo simulations show the importance of the bias correction and that the method performs well in finite sample, whereas the empirical study discusses the implementation in practice and documents the stochastic behavior of the parameters.
[ { "type": "R", "before": "na\\\"{i", "after": "na\\\"ively", "start_char_pos": 355, "end_char_pos": 361 }, { "type": "R", "before": "non-na\\\"{i", "after": "non-na\\\"ive", "start_char_pos": 572, "end_char_pos": 582 }, { "type": "R", "before": "na\\\"{i", "after": "na\\\"ive", "start_char_pos": 621, "end_char_pos": 627 } ]
[ 0, 151, 339, 508, 692, 738 ]
1607.06644
1
We extend the monotone stability approach for backward stochastic differential equations (BSDEs) that are jointly driven by a Brownian motion and a random measure , which can be of infinite activity and time-inhomogeneous with non-deterministic compensator. The BSDE generator function can be non-convex and needs not to satisfy classical global Lipschitz conditions in the jump integrand. We contribute concrete criteria, that are easy to verify, and extended results for comparison and for existence and uniqueness of bounded solutions to BSDEs with jumps . The scope of results, applicability of assumptions and differences to related results by some alternative approaches are demonstrated by several examples for control problems from finance .
We show a concise extension of the monotone stability approach to backward stochastic differential equations (BSDEs) that are jointly driven by a Brownian motion and a random measure for jumps, which could be of infinite activity with a non-deterministic and time inhomogeneous compensator. The BSDE generator function can be non convex and needs not to satisfy global Lipschitz conditions in the jump integrand. We contribute concrete criteria, that are easy to verify, for results on existence and uniqueness of bounded solutions to BSDEs with jumps , and on comparison and a priori L^{\infty .
[ { "type": "R", "before": "extend", "after": "show a concise extension of", "start_char_pos": 3, "end_char_pos": 9 }, { "type": "R", "before": "for", "after": "to", "start_char_pos": 42, "end_char_pos": 45 }, { "type": "R", "before": ", which can", "after": "for jumps, which could", "start_char_pos": 163, "end_char_pos": 174 }, { "type": "R", "before": "and time-inhomogeneous with", "after": "with a", "start_char_pos": 199, "end_char_pos": 226 }, { "type": "A", "before": null, "after": "and time inhomogeneous", "start_char_pos": 245, "end_char_pos": 245 }, { "type": "R", "before": "non-convex", "after": "non convex", "start_char_pos": 294, "end_char_pos": 304 }, { "type": "D", "before": "classical", "after": null, "start_char_pos": 330, "end_char_pos": 339 }, { "type": "R", "before": "and extended results for comparison and for", "after": "for results on", "start_char_pos": 449, "end_char_pos": 492 }, { "type": "R", "before": ". The scope of results, applicability of assumptions and differences to related results by some alternative approaches are demonstrated by several examples for control problems from finance", "after": ", and on comparison and a priori L^{\\infty", "start_char_pos": 559, "end_char_pos": 748 } ]
[ 0, 258, 390, 560 ]
1607.06847
1
We study the problem of decentralized Bayesian learning in a dynamical system involving strategic agents with asymmetric information. In a series of seminal papers in the literature, this problem has been studied under a simplifying model where selfish players appear sequentially and act once in the game, based on private noisy observations of the system state and public observation of past players' actions. It is shown that there exist information cascades where users discard their private information and mimic the action of their predecessor. In this paper, we provide a framework for studying Bayesian learning dynamics in a more general setting than the one described above. In particular, our model incorporates cases where players participate for the whole duration of the game, and cases where an endogenous process selects which subset of players will act at each time instance. The proposed methodology hinges on a sequential decomposition for finding perfect Bayesian equilibria (PBE) of a general class of dynamic games with asymmetric information, where user-specific states evolve as conditionally independent Markov process and users make independent noisy observations of their states. Using our methodology, we study a specific dynamic learning model where players make decisions about investing in the team, based on their estimates of everyone's types. We characterize a set of informational cascades for this problem where learning stops for the team as a whole .
We study the problem of Bayesian learning in a dynamical system involving strategic agents with asymmetric information. In a series of seminal papers in the literature, this problem has been investigated under a simplifying model where myopically selfish players appear sequentially and act once in the game, based on private noisy observations of the system state and public observation of past players' actions. It has been shown that there exist information cascades where users discard their private information and mimic the action of their predecessor. In this paper, we provide a framework for studying Bayesian learning dynamics in a more general setting than the one described above. In particular, our model incorporates cases where players are non-myopic and strategically participate for the whole duration of the game, and cases where an endogenous process selects which subset of players will act at each time instance. The proposed framework hinges on a sequential decomposition methodology for finding structured perfect Bayesian equilibria (PBE) of a general class of dynamic games with asymmetric information, where user-specific states evolve as conditionally independent Markov processes and users make independent noisy observations of their states. Using this methodology, we study a specific dynamic learning model where players make decisions about public investment based on their estimates of everyone's types. We characterize a set of informational cascades for this problem where learning stops for the team as a whole . We show that in such cascades, all players' estimates of other players' types freeze even though each individual player asymptotically learns its own true type .
[ { "type": "D", "before": "decentralized", "after": null, "start_char_pos": 24, "end_char_pos": 37 }, { "type": "R", "before": "studied", "after": "investigated", "start_char_pos": 205, "end_char_pos": 212 }, { "type": "A", "before": null, "after": "myopically", "start_char_pos": 245, "end_char_pos": 245 }, { "type": "R", "before": "is", "after": "has been", "start_char_pos": 416, "end_char_pos": 418 }, { "type": "A", "before": null, "after": "are non-myopic and strategically", "start_char_pos": 744, "end_char_pos": 744 }, { "type": "R", "before": "methodology", "after": "framework", "start_char_pos": 908, "end_char_pos": 919 }, { "type": "R", "before": "for finding", "after": "methodology for finding structured", "start_char_pos": 957, "end_char_pos": 968 }, { "type": "R", "before": "process", "after": "processes", "start_char_pos": 1138, "end_char_pos": 1145 }, { "type": "R", "before": "our", "after": "this", "start_char_pos": 1215, "end_char_pos": 1218 }, { "type": "R", "before": "investing in the team,", "after": "public investment", "start_char_pos": 1310, "end_char_pos": 1332 }, { "type": "A", "before": null, "after": ". We show that in such cascades, all players' estimates of other players' types freeze even though each individual player asymptotically learns its own true type", "start_char_pos": 1489, "end_char_pos": 1489 } ]
[ 0, 133, 412, 551, 685, 894, 1208, 1378 ]
1607.07108
1
In this paper, we are concerned with the valuation of Catastrophic Mortality Bonds and, in particular, we examine the case of the Swiss Re Mortality Bond 2003 as a primary example of this class of assets. This bond was the first Catastrophic Mortality Bond to be launched in the market and encapsulates the behaviour of a well-defined mortality index to generate payoffs for bondholders. Pricing this type of bonds is a challenging task and no closed form solution exists in the literature. In our approach, we adapt the payoff of such a bond in terms of the payoff of an Asian put option and present a new approach to derive model-independent bounds exploiting comonotonic theory as illustrated in prime1 and Simon for the pricing of Asian options. We carry out Monte Carlo simulations to estimate the bond price and illustrate the strength of the bounds.
In this paper, we are concerned with the valuation of Catastrophic Mortality Bonds and, in particular, we examine the case of the Swiss Re Mortality Bond 2003 as a primary example of this class of assets. This bond was the first Catastrophic Mortality Bond to be launched in the market and encapsulates the behaviour of a well-defined mortality index to generate payoffs for bondholders. Pricing these type of bonds is a challenging task and no closed form solution exists in the literature. In our approach, we express the payoff of such a bond in terms of the payoff of an Asian put option and present a new approach to derive model-independent bounds exploiting comonotonic theory as illustrated in prime1, 2 and Simon for the pricing of Asian options. We carry out Monte Carlo simulations to estimate the bond price and illustrate the quality of the bounds.
[ { "type": "R", "before": "this", "after": "these", "start_char_pos": 396, "end_char_pos": 400 }, { "type": "R", "before": "adapt", "after": "express", "start_char_pos": 511, "end_char_pos": 516 }, { "type": "A", "before": null, "after": ", \\mbox{%DIFAUXCMD 2", "start_char_pos": 706, "end_char_pos": 706 }, { "type": "R", "before": "strength", "after": "quality", "start_char_pos": 881, "end_char_pos": 889 } ]
[ 0, 204, 387, 490, 797 ]
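As a stand-in for the bond payoff discussed above, the sketch below prices an arithmetic-average Asian put by plain Monte Carlo on a geometric Brownian motion. The dynamics, strike and parameters are hypothetical (the bond's mortality index is not a GBM); the point is only to show the simulation step that model-independent comonotonic bounds would be compared against.

```python
import numpy as np

rng = np.random.default_rng(0)
s0, k, r, sigma, T = 100.0, 95.0, 0.02, 0.2, 3.0  # hypothetical parameters
n_steps, n_paths = 36, 100_000

dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.cumsum(log_increments, axis=1))
avg = paths.mean(axis=1)                       # arithmetic average of the index
payoff = np.exp(-r * T) * np.maximum(k - avg, 0.0)

price = payoff.mean()
stderr = payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"MC price: {price:.4f} +/- {1.96 * stderr:.4f}")
```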
1607.07197
1
We investigate the supports of extremal martingale measures with pre-specified marginals in a two-period setting. First, we establish in full generality the equivalence between the extremality of a given measure Q and the denseness in L^1(Q) of a suitable linear subspace, which can be seen as the set of all semi-static trading strategies. Moreover, when the supports of both marginals are countable, we focus on the slightly stronger notion of weak exact predictable representation property (henceforth, WEP) and provide two combinatorial sufficient conditions, called "2-link property" and "full erasability", on how the points in the supports are linked to each other for granting extremality. Finally, when the support of the first marginal is a finite set, we give a necessary and sufficient condition for the WEP to hold in terms of the new concept of 2-net .
We investigate the supports of extremal martingale measures with pre-specified marginals in a two-period setting. First, we establish in full generality the equivalence between the extremality of a given measure Q and the denseness in L^1(Q) of a suitable linear subspace, which can be seen as the set of all semi-static trading strategies. Moreover, when the supports of both marginals are countable, we focus on the slightly stronger notion of weak exact predictable representation property (henceforth, WEP) and provide two combinatorial sufficient conditions, called "2-link property" and "full erasability", on how the points in the supports are linked to each other for granting extremality. When the support of the first marginal is a finite set, we give a necessary and sufficient condition for the WEP to hold in terms of the new concept of 2-net . Finally, we study the relation between cycles and extremality .
[ { "type": "R", "before": "Finally, when", "after": "When", "start_char_pos": 698, "end_char_pos": 711 }, { "type": "A", "before": null, "after": ". Finally, we study the relation between cycles and extremality", "start_char_pos": 865, "end_char_pos": 865 } ]
[ 0, 113, 340, 697 ]
1607.07197
2
We investigate the supports of extremal martingale measures with pre-specified marginals in a two-period setting. First, we establish in full generality the equivalence between the extremality of a given measure Q and the denseness in L^1(Q) of a suitable linear subspace, which can be seen as the set of all semi-static trading strategies. Moreover, when the supports of both marginals are countable, we focus on the slightly stronger notion of weak exact predictable representation property (henceforth, WEP) and provide two combinatorial sufficient conditions, called "2-link property" and "full erasability", on how the points in the supports are linked to each other for granting extremality. When the support of the first marginal is a finite set, we give a necessary and sufficient condition for the WEP to hold in terms of the new concept of 2-net . Finally, we study the relation between cycles and extremality.
We investigate the supports of extremal martingale measures with pre-specified marginals in a two-period setting. First, we establish in full generality the equivalence between the extremality of a given measure Q and the denseness in L^1(Q) of a suitable linear subspace, which can be seen in a financial context as the set of all semi-static trading strategies. Moreover, when the supports of both marginals are countable, we focus on the slightly stronger notion of weak exact predictable representation property (henceforth, WEP) and provide two combinatorial sufficient conditions, called "2-link property" and "full erasability", on how the points in the supports are linked to each other for granting extremality. When the support of the first marginal is a finite set, we give a necessary and sufficient condition for the WEP to hold in terms of the new concepts of 2-net and deadlock . Finally, we study the relation between cycles and extremality.
[ { "type": "A", "before": null, "after": "in a financial context", "start_char_pos": 291, "end_char_pos": 291 }, { "type": "R", "before": "concept", "after": "concepts", "start_char_pos": 840, "end_char_pos": 847 }, { "type": "A", "before": null, "after": "and deadlock", "start_char_pos": 857, "end_char_pos": 857 } ]
[ 0, 113, 341, 698 ]
1607.07738
1
Theoretical results regarding two-dimensional ordinary-differential equations (ODEs) with second-degree polynomial right-hand sides are summarized, with a focus on multistability, limit cycles and limit cycle bifurcations . The results are then used for construction of two reaction systems, which are at the deterministic level described by two-dimensional third-degree kinetic ODEs. The first system displays a homoclinic bifurcation, and a coexistence of a stable critical point and a stable limit cycle in the phase plane. The second system displays a multiple limit cycle bifurcation, and a coexistence of two stable limit cycles. The deterministic solutions (obtained by solving the kinetic ODEs) and stochastic solutions ( obtained by generating noisy time-series using the Gillespie algorithm ) of the constructed systems are compared, and the observed differences highlighted. The constructed systems are proposed as test problems for statistical methods, which are designed to detect and classify properties of given noisy time-series arising from biological applications.
Theoretical results regarding two-dimensional ordinary-differential equations (ODEs) with second-degree polynomial right-hand sides are summarized, with an emphasis on limit cycles, limit cycle bifurcations and multistability . The results are then used for construction of two reaction systems, which are at the deterministic level described by two-dimensional third-degree kinetic ODEs. The first system displays a homoclinic bifurcation, and a coexistence of a stable critical point and a stable limit cycle in the phase plane. The second system displays a multiple limit cycle bifurcation, and a coexistence of two stable limit cycles. The deterministic solutions (obtained by solving the kinetic ODEs) and stochastic solutions ( noisy time-series generating by the Gillespie algorithm , and the underlying probability distributions obtained by solving the chemical master equation (CME) ) of the constructed systems are compared, and the observed differences highlighted. The constructed systems are proposed as test problems for statistical methods, which are designed to detect and classify properties of given noisy time-series arising from biological applications.
[ { "type": "R", "before": "a focus on multistability, limit cyclesand", "after": "an emphasis on", "start_char_pos": 153, "end_char_pos": 195 }, { "type": "A", "before": null, "after": "cycles, limit", "start_char_pos": 202, "end_char_pos": 202 }, { "type": "A", "before": null, "after": "and multistability", "start_char_pos": 222, "end_char_pos": 222 }, { "type": "D", "before": "obtained by generating", "after": null, "start_char_pos": 731, "end_char_pos": 753 }, { "type": "R", "before": "using", "after": "generating by", "start_char_pos": 772, "end_char_pos": 777 }, { "type": "A", "before": null, "after": ", and the underlying probability distributions obtained by solving the chemical master equation (CME)", "start_char_pos": 802, "end_char_pos": 802 } ]
[ 0, 224, 385, 527, 636, 887 ]
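The stochastic solutions mentioned in this record come from the Gillespie algorithm. A minimal generic implementation is sketched below on a toy birth-death system; the record's two constructed reaction systems are not reproduced here, so the stoichiometries and rates are placeholders.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_max, rng):
    """Exact stochastic simulation (Gillespie) of a reaction network.
    stoich: list of state-change vectors; rates: list of propensity functions."""
    t, x = 0.0, np.array(x0, dtype=float)
    ts, xs = [t], [x.copy()]
    while True:
        a = np.array([r(x) for r in rates])
        a0 = a.sum()
        if a0 <= 0.0:
            break                                # no reaction can fire
        tau = rng.exponential(1.0 / a0)          # time to next reaction
        if t + tau > t_max:
            break
        t += tau
        j = rng.choice(len(rates), p=a / a0)     # which reaction fires
        x += stoich[j]
        ts.append(t); xs.append(x.copy())
    return np.array(ts), np.array(xs)

# toy birth-death system: 0 -> X at rate k1, X -> 0 at rate k2 * X
rng = np.random.default_rng(1)
ts, xs = gillespie([10], stoich=[np.array([1]), np.array([-1])],
                   rates=[lambda x: 5.0, lambda x: 0.5 * x[0]],
                   t_max=50.0, rng=rng)
print("events:", len(ts) - 1, "final state:", xs[-1])
```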
1607.08287
1
The goal of this paper is to URLanized flocking behavior and systemic risk in heterogeneous mean-field interacting diffusions. We illustrate in a number of case studies the effect of heterogeneity in the behavior of systemic risk in the system . We also investigate the effect of heterogeneity on the "flocking behavior" of different agents, i.e., when agents with different dynamics end up behaving the same way in path space and follow closely the mean behavior of the system. Using Laplace asymptotics, we derive an asymptotic formula for the tail of the loss distribution as the number of agents grows to infinity. This characterizes the tail of the loss distribution and the effect of the heterogeneity of the network on the tail loss probability.
The goal of this paper is to URLanized flocking behavior and systemic risk in heterogeneous mean-field interacting diffusions. We illustrate in a number of case studies the effect of heterogeneity in the behavior of systemic risk in the system , i.e., the risk that several agents default simultaneously as a result of interconnections. We also investigate the effect of heterogeneity on the "flocking behavior" of different agents, i.e., when agents with different dynamics end up following very similar paths and follow closely the mean behavior of the system. Using Laplace asymptotics, we derive an asymptotic formula for the tail of the loss distribution as the number of agents grows to infinity. This characterizes the tail of the loss distribution and the effect of the heterogeneity of the network on the tail loss probability.
[ { "type": "R", "before": ".", "after": ", i.e., the risk that several agents default simultaneously as a result of interconnections.", "start_char_pos": 244, "end_char_pos": 245 }, { "type": "R", "before": "behaving the same way in path space", "after": "following very similar paths", "start_char_pos": 391, "end_char_pos": 426 } ]
[ 0, 126, 245, 478, 618 ]
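A stylized version of the heterogeneous mean-field dynamics in this record can be simulated directly. The Euler-Maruyama sketch below uses invented coefficients, takes "default" to mean hitting a fixed barrier, and tracks the fraction of agents that default; the record's Laplace asymptotics concern what this sketch cannot show, namely the tail of the loss distribution as the number of agents grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_steps, dt = 500, 2000, 0.005
barrier = -0.7                        # an agent "defaults" on first hitting this level

a = rng.uniform(1.0, 10.0, size=n)    # heterogeneous mean-reversion ("flocking") speeds
sigma = rng.uniform(0.2, 0.5, size=n) # heterogeneous noise levels

x = np.zeros(n)
defaulted = np.zeros(n, dtype=bool)
for _ in range(n_steps):
    m = x.mean()                      # empirical mean field
    dw = rng.standard_normal(n) * np.sqrt(dt)
    x = x + a * (m - x) * dt + sigma * dw   # Euler-Maruyama step
    defaulted |= x <= barrier         # simplification: defaulted agents keep evolving

print("fraction of agents that hit the barrier:", defaulted.mean())
```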
1607.08886
1
We report the use of synthetic vesicles formed by amphiphilic block copolymers in water ( or polymersomes) to encapsulate myoglobin varying the vesicle size, and protein concentration. We show that confinement within polymersomes leads to a significant improvement in protein stability against thermal denaturation up to 95degC at neutral pH, with little or no evidence of unfolding or reduced enzymatic activity. The latter parameter actually exhibits a two-fold increase after thermal cycling when the confined protein concentration is higher than 5\% v/v . Our results suggest that nanoscopic confinement is a promising new avenue for the enhanced long-term storage of proteins. Moreover, our work has potentially important implications for the origin of life, since such compartmentalisation may well have been critical for ensuring the preservation of early functional proteins under relatively harsh conditions, thus playing a key role in the subsequent emergence of primitive life forms.
We report the use of synthetic vesicles formed by amphiphilic block copolymers in water ( known as polymersomes) for encapsulating proteins, varying the vesicle size, and protein concentration. We show that confinement within polymersomes core corresponds to a liquid-liquid phase transition with the protein/water within lumen interacting very differently than in bulk. We show this effect leads to considerable structural changes on the proteins with evidence suggesting non-alpha helical conformations. Most importantly both aspects lead to a significant improvement on protein stability against thermal denaturation up to 95degC at neutral pH, with little or no evidence of unfolding or reduced enzymatic activity. The latter parameter does indeed exhibit an increase after thermal cycling . Our results suggest that nanoscopic confinement is a promising new avenue for the enhanced long-term storage of proteins. Moreover, our investigations have potentially important implications for the origin of life, since such compartmentalization may well have been critical for ensuring the preservation of primordial functional proteins under relatively harsh conditions, thus playing a key role in the subsequent emergence of primitive life forms.
[ { "type": "R", "before": "or polymersomes) to encapsulate myoglobin", "after": "known as polymersomes) for encapsulating proteins,", "start_char_pos": 90, "end_char_pos": 131 }, { "type": "R", "before": "leads to", "after": "core corresponds to a liquid-liquid phase transition with the protein/water within lumen interacting very differently than in bulk. We show this effect leads to considerable structural changes on the proteins with evidence suggesting non-alpha helical conformations. Most importantly both aspects lead to", "start_char_pos": 230, "end_char_pos": 238 }, { "type": "R", "before": "in", "after": "on", "start_char_pos": 265, "end_char_pos": 267 }, { "type": "R", "before": "actually exhibits a two-fold", "after": "does indeed exhibit an", "start_char_pos": 435, "end_char_pos": 463 }, { "type": "D", "before": "when the confined protein concentration is higher than 5\\% v/v", "after": null, "start_char_pos": 495, "end_char_pos": 557 }, { "type": "R", "before": "work has", "after": "investigations have", "start_char_pos": 696, "end_char_pos": 704 }, { "type": "R", "before": "compartmentalisation", "after": "compartmentalization", "start_char_pos": 775, "end_char_pos": 795 }, { "type": "R", "before": "early", "after": "primordial", "start_char_pos": 857, "end_char_pos": 862 } ]
[ 0, 184, 413, 681 ]
1607.08886
2
We report the use of synthetic vesicles formed by amphiphilic block copolymers in water (known as polymersomes) for encapsulating proteins, varying the vesicle size, and protein concentration. We show that confinement within polymersomes core corresponds to a liquid-liquid phase transition with the protein/water within lumen interacting very differently than in bulk. We show this effect leads to considerable structural changes on the proteins with evidence suggesting non-alpha helical conformations. Most importantly both aspects lead to a significant improvement on protein stability against thermal denaturation up to 95degC at neutral pH, with little or no evidence of unfolding or reduced enzymatic activity. The latter parameter does indeed exhibit an increase after thermal cycling. Our results suggest that nanoscopic confinement is a promising new avenue for the enhanced long-term storage of proteins. Moreover, our investigations have potentially important implications for the origin of life, since such compartmentalization may well have been critical for ensuring the preservation of primordial functional proteins under relatively harsh conditions, thus playing a key role in the subsequent emergence of primitive life forms.
We report that protein confinement within nanoscopic vesicular compartments corresponds to a liquid-liquid phase transition with the protein/water within vesicle lumen interacting very differently than in bulk. We show this effect leads to considerable structural changes on the proteins with evidence suggesting non-alpha helical conformations. Most importantly both aspects lead to a significant improvement on protein stability against thermal denaturation up to 95degC at neutral pH, with little or no evidence of unfolding or reduced enzymatic activity. The latter parameter does indeed exhibit an increase after thermal cycling. Our results suggest that nanoscopic confinement is a promising new avenue for the enhanced long-term storage of proteins. Moreover, our investigations have potentially important implications for the origin of life, since such compartmentalization may well have been critical for ensuring the preservation of primordial functional proteins under relatively harsh conditions, thus playing a key role in the subsequent emergence of primitive life forms.
[ { "type": "R", "before": "the use of synthetic vesicles formed by amphiphilic block copolymers in water (known as polymersomes) for encapsulating proteins, varying the vesicle size, and protein concentration. We show that confinement within polymersomes core", "after": "that protein confinement within nanoscopic vesicular compartments", "start_char_pos": 10, "end_char_pos": 242 }, { "type": "A", "before": null, "after": "vesicle", "start_char_pos": 321, "end_char_pos": 321 } ]
[ 0, 192, 370, 505, 718, 794, 916 ]
1608.00535
1
Stochastic Boolean networks or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. The standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update where a random node is updated at each time step. The former gives a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. SDDS is a modeling framework that considers two propensity parameters for updating each node and uses one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and uses the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds a complexity in parameter estimation of the propensities. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution and then with the use of a genetic algorithm the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented and Matlab/Octave code to test the algorithms are available at URL
Stochastic Boolean networks , or more generally, stochastic discrete networks, are an important class of computational models for molecular interaction networks. The stochasticity stems from the updating schedule. Standard updating schedules include the synchronous update, where all the nodes are updated at the same time, and the asynchronous update where a random node is updated at each time step. The former produces a deterministic dynamics while the latter a stochastic dynamics. A more general stochastic setting considers propensity parameters for updating each node. Stochastic Discrete Dynamical Systems (SDDS) is a modeling framework that considers two propensity parameters for updating each node and uses one when the update has a positive impact on the variable, that is, when the update causes the variable to increase its value, and uses the other when the update has a negative impact, that is, when the update causes it to decrease its value. This framework offers additional features for simulations but also adds a complexity in parameter estimation of the propensities. This paper presents a method for estimating the propensity parameters for SDDS. The method is based on adding noise to the system using the Google PageRank approach to make the system ergodic and thus guaranteeing the existence of a stationary distribution . Then with the use of a genetic algorithm , the propensity parameters are estimated. Approximation techniques that make the search algorithms efficient are also presented and Matlab/Octave code to test the algorithms are available at URL
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 28, "end_char_pos": 28 }, { "type": "R", "before": "The standard", "after": "Standard", "start_char_pos": 213, "end_char_pos": 225 }, { "type": "R", "before": "gives", "after": "produces", "start_char_pos": 416, "end_char_pos": 421 }, { "type": "R", "before": "SDDS", "after": "Stochastic Discrete Dynamical Systems (SDDS)", "start_char_pos": 577, "end_char_pos": 581 }, { "type": "R", "before": "and then", "after": ". Then", "start_char_pos": 1309, "end_char_pos": 1317 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1354, "end_char_pos": 1354 } ]
[ 0, 160, 212, 404, 486, 576, 921, 1051, 1131, 1395 ]
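The ergodicity trick described in this record is easy to show in isolation: damping the transition matrix PageRank-style gives the chain a unique stationary distribution even when the original dynamics has absorbing states, and it is that stationary distribution the genetic algorithm can then match against data. The sketch below implements only the damping and power-iteration step on a toy chain; the SDDS propensity parametrization and the genetic algorithm are omitted.

```python
import numpy as np

def pagerank_stationary(P, damping=0.95, tol=1e-12, max_iter=10_000):
    """Stationary distribution of the damped chain
    P' = damping * P + (1 - damping) * (1/n) * ones(n, n),
    which is ergodic even when the row-stochastic matrix P is not."""
    n = P.shape[0]
    Pd = damping * P + (1.0 - damping) * np.ones((n, n)) / n
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ Pd                 # pi_{k+1} = pi_k P'
        if np.abs(nxt - pi).sum() < tol:
            return nxt
        pi = nxt
    return pi

# toy 3-state chain with an absorbing state (not ergodic without damping)
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
print(pagerank_stationary(P))
```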
1608.00768
1
We consider the problem of utility maximization for investors with power utility functions. Building on the earlier work Larsen et al. ( 2014 ), we prove that the value of the problem is a Frechet-differentiable function of the drift of the price process, provided that this drift lies in a suitable Banach space. We then study optimal investment problems with non-Markovian driving processes. In such models there is no hope to get a formula for the achievable maximal utility. Applying results of the first part of the paper we provide first order expansions for certain problems involving a fractional Brownian motion either in the drift or in the volatility. We also point out how asymptotic results can be derived for models with strong mean reversion.
We consider the problem of utility maximization for investors with power utility functions. Building on the earlier work Larsen et al. ( 2016 ), we prove that the value of the problem is a Frechet-differentiable function of the drift of the price process, provided that this drift lies in a suitable Banach space. We then study optimal investment problems with non-Markovian driving processes. In such models there is no hope to get a formula for the achievable maximal utility. Applying results of the first part of the paper we provide first order expansions for certain problems involving fractional Brownian motion either in the drift or in the volatility. We also point out how asymptotic results can be derived for models with strong mean reversion.
[ { "type": "R", "before": "2014", "after": "2016", "start_char_pos": 137, "end_char_pos": 141 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 592, "end_char_pos": 593 } ]
[ 0, 91, 313, 393, 478, 662 ]
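The first-order expansions mentioned above have, generically, the shape of a Frechet-derivative expansion of the value in the drift. The display below is a hedged schematic with illustrative notation, not the paper's exact statement.

```latex
% Schematic only: u(\mu) denotes the maximal expected power utility under
% drift \mu, and B the Banach space of admissible drifts.
u(\mu + h) = u(\mu) + \langle u'(\mu), h \rangle + o(\|h\|_B),
\qquad h \in B,
% where h is, e.g., a small fractional-Brownian perturbation of the drift
% or of the volatility, as in the record's non-Markovian examples.
```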
1608.00903
1
In this paper we propose that eukaryotic cells, under severe genotoxic stress, can commit genoautotomy (genome 'self-injury') that involves cutting and releasing single-strand DNA (ssDNA) fragments from double-stranded DNA and leaving ssDNA gaps in the genome. The ssDNA gaps could be easily and precisely repaired later. The released ssDNA fragments may play some role in the regulation of cell cycle progression. Taken together, genoautotomy causes limited nonlethal DNA damage, but prevents the whole genome from lethal damage, and thus should be deemed as a eukaryotic cellular defence response to genotoxic stress.
This paper proposes that eukaryotic cells, under severe genotoxic stress, can commit genoautotomy (genome 'self-injury') that involves cutting and releasing single-stranded DNA (ssDNA) fragments from double-stranded DNA and leaving ssDNA gaps in the genome. The ssDNA gaps could be easily and precisely repaired later. The released ssDNA fragments may play some role in the regulation of cell cycle progression. Taken together, genoautotomy causes limited nonlethal DNA damage, but prevents the whole genome from lethal damage, and thus should be deemed as a eukaryotic cellular defence response to genotoxic stress.
[ { "type": "R", "before": "In this paper we propose", "after": "This paper proposes", "start_char_pos": 0, "end_char_pos": 24 }, { "type": "R", "before": "single-strand", "after": "single-stranded", "start_char_pos": 162, "end_char_pos": 175 } ]
[ 0, 260, 321, 414 ]
1608.01365
1
Sector-wise productivity growths are measured, along with the sectoral elasticity of substitutions, under the multi-factor CES framework, by regressing the growths of factor-wise cost shares against the growths of relative factor prices. We use linked input-output tables for Japan and Korea as the data source for factor price and cost shares in two timely distant states. We then construct a multi-sectoral general equilibrium model using the system of estimated CES unit cost functions, and evaluate the economy-wide propagation of an exogenous productivity gain , in terms of welfare. Further, we examine the differences between models based on a priori elasticities such as Leontief and Cobb-Douglas.
Sector specific multifactor CES elasticity of substitution and the corresponding productivity growths are jointly measured by regressing the growths of factor-wise cost shares against the growths of factor prices. We use linked input-output tables for Japan and the Republic of Korea as the data source for factor price and cost shares in two temporally distant states. We then construct a multi-sectoral general equilibrium model using the system of estimated CES unit cost functions, and evaluate the economy-wide propagation of an exogenous productivity stimuli , in terms of welfare. Further, we examine the differences between models based on a priori elasticity such as Leontief and Cobb-Douglas.
[ { "type": "R", "before": "Sector-wise", "after": "Sector specific multifactor CES elasticity of substitution and the corresponding", "start_char_pos": 0, "end_char_pos": 11 }, { "type": "R", "before": "measured, along with the sectoral elasticity of substitutions, under the multi-factor CES framework,", "after": "jointly measured", "start_char_pos": 37, "end_char_pos": 137 }, { "type": "D", "before": "relative", "after": null, "start_char_pos": 214, "end_char_pos": 222 }, { "type": "A", "before": null, "after": "the Republic of", "start_char_pos": 286, "end_char_pos": 286 }, { "type": "R", "before": "timely", "after": "temporally", "start_char_pos": 352, "end_char_pos": 358 }, { "type": "R", "before": "gain", "after": "stimuli", "start_char_pos": 562, "end_char_pos": 566 }, { "type": "R", "before": "elasticities", "after": "elasticity", "start_char_pos": 659, "end_char_pos": 671 } ]
[ 0, 237, 374, 589 ]
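The regression in this record rests on the CES identity that the growth of relative cost shares equals (1 - sigma) times the growth of relative factor prices, since under CES the share s_i is proportional to delta_i * p_i**(1 - sigma). The sketch below runs that regression on two invented price/share vectors for a single sector; demeaning across factors removes the common CES denominator exactly.

```python
import numpy as np

# hypothetical two-period data for one sector: factor prices and cost shares
p0 = np.array([1.00, 1.00, 1.00, 1.00])   # base-year factor prices
p1 = np.array([1.10, 0.95, 1.30, 1.02])   # later-year factor prices
s0 = np.array([0.40, 0.25, 0.20, 0.15])   # base-year cost shares
s1 = np.array([0.41, 0.26, 0.17, 0.16])   # later-year cost shares

# Under CES: dlog(s_i) - mean_i = (1 - sigma) * (dlog(p_i) - mean_i).
ds = np.log(s1 / s0); ds -= ds.mean()
dp = np.log(p1 / p0); dp -= dp.mean()
slope = (dp @ ds) / (dp @ dp)             # OLS through the origin
sigma = 1.0 - slope
print("estimated elasticity of substitution:", round(sigma, 3))
```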
1608.01895
1
Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on the fractal index , \alpha, of a time series. Our setup encompasses a large class of Gaussian processes and we show how to extend it to a large class of non-Gaussian models as well. It is proved that the asymptotic distribution of the estimator of \alpha does not depend on the specifics of the data generating process for the observations, but only on the value of \alpha and a "heteroskedasticity" factor. Using this, we propose a simulation-based approach to inference, which is easily implemented and is valid more generally than asymptotic analysis. We detail how the methods can be applied to test whether a stochastic process is a non-semimartingale . Finally, the methods are illustrated in two empirical applications motivated from finance. We study time series of log-prices and log-volatility from 29 individual US stocks; no evidence of non-semimartingality in asset prices is found, but we do find evidence of non-semimartingality in volatility. This confirms a recently proposed conjecture that stochastic volatility processes of financial assets are rough (Gatheral et al., 2014) .
We study a well-known estimator of the fractal index of a stochastic process. Our framework is very general and encompasses many models of interest; we show how to extend the theory of the estimator to a large class of non-Gaussian processes. Particular focus is on clarity and ease of implementation of the estimator and the associated asymptotic results, making it easy for practitioners to apply the methods. We further develop a new estimator which is robust to measurement noise in the observations . Finally, the methods are illustrated on two time series; one of turbulent velocity flows and one of financial prices .
[ { "type": "R", "before": "Using theory on (conditionally) Gaussian processes with stationary increments developed in Barndorff-Nielsen et al. (2009, 2011), this paper presents a general semiparametric approach to conducting inference on", "after": "We study a well-known estimator of", "start_char_pos": 0, "end_char_pos": 210 }, { "type": "R", "before": ", \\alpha, of a time series. Our setup encompasses a large class of Gaussian processes and", "after": "of a stochastic process. Our framework is very general and encompasses many models of interest;", "start_char_pos": 229, "end_char_pos": 318 }, { "type": "R", "before": "it", "after": "the theory of the estimator", "start_char_pos": 341, "end_char_pos": 343 }, { "type": "R", "before": "models as well. It is proved that the asymptotic distribution of the estimator of \\alpha does not depend on the specifics of the data generating process for the observations, but only on the value of \\alpha and a \"heteroskedasticity\" factor. Using this, we propose a simulation-based approach to inference, which is easily implemented and is valid more generally than asymptotic analysis. We detail how the methods can be applied to test whether a stochastic process is a non-semimartingale", "after": "processes. Particular focus is on clarity and ease of implementation of the estimator and the associated asymptotic results, making it easy for practitioners to apply the methods. We further develop a new estimator which is robust to measurement noise in the observations", "start_char_pos": 377, "end_char_pos": 867 }, { "type": "R", "before": "in two empirical applications motivated from finance. We study time seriesof log-prices and log-volatility from 29 individual US stocks; no evidence of non-semimartingality in asset prices is found, but we do find evidence of non-semimartingality in volatility. This confirms a recently proposed conjecture that stochastic volatility processes of financial assets are rough (Gatheral et al., 2014)", "after": "on two time series; one of turbulent velocity flows and one of financial prices", "start_char_pos": 907, "end_char_pos": 1304 } ]
[ 0, 256, 392, 618, 765, 869, 960, 1043, 1168 ]
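A standard estimator of the kind this record studies regresses the log empirical variogram of the increments on the log lag, since gamma(l) = E(X_{t+l} - X_t)^2 behaves like c * l^(2*alpha) at small lags. The sketch below implements this generic version (it is not claimed to be the record's exact estimator) and checks it on Brownian motion, whose fractal index is 1/2.

```python
import numpy as np

def fractal_index(x, max_lag=5):
    """Estimate the fractal index alpha of a path x on an equidistant grid:
    gamma(l) ~ c * l**(2 * alpha) for small lags, so alpha is half the slope
    of a log-log regression of the empirical variogram on the lag."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])
    slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
    return 0.5 * slope

# sanity check on Brownian motion, whose fractal index is 1/2
rng = np.random.default_rng(3)
bm = np.cumsum(rng.standard_normal(100_000)) / np.sqrt(100_000)
print("estimated alpha:", round(fractal_index(bm), 3))
```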
1608.01895
2
We study a well-known estimator of the fractal index of a stochastic process. Our framework is very general and encompasses many models of interest; we show how to extend the theory of the estimator to a large class of non-Gaussian processes. Particular focus is on clarity and ease of implementation of the estimator and the associated asymptotic results, making it easy for practitioners to apply the methods. We further develop a new estimator which is robust to measurement noise in the observations. Finally, the methods are illustrated on two time series ; one of turbulent velocity flows and one of financial prices.
We study a well-known estimator of the fractal index of a stochastic process. Our framework is very general and encompasses many models of interest; we show how to extend the theory of the estimator to a large class of non-Gaussian processes. Particular focus is on clarity and ease of implementation of the estimator and the associated asymptotic results, making it easy for practitioners to apply the methods. We additionally show how measurement noise in the observations will bias the estimator, potentially resulting in the practitioner erroneously finding evidence of fractal characteristics in a time series. We propose a new estimator which is robust to such noise and construct a formal hypothesis test for the presence of noise in the observations. Finally, the methods are illustrated on two empirical data sets ; one of turbulent velocity flows and one of financial prices.
[ { "type": "R", "before": "further develop a", "after": "additionally show how measurement noise in the observations will bias the estimator, potentially resulting in the practitioner erroneously finding evidence of fractal characteristics in a time series. We propose a", "start_char_pos": 415, "end_char_pos": 432 }, { "type": "R", "before": "measurement noise", "after": "such noise and construct a formal hypothesis test for the presence of noise", "start_char_pos": 466, "end_char_pos": 483 }, { "type": "R", "before": "time series", "after": "empirical data sets", "start_char_pos": 549, "end_char_pos": 560 } ]
[ 0, 77, 148, 242, 411, 504, 562 ]
1608.01900
1
We introduce a model of innovation in which products are composed of components and new components are adopted one at a time. We show that the number of products we can make now gives a distorted view of the number we can make in the future: the more complex a product is, the more it gets under-represented. From this complexity discount we derive a strategy for increasing the rate of innovation by choosing components on the basis of long-term growth rather than just short-term gain . We test our model on data from language, gastronomy and technology and predict the best strategy for innovating in each .
Innovation is to organizations what evolution is to organisms: it is how they adapt to changes in the environment and improve. Yet despite steady advances in how evolution works, what drives innovation remains elusive. We derive a theory of innovation in which products are composed of components and new components are adopted one at a time. We test it on data from language, gastronomy and technology. We show that the rate of innovation depends on the size distribution of products, and that a small number of simple products can dramatically increase the innovation rate. By strategically choosing which components to adopt, we show how to increase the innovation rate to achieve short-term gain or long-term growth .
[ { "type": "R", "before": "We introduce a model", "after": "Innovation is URLanizations what evolution is URLanisms: it is how they adapt to changes in the environment and improve. Yet despite steady advances in how evolution works, what drives innovation remains elusive. We derive a theory", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "A", "before": null, "after": "test it on data from language, gastronomy and technology. We", "start_char_pos": 129, "end_char_pos": 129 }, { "type": "R", "before": "number of products we can make now gives a distorted view of the number we can make in the future: the more complex a product is, the more it gets under-represented. From this complexity discount we derive a strategy for increasing the rate of innovation by choosing components on the basis of long-term growth rather than just", "after": "rate of innovation depends on the size distribution of products, and that a small number of simple products can dramatically increase the innovation rate. By strategically choosing which components to adopt, we show how to increase the innovation rate to achieve", "start_char_pos": 144, "end_char_pos": 471 }, { "type": "R", "before": ". We test our model on data from language, gastronomy and technology and predict the best strategy for innovating in each", "after": "or long-term growth", "start_char_pos": 488, "end_char_pos": 609 } ]
[ 0, 125, 309, 489 ]
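The "components adopted one at a time" mechanism in this record can be illustrated with a toy repertoire-building loop; the product-component sets below are invented. The greedy rule shown is the short-term-gain strategy; the record's point is that a long-term strategy can beat it by valuing components that complex products will need later.

```python
# Toy illustration: products are sets of required components; adopt one
# component per step, greedily maximizing the number of products buildable now.
products = [
    {"a"}, {"b"}, {"a", "b"}, {"b", "c"},
    {"a", "c", "d"}, {"c", "d", "e"}, {"a", "b", "c", "d", "e"},
]
components = {c for p in products for c in p}

def buildable(repertoire):
    """Number of products whose components are all in the repertoire."""
    return sum(p <= repertoire for p in products)

repertoire = set()
while repertoire != components:
    # short-term-gain choice: best immediate increase in buildable products
    best = max(components - repertoire,
               key=lambda c: buildable(repertoire | {c}))
    repertoire.add(best)
    print(f"adopt {best!r}: {buildable(repertoire)} of {len(products)} products")
```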