doc_id (string, 2-10 chars) | revision_depth (string, 5 classes) | before_revision (string, 3-309k chars) | after_revision (string, 5-309k chars) | edit_actions (list) | sents_char_pos (list) |
---|---|---|---|---|---|
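The edit_actions column appears to encode character-level edits against before_revision: type "R" replaces the span [start_char_pos, end_char_pos) with "after", "A" inserts "after" at start_char_pos (start equals end), and "D" deletes the span; sents_char_pos appears to hold sentence-boundary character offsets in before_revision. Below the data rows begin; first, a minimal Python sketch (an assumption-based illustration, not part of the dataset) of reconstructing after_revision under the assumption that actions are sorted by start_char_pos and do not overlap.

```python
def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    """Reconstruct after_revision from before_revision.

    Assumes: type "R" = replace, "A" = add (start == end), "D" = delete;
    start_char_pos / end_char_pos index into before_revision; actions are
    sorted by start_char_pos and do not overlap.
    """
    pieces, cursor = [], 0
    for action in edit_actions:
        start, end = action["start_char_pos"], action["end_char_pos"]
        pieces.append(before_revision[cursor:start])   # keep untouched text
        if action["after"] is not None:                # "R" and "A" carry new text
            pieces.append(action["after"])
        cursor = end                                   # skip replaced/deleted span
    pieces.append(before_revision[cursor:])
    return "".join(pieces)
```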
1507.04655 | 1 | We present a mathematical solution to the insurance puzzle. Our solution only uses time-average growth rates and makes no reference to risk preferences. The insurance puzzle is this: according to the expectation value of wealth, buying insurance is only rational at a price that makes it irrational to sell insurance. There is no price that is beneficial to both the buyer and the seller of an insurance contract. The puzzle why insurance contracts exist is traditionally resolved by appealing to utility theory, asymmetric information, or a mix of both . Here we note that the expectation value is the wrong starting point -- a legacy from the early days of probability theory. It is the wrong starting point because not even the most basic models of wealth (random walks) are stationary, and what the individual experiences over time is not the expectation value. We use the standard model of noisy exponential growth and compute time-average growth rates instead of expectation values of wealth . In this new paradigm insurance contracts exist that are beneficial for both parties . | Voluntary insurance contracts constitute a puzzle because they increase the expectation value of one party's wealth, whereas both parties must sign for such contracts to exist. Classically, the puzzle is resolved by introducing non-linear utility functions, which encode asymmetric risk preferences; or by assuming the parties have asymmetric information . Here we show the puzzle goes away if contracts are evaluated by their effect on the time-average growth rate of wealth. Our solution assumes only knowledge of wealth dynamics. Time averages and expectation values differ because wealth changes are non-ergodic. Our reasoning is generalisable: business happens when both parties grow faster . | [
{
"type": "R",
"before": "We present a mathematical solution to the insurance puzzle. Our solution only uses time-average growth rates and makes no reference to risk preferences. The insurance puzzle is this: according to",
"after": "Voluntary insurance contracts constitute a puzzle because they increase",
"start_char_pos": 0,
"end_char_pos": 195
},
{
"type": "R",
"before": "wealth, buying insurance is only rational at a price that makes it irrational to sell insurance. There is no price that is beneficial to both the buyer and the seller of an insurance contract. The puzzle why insurance contracts exist is traditionally resolved by appealing to utility theory, asymmetric information, or a mix of both",
"after": "one party's wealth, whereas both parties must sign for such contracts to exist. Classically, the puzzle is resolved by introducing non-linear utility functions, which encode asymmetric risk preferences; or by assuming the parties have asymmetric information",
"start_char_pos": 221,
"end_char_pos": 553
},
{
"type": "R",
"before": "note that the expectation value is the wrong starting point -- a legacy from the early days of probability theory. It is the wrong starting point because not even the most basic models of wealth (random walks) are stationary, and what the individual experiences over time is not the expectation value. We use the standard model of noisy exponential growth and compute",
"after": "show the puzzle goes away if contracts are evaluated by their effect on the",
"start_char_pos": 564,
"end_char_pos": 931
},
{
"type": "R",
"before": "growth rates instead of expectation values of wealth . In this new paradigm insurance contracts exist that are beneficial for both parties",
"after": "growth rate of wealth. Our solution assumes only knowledge of wealth dynamics. Time averages and expectation values differ because wealth changes are non-ergodic. Our reasoning is generalisable: business happens when both parties grow faster",
"start_char_pos": 945,
"end_char_pos": 1083
}
]
| [
0,
59,
152,
317,
413,
555,
678,
865,
999
]
|
1507.05055 | 1 | MTRV ( A Modern Theory of Random Variation , Wiley, 2012) provides an alternative approach to derivative asset pricing. This article provides a method for pricing American call options. | An analytic method for pricing American call options is provided; followed by an empirical method for pricing Asian call options. The methodology is the pricing theory presented in " A Modern Theory of Random Variation ", by Patrick Muldowney, 2012. | [
{
"type": "R",
"before": "MTRV (",
"after": "An analytic method for pricing American call options is provided; followed by an empirical method for pricing Asian call options. The methodology is the pricing theory presented in \"",
"start_char_pos": 0,
"end_char_pos": 6
},
{
"type": "R",
"before": ", Wiley, 2012) provides an alternative approach to derivative asset pricing. This article provides a method for pricing American call options.",
"after": "\", by Patrick Muldowney, 2012.",
"start_char_pos": 43,
"end_char_pos": 185
}
]
| [
0,
119
]
|
1507.05351 | 1 | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearinghouses . The two main issues in systemic risk are the computation of an overall reserve level and its allocation to the different components of the system according to their importance . We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We emphasize the analysis of the risk allocation and of its sensitivity as an indicator of the systemic risk. Moreover, we provide efficient numerical schemes to assess the risk allocation in high dimensions. | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses . The two main issues in systemic risk are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance . We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk. Moreover, we provide numerical schemes to assess the risk allocation in high dimensions. | [
{
"type": "R",
"before": "clearinghouses",
"after": "clearing houses",
"start_char_pos": 241,
"end_char_pos": 255
},
{
"type": "D",
"before": "of the system",
"after": null,
"start_char_pos": 390,
"end_char_pos": 403
},
{
"type": "R",
"before": "importance",
"after": "systemic relevance",
"start_char_pos": 423,
"end_char_pos": 433
},
{
"type": "R",
"before": "emphasize the analysis",
"after": "analyze the sensitivity",
"start_char_pos": 660,
"end_char_pos": 682
},
{
"type": "R",
"before": "allocation and of its sensitivity",
"after": "allocations to various factors and highlight its relevance",
"start_char_pos": 695,
"end_char_pos": 728
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 748,
"end_char_pos": 751
},
{
"type": "D",
"before": "efficient",
"after": null,
"start_char_pos": 788,
"end_char_pos": 797
}
]
| [
0,
257,
435,
656,
766
]
|
1507.05351 | 2 | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk . Moreover, we provide numerical schemes to assess the risk allocation in high dimensions. | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk measurement are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk . In particular, we study the interplay between the loss function and the dependence structure of the components, that provides valuable insights into the properties of good loss functions . Moreover, we provide numerical schemes to assess the risk allocation in high dimensions. | [
{
"type": "A",
"before": null,
"after": "measurement",
"start_char_pos": 295,
"end_char_pos": 295
},
{
"type": "A",
"before": null,
"after": ". In particular, we study the interplay between the loss function and the dependence structure of the components, that provides valuable insights into the properties of good loss functions",
"start_char_pos": 782,
"end_char_pos": 782
}
]
| [
0,
257,
429,
650,
784
]
|
1507.05351 | 3 | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk measurement are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk. In particular, we study the interplay between the loss function and the dependence structure of the components , that provides valuable insights into the properties of good loss functions. Moreover, we provide numerical schemes to assess the risk allocation in high dimensions . | The ongoing concern about systemic risk since the outburst of the global financial crisis has highlighted the need for risk measures at the level of sets of interconnected financial components, such as portfolios, institutions or members of clearing houses. The two main issues in systemic risk measurement are the computation of an overall reserve level and its allocation to the different components according to their systemic relevance. We develop here a pragmatic approach to systemic risk measurement and allocation based on multivariate shortfall risk measures, where acceptable allocations are first computed and then aggregated so as to minimize costs. We analyze the sensitivity of the risk allocations to various factors and highlight its relevance as an indicator of systemic risk. In particular, we study the interplay between the loss function and the dependence structure of the components . Moreover, we address the computational aspects of risk allocation. Finally, we apply this methodology to the allocation of the default fund of a CCP on real data . | [
{
"type": "R",
"before": ", that provides valuable insights into the properties of good loss functions. Moreover, we provide numerical schemes to assess the risk allocation in high dimensions",
"after": ". Moreover, we address the computational aspects of risk allocation. Finally, we apply this methodology to the allocation of the default fund of a CCP on real data",
"start_char_pos": 905,
"end_char_pos": 1070
}
]
| [
0,
257,
440,
661,
793,
982
]
|
1507.05415 | 1 | This paper proposes a simple technical approach for the derivation of future (forward) point-in-time PD forecasts, with minimal data requirements. The inputs required are the current and future through-the-cycle PDs of the obligors, their last known default rates, and a measure for the systematic dependence of the obligors. Technically, the forecasts are made from within a classical asset-based credit portfolio model, just with the assumption of a suitable autoregressive process for the systematic factor. The paper discusses in detail the practical issues of implementation, in particular the parametrization alternatives. The paper also shows how the approach can be naturally extended to low-default portfolios with volatile default rates, using Bayesian methodology. Furthermore, the expert judgments about the current macroeconomic state, although not necessary for the forecasts, can be embedded using the Bayesian technique. The presented forward PDs can be used for the derivation of lifetime credit losses required by the new accounting standard IFRS 9. In doing so, the presented approach is endogenous, as it does not require any exogenous macroeconomic forecasts which are notoriously unreliable and often subjective . | This paper proposes a simple technical approach for the analytical derivation of Point-in-Time PD (probability of default) forecasts, with minimal data requirements. The inputs required are the current and future Through-the-Cycle PDs of the obligors, their last known default rates, and a measurement of the systematic dependence of the obligors. Technically, the forecasts are made from within a classical asset-based credit portfolio model, with the additional assumption of a simple (first/second order) autoregressive process for the systematic factor. This paper elaborates in detail on the practical issues of implementation, especially on the parametrization alternatives. We also show how the approach can be naturally extended to low-default portfolios with volatile default rates, using Bayesian methodology. Furthermore, expert judgments on the current macroeconomic state, although not necessary for the forecasts, can be embedded into the model using the Bayesian technique. The resulting PD forecasts can be used for the derivation of expected lifetime credit losses as required by the newly adopted accounting standard IFRS 9. In doing so, the presented approach is endogenous, as it does not require any exogenous macroeconomic forecasts , which are notoriously unreliable and often subjective . Also, it does not require any dependency modeling between PDs and macroeconomic variables, which often proves to be cumbersome and unstable . | [
{
"type": "R",
"before": "derivation of future (forward) point-in-time PD",
"after": "analytical derivation of Point-in-Time PD (probability of default)",
"start_char_pos": 56,
"end_char_pos": 103
},
{
"type": "R",
"before": "through-the-cycle",
"after": "Through-the-Cycle",
"start_char_pos": 194,
"end_char_pos": 211
},
{
"type": "R",
"before": "measure for",
"after": "measurement of",
"start_char_pos": 271,
"end_char_pos": 282
},
{
"type": "R",
"before": "just with the",
"after": "with the additional",
"start_char_pos": 422,
"end_char_pos": 435
},
{
"type": "R",
"before": "suitable",
"after": "simple (first/second order)",
"start_char_pos": 452,
"end_char_pos": 460
},
{
"type": "R",
"before": "The paper discusses in detail",
"after": "This paper elaborates in detail on",
"start_char_pos": 511,
"end_char_pos": 540
},
{
"type": "R",
"before": "in particular",
"after": "especially on",
"start_char_pos": 581,
"end_char_pos": 594
},
{
"type": "R",
"before": "The paper also shows",
"after": "We also show",
"start_char_pos": 629,
"end_char_pos": 649
},
{
"type": "R",
"before": "the expert judgments about",
"after": "expert judgments on",
"start_char_pos": 789,
"end_char_pos": 815
},
{
"type": "A",
"before": null,
"after": "into the model",
"start_char_pos": 907,
"end_char_pos": 907
},
{
"type": "R",
"before": "presented forward PDs",
"after": "resulting PD forecasts",
"start_char_pos": 942,
"end_char_pos": 963
},
{
"type": "A",
"before": null,
"after": "expected",
"start_char_pos": 998,
"end_char_pos": 998
},
{
"type": "A",
"before": null,
"after": "as",
"start_char_pos": 1022,
"end_char_pos": 1022
},
{
"type": "R",
"before": "new",
"after": "newly adopted",
"start_char_pos": 1039,
"end_char_pos": 1042
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 1183,
"end_char_pos": 1183
},
{
"type": "A",
"before": null,
"after": ". Also, it does not require any dependency modeling between PDs and macroeconomic variables, which often proves to be cumbersome and unstable",
"start_char_pos": 1238,
"end_char_pos": 1238
}
]
| [
0,
146,
325,
510,
628,
775,
937
]
|
1507.06015 | 1 | When simulating a complex stochastic system, the behavior of the output response depends on the input parameters estimated from finite real-world data, and the finiteness of data brings input uncertainty to the output response . The quantification of the impact of input uncertainty on output response has been extensively studied. Most of the existing literature focuses on providing inferences on the mean output response with respect to input uncertainty , including point estimation and confidence interval construction of the mean response. However, risk assessment of the mean response with respect to input uncertainty often plays an important role in system evaluation /controlbecause it quantifies the behavior of the mean response under extreme input models. To the best of our knowledge, it has been rarely systematically studied in the literature. In the present paper, we will fill in the gap and introduce risk measures for input uncertaintyin output analysis. We develop nested Monte Carlo estimators and construct (asymptotically valid) confidence intervals for risk measures of mean response. We further study the associated budget allocation problem for more efficient nested simulation of the estimators, and propose a novel method to solve the problem . | When simulating a complex stochastic system, the behavior of output response depends on input parameters estimated from finite real-world data, and the finiteness of data brings input uncertainty into the system . The quantification of the impact of input uncertainty on output response has been extensively studied. Most of the existing literature focuses on providing inferences on the mean response at the true but unknown input parameter , including point estimation and confidence interval construction . Risk quantification of mean response under input uncertainty often plays an important role in system evaluation and control, because it provides inferences on extreme scenarios of mean response in all possible input models. To the best of our knowledge, it has rarely been systematically studied in the literature. In this paper, first we introduce risk measures of mean response under input uncertainty, and propose a nested Monte Carlo simulation approach to estimate them. Then we develop asymptotical properties such as consistency and asymptotic normality for the proposed nested risk estimators. Finally we study the associated budget allocation problem for efficient nested risk simulation . | [
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 61,
"end_char_pos": 64
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 92,
"end_char_pos": 95
},
{
"type": "R",
"before": "to the output response",
"after": "into the system",
"start_char_pos": 204,
"end_char_pos": 226
},
{
"type": "R",
"before": "output response with respect to input uncertainty",
"after": "response at the true but unknown input parameter",
"start_char_pos": 408,
"end_char_pos": 457
},
{
"type": "R",
"before": "of the mean response. However, risk assessment of the mean response with respect to",
"after": ". Risk quantification of mean response under",
"start_char_pos": 524,
"end_char_pos": 607
},
{
"type": "R",
"before": "/controlbecause it quantifies the behavior of the mean response under extreme",
"after": "and control, because it provides inferences on extreme scenarios of mean response in all possible",
"start_char_pos": 677,
"end_char_pos": 754
},
{
"type": "R",
"before": "been rarely",
"after": "rarely been",
"start_char_pos": 806,
"end_char_pos": 817
},
{
"type": "R",
"before": "the present paper, we will fill in the gap and",
"after": "this paper, first we",
"start_char_pos": 863,
"end_char_pos": 909
},
{
"type": "R",
"before": "for input uncertaintyin output analysis. We develop",
"after": "of mean response under input uncertainty, and propose a",
"start_char_pos": 934,
"end_char_pos": 985
},
{
"type": "R",
"before": "estimators and construct (asymptotically valid) confidence intervals for risk measures of mean response. We further",
"after": "simulation approach to estimate them. Then we develop asymptotical properties such as consistency and asymptotic normality for the proposed nested risk estimators. Finally we",
"start_char_pos": 1005,
"end_char_pos": 1120
},
{
"type": "R",
"before": "more efficient nested simulation of the estimators, and propose a novel method to solve the problem",
"after": "efficient nested risk simulation",
"start_char_pos": 1172,
"end_char_pos": 1271
}
]
| [
0,
228,
331,
545,
768,
859,
974,
1109
]
|
1507.06015 | 2 | When simulating a complex stochastic system, the behavior of output response depends on input parameters estimated from finite real-world data, and the finiteness of data brings input uncertainty into the system. The quantification of the impact of input uncertainty on output response has been extensively studied. Most of the existing literature focuses on providing inferences on the mean response at the true but unknown input parameter, including point estimation and confidence interval construction. Risk quantification of mean response under input uncertainty often plays an important role in system evaluation and control, because it provides inferences on extreme scenarios of mean response in all possible input models. To the best of our knowledge, it has rarely been systematically studied in the literature. In this paper, first we introduce risk measures of mean response under input uncertainty, and propose a nested Monte Carlo simulation approach to estimate them. Then we develop asymptotical properties such as consistency and asymptotic normality for the proposed nested risk estimators. Finally we study the associated budget allocation problem for efficient nested risk simulation . | When simulating a complex stochastic system, the behavior of output response depends on input parameters estimated from finite real-world data, and the finiteness of data brings input uncertainty into the system. The quantification of the impact of input uncertainty on output response has been extensively studied. Most of the existing literature focuses on providing inferences on the mean response at the true but unknown input parameter, including point estimation and confidence interval construction. Risk quantification of mean response under input uncertainty often plays an important role in system evaluation and control, because it provides inferences on extreme scenarios of mean response in all possible input models. To the best of our knowledge, it has rarely been systematically studied in the literature. In this paper, first we introduce risk measures of mean response under input uncertainty, and propose a nested Monte Carlo simulation approach to estimate them. Then we develop asymptotical properties such as consistency and asymptotic normality for the proposed nested risk estimators. We further study the associated budget allocation problem for efficient nested risk simulation , and finally use a sharing economy example to illustrate the importance of accessing and controlling risk due to input uncertainty . | [
{
"type": "R",
"before": "Finally we",
"after": "We further",
"start_char_pos": 1109,
"end_char_pos": 1119
},
{
"type": "A",
"before": null,
"after": ", and finally use a sharing economy example to illustrate the importance of accessing and controlling risk due to input uncertainty",
"start_char_pos": 1204,
"end_char_pos": 1204
}
]
| [
0,
212,
315,
506,
730,
821,
982,
1108
]
|
1507.06160 | 1 | It is generally recognized that a distinguishing feature of life is its peculiar capability to avoid equilibration. The origin of this capability and its evolution along the timeline of abiogenesis is not yet understood. We propose to study an analog of this phenomenon that could emerge in non-biological systems. To this end, we introduce the concept of sustainability of transient kinetic regimes. This concept is illustrated via investigation of cooperative effects in an extended system of compartmentalized chemical oscillators under batch conditions. The computational study of a model system shows robust enhancement of lifetimes of the decaying oscillations which translates into the evolution of the survival function of the non-equilibrium regime. This model does not rely on any form of replication. Rather, it explores the role of a structured effective environment as a contributor to the system-bath interactions that define non- equilibrium regimes. We implicate the noise produced by the effective environment of a compartmentalized oscillator as the cause of the lifetime extension. | It is generally recognized that a distinguishing feature of life is its peculiar capability to avoid equilibration. The origin of this capability and its evolution along the timeline of abiogenesis is not yet understood. We propose to study an analog of this phenomenon that could emerge in non-biological systems. To this end, we introduce the concept of sustainability of transient kinetic regimes. This concept is illustrated via investigation of cooperative effects in an extended system of compartmentalized chemical oscillators under batch and semi-batch conditions. The computational study of a model system shows robust enhancement of lifetimes of the decaying oscillations which translates into the evolution of the survival function of the transient non-equilibrium regime. This model does not rely on any form of replication. Rather, it explores the role of a structured effective environment as a contributor to the system-bath interactions that define non-equilibrium regimes. We implicate the noise produced by the effective environment of a compartmentalized oscillator as the cause of the lifetime extension. | [
{
"type": "A",
"before": null,
"after": "and semi-batch",
"start_char_pos": 546,
"end_char_pos": 546
},
{
"type": "A",
"before": null,
"after": "transient",
"start_char_pos": 736,
"end_char_pos": 736
},
{
"type": "R",
"before": "non- equilibrium",
"after": "non-equilibrium",
"start_char_pos": 942,
"end_char_pos": 958
}
]
| [
0,
115,
220,
314,
400,
558,
760,
813,
967
]
|
1507.06242 | 1 | The structure of return spillovers is examined by constructing Granger causality networks using daily closing prices of 40 stock markets from 2nd January 2006 to 31st December 2013. The data is properly aligned to take into account non-synchronous trading effects.By conducting a rolling window spatial probit analysis on the set of edges of Granger causality networks , we confirm the significance of temporal proximity and preferential attachment on edge creation.We extend the analysis by incorporating market specific factors, such as market capitalization, turnover and volatility . | Using a rolling windows analysis of filtered and aligned stock index returns from 40 countries during the period 2006-2014, we construct Granger causality networks and investigate the ensuing structure of the relationships by studying network properties and fitting spatial probit models. We provide evidence that stock market volatility and market size increases, while foreign exchange volatility decreases the probability of return spillover from a given market. We also show that market development and returns on the foreign exchange market and stock market also matter, but they exhibit significant time-varying behaviour with alternating effects. These results suggest that higher market integration periods are alternated with periods where investors appear to be chasing returns. Despite the significance of market characteristics and market conditions, what in reality matters for information propagation is the temporal distance between closing hours, i.e. the temporal proximity effect. This implies that choosing markets which trade in similar hours bears additional costs to investors, as the probability of return spillovers increases. The same effect was observed with regard to the temporal distance to the US market. Finally , we confirm the existence of the preferential attachment effect, i.e. the probability of a given market to propagate return spillovers to a new market depends endogenously and positively on the existing number of return spillovers from that market . | [
{
"type": "R",
"before": "The structure of return spillovers is examined by constructing",
"after": "Using a rolling windows analysis of filtered and aligned stock index returns from 40 countries during the period 2006-2014, we construct",
"start_char_pos": 0,
"end_char_pos": 62
},
{
"type": "R",
"before": "using daily closing prices of 40 stock markets from 2nd January 2006 to 31st December 2013. The data is properly aligned to take into account non-synchronous trading effects.By conducting a rolling window spatial probit analysis on the set of edges of Granger causality networks",
"after": "and investigate the ensuing structure of the relationships by studying network properties and fitting spatial probit models. We provide evidence that stock market volatility and market size increases, while foreign exchange volatility decreases the probability of return spillover from a given market. We also show that market development and returns on the foreign exchange market and stock market also matter, but they exhibit significant time-varying behaviour with alternating effects. These results suggest that higher market integration periods are alternated with periods where investors appear to be chasing returns. Despite the significance of market characteristics and market conditions, what in reality matters for information propagation is the temporal distance between closing hours, i.e. the temporal proximity effect. This implies that choosing markets which trade in similar hours bears additional costs to investors, as the probability of return spillovers increases. The same effect was observed with regard to the temporal distance to the US market. Finally",
"start_char_pos": 90,
"end_char_pos": 368
},
{
"type": "R",
"before": "significance of temporal proximity and preferential attachment on edge creation.We extend the analysis by incorporating market specific factors, such as market capitalization, turnover and volatility",
"after": "existence of the preferential attachment effect, i.e. the probability of a given market to propagate return spillovers to a new market depends endogenously and positively on the existing number of return spillovers from that market",
"start_char_pos": 386,
"end_char_pos": 585
}
]
| [
0,
181,
264,
466
]
|
1507.06354 | 1 | Several recent experiments have suggested that sharply bent DNA has a surprisingly high bending flexibility, but the cause is poorly understood. It has been demonstrated that excitation of flexible defects can explain the results; while whether such defects can be excited under the level of DNA bending in those experiments has remained unclearand been debated. Interestingly, due to experimental design DNA contained pre-existing nicks in nearly all those experiments, while the potential effect of nicks have never been considered . Here, using full-atom molecular dynamics (MD) simulations, we show that nicks promote DNA basepair disruption at the nicked sites which drastically reduced DNA bending energy. In the absence of nicks, basepair disruption can also occur , but it requires a higher level of DNA bending. Overall, our results challenge the interpretations of previous sharp DNA bending experiments and highlight that the micromechanics of sharply bent DNA still remains an open question. | Several recent experiments suggest that sharply bent DNA has a surprisingly high bending flexibility, but the cause of this flexibility is poorly understood. Although excitation of flexible defects can explain these results, whether such excitation can occur with the level of DNA bending in these experiments remains unclear. Intriguingly, the DNA contained preexisting nicks in most of these experiments but whether nicks might play a role in flexibility has never been considered in the interpretation of experimental results . Here, using full-atom molecular dynamics simulations, we show that nicks promote DNA basepair disruption at the nicked sites , which drastically reduces DNA bending energy. In addition, lower temperatures suppress the nick-dependent basepair disruption. In the absence of nicks, basepair disruption can also occur but requires a higher level of DNA bending. Therefore, basepair disruption inside B-form DNA can be suppressed if the DNA contains preexisting nicks. Overall, our results suggest that the reported mechanical anomaly of sharply bent DNA is likely dependent on preexisting nicks, therefore the intrinsic mechanisms of sharply bent nick-free DNA remain an open question. | [
{
"type": "R",
"before": "have suggested",
"after": "suggest",
"start_char_pos": 27,
"end_char_pos": 41
},
{
"type": "A",
"before": null,
"after": "of this flexibility",
"start_char_pos": 123,
"end_char_pos": 123
},
{
"type": "R",
"before": "It has been demonstrated that",
"after": "Although",
"start_char_pos": 146,
"end_char_pos": 175
},
{
"type": "R",
"before": "the results; while whether such defects can be excited under",
"after": "these results, whether such excitation can occur with",
"start_char_pos": 219,
"end_char_pos": 279
},
{
"type": "R",
"before": "those experiments has remained unclearand been debated. Interestingly, due to experimental design DNA contained pre-existing nicks in nearly all those experiments, while the potential effect of nicks have",
"after": "these experiments remains unclear. Intriguingly, the DNA contained preexisting nicks in most of these experiments but whether nicks might play a role in flexibility has",
"start_char_pos": 308,
"end_char_pos": 512
},
{
"type": "A",
"before": null,
"after": "in the interpretation of experimental results",
"start_char_pos": 535,
"end_char_pos": 535
},
{
"type": "D",
"before": "(MD)",
"after": null,
"start_char_pos": 579,
"end_char_pos": 583
},
{
"type": "R",
"before": "which drastically reduced",
"after": ", which drastically reduces",
"start_char_pos": 668,
"end_char_pos": 693
},
{
"type": "R",
"before": "the",
"after": "addition, lower temperatures suppress the nick-dependent basepair disruption. In the",
"start_char_pos": 717,
"end_char_pos": 720
},
{
"type": "R",
"before": ", but it",
"after": "but",
"start_char_pos": 774,
"end_char_pos": 782
},
{
"type": "A",
"before": null,
"after": "Therefore, basepair disruption inside B-form DNA can be suppressed if the DNA contains preexisting nicks.",
"start_char_pos": 823,
"end_char_pos": 823
},
{
"type": "R",
"before": "challenge the interpretations of previous sharp DNA bending experiments and highlight that the micromechanics",
"after": "suggest that the reported mechanical anomaly of sharply bent DNA is likely dependent on preexisting nicks, therefore the intrinsic mechanisms",
"start_char_pos": 845,
"end_char_pos": 954
},
{
"type": "R",
"before": "DNA still remains",
"after": "nick-free DNA remain",
"start_char_pos": 971,
"end_char_pos": 988
}
]
| [
0,
145,
231,
363,
537,
713,
822
]
|
1507.06514 | 1 | In an illiquid market as a result of a lack of counterparties and uncertainty about asset values, trading of assets is not being secured by the actual value. In this research, we develop an algorithmic trading strategy to deal with the discrete optimal liquidation problem of large order trading with different market microstructures in an illiquid market. In this market, order flow can be viewed as a Point process with stochastic arrival intensity. Interaction between price impact and price dynamics can be modeled as a dynamic optimization problem with price impact as a linear function of the self-exciting dynamic process. We formulate the liquidation problem as a discrete-time Markov Decision Processes where the state process is a Piecewise Deterministic Markov Process (PDMP) , which is a member of right continuous Markov Process family. We study the dynamics of a limit order book and its influence on the price dynamics and develop a stochastic model to retain the main statistical characteristics of limit order books in illiquid markets . | In this research, we develop a trading strategy for the discrete-time optimal liquidation problem of large order trading with different market microstructures in an illiquid market. In this framework, the flow of orders can be viewed as a point process with stochastic intensity. We model the price impact as a linear function of a self-exciting dynamic process. We formulate the liquidation problem as a discrete-time Markov Decision Processes , where the state process is a Piecewise Deterministic Markov Process (PDMP) . The numerical results indicate that an optimal trading strategy is dependent on characteristics of the market microstructure. When no orders above certain value come the optimal solution takes offers in the lower levels of the limit order book in order to prevent not filling of orders and facing final inventory costs . | [
{
"type": "D",
"before": "an illiquid market as a result of a lack of counterparties and uncertainty about asset values, trading of assets is not being secured by the actual value. In",
"after": null,
"start_char_pos": 3,
"end_char_pos": 160
},
{
"type": "R",
"before": "an algorithmic trading strategy to deal with the discrete",
"after": "a trading strategy for the discrete-time",
"start_char_pos": 187,
"end_char_pos": 244
},
{
"type": "R",
"before": "market, order flow",
"after": "framework, the flow of orders",
"start_char_pos": 365,
"end_char_pos": 383
},
{
"type": "R",
"before": "Point",
"after": "point",
"start_char_pos": 403,
"end_char_pos": 408
},
{
"type": "R",
"before": "arrival intensity. Interaction between price impact and price dynamics can be modeled as a dynamic optimization problem with price impact as a",
"after": "intensity. We model the price impact as a",
"start_char_pos": 433,
"end_char_pos": 575
},
{
"type": "R",
"before": "the",
"after": "a",
"start_char_pos": 595,
"end_char_pos": 598
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 712,
"end_char_pos": 712
},
{
"type": "R",
"before": ", which is a member of right continuous Markov Process family. We study the dynamics of a",
"after": ". The numerical results indicate that an optimal trading strategy is dependent on characteristics of the market microstructure. When no orders above certain value come the optimal solution takes offers in the lower levels of the",
"start_char_pos": 788,
"end_char_pos": 877
},
{
"type": "R",
"before": "and its influence on the price dynamics and develop a stochastic model to retain the main statistical characteristics of limit order books in illiquid markets",
"after": "in order to prevent not filling of orders and facing final inventory costs",
"start_char_pos": 895,
"end_char_pos": 1053
}
]
| [
0,
157,
356,
451,
629,
850
]
|
1507.07375 | 1 | We introduce two new algorithms to minimise smooth difference of convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the \L ojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the \L ojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperforms DCA, being on average more than four times faster in both computational time and the number of iterations. The algorithms are globally convergent to a non-equilibrium steady state of a biochemical network , with only chemically consistent restrictions on the network topology. | We introduce two new algorithms to minimise smooth difference of convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Lojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Lojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperforms DCA, being on average more than four times faster in both computational time and the number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks , with only chemically consistent restrictions on the network topology. | [
{
"type": "D",
"before": "\\L",
"after": null,
"start_char_pos": 496,
"end_char_pos": 498
},
{
"type": "R",
"before": "ojasiewicz",
"after": "Lojasiewicz",
"start_char_pos": 499,
"end_char_pos": 509
},
{
"type": "D",
"before": "\\L",
"after": null,
"start_char_pos": 727,
"end_char_pos": 729
},
{
"type": "R",
"before": "ojasiewicz",
"after": "Lojasiewicz",
"start_char_pos": 730,
"end_char_pos": 740
},
{
"type": "R",
"before": "The",
"after": "Numerical experiments show that the",
"start_char_pos": 949,
"end_char_pos": 952
},
{
"type": "R",
"before": "a biochemical network",
"after": "various biochemical networks",
"start_char_pos": 1025,
"end_char_pos": 1046
}
]
| [
0,
155,
289,
405,
545,
750,
948
]
|
1507.07491 | 1 | It is shown that the density of modes of the vibrational spectrum of globular proteins is universal, i.e., regardless of the protein in question it closely follows one universal curve. The present study, including 135 proteins analyzed with a full atomic empirical potential (CHARMM22) and using the full complement of all atoms Cartesian degrees of freedom, goes far beyond confirming previous claims of universality, finding that universality holds even in the high-frequency range (300- 4000 1/cm), where peaks and turns in the density of states are faithfully reproduced from one protein to the next. We also characterize fluctuations of the spectral density from the average, paving the way to a meaningful discussion of rare, unusual spectra and the structural reasons for the deviations in such "outlier" proteins. Since the method used for the derivation of the vibrational modes (potential energy formulation, set of degrees of freedom employed, etc.) has a dramatic effect on the spectral density, another significant implication of our findings is that the universality can provide an exquisite tool for assessing and improving the quality of various models used for NMA computations. Finally, we show that the input configuration too affects the density of modes, thus emphasizing the importance of simplified potential energy formulations that are minimized at the outset. | It is shown that the density of modes of the vibrational spectrum of globular proteins is universal, i.e., regardless of the protein in question it closely follows one universal curve. The present study, including 135 proteins analyzed with a full atomic empirical potential (CHARMM22) and using the full complement of all atoms Cartesian degrees of freedom, goes far beyond previous claims of universality, confirming that universality holds even in the high-frequency range (300- 4000 1/cm), where peaks and turns in the density of states are faithfully reproduced from one protein to the next. We also characterize fluctuations of the spectral density from the average, paving the way to a meaningful discussion of rare, unusual spectra and the structural reasons for the deviations in such "outlier" proteins. Since the method used for the derivation of the vibrational modes (potential energy formulation, set of degrees of freedom employed, etc.) has a dramatic effect on the spectral density, another significant implication of our findings is that the universality can provide an exquisite tool for assessing and improving the quality of various models used for NMA computations. Finally, we show that the input configuration too affects the density of modes, thus emphasizing the importance of simplified potential energy formulations that are minimized at the outset. | [
{
"type": "D",
"before": "confirming",
"after": null,
"start_char_pos": 375,
"end_char_pos": 385
},
{
"type": "R",
"before": "finding",
"after": "confirming",
"start_char_pos": 419,
"end_char_pos": 426
}
]
| [
0,
184,
604,
821,
1195
]
|
1508.00632 | 1 | We show how to price and replicate a variety of barrier-style claims written on the \log price X and quadratic variation \<X \> \langle \rangle of a risky asset. Our framework assumes no arbitrage, frictionless markets and zero interest rates. We model the risky asset as a strictly positive continuous semimartingale with an independent volatility process. The volatility process may exhibit jumps and may be non-Markovian. As hedging instruments, we use only the underlying risky asset, a zero-coupon bond , and European calls and puts with the same maturity as the barrier-style claim. We consider both single-barrier and double barrier claims in three varieties: knock-in, knock-out and rebate . | We show how to price and replicate a variety of barrier-style claims written on the \log price X and quadratic variation \langle X\rangle of a risky asset. Our framework assumes no arbitrage, frictionless markets and zero interest rates. We model the risky asset as a strictly positive continuous semimartingale with an independent volatility process. The volatility process may exhibit jumps and may be non-Markovian. As hedging instruments, we use only the underlying risky asset, zero-coupon bonds , and European calls and puts with the same maturity as the barrier-style claim. We consider knock-in, knock-out and rebate claims in single and double barrier varieties . | [
{
"type": "D",
"before": "\\<X \\>",
"after": null,
"start_char_pos": 121,
"end_char_pos": 127
},
{
"type": "A",
"before": null,
"after": "X",
"start_char_pos": 136,
"end_char_pos": 136
},
{
"type": "D",
"before": "a",
"after": null,
"start_char_pos": 489,
"end_char_pos": 490
},
{
"type": "R",
"before": "bond",
"after": "bonds",
"start_char_pos": 503,
"end_char_pos": 507
},
{
"type": "D",
"before": "both single-barrier and double barrier claims in three varieties:",
"after": null,
"start_char_pos": 601,
"end_char_pos": 666
},
{
"type": "A",
"before": null,
"after": "claims in single and double barrier varieties",
"start_char_pos": 698,
"end_char_pos": 698
}
]
| [
0,
161,
243,
357,
424,
588
]
|
1508.01869 | 1 | Classic approaches to General Systems Theory often adopt an individual perspective and a limited number of systemic classes. As a result, those classes include a wide number and variety of systems that are result equivalent to each other. This paper introduces a different approach: First, systems belonging to a same class are further differentiated according to five major general characteristics. This introduces a "horizontal dimension" to system classification. A second component of our approach considers systems as nested compositional hierarchies of other sub-systems. The resulting "vertical dimension" further specializes the systemic classes and makes it easier to assess similarities and difference regarding properties such as resilience, performance, and quality-of-experience. Our approach is exemplified by considering a telemonitoring systems designed in the framework of Flemish project . We show how our approach makes it possible to design intelligent environments able to closely follow a system's horizontal and URLanization and to artificially augment its features by serving as crosscutting optimizers and as enablers of antifragile behaviors. | Classic approaches to General Systems Theory often adopt an individual perspective and a limited number of systemic classes. As a result, those classes include a wide number and variety of systems that result equivalent to each other. This paper introduces a different approach: First, systems belonging to a same class are further differentiated according to five major general characteristics. This introduces a "horizontal dimension" to system classification. A second component of our approach considers systems as nested compositional hierarchies of other sub-systems. The resulting "vertical dimension" further specializes the systemic classes and makes it easier to assess similarities and differences regarding properties such as resilience, performance, and quality-of-experience. Our approach is exemplified by considering a telemonitoring system designed in the framework of Flemish project "Little Sister" . We show how our approach makes it possible to design intelligent environments able to closely follow a system's horizontal and URLanization and to artificially augment its features by serving as crosscutting optimizers and as enablers of antifragile behaviors. | [
{
"type": "D",
"before": "are",
"after": null,
"start_char_pos": 202,
"end_char_pos": 205
},
{
"type": "R",
"before": "difference",
"after": "differences",
"start_char_pos": 701,
"end_char_pos": 711
},
{
"type": "R",
"before": "systems",
"after": "system",
"start_char_pos": 853,
"end_char_pos": 860
},
{
"type": "A",
"before": null,
"after": "\"Little Sister\"",
"start_char_pos": 906,
"end_char_pos": 906
}
]
| [
0,
124,
238,
399,
466,
577,
792,
908
]
|
1508.02085 | 1 | The problem of DNA-DNA interaction mediated by divalent counterions is studied using computer simulation . The effect of the counterion size on the condensation behavior of the DNA bundle is investigated. Experimentally, it is known that multivalent counterions has strong effect on the DNA condensation phenomenon. While tri- and tetra-valent counterions are shown to easily condense free DNA molecules in solution into torroidal bundles, the situation with divalent counterions are not as clear cut. Some divalent counterions like Mg^{+2} are not able to condense free DNA molecules in solution, while some like Mn^{+2} can condense them into disorder bundles. In restricted environment such as in two dimensional system or inside viral capsid, Mg^{+2} can have strong effect and able to condense them, but the condensation varies qualitatively with different system, different coions. It has been suggested that divalent counterions can induce attraction between DNA molecules but the strength of the attraction is not strong enough to condense free DNA in solution. However, if the configuration entropy of DNA is restricted, these attractions are enough to cause appreciable effects. The variations among different divalent salts might be due to the hydration effect of the divalent counterions. In this paper, we try to understand this variation using a very simple parameters, the size of the divalent counterions. We investigate how divalent counterions with different sizes can leads to varying qualitative behavior of DNA condensation in restricted environments. | The problem of DNA-DNA interaction mediated by divalent counterions is studied using a generalized Grand-canonical Monte-Carlo simulation for a system of two salts . The effect of the divalent counterion size on the condensation behavior of the DNA bundle is investigated. Experimentally, it is known that multivalent counterions has strong effect on the DNA condensation phenomenon. While tri- and tetra-valent counterions are shown to easily condense free DNA molecules in solution into torroidal bundles, the situation with divalent counterions are not as clear cut. Some divalent counterions like Mg^{+2} are not able to condense free DNA molecules in solution, while some like Mn^{+2} can condense them into disorder bundles. In restricted environment such as in two dimensional system or inside viral capsid, Mg^{+2} can have strong effect and able to condense them, but the condensation varies qualitatively with different system, different coions. It has been suggested that divalent counterions can induce attraction between DNA molecules but the strength of the attraction is not strong enough to condense free DNA in solution. However, if the configuration entropy of DNA is restricted, these attractions are enough to cause appreciable effects. The variations among different divalent salts might be due to the hydration effect of the divalent counterions. In this paper, we try to understand this variation using a very simple parameters, the size of the divalent counterions. We investigate how divalent counterions with different sizes can leads to varying qualitative behavior of DNA condensation in restricted environments. | [
{
"type": "R",
"before": "computer simulation",
"after": "a generalized Grand-canonical Monte-Carlo simulation for a system of two salts",
"start_char_pos": 85,
"end_char_pos": 104
},
{
"type": "A",
"before": null,
"after": "divalent",
"start_char_pos": 125,
"end_char_pos": 125
}
]
| [
0,
106,
205,
316,
502,
663,
888,
1070,
1189,
1301,
1422
]
|
1508.02085 | 2 | The problem of DNA-DNA interaction mediated by divalent counterions is studied using a generalized Grand-canonical Monte-Carlo simulation for a system of two salts. The effect of the divalent counterion size on the condensation behavior of the DNA bundle is investigated. Experimentally, it is known that multivalent counterions has strong effect on the DNA condensation phenomenon. While tri- and tetra-valent counterions are shown to easily condense free DNA molecules in solution into torroidal bundles, the situation with divalent counterions are not as clear cut. Some divalent counterions like Mg^{+2} are not able to condense free DNA molecules in solution, while some like Mn^{+2} can condense them into disorder bundles. In restricted environment such as in two dimensional system or inside viral capsid, Mg^{+2} can have strong effect and able to condense them, but the condensation varies qualitatively with different system, different coions. It has been suggested that divalent counterions can induce attraction between DNA molecules but the strength of the attraction is not strong enough to condense free DNA in solution. However, if the configuration entropy of DNA is restricted, these attractions are enough to cause appreciable effects. The variations among different divalent salts might be due to the hydration effect of the divalent counterions. In this paper, we try to understand this variation using a very simple parameters , the size of the divalent counterions. We investigate how divalent counterions with different sizes can leads to varying qualitative behavior of DNA condensation in restricted environments . | The problem of DNA-DNA interaction mediated by divalent counterions is studied using a generalized Grand-canonical Monte-Carlo simulation for a system of two salts. The effect of the divalent counterion size on the condensation behavior of the DNA bundle is investigated. Experimentally, it is known that multivalent counterions have strong effect on the DNA condensation phenomenon. While tri- and tetra-valent counterions are shown to easily condense free DNA molecules in solution into toroidal bundles, the situation with divalent counterions are not as clear cut. Some divalent counterions like Mg^{+2} are not able to condense free DNA molecules in solution, while some like Mn^{+2} can condense them into disorder bundles. In restricted environment such as in two dimensional system or inside viral capsid, Mg^{+2} can have strong effect and able to condense them, but the condensation varies qualitatively with different system, different coions. It has been suggested that divalent counterions can induce attraction between DNA molecules but the strength of the attraction is not strong enough to condense free DNA in solution. However, if the configuration entropy of DNA is restricted, these attractions are enough to cause appreciable effects. The variations among different divalent salts might be due to the hydration effect of the divalent counterions. In this paper, we try to understand this variation using a very simple parameter , the size of the divalent counterions. We investigate how divalent counterions with different sizes can leads to varying qualitative behavior of DNA condensation in restricted environments . Additionally a Grand canonical Monte-Carlo method for simulation of systems with two different salts is presented in detail . | [
{
"type": "R",
"before": "has",
"after": "have",
"start_char_pos": 329,
"end_char_pos": 332
},
{
"type": "R",
"before": "torroidal",
"after": "toroidal",
"start_char_pos": 488,
"end_char_pos": 497
},
{
"type": "R",
"before": "parameters",
"after": "parameter",
"start_char_pos": 1439,
"end_char_pos": 1449
},
{
"type": "A",
"before": null,
"after": ". Additionally a Grand canonical Monte-Carlo method for simulation of systems with two different salts is presented in detail",
"start_char_pos": 1640,
"end_char_pos": 1640
}
]
| [
0,
164,
271,
382,
568,
729,
954,
1136,
1255,
1367,
1489
]
|
1508.02601 | 1 | miRNAs serve as crucial post-transcriptional regulators of gene expression. Recent experimental studies report that an miRNA and its target mRNA reciprocally regulate each other and miRNA is recycled with a ratio upon degradation of mRNA-miRNA complex. The functionality of this mutual regulation and dynamic consequences of miRNA recycling are not fully understood. Here, we built a set of mathematical models of mRNA-miRNA interactions and systematically analyzed their dynamical responses under various conditions. First, we found that mRNA-miRNA reciprocal regulation manifests great versatility, such as subsensitive activation, ultrasensitive and subsensitive inhibition, depending on parameters such as the miRNA recycle ratio and the mRNA-miRNA complex degradation rate constant. Second, ultrasensitivity from reciprocal mRNA-miRNA regulation contributes to generation of bistability . Furthermore, the degree of ultrasensitivity is amplified by a stronger competing mRNA (ceRNA). Last, multiple miRNA binding sites on a target mRNA leads to emergence of nonmonotonic dual response (duality) and bistability even in the absence of any imposed feedback regulation . Thus, we demonstrated several novel functionalities that can be generated from simple mRNA-miRNA reciprocal regulation , in addition to canonical miRNA mediated degradation and translational repression of mRNA. Quantitative experiments are suggested to test the model predictions . | miRNAs serve as crucial post-transcriptional regulators in various essential cell fate decision. However, the contribution of the mRNA-miRNA mutual regulation to bistability is not fully understood. Here, we built a set of mathematical models of mRNA-miRNA interactions and systematically analyzed the sensitivity of response curves under various conditions. First, we found that mRNA-miRNA reciprocal regulation could manifest ultrasensitivity to subserve the generation of bistability when equipped with a positive feedback loop. Second, the region of bistability is expanded by a stronger competing mRNA (ceRNA). Interesting, bistability can be emerged without feedback loop if multiple miRNA binding sites exist on a target mRNA . Thus, we demonstrated the importance of simple mRNA-miRNA reciprocal regulation in cell fate decision . | [
{
"type": "R",
"before": "of gene expression. Recent experimental studies report that an miRNA and its target mRNA reciprocally regulate each other and miRNA is recycled with a ratio upon degradation of",
"after": "in various essential cell fate decision. However, the contribution of the",
"start_char_pos": 56,
"end_char_pos": 232
},
{
"type": "R",
"before": "complex. The functionality of this mutual regulation and dynamic consequences of miRNA recycling are",
"after": "mutual regulation to bistability is",
"start_char_pos": 244,
"end_char_pos": 344
},
{
"type": "R",
"before": "their dynamical responses",
"after": "the sensitivity of response curves",
"start_char_pos": 466,
"end_char_pos": 491
},
{
"type": "R",
"before": "manifests great versatility, such as subsensitive activation, ultrasensitive and subsensitive inhibition, depending on parameters such as the miRNA recycle ratio and the mRNA-miRNA complex degradation rate constant. Second, ultrasensitivity from reciprocal mRNA-miRNA regulation contributes to",
"after": "could manifest ultrasensitivity to subserve the",
"start_char_pos": 572,
"end_char_pos": 865
},
{
"type": "R",
"before": ". Furthermore, the degree of ultrasensitivity is amplified",
"after": "when equipped with a positive feedback loop. Second, the region of bistability is expanded",
"start_char_pos": 892,
"end_char_pos": 950
},
{
"type": "R",
"before": "Last,",
"after": "Interesting, bistability can be emerged without feedback loop if",
"start_char_pos": 989,
"end_char_pos": 994
},
{
"type": "A",
"before": null,
"after": "exist",
"start_char_pos": 1024,
"end_char_pos": 1024
},
{
"type": "D",
"before": "leads to emergence of nonmonotonic dual response (duality) and bistability even in the absence of any imposed feedback regulation",
"after": null,
"start_char_pos": 1042,
"end_char_pos": 1171
},
{
"type": "R",
"before": "several novel functionalities that can be generated from",
"after": "the importance of",
"start_char_pos": 1196,
"end_char_pos": 1252
},
{
"type": "R",
"before": ", in addition to canonical miRNA mediated degradation and translational repression of mRNA. Quantitative experiments are suggested to test the model predictions",
"after": "in cell fate decision",
"start_char_pos": 1293,
"end_char_pos": 1453
}
]
| [
0,
75,
252,
366,
517,
787,
893,
988,
1173
]
|
1508.02786 | 1 | During cell migration, cells become polarized, change their shape, and move in response to various cues, both internal and external . Many existing mathematical models of cell polarization are formulated in one or two spatial dimensions and hence cannot accurately capture the effect of cell shape, as well as the response of the cell to signals from different directions in a three-dimensional environment. To study those effects , we introduce a three-dimensional reaction-diffusion model of a cell. As some key molecules in cell polarization, such as the small GTPases, can exist both membrane bound and soluble in the cytosol, we first look at the role of cell geometry on the membrane binding/unbinding dynamics of such molecules. We derive quite general conditions under which effective existing one or two-dimensional computational models are valid, and find novel renormalizations of parameters in the effective model. We then extend an established one-dimensional cell polarization pathway in our three-dimensional framework. Our simulations indicate that even in some quasi-one-dimensional scenarios, such as polarization of a cell along a linear growth factor gradient, the cell shape can influence the polarization behavior of the cell, with cells of some shape polarizing more efficiently than those of other shapes. We also investigate the role of the previously ignored membrane unbinding rate on polarization. Furthermore, we simulate the response of the cell when the external signal is changing directions, and we find that more symmetric cells can change their polarized state more effectively towards the new stimulus than cells which are elongated along the direction of the original stimulus . | During cell migration, cells become polarized, change their shape, and move in response to various internal and external cues. Cell polarization is defined through the URLanization of molecules such as PI3K or small GTPases, and is determined by intracellular signaling networks. It results in directional forces through actin polymerization and myosin contractions. Many existing mathematical models of cell polarization are formulated in terms of reaction-diffusion systems of interacting molecules, and are often defined in one or two spatial dimensions . In this paper , we introduce a 3D reaction-diffusion model of interacting molecules in a single cell, and find that cell geometry has an important role affecting the capability of a cell to polarize, or change polarization when an external signal changes direction. Our results suggest a geometrical argument why more roundish cells can repolarize more effectively than cells which are elongated along the direction of the original stimulus , and thus enable roundish cells to turn faster, as has been observed in experiments. On the other hand, elongated cells preferentially polarize along their main axis even when a gradient stimulus appears from another direction. Furthermore, our 3D model can accurately capture the effect of binding and unbinding of important regulators of cell polarization to and from the cell membrane. This spatial separation of membrane and cytosol, not possible to capture in 1D or 2D models, leads to marked differences of our model from comparable lower-dimensional models . | [
{
"type": "D",
"before": "cues, both",
"after": null,
"start_char_pos": 99,
"end_char_pos": 109
},
{
"type": "R",
"before": ".",
"after": "cues. Cell polarization is defined through the URLanization of molecules such as PI3K or small GTPases, and is determined by intracellular signaling networks. It results in directional forces through actin polymerization and myosin contractions.",
"start_char_pos": 132,
"end_char_pos": 133
},
{
"type": "A",
"before": null,
"after": "terms of reaction-diffusion systems of interacting molecules, and are often defined in",
"start_char_pos": 207,
"end_char_pos": 207
},
{
"type": "R",
"before": "and hence cannot accurately capture the effect of cell shape, as well as the response of the cell to signals from different directions in a three-dimensional environment. To study those effects",
"after": ". In this paper",
"start_char_pos": 238,
"end_char_pos": 431
},
{
"type": "R",
"before": "three-dimensional",
"after": "3D",
"start_char_pos": 449,
"end_char_pos": 466
},
{
"type": "R",
"before": "a cell. As some key molecules in",
"after": "interacting molecules in a single cell, and find that cell geometry has an important role affecting the capability of a",
"start_char_pos": 495,
"end_char_pos": 527
},
{
"type": "R",
"before": "polarization, such as the small GTPases, can exist both membrane bound and soluble in the cytosol, we first look at the role of cell geometry on the membrane binding/unbinding dynamics of such molecules. We derive quite general conditions under which effective existing one or two-dimensional computational models are valid, and find novel renormalizations of parameters in the effective model. We then extend an established one-dimensional cell polarization pathway in our three-dimensional framework. Our simulations indicate that even in some quasi-one-dimensional scenarios, such as polarization of a cell along a linear growth factor gradient, the cell shape can influence the polarization behavior of the cell, with cells of some shape polarizing more efficiently than those of other shapes. We also investigate the role of the previously ignored membrane unbinding rate on polarization. Furthermore, we simulate the response of the cell when the external signal is changing directions, and we find that more symmetric cells can change their polarized state more effectively towards the new stimulus",
"after": "to polarize, or change polarization when an external signal changes direction. Our results suggest a geometrical argument why more roundish cells can repolarize more effectively",
"start_char_pos": 533,
"end_char_pos": 1638
},
{
"type": "A",
"before": null,
"after": ", and thus enable roundish cells to turn faster, as has been observed in experiments. On the other hand, elongated cells preferentially polarize along their main axis even when a gradient stimulus appears from another direction. Furthermore, our 3D model can accurately capture the effect of binding and unbinding of important regulators of cell polarization to and from the cell membrane. This spatial separation of membrane and cytosol, not possible to capture in 1D or 2D models, leads to marked differences of our model from comparable lower-dimensional models",
"start_char_pos": 1715,
"end_char_pos": 1715
}
]
| [
0,
133,
408,
502,
736,
927,
1035,
1330,
1426
]
|
1508.02824 | 1 | Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by asymptotic normality, and (2) value-at-risk (VaR) stability as a function of sample-size. Finally, we discuss implications for operational risk modeling . | Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. In this paper, we study how asymptotic normality does--or does not--hold for common severity distributions in operational risk models . We then apply these results to evaluate errors caused by failure of asymptotic normality in constructing confidence intervals around the MLE fitted parameters . | [
{
"type": "R",
"before": "We study MLE",
"after": "In this paper, we study how asymptotic normality does--or does not--hold for common severity distributions",
"start_char_pos": 293,
"end_char_pos": 305
},
{
"type": "R",
"before": "for small sample-sizes across a range of loss severity distributions. We",
"after": ". We then",
"start_char_pos": 333,
"end_char_pos": 405
},
{
"type": "R",
"before": "assess (1) the approximation of parameter confidence intervals by asymptotic normality, and (2) value-at-risk (VaR) stability as a function of sample-size. Finally, we discuss implications for operational risk modeling",
"after": "evaluate errors caused by failure of asymptotic normality in constructing confidence intervals around the MLE fitted parameters",
"start_char_pos": 429,
"end_char_pos": 647
}
]
| [
0,
123,
292,
402,
584
]
|
1508.03282 | 1 | We provide a general account of the strong predictable representation property in filtrations initially enlarged with a random variable L. We prove that the strong predictable representation property can always be transferred to the enlarged filtration as long as the classical density hypothesis of Jacod (1985) holds. This generalizes the existing martingale representation results and does not rely on the equivalence between the conditional and the unconditional law of L . The results are illustrated in the context of hedging contingent claims under insider information. | We study the strong predictable representation property in filtrations initially enlarged with a random variable L. We prove that the strong predictable representation property can always be transferred to the enlarged filtration as long as the classical density hypothesis of Jacod (1985) holds. This generalizes the existing martingale representation results and does not rely on the equivalence between the conditional and the unconditional laws of L. Depending on the behavior of the density process at zero, different forms of martingale representation are established . The results are illustrated in the context of hedging contingent claims under insider information. | [
{
"type": "R",
"before": "provide a general account of",
"after": "study",
"start_char_pos": 3,
"end_char_pos": 31
},
{
"type": "R",
"before": "law of L",
"after": "laws of L. Depending on the behavior of the density process at zero, different forms of martingale representation are established",
"start_char_pos": 467,
"end_char_pos": 475
}
]
| [
0,
319
]
|
1508.03373 | 1 | In this work , we use Martingale theory to derive formulas for the expected decision time, error rates , and first passage times associated with a multistage drift diffusion model, or a Wiener diffusion model with piecewise constant time-varying drift rates and decision boundaries. The model we study is a generalization of that considered in Ratcliff (1980) . The derivation relies on using the optional stopping theorem for properly chosen Martingales, thus obtaining formulae which may be used to compute performance metrics for a particular stage of the stochastic decision process. We also explicitly solve the case of a two stage diffusion model, and provide numerical demonstrations of the computations suggested by our analysis. Finally we present calculations that allow our techniques to approximate time-varying Ornstein-Uhlenbeck processes . By presenting these explicit formulae, we aim to foster the development of refined numerical methods and analytical techniques for studying diffusion decision processes with time-varying drift rates and thresholds . | Research in psychology and neuroscience has successfully modeled decision making as a process of noisy evidence accumulation to a decision bound. While there are several variants and implementations of this idea, the majority of these models make use of a noisy accumulation between two absorbing boundaries. A common assumption of these models is that decision parameters, e.g., the rate of accumulation (drift rate), remain fixed over the course of a decision, allowing the derivation of analytic formulas for the probabilities of hitting the upper or lower decision threshold, and the mean decision time. There is reason to believe, however, that many types of behavior would be better described by a model in which the parameters were allowed to vary over the course of the decision process. In this paper , we use martingale theory to derive formulas for the mean decision time, hitting probabilities , and first passage time (FPT) densities of a Wiener process with time-varying drift between two time-varying absorbing boundaries. This model was first studied by Ratcliff (1980) in the two-stage form, and here we consider the same model for an arbitrary number of stages (i.e. intervals of time during which parameters are constant). Our calculations enable direct computation of mean decision times and hitting probabilities for the associated multistage process. We also provide a review of how martingale theory may be used to analyze similar models employing Wiener processes by re-deriving some classical results. In concert with a variety of numerical tools already available, the current derivations should encourage mathematical analysis of more complex models of decision making with time-varying evidence . | [
{
"type": "R",
"before": "In this work",
"after": "Research in psychology and neuroscience has successfully modeled decision making as a process of noisy evidence accumulation to a decision bound. While there are several variants and implementations of this idea, the majority of these models make use of a noisy accumulation between two absorbing boundaries. A common assumption of these models is that decision parameters, e.g., the rate of accumulation (drift rate), remain fixed over the course of a decision, allowing the derivation of analytic formulas for the probabilities of hitting the upper or lower decision threshold, and the mean decision time. There is reason to believe, however, that many types of behavior would be better described by a model in which the parameters were allowed to vary over the course of the decision process. In this paper",
"start_char_pos": 0,
"end_char_pos": 12
},
{
"type": "R",
"before": "Martingale",
"after": "martingale",
"start_char_pos": 22,
"end_char_pos": 32
},
{
"type": "R",
"before": "expected",
"after": "mean",
"start_char_pos": 67,
"end_char_pos": 75
},
{
"type": "R",
"before": "error rates",
"after": "hitting probabilities",
"start_char_pos": 91,
"end_char_pos": 102
},
{
"type": "R",
"before": "times associated with a multistage drift diffusion model, or a Wiener diffusion model with piecewise constant",
"after": "time (FPT) densities of a Wiener process with",
"start_char_pos": 123,
"end_char_pos": 232
},
{
"type": "R",
"before": "rates and decision boundaries. The model we study is a generalization of that considered in",
"after": "between two time-varying absorbing boundaries. This model was first studied by",
"start_char_pos": 252,
"end_char_pos": 343
},
{
"type": "R",
"before": ". The derivation relies on using the optional stopping theorem for properly chosen Martingales, thus obtaining formulae which may be used to compute performance metrics for a particular stage of the stochastic decision",
"after": "in the two-stage form, and here we consider the same model for an arbitrary number of stages (i.e. intervals of time during which parameters are constant). Our calculations enable direct computation of mean decision times and hitting probabilities for the associated multistage",
"start_char_pos": 360,
"end_char_pos": 578
},
{
"type": "R",
"before": "explicitly solve the case of a two stage diffusion model, and provide numerical demonstrations of the computations suggested by our analysis. Finally we present calculations that allow our techniques to approximate time-varying Ornstein-Uhlenbeck processes . By presenting these explicit formulae, we aim to foster the development of refined numerical methods and analytical techniques for studying diffusion decision processes",
"after": "provide a review of how martingale theory may be used to analyze similar models employing Wiener processes by re-deriving some classical results. In concert with a variety of numerical tools already available, the current derivations should encourage mathematical analysis of more complex models of decision making",
"start_char_pos": 596,
"end_char_pos": 1023
},
{
"type": "R",
"before": "drift rates and thresholds",
"after": "evidence",
"start_char_pos": 1042,
"end_char_pos": 1068
}
]
| [
0,
282,
361,
587,
737
]
|
1508.03533 | 2 | Since 2007, several contributions have tried to identify early-warning signals of the financial crisis. However, the vast majority of analyses has focused , so far, on financial systems and little theoretical work has been done on the economic counterpart. In the present paper we fill this gap and employ the theoretical tools of network theory to shed light on the response of world trade to the financial crisis of 2007 and the economic recession of 2008-2009. We have explored the evolution of the bipartite World Trade Web (WTW) across the years 1995-2010, monitoring the behaviour of the system both before and after 2007. Remarkably, our results point out the presence of early structural changes in the WTW topology: from 2003 on , the WTW becomes more and more compatible with the picture of a network where correlations between countries and products are progressively lost. Moreover, the most evident modification in the structure of the world trade network can be considered as concluded in 2010, after a seemingly stationary phase of three years. We have also refined our analysis by considering specific subsets of countries and products: according to our analysis, the most statistically significant early-warning signals are provided by the most volatile macrosectors, especially when measured on emerging economies , suggesting the latter as the most sensitive indicators of the WTW health . | Since 2007, several contributions have tried to identify early-warning signals of the financial crisis. However, the vast majority of analyses has focused on financial systems and little theoretical work has been done on the economic counterpart. In the present paper we fill this gap and employ the theoretical tools of network theory to shed light on the response of world trade to the financial crisis of 2007 and the economic recession of 2008-2009. We have explored the evolution of the bipartite World Trade Web (WTW) across the years 1995-2010, monitoring the behavior of the system both before and after 2007. Our analysis shows early structural changes in the WTW topology: since 2003 , the WTW becomes increasingly compatible with the picture of a network where correlations between countries and products are progressively lost. Moreover, the WTW structural modification can be considered as concluded in 2010, after a seemingly stationary phase of three years. We have also refined our analysis by considering specific subsets of countries and products: the most statistically significant early-warning signals are provided by the most volatile macrosectors, especially when measured on developing countries , suggesting the emerging economies as being the most sensitive ones to the global economic cycles . | [
{
"type": "D",
"before": ", so far,",
"after": null,
"start_char_pos": 155,
"end_char_pos": 164
},
{
"type": "R",
"before": "behaviour",
"after": "behavior",
"start_char_pos": 577,
"end_char_pos": 586
},
{
"type": "R",
"before": "Remarkably, our results point out the presence of",
"after": "Our analysis shows",
"start_char_pos": 629,
"end_char_pos": 678
},
{
"type": "R",
"before": "from",
"after": "since",
"start_char_pos": 725,
"end_char_pos": 729
},
{
"type": "D",
"before": "on",
"after": null,
"start_char_pos": 735,
"end_char_pos": 737
},
{
"type": "R",
"before": "more and more",
"after": "increasingly",
"start_char_pos": 756,
"end_char_pos": 769
},
{
"type": "R",
"before": "most evident modification in the structure of the world trade network",
"after": "WTW structural modification",
"start_char_pos": 899,
"end_char_pos": 968
},
{
"type": "D",
"before": "according to our analysis,",
"after": null,
"start_char_pos": 1153,
"end_char_pos": 1179
},
{
"type": "R",
"before": "emerging economies",
"after": "developing countries",
"start_char_pos": 1313,
"end_char_pos": 1331
},
{
"type": "R",
"before": "latter as",
"after": "emerging economies as being",
"start_char_pos": 1349,
"end_char_pos": 1358
},
{
"type": "R",
"before": "indicators of the WTW health",
"after": "ones to the global economic cycles",
"start_char_pos": 1378,
"end_char_pos": 1406
}
]
| [
0,
103,
256,
463,
628,
884,
1059
]
|
1508.03677 | 1 | Commodity prices depend on supply and demand. With an uneven distribution of resources, prices are high at locations starved of commodity and low where it is abundant. We introduce an agent-based model in which agents set their prices to maximize profit. At steady state , the market URLanizes into three groups: excess producers, consumers , and balanced agents . When resources are scarce , prices rise sharply at a turning point due to the disappearance of excess producers. Market dataof commodities provide evidence of turning pointsfor essential commodities, as well as a yield point for non-essential ones . | We introduce an agent-based model , in which agents set their prices to maximize profit. At steady state the market URLanizes into three groups: excess producers, consumers and balanced agents , with prices determined by their own resource level and a couple of macroscopic parameters that emerge naturally from the analysis, akin to mean-field parameters in statistical mechanics . When resources are scarce prices rise sharply below a turning point that marks the disappearance of excess producers. To compare the model with real empirical data, we study the relations between commodity prices and stock-to-use ratios of a range of commodities such as agricultural products and metals. By introducing an elasticity parameter to mitigate noise and long-term changes in commodities data, we confirm the trend of rising prices, provide evidence for turning points, and indicate yield points for less essential commodities . | [
{
"type": "D",
"before": "Commodity prices depend on supply and demand. With an uneven distribution of resources, prices are high at locations starved of commodity and low where it is abundant.",
"after": null,
"start_char_pos": 0,
"end_char_pos": 167
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 202,
"end_char_pos": 202
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 272,
"end_char_pos": 273
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 342,
"end_char_pos": 343
},
{
"type": "A",
"before": null,
"after": ", with prices determined by their own resource level and a couple of macroscopic parameters that emerge naturally from the analysis, akin to mean-field parameters in statistical mechanics",
"start_char_pos": 364,
"end_char_pos": 364
},
{
"type": "D",
"before": ",",
"after": null,
"start_char_pos": 393,
"end_char_pos": 394
},
{
"type": "R",
"before": "at",
"after": "below",
"start_char_pos": 415,
"end_char_pos": 417
},
{
"type": "R",
"before": "due to",
"after": "that marks",
"start_char_pos": 434,
"end_char_pos": 440
},
{
"type": "R",
"before": "Market dataof commodities provide evidence of turning pointsfor essential commodities, as well as a yield point for non-essential ones",
"after": "To compare the model with real empirical data, we study the relations between commodity prices and stock-to-use ratios of a range of commodities such as agricultural products and metals. By introducing an elasticity parameter to mitigate noise and long-term changes in commodities data, we confirm the trend of rising prices, provide evidence for turning points, and indicate yield points for less essential commodities",
"start_char_pos": 480,
"end_char_pos": 614
}
]
| [
0,
45,
167,
255,
366,
479
]
|
1508.04122 | 1 | The temperature dependence of the DNA flexibility is studied in the presence of stretching and unzipping forces. Two classes of models are considered. In one case the origin of elasticity is entropic due to the polymeric correlations, and in the other double stranded DNA is taken to have an intrinsic rigidity for bending. In both cases single strands are completely flexible. The change in the elastic constant for the flexible case is shown to be due to the thermally generated bubbles . For the case of intrinsic rigidity, the elastic constant is found to be proportional to the bubble number fluctuation. | The temperature dependence of the DNA flexibility is studied in the presence of stretching and unzipping forces. Two classes of models are considered. In one case the origin of elasticity is entropic due to the polymeric correlations, and in the other the double stranded DNA is taken to have an intrinsic rigidity for bending. In both cases single strands are completely flexible. The change in the elastic constant for the flexible case due to the thermally generated bubbles has been obtained exactly . For the case of intrinsic rigidity, the elastic constant is found to be proportional to the bubble number fluctuation. | [
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 252,
"end_char_pos": 252
},
{
"type": "D",
"before": "is shown to be",
"after": null,
"start_char_pos": 436,
"end_char_pos": 450
},
{
"type": "A",
"before": null,
"after": "has been obtained exactly",
"start_char_pos": 490,
"end_char_pos": 490
}
]
| [
0,
112,
150,
324,
378,
492
]
|
1508.04122 | 2 | The temperature dependence of the DNA flexibility is studied in the presence of stretching and unzipping forces. Two classes of models are considered. In one case the origin of elasticity is entropic due to the polymeric correlations, and in the other the double stranded DNA is taken to have an intrinsic rigidity for bending. In both cases single strands are completely flexible. The change in the elastic constant for the flexible case due to the thermally generated bubbles has been obtained exactly. For the case of intrinsic rigidity, the elastic constant is found to be proportional to the bubble number fluctuation. | The temperature dependence of DNA flexibility is studied in the presence of stretching and unzipping forces. Two classes of models are considered. In one case the origin of elasticity is entropic due to the polymeric correlations, and in the other the double-stranded DNA is taken to have an intrinsic rigidity for bending. In both cases single strands are completely flexible. The change in the elastic constant for the flexible case due to thermally generated bubbles is obtained exactly. For the case of intrinsic rigidity, the elastic constant is found to be proportional to the square root of the bubble number fluctuation. | [
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 30,
"end_char_pos": 33
},
{
"type": "R",
"before": "double stranded",
"after": "double-stranded",
"start_char_pos": 256,
"end_char_pos": 271
},
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 446,
"end_char_pos": 449
},
{
"type": "R",
"before": "has been",
"after": "is",
"start_char_pos": 478,
"end_char_pos": 486
},
{
"type": "A",
"before": null,
"after": "the square root of",
"start_char_pos": 593,
"end_char_pos": 593
}
]
| [
0,
112,
150,
327,
381,
504
]
|
1508.04332 | 1 | In this paper we seek to demonstrate the predictability of stock market returns and explain the nature of this return predictability. To this end, we further develop the news-driven analytic model of the stock market derived in Gusev et al. (2015). This enables us to capture market dynamics at various timescalesand shed light on mechanisms underlying certain market behaviors such as transitions between bull- and bear markets and the self-similar behavior of price changes. We investigate the model and show that the market is nearly efficient on timescales shorter than one day , adjusting quickly to incoming news, but is inefficient on longer timescales, where news may have a long-lasting nonlinear impact on dynamics attributable to a feedback mechanism acting over these horizons. Using the model, we design the prototypes of algorithmic strategies that utilize news flow, quantified and measured, as the only input to trade on market return forecasts over multiple horizons, from days to months. The backtested results suggest that the return is predictable to the extent that successful trading strategies can be constructed to harness this predictability. | In this paper we seek to demonstrate the predictability of stock market returns and explain the nature of this return predictability. To this end, we introduce investors with different investment horizons into the news-driven , analytic, agent-based market model developed in Gusev et al. (2015). This heterogeneous framework enables us to capture dynamics at multiple timescales, expanding the model's applications and improving precision. We study the heterogeneous model theoretically and empirically to highlight essential mechanisms underlying certain market behaviors , such as transitions between bull- and bear markets and the self-similar behavior of price changes. Most importantly, we apply this model to show that the stock market is nearly efficient on intraday timescales , adjusting quickly to incoming news, but becomes inefficient on longer timescales, where news may have a long-lasting nonlinear impact on dynamics , attributable to a feedback mechanism acting over these horizons. Then, using the model, we design algorithmic strategies that utilize news flow, quantified and measured, as the only input to trade on market return forecasts over multiple horizons, from days to months. The backtested results suggest that the return is predictable to the extent that successful trading strategies can be constructed to harness this predictability. | [
{
"type": "R",
"before": "further develop",
"after": "introduce investors with different investment horizons into",
"start_char_pos": 150,
"end_char_pos": 165
},
{
"type": "R",
"before": "analytic model of the stock market derived",
"after": ", analytic, agent-based market model developed",
"start_char_pos": 182,
"end_char_pos": 224
},
{
"type": "A",
"before": null,
"after": "heterogeneous framework",
"start_char_pos": 254,
"end_char_pos": 254
},
{
"type": "R",
"before": "market dynamics at various timescalesand shed light on",
"after": "dynamics at multiple timescales, expanding the model's applications and improving precision. We study the heterogeneous model theoretically and empirically to highlight essential",
"start_char_pos": 277,
"end_char_pos": 331
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 379,
"end_char_pos": 379
},
{
"type": "R",
"before": "We investigate the model and",
"after": "Most importantly, we apply this model to",
"start_char_pos": 479,
"end_char_pos": 507
},
{
"type": "A",
"before": null,
"after": "stock",
"start_char_pos": 522,
"end_char_pos": 522
},
{
"type": "R",
"before": "timescales shorter than one day",
"after": "intraday timescales",
"start_char_pos": 553,
"end_char_pos": 584
},
{
"type": "R",
"before": "is",
"after": "becomes",
"start_char_pos": 627,
"end_char_pos": 629
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 728,
"end_char_pos": 728
},
{
"type": "R",
"before": "Using",
"after": "Then, using",
"start_char_pos": 794,
"end_char_pos": 799
},
{
"type": "D",
"before": "the prototypes of",
"after": null,
"start_char_pos": 821,
"end_char_pos": 838
}
]
| [
0,
133,
248,
478,
793,
1009
]
|
1508.04754 | 1 | We study the performance of the euro/Swiss franc exchange rate in the extraordinary period from September 6, 2011 and January 15, 2015 when the Swiss National Bank enforced a minimum exchange rate of 1.20 Swiss francs per euro. Within the general framework built on geometric Brownian motions (GBM) , the first-order effect of such a steric constraint would enter a priori in the form of a repulsive entropic force associated with the paths crossing the barrier that are forbidden. It turns out that this naive theory is proved empirically to be completely mistaken . The clue is to realise that the random walk nature of financial prices results from the continuous anticipations of traders about future opportunities, whose aggregate actions translate into an approximate efficient market with almost no arbitrage opportunities. With the Swiss National Bank stated commitment to enforce the barrier, traders's anticipation of this action leads to a volatility of the exchange rate that depends on the distance to the barrier. This effect described by Krugman's model is supported by non-parametric measurements of the conditional drift and volatility from the data. Despite the obvious differences between "brainless" physical Brownian motions and complex financial Brownian motions resulting from the aggregated investments of anticipating agents, we show that ] the two systems can be described with the same mathematics after all. Using a recently proposed extended analogy in terms of a colloidal Brownian particle embedded in a fluid of molecules associated with the underlying order book, we derive that, close to the restricting boundary, the dynamics of both systems is described by a stochastic differential equation with a very small constant drift and a linear diffusion coefficient. | We study the performance of the euro/Swiss franc exchange rate in the extraordinary period from September 6, 2011 and January 15, 2015 when the Swiss National Bank enforced a minimum exchange rate of 1.20 Swiss francs per euro. Based on the analogy between Brownian motion in finance and physics , the first-order effect of such a steric constraint would enter a priori in the form of a repulsive entropic force associated with the paths crossing the barrier that are forbidden. Non-parametric empirical estimates of drift and volatility show that the predicted first-order analogy between economics and physics are incorrect . The clue is to realise that the random walk nature of financial prices results from the continuous anticipations of traders about future opportunities, whose aggregate actions translate into an approximate efficient market with almost no arbitrage opportunities. With the Swiss National Bank stated commitment to enforce the barrier, traders's anticipation of this action leads to a vanishing drift together with a volatility of the exchange rate that depends on the distance to the barrier. We give direct quantitative empirical evidence that this effect is well described by Krugman's target zone model P.R. Krugman. The Quarterly Journal of Economics, 106(3):669-682, 1991]. Motivated by the insights from this economical model, we revise the initial economics-physics analogy and show that, within the context of hindered diffusion, the two systems can be described with the same mathematics after all. 
Using a recently proposed extended analogy in terms of a colloidal Brownian particle embedded in a fluid of molecules associated with the underlying order book, we derive that, close to the restricting boundary, the dynamics of both systems is described by a stochastic differential equation with a very small constant drift and a linear diffusion coefficient. | [
{
"type": "R",
"before": "Within the general framework built on geometric Brownian motions (GBM)",
"after": "Based on the analogy between Brownian motion in finance and physics",
"start_char_pos": 228,
"end_char_pos": 298
},
{
"type": "R",
"before": "It turns out that this naive theory is proved empirically to be completely mistaken",
"after": "Non-parametric empirical estimates of drift and volatility show that the predicted first-order analogy between economics and physics are incorrect",
"start_char_pos": 482,
"end_char_pos": 565
},
{
"type": "A",
"before": null,
"after": "vanishing drift together with a",
"start_char_pos": 951,
"end_char_pos": 951
},
{
"type": "R",
"before": "This effect",
"after": "We give direct quantitative empirical evidence that this effect is well",
"start_char_pos": 1029,
"end_char_pos": 1040
},
{
"type": "R",
"before": "model is supported by non-parametric measurements of the conditional drift and volatility from the data. Despite the obvious differences between \"brainless\" physical Brownian motions and complex financial Brownian motions resulting from the aggregated investments of anticipating agents, we show that",
"after": "target zone model",
"start_char_pos": 1064,
"end_char_pos": 1364
},
{
"type": "A",
"before": null,
"after": "P.R. Krugman. The Quarterly Journal of Economics, 106(3):669-682, 1991",
"start_char_pos": 1365,
"end_char_pos": 1365
},
{
"type": "A",
"before": null,
"after": ". Motivated by the insights from this economical model, we revise the initial economics-physics analogy and show that, within the context of hindered diffusion,",
"start_char_pos": 1366,
"end_char_pos": 1366
}
]
| [
0,
227,
481,
567,
830,
1028,
1168,
1436
]
|
1508.05233 | 1 | We study the problem of super-replication of game options in general stochastic volatility models which include e. g. the Heston model, the Hull-White model and the Scott model. For simplicity, we consider models with one risky asset. We show that the super-replication price is the cheapest cost of a trivial super-replication strategy. Furthermore, we calculate explicitly the super-replication price and the corresponding optimal hedge. The super-replication price can be seen as the game variant of a concave envelope . Our approach is purely probabilistic. | In this work we introduce the notion of extremely incomplete markets. We prove that for these markets the super-replication price coincide with the model free super-replication price. Namely, the knowledge of the model does not reduce the super-replication price . We provide two families of extremely incomplete models: stochastic volatility models and rough volatility models. Moreover, we give several computational examples . Our approach is purely probabilistic. | [
{
"type": "R",
"before": "We study the problem of super-replication of game options in general stochastic volatility models which include e. g. the Heston model, the Hull-White model and the Scott model. For simplicity, we consider models with one risky asset. We show that",
"after": "In this work we introduce the notion of extremely incomplete markets. We prove that for these markets",
"start_char_pos": 0,
"end_char_pos": 247
},
{
"type": "R",
"before": "is the cheapest cost of a trivial",
"after": "coincide with the model free",
"start_char_pos": 276,
"end_char_pos": 309
},
{
"type": "R",
"before": "strategy. Furthermore, we calculate explicitly the",
"after": "price. Namely, the knowledge of the model does not reduce the",
"start_char_pos": 328,
"end_char_pos": 378
},
{
"type": "D",
"before": "price and the corresponding optimal hedge. The super-replication",
"after": null,
"start_char_pos": 397,
"end_char_pos": 461
},
{
"type": "R",
"before": "can be seen as the game variant of a concave envelope",
"after": ". We provide two families of extremely incomplete models: stochastic volatility models and rough volatility models. Moreover, we give several computational examples",
"start_char_pos": 468,
"end_char_pos": 521
}
]
| [
0,
177,
234,
337,
439,
523
]
|
1508.05233 | 2 | In this work we introduce the notion of extremely incomplete markets. We prove that for these markets the super-replication price coincide with the model free super-replication price. Namely, the knowledge of the model does not reduce the super-replication price. We provide two families of extremely incomplete models: stochastic volatility models and rough volatility models. Moreover, we give several computational examples. Our approach is purely probabilistic. | In this work we introduce the notion of fully incomplete markets. We prove that for these markets the super-replication price coincide with the model free super-replication price. Namely, the knowledge of the model does not reduce the super-replication price. We provide two families of fully incomplete models: stochastic volatility models and rough volatility models. Moreover, we give several computational examples. Our approach is purely probabilistic. | [
{
"type": "R",
"before": "extremely",
"after": "fully",
"start_char_pos": 40,
"end_char_pos": 49
},
{
"type": "R",
"before": "extremely",
"after": "fully",
"start_char_pos": 291,
"end_char_pos": 300
}
]
| [
0,
69,
183,
263,
377,
427
]
|
1508.05241 | 1 | Studying Binomial as well as Normal return dynamics in discrete time, we explain how , in zero-growth environments, trading strategies can be found which generate exponential growthof wealth. We include numerical results for simulated and real world processes confirming the observed phenomena while also highlighting implicit risks. | Studying Binomial and Gaussian return dynamics in discrete time, we show how excess volatility can be traded to create growth. We test our results on real world data to confirm the observed model phenomena while also highlighting implicit risks. | [
{
"type": "R",
"before": "as well as Normal",
"after": "and Gaussian",
"start_char_pos": 18,
"end_char_pos": 35
},
{
"type": "R",
"before": "explain how , in zero-growth environments, trading strategies can be found which generate exponential growthof wealth. We include numerical results for simulated and real world processes confirming the observed",
"after": "show how excess volatility can be traded to create growth. We test our results on real world data to confirm the observed model",
"start_char_pos": 73,
"end_char_pos": 283
}
]
| [
0,
191
]
|
1508.05751 | 1 | Do judicial decisions affect the securities markets in discernible and perhaps predictable ways? In other words, is there "law on the market" (LOTM)? This is a question that has been raised by commentators, but answered by very few in a systematic and financially rigorous manner. Using intraday data and a multiday event window, this large scale event study seeks to determine the existence, frequency and magnitude of equity market impacts flowing from Supreme Court decisions. We demonstrate that , while certainly not present in every case, "law on the market" events are fairly common. Across all cases decided by the Supreme Court of the United States between the 1999-2013 terms, we identify 79 cases where the share price of one or more publicly traded company moved in direct response to a Supreme Court decision. In the aggregate, over fifteen years, Supreme Court decisions were responsible for more than 140 billion dollars in absolute changes in wealth. Our analysis not only contributes to our understanding of the political economy of judicial decision making, but also links to the broader set of research exploring the performance in financial markets using event study methods. We conclude by exploring the informational efficiency of law as a market by highlighting the speed at which information from Supreme Court decisions is assimilated by the market. Relatively speaking, LOTM events have historically exhibited slow rates of information incorporation for affected securities. This implies a market ripe for arbitrage where an event-based trading strategy could be successful . | What happens when the Supreme Court of the United States decides a case impacting one or more publicly-traded firms? While many have observed anecdotal evidence linking decisions or oral arguments to abnormal stock returns, few have rigorously or systematically investigated the behavior of equities around Supreme Court actions. In this research, we present the first comprehensive, longitudinal study on the topic, spanning over 15 years and hundreds of cases and firms. Using both intra- and interday data around decisions and oral arguments, we evaluate the frequency and magnitude of statistically-significant abnormal return events after Supreme Court action. On a per-term basis, we find 5.3 cases and 7.8 stocks that exhibit abnormal returns after decision. In total, across the cases we examined, we find 79 out of the 211 cases (37\%) exhibit an average abnormal return of 4.4\% over a two-session window with an average |t|-statistic of 2.9. Finally, we observe that abnormal returns following Supreme Court decisions materialize over the span of hours and days, not minutes, yielding strong implications for market efficiency in this context. While we cannot causally separate substantive legal impact from mere revision of beliefs, we do find strong evidence that there is indeed a "law on the market" effect as measured by the frequency of abnormal return events, and that these abnormal returns are not immediately incorporated into prices . | [
{
"type": "R",
"before": "Do judicial decisions affect the securities markets in discernible and perhaps predictable ways? In other words, is there \"law on the market\" (LOTM)? This is a question that has been raised by commentators, but answered by very few in a systematic and financially rigorous manner. Using intraday data and a multiday event window, this large scale event study seeks to determine the existence,",
"after": "What happens when the Supreme Court of the United States decides a case impacting one or more publicly-traded firms? While many have observed anecdotal evidence linking decisions or oral arguments to abnormal stock returns, few have rigorously or systematically investigated the behavior of equities around Supreme Court actions. In this research, we present the first comprehensive, longitudinal study on the topic, spanning over 15 years and hundreds of cases and firms. Using both intra- and interday data around decisions and oral arguments, we evaluate the",
"start_char_pos": 0,
"end_char_pos": 392
},
{
"type": "R",
"before": "equity market impacts flowing from Supreme Court decisions. We demonstrate that , while certainly not present in every case, \"law on the market\" events are fairly common. Across all cases decided by the Supreme Court of the United States between the 1999-2013 terms, we identify",
"after": "statistically-significant abnormal return events after Supreme Court action. On a per-term basis, we find 5.3 cases and 7.8 stocks that exhibit abnormal returns after decision. In total, across the cases we examined, we find",
"start_char_pos": 420,
"end_char_pos": 698
},
{
"type": "R",
"before": "cases where the share price of one or more publicly traded company moved in direct response to a Supreme Court decision. In the aggregate, over fifteen years,",
"after": "out of the 211 cases (37\\%) exhibit an average abnormal return of 4.4\\% over a two-session window with an average |t|-statistic of 2.9. Finally, we observe that abnormal returns following",
"start_char_pos": 702,
"end_char_pos": 860
},
{
"type": "R",
"before": "were responsible for more than 140 billion dollars in absolute changes in wealth. Our analysis not only contributes to our understanding of the political economy of judicial decision making, but also links to the broader set of research exploring the performance in financial markets using event study methods. We conclude by exploring the informational efficiency of law as a market by highlighting the speed at which information from Supreme Court decisions is assimilated by the market. Relatively speaking, LOTM events have historically exhibited slow rates of information incorporation for affected securities. This implies a market ripe for arbitrage where an event-based trading strategy could be successful",
"after": "materialize over the span of hours and days, not minutes, yielding strong implications for market efficiency in this context. While we cannot causally separate substantive legal impact from mere revision of beliefs, we do find strong evidence that there is indeed a \"law on the market\" effect as measured by the frequency of abnormal return events, and that these abnormal returns are not immediately incorporated into prices",
"start_char_pos": 885,
"end_char_pos": 1599
}
]
| [
0,
96,
149,
280,
479,
590,
822,
966,
1195,
1374,
1500
]
|
1508.05837 | 1 | Hydro storage system optimization is becoming one of the most challenging task in Energy Finance. Following the Blomvall and Lindberg (2002) interior point model, we set up a stochastic multiperiod optimization procedure by means of a "bushy" recombining tree that provides fast computational results. Inequality constraints are packed into the objective function by the logarithmic barrier approach and the utility function is approximated by its second order Taylor polynomial. The optimal solution for the original problem is obtained as a diagonal sequence where the first diagonal dimension is the parameter controlling the logarithmic penalty and the second is the parameter for the Newton step in the construction of the approximated solution. Optimimal intraday electricity trading and water values for hydroassets are computed. The algorithm is implemented in Mathematica. | Hydro storage system optimization is becoming one of the most challenging task in Energy Finance. Following the Blomvall and Lindberg (2002) interior point model, we set up a stochastic multiperiod optimization procedure by means of a "bushy" recombining tree that provides fast computational results. Inequality constraints are packed into the objective function by the logarithmic barrier approach and the utility function is approximated by its second order Taylor polynomial. The optimal solution for the original problem is obtained as a diagonal sequence where the first diagonal dimension is the parameter controlling the logarithmic penalty and the second is the parameter for the Newton step in the construction of the approximated solution. Optimal intraday electricity trading and water values for hydroassets as shadow prices are computed. The algorithm is implemented in Mathematica. | [
{
"type": "R",
"before": "Optimimal",
"after": "Optimal",
"start_char_pos": 751,
"end_char_pos": 760
},
{
"type": "A",
"before": null,
"after": "as shadow prices",
"start_char_pos": 823,
"end_char_pos": 823
}
]
| [
0,
97,
301,
479,
750,
837
]
|
1508.05837 | 2 | Hydro storage system optimization is becoming one of the most challenging task in Energy Finance. Following the Blomvall and Lindberg (2002) interior point model, we set up a stochastic multiperiod optimization procedure by means of a "bushy" recombining tree that provides fast computational results. Inequality constraints are packed into the objective function by the logarithmic barrier approach and the utility function is approximated by its second order Taylor polynomial. The optimal solution for the original problem is obtained as a diagonal sequence where the first diagonal dimension is the parameter controlling the logarithmic penalty and the second is the parameter for the Newton step in the construction of the approximated solution. Optimal intraday electricity trading and water values for hydroassets as shadow prices are computed. The algorithm is implemented in Mathematica. | Hydro storage system optimization is becoming one of the most challenging tasks in Energy Finance. While currently the state-of-the-art of the commercial software in the industry implements mainly linear models, we would like to introduce risk aversion and a generic utility function. At the same time, we aim to develop and implement a computational efficient algorithm, which is not affected by the curse of dimensionality and does not utilize subjective heuristics to prevent it. For the short term power market we propose a simultaneous solution for both dispatch and bidding problems. Following the Blomvall and Lindberg (2002) interior point model, we set up a stochastic multiperiod optimization procedure by means of a "bushy" recombining tree that provides fast computational results. Inequality constraints are packed into the objective function by the logarithmic barrier approach and the utility function is approximated by its second order Taylor polynomial. The optimal solution for the original problem is obtained as a diagonal sequence where the first diagonal dimension is the parameter controlling the logarithmic penalty and the second is the parameter for the Newton step in the construction of the approximated solution. Optimal intraday electricity trading and water values for hydro assets as shadow prices are computed. The algorithm is implemented in Mathematica. | [
{
"type": "R",
"before": "task",
"after": "tasks",
"start_char_pos": 74,
"end_char_pos": 78
},
{
"type": "A",
"before": null,
"after": "While currently the state-of-the-art of the commercial software in the industry implements mainly linear models, we would like to introduce risk aversion and a generic utility function. At the same time, we aim to develop and implement a computational efficient algorithm, which is not affected by the curse of dimensionality and does not utilize subjective heuristics to prevent it. For the short term power market we propose a simultaneous solution for both dispatch and bidding problems.",
"start_char_pos": 98,
"end_char_pos": 98
},
{
"type": "R",
"before": "hydroassets",
"after": "hydro assets",
"start_char_pos": 810,
"end_char_pos": 821
}
]
| [
0,
97,
302,
480,
751,
852
]
|
1508.06339 | 1 | Collateralization with daily margining and the so-called OIS-discounting have become a new standard in the post-crisis financial market. Although there appeared a large amount of literature to deal with a so-called multi-curve framework, a complete picture for a multi-currency setup with currency funding spreads, which are necessary to explain non-zero cross currency basis , can be rarely found since our initial attempts 9, 10, 11 . This note gives an extension of these works regarding a general framework of interest rates for a fully collateralized market. We provide a new formulation of the currency funding spread which is more suitable in the presence of non-zero correlation to the collateral rates. In particular, the last half of the paper is dedicated to develop a discretization of the HJM framework including stochastic collateral rates, LIBORs, foreign exchange rates as well as currency funding spreads with a fixed tenor structure, which makes it readily implementable as a traditional Market Model of interest rates . | Collateralization with daily margining has become a new standard in the post-crisis market. Although there appeared vast literature on a so-called multi-curve framework, a complete picture of a multi-currency setup with cross-currency basis can be rarely found since our initial attempts . This work gives its extension regarding a general framework of interest rates in a fully collateralized market. It gives a new formulation of the currency funding spread which is better suited for the general dependence. In the last half , it develops a discretization of the HJM framework with a fixed tenor structure, which makes it implementable as a traditional Market Model . | [
{
"type": "R",
"before": "and the so-called OIS-discounting have",
"after": "has",
"start_char_pos": 39,
"end_char_pos": 77
},
{
"type": "D",
"before": "financial",
"after": null,
"start_char_pos": 119,
"end_char_pos": 128
},
{
"type": "R",
"before": "a large amount of literature to deal with",
"after": "vast literature on",
"start_char_pos": 161,
"end_char_pos": 202
},
{
"type": "R",
"before": "for",
"after": "of",
"start_char_pos": 257,
"end_char_pos": 260
},
{
"type": "R",
"before": "currency funding spreads, which are necessary to explain non-zero cross currency basis ,",
"after": "cross-currency basis",
"start_char_pos": 289,
"end_char_pos": 377
},
{
"type": "D",
"before": "9, 10, 11",
"after": null,
"start_char_pos": 425,
"end_char_pos": 434
},
{
"type": "R",
"before": ". This note gives an extension of these works",
"after": ". This work gives its extension",
"start_char_pos": 435,
"end_char_pos": 480
},
{
"type": "R",
"before": "for",
"after": "in",
"start_char_pos": 529,
"end_char_pos": 532
},
{
"type": "R",
"before": "We provide",
"after": "It gives",
"start_char_pos": 564,
"end_char_pos": 574
},
{
"type": "R",
"before": "more suitable in the presence of non-zero correlation to the collateral rates. In particular,",
"after": "better suited for the general dependence. In",
"start_char_pos": 633,
"end_char_pos": 726
},
{
"type": "R",
"before": "of the paper is dedicated to develop",
"after": ", it develops",
"start_char_pos": 741,
"end_char_pos": 777
},
{
"type": "D",
"before": "including stochastic collateral rates, LIBORs, foreign exchange rates as well as currency funding spreads",
"after": null,
"start_char_pos": 816,
"end_char_pos": 921
},
{
"type": "D",
"before": "readily",
"after": null,
"start_char_pos": 967,
"end_char_pos": 974
},
{
"type": "D",
"before": "of interest rates",
"after": null,
"start_char_pos": 1019,
"end_char_pos": 1036
}
]
| [
0,
136,
563,
711
]
|
1508.06966 | 1 | Collective sensing by many interacting cells is observed in a variety of biological systems, and yet a quantitative understanding of how sensory information is encoded by many cells is still lacking. In this study, we characterize the calcium dynamics of cocultured monolayers of fibroblast cells and breast cancer cells in response to external ATP stimuli. We find that gap junctional communication is suppressed by the presence of cancer cells, similar in effect to reducing cell density. The ATP-induced calcium dynamics of nearest neighbor cells are correlated, and the cross-correlation functions depend on both ATP concentration and the architecture of the multicellular network. Combining experiments and stochastic modeling, we find that networks with sparse or defective connectivity have a reduced propensity for calcium oscillations and exhibit a broader distribution of oscillation frequencies. We find that cell-to-cell variability makes the conventional model of frequency encoding ineffective for a large population of communicating cells. Instead, our results suggest that multicellular networks URLanized near a dynamical critical point which allows cell-to-cell communications to significantly modulate the collective cellular dynamics . | Collective sensing by interacting cells is observed in a variety of biological systems, and yet a quantitative understanding of how sensory information is collectively encoded is lacking. Here we investigate the ATP-induced calcium dynamics of monolayers of fibroblast cells that communicate via gap junctions. Combining experiments and stochastic modeling, we find that increasing the ATP stimulus increases the propensity for calcium oscillations despite large cell-to-cell variability. The model further predicts that the oscillation propensity increases not only with the stimulus, but also with the cell density due to increased communication. Experiments confirm this prediction, showing that cell density modulates the collective sensory response. We further implicate cell-cell communication by coculturing the fibroblasts with cancer cells, which we show act as "defects" in the communication network, thereby reducing the oscillation propensity. These results suggest that multicellular networks sit at a point in parameter space where cell-cell communication has a significant effect on the sensory response, allowing cells to simultaneously respond to a sensory input and to the presence of neighbors . | [
{
"type": "D",
"before": "many",
"after": null,
"start_char_pos": 22,
"end_char_pos": 26
},
{
"type": "R",
"before": "encoded by many cells is still lacking. In this study, we characterize the",
"after": "collectively encoded is lacking. Here we investigate the ATP-induced",
"start_char_pos": 160,
"end_char_pos": 234
},
{
"type": "D",
"before": "cocultured",
"after": null,
"start_char_pos": 255,
"end_char_pos": 265
},
{
"type": "R",
"before": "and breast cancer cells in response to external ATP stimuli. We find that gap junctional communication is suppressed by the presence of cancer cells, similar in effect to reducing cell density. The ATP-induced calcium dynamics of nearest neighbor cells are correlated, and the cross-correlation functions depend on both ATP concentration and the architecture of the multicellular network.",
"after": "that communicate via gap junctions.",
"start_char_pos": 297,
"end_char_pos": 685
},
{
"type": "R",
"before": "networks with sparse or defective connectivity have a reduced",
"after": "increasing the ATP stimulus increases the",
"start_char_pos": 746,
"end_char_pos": 807
},
{
"type": "R",
"before": "and exhibit a broader distribution of oscillation frequencies. We find that",
"after": "despite large",
"start_char_pos": 844,
"end_char_pos": 919
},
{
"type": "R",
"before": "variability makes the conventional model of frequency encoding ineffective for a large population of communicating cells. Instead, our",
"after": "variability. The model further predicts that the oscillation propensity increases not only with the stimulus, but also with the cell density due to increased communication. Experiments confirm this prediction, showing that cell density modulates the collective sensory response. We further implicate cell-cell communication by coculturing the fibroblasts with cancer cells, which we show act as \"defects\" in the communication network, thereby reducing the oscillation propensity. These",
"start_char_pos": 933,
"end_char_pos": 1067
},
{
"type": "R",
"before": "URLanized near a dynamical critical point which allows cell-to-cell communications to significantly modulate the collective cellular dynamics",
"after": "sit at a point in parameter space where cell-cell communication has a significant effect on the sensory response, allowing cells to simultaneously respond to a sensory input and to the presence of neighbors",
"start_char_pos": 1112,
"end_char_pos": 1253
}
]
| [
0,
199,
357,
490,
685,
906,
1054
]
|
1508.07428 | 1 | We measure the influence of different time-scales on the dynamics of financial market data. This is obtained by decomposing financial time series into simple oscillations associated with distinct time-scales. We propose two new time-varying measures: 1) an amplitude scaling exponent and 2) an entropy like measure. We apply these measures to intra-day , 30-second sampled prices of various stock indices. Our results reveal intra-day trends where different time-horizons contribute with variable relative amplitudes over the course of the trading day. Our findings indicate that the time series we analysed have a non-stationary multi-fractional nature with predominantly persistent behaviour at the middle of the trading session and anti-persistent behaviour at the open and close. We demonstrate that these deviations are statistically significant and robust. | We measure the influence of different time-scales on the dynamics of financial market data. This is obtained by decomposing financial time series into simple oscillations associated with distinct time-scales. We propose two new time-varying measures: 1) an amplitude scaling exponent and 2) an entropy-like measure. We apply these measures to intraday , 30-second sampled prices of various stock indices. Our results reveal intraday trends where different time-horizons contribute with variable relative amplitudes over the course of the trading day. Our findings indicate that the time series we analysed have a non-stationary multifractal nature with predominantly persistent behaviour at the middle of the trading session and anti-persistent behaviour at the open and close. We demonstrate that these deviations are statistically significant and robust. | [
{
"type": "R",
"before": "entropy like",
"after": "entropy-like",
"start_char_pos": 294,
"end_char_pos": 306
},
{
"type": "R",
"before": "intra-day",
"after": "intraday",
"start_char_pos": 343,
"end_char_pos": 352
},
{
"type": "R",
"before": "intra-day",
"after": "intraday",
"start_char_pos": 425,
"end_char_pos": 434
},
{
"type": "R",
"before": "multi-fractional",
"after": "multifractal",
"start_char_pos": 630,
"end_char_pos": 646
}
]
| [
0,
91,
208,
315,
405,
552,
783
]
|
1508.07561 | 1 | We consider the problem of utility maximization with exponential preferences in a market where the traded stock/risky asset price is modelled as a L\'evy-driven pure jump process (i.e. the driving L\'evy process has no Brownian component). In this setting, we study the terminal utility optimization problem in the presence of a European contingent claim. We consider in detail the BSDE (backward stochastic differential equations ) characterising the value function. First we analyse the well-definedness of the generator. This leads to some conditions on the market model related to conditions for the market to admit no free lunches. Then we give bounds on the candidate optimal strategy. Thereafter, we discuss the example of a cross-hedging problem and, under severe assumptions on the structure of the claim, we give explicit solutions. Finally, we establish an explicit solution for a related BSDE with a suitable terminal condition but a simpler generator. | We consider the problem of utility maximization with exponential preferences in a market where the traded stock/risky asset price is modelled as a L\'evy-driven pure jump process (i.e. the driving L\'evy process has no Brownian component). In this setting, we study the terminal utility optimization problem in the presence of a European contingent claim. We consider in detail the BSDE (backward stochastic differential equation ) characterising the value function when using an exponential utility function. First we analyse the well-definedness of the generator. This leads to some conditions on the market model related to conditions for the market to admit no free lunches. Then we give bounds on the candidate optimal strategy. Thereafter, we discuss the example of a cross-hedging problem and, under severe assumptions on the structure of the claim, we give explicit solutions. Finally, we establish an explicit solution for a related BSDE with a suitable terminal condition but a simpler generator. | [
{
"type": "R",
"before": "equations",
"after": "equation",
"start_char_pos": 421,
"end_char_pos": 430
},
{
"type": "A",
"before": null,
"after": "function when using an exponential utility",
"start_char_pos": 458,
"end_char_pos": 458
}
]
| [
0,
239,
355,
468,
524,
637,
692,
843
]
|
1508.07761 | 1 | We treat an infinite dimensional optimization problem arising in economic theory. Under appropriate conditions, we show the existence of an optimal strategy for an investor trading in the classical Arbitrage Pricing Model of S. A. Ross. As a consequence, we derive the existence of equivalent risk-neutral measures of a particular form which have favourable integrability properties . | We consider an infinite dimensional optimization problem motivated by mathematical economics. Within the celebrated " Arbitrage Pricing Model ", we use probabilistic and functional analytic techniques to show the existence of optimal strategies for investors who maximize their expected utility . | [
{
"type": "R",
"before": "treat",
"after": "consider",
"start_char_pos": 3,
"end_char_pos": 8
},
{
"type": "R",
"before": "arising in economic theory. Under appropriate conditions, we show the existence of an optimal strategy for an investor trading in the classical",
"after": "motivated by mathematical economics. Within the celebrated \"",
"start_char_pos": 54,
"end_char_pos": 197
},
{
"type": "R",
"before": "of S. A. Ross. As a consequence, we derive",
"after": "\", we use probabilistic and functional analytic techniques to show",
"start_char_pos": 222,
"end_char_pos": 264
},
{
"type": "R",
"before": "equivalent risk-neutral measures of a particular form which have favourable integrability properties",
"after": "optimal strategies for investors who maximize their expected utility",
"start_char_pos": 282,
"end_char_pos": 382
}
]
| [
0,
81,
236
]
|
1508.07914 | 1 | In this work, we present a modeling framework in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). On the one hand, the new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. On the other hand , it uses the standard abstractions of games with continuum of players (in particular, the mean field game theory) to obtain a tractable macro-level description of the LOB. We use the proposed modeling framework to analyze the effects of trading frequency on the liquidity of the market in a very general setting. In particular, we show that the higher trading frequency increases market efficiency if the agents choose to provide liquidity in equilibrium. However, we also show that the higher trading frequency makes markets more fragile, in the following sense : in a high-frequency trading regime, the agents choose to provide liquidity in equilibrium if and only if they are market-neutral (i.e. their beliefs satisfy certain martingale property). The theoretical results are illustrated with numerical examples . | In this work, we present a discrete time modeling framework, in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). The new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. At the same time , it uses the standard abstractions of a continuum-player game to obtain a tractable macro-level description of the LOB. We use the proposed modeling framework to analyze the effects of trading frequency on the market liquidity in a very general setting. In particular, we demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency increases market efficiency , if the agents choose to provide liquidity in equilibrium. On the other hand, the higher trading frequency also makes markets more fragile, in the sense that the agents choose to provide liquidity in equilibrium only if they are market-neutral (i.e. their beliefs satisfy certain martingale property). Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents a self-inflicted liquidity crises (aka flash crash) in the market. This framework allows us to provide more insight into how such a liquidity crises unfolds, connecting it to the so-called adverse selection effect . | [
{
"type": "R",
"before": "modeling framework",
"after": "discrete time modeling framework,",
"start_char_pos": 27,
"end_char_pos": 45
},
{
"type": "R",
"before": "On the one hand, the",
"after": "The",
"start_char_pos": 192,
"end_char_pos": 212
},
{
"type": "R",
"before": "On the other hand",
"after": "At the same time",
"start_char_pos": 312,
"end_char_pos": 329
},
{
"type": "R",
"before": "games with continuum of players (in particular, the mean field game theory)",
"after": "a continuum-player game",
"start_char_pos": 369,
"end_char_pos": 444
},
{
"type": "R",
"before": "liquidity of the market",
"after": "market liquidity",
"start_char_pos": 593,
"end_char_pos": 616
},
{
"type": "R",
"before": "show that the higher trading frequency",
"after": "demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency",
"start_char_pos": 662,
"end_char_pos": 700
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 729,
"end_char_pos": 729
},
{
"type": "R",
"before": "However, we also show that",
"after": "On the other hand,",
"start_char_pos": 788,
"end_char_pos": 814
},
{
"type": "A",
"before": null,
"after": "also",
"start_char_pos": 844,
"end_char_pos": 844
},
{
"type": "R",
"before": "following sense : in a high-frequency trading regime,",
"after": "sense that",
"start_char_pos": 880,
"end_char_pos": 933
},
{
"type": "D",
"before": "if and",
"after": null,
"start_char_pos": 988,
"end_char_pos": 994
},
{
"type": "R",
"before": "The theoretical results are illustrated with numerical examples",
"after": "Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents a self-inflicted liquidity crises (aka flash crash) in the market. This framework allows us to provide more insight into how such a liquidity crises unfolds, connecting it to the so-called adverse selection effect",
"start_char_pos": 1085,
"end_char_pos": 1148
}
]
| [
0,
191,
311,
502,
643,
787,
1084
]
|
1508.07914 | 2 | In this work, we present a discrete time modeling framework, in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). The new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. At the same time, it uses the standard abstractions of a continuum-player game to obtain a tractable macro-level description of the LOB.We use the proposed modeling framework to analyze the effects of trading frequency on the market liquidity in a very general setting. In particular, we demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency increases market efficiency, if the agents choose to provide liquidity in equilibrium. On the other hand, the higher trading frequency also makes markets more fragile, in the sense that the agents choose to provide liquidity in equilibrium only if they are market-neutral (i.e. their beliefs satisfy certain martingale property). Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents a self-inflicted liquidity crises (aka flash crash) in the market. This framework allows us to provide more insight into how such a liquidity crises unfolds, connecting it to the so-called adverse selection effect. | In this work, we present a discrete time modeling framework, in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). The new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. At the same time, it uses the standard abstractions of a continuum-player game to obtain a tractable macro-level description of the LOB.We use the proposed modeling framework to analyze the effects of trading frequency on market liquidity in a very general setting. In particular, we demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency increases market efficiency, if the agents choose to provide liquidity in equilibrium. On the other hand, the higher trading frequency also makes markets more fragile, in the sense that the agents choose to provide liquidity in equilibrium only if they are market-neutral (i.e. their beliefs satisfy certain martingale property). Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents a self-inflicted liquidity crises (aka flash crash) in the market. This framework allows us to provide more insight into how such a liquidity crisis unfolds, connecting it to the so-called adverse selection effect. | [
{
"type": "D",
"before": "the",
"after": null,
"start_char_pos": 532,
"end_char_pos": 535
},
{
"type": "R",
"before": "crises",
"after": "crisis",
"start_char_pos": 1328,
"end_char_pos": 1334
}
]
| [
0,
206,
309,
446,
579,
652,
777,
1020,
1252
]
|
1508.07914 | 3 | In this work , we present a discrete time modeling framework, in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). The new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. At the same time, it uses the standard abstractions of a continuum-player game to obtain a tractable macro-level description of the LOB. We use the proposed modeling framework to analyze the effects of trading frequency on market liquidity in a very general setting. In particular, we demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency increases market efficiency, if the agents choose to provide liquidity in equilibrium. On the other hand, the higher trading frequency also makes markets more fragile, in the sense that the agents choose to provide liquidity in equilibrium only if they are market-neutral (i.e. their beliefs satisfy certain martingale property). Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents a self-inflicted liquidity crises (aka flash crash) in the market. This framework allows us to provide more insight into how such a liquidity crisis unfolds, connecting it to the so-called adverse selection effect. | In this article , we present a discrete time modeling framework, in which the shape and dynamics of a Limit Order Book (LOB) arise endogenously from an equilibrium between multiple market participants (agents). We use the proposed modeling framework to analyze the effects of trading frequency on market liquidity in a very general setting. In particular, we demonstrate the dual effect of high trading frequency. On the one hand, the higher frequency increases market efficiency, if the agents choose to provide liquidity in equilibrium. On the other hand, it also makes markets more fragile, in the sense that the agents choose to provide liquidity in equilibrium only if they are market-neutral (i.e. , their beliefs satisfy certain martingale property). Even a very small deviation from market-neutrality may cause the agents to stop providing liquidity, if the trading frequency is sufficiently high, which represents an endogenous liquidity crisis (aka flash crash) in the market. This framework enables us to provide more insight into how such a liquidity crisis unfolds, connecting it to the so-called adverse selection effect. | [
{
"type": "R",
"before": "work",
"after": "article",
"start_char_pos": 8,
"end_char_pos": 12
},
{
"type": "D",
"before": "The new framework captures very closely the true, micro-level, mechanics of an auction-style exchange. At the same time, it uses the standard abstractions of a continuum-player game to obtain a tractable macro-level description of the LOB.",
"after": null,
"start_char_pos": 208,
"end_char_pos": 447
},
{
"type": "R",
"before": "the higher trading frequency",
"after": "it",
"start_char_pos": 795,
"end_char_pos": 823
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 967,
"end_char_pos": 967
},
{
"type": "R",
"before": "a self-inflicted liquidity crises",
"after": "an endogenous liquidity crisis",
"start_char_pos": 1185,
"end_char_pos": 1218
},
{
"type": "R",
"before": "allows",
"after": "enables",
"start_char_pos": 1267,
"end_char_pos": 1273
}
]
| [
0,
207,
310,
447,
577,
650,
775,
1019,
1251
]
|
1509.00372 | 1 | Our paper aims to model and forecast the electricity price in a completely new and promising style . Instead of directly modeling the electricity price as it is usually done in time series or data mining approaches, we model and utilize its true source: the sale and purchase curves of the electricity exchange. We will refer to this new model as X-Model, as almost every deregulated electricity price is simply the result of the intersection of the electricity supply and demand curve at a certain auction. Therefore we show an approach to deal with a tremendous amount of auction data, using a subtle data processing technique as well as dimension reduction and lasso based estimation methods. We incorporate not only several known features, such as seasonal behavior or the impact of other processes like renewable energy, but also completely new elaborated stylized facts of the bidding structure. Our model is able to capture the non-linear behavior of the electricity price, which is especially useful for predicting huge price spikes. Using simulation methods we show how to derive prediction intervals . We describe and show the proposed methods for the day-ahead EPEX spot price of Germany and Austria. | Our paper aims to model and forecast the electricity price by taking a completely new perspective on the data. It will be the first approach which is able to combine the insights of market structure models with extensive and modern econometric analysis . Instead of directly modeling the electricity price as it is usually done in time series or data mining approaches, we model and utilize its true source: the sale and purchase curves of the electricity exchange. We will refer to this new model as X-Model, as almost every deregulated electricity price is simply the result of the intersection of the electricity supply and demand curve at a certain auction. Therefore we show an approach to deal with a tremendous amount of auction data, using a subtle data processing technique as well as dimension reduction and lasso based estimation methods. We incorporate not only several known features, such as seasonal behavior or the impact of other processes like renewable energy, but also completely new elaborated stylized facts of the bidding structure. Our model is able to capture the non-linear behavior of the electricity price, which is especially useful for predicting huge price spikes. Using simulation methods we show how to derive prediction intervals for probabilistic forecasting . We describe and show the proposed methods for the day-ahead EPEX spot price of Germany and Austria. | [
{
"type": "R",
"before": "in",
"after": "by taking",
"start_char_pos": 59,
"end_char_pos": 61
},
{
"type": "R",
"before": "and promising style",
"after": "perspective on the data. It will be the first approach which is able to combine the insights of market structure models with extensive and modern econometric analysis",
"start_char_pos": 79,
"end_char_pos": 98
},
{
"type": "A",
"before": null,
"after": "for probabilistic forecasting",
"start_char_pos": 1110,
"end_char_pos": 1110
}
]
| [
0,
100,
311,
507,
695,
901,
1041,
1112
]
|
1509.01083 | 1 | Cellular processes do not follow deterministic rules , even in identical environments genetically identical cells can make random choices leading to different phenotypes. This randomness originates from fluctuations present in the biomolecular interaction networks. Most previous work has been focused on the intrinsic noise (IN) of these networks. Yet, especially for high-copy-number biomolecules, extrinsic or environmental noise (EN) has been experimentally shown to dominate the variation. Here we develop an analytical formalism that allows for calculation of the effect of extrinsic noise on gene expression motifs. We introduce a new method for modeling bounded EN as an auxiliary species in the master equation. The method is fully generic and is not limited to systems with small EN magnitudes. We focus our study on motifs that can be viewed as the building blocks of genetic switches: a non-regulated gene, a self-inhibiting gene, and a self-promoting gene. The role of the EN properties (magnitude, correlation time, and distribution) on the statistics of interest are systematically investigated, and the effect of fluctuations in different reaction rates is compared. Due to its analytical nature, our formalism can be used to quantify the effect of EN on the dynamics of biochemical networks and can also be used to improve the interpretation of data from single-cell gene expression experiments. | Cellular processes do not follow deterministic rules ; even in identical environments genetically identical cells can make random choices leading to different phenotypes. This randomness originates from fluctuations present in the biomolecular interaction networks. Most previous work has been focused on the intrinsic noise (IN) of these networks. Yet, especially for high-copy-number biomolecules, extrinsic or environmental noise (EN) has been experimentally shown to dominate the variation. Here , we develop an analytical formalism that allows for calculation of the effect of EN on gene-expression motifs. We introduce a method for modeling bounded EN as an auxiliary species in the master equation. The method is fully generic and is not limited to systems with small EN magnitudes. We focus our study on motifs that can be viewed as the building blocks of genetic switches: a nonregulated gene, a self-inhibiting gene, and a self-promoting gene. The role of the EN properties (magnitude, correlation time, and distribution) on the statistics of interest are systematically investigated, and the effect of fluctuations in different reaction rates is compared. Due to its analytical nature, our formalism can be used to quantify the effect of EN on the dynamics of biochemical networks and can also be used to improve the interpretation of data from single-cell gene-expression experiments. | [
{
"type": "R",
"before": ",",
"after": ";",
"start_char_pos": 53,
"end_char_pos": 54
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 500,
"end_char_pos": 500
},
{
"type": "R",
"before": "extrinsic noise on gene expression",
"after": "EN on gene-expression",
"start_char_pos": 581,
"end_char_pos": 615
},
{
"type": "D",
"before": "new",
"after": null,
"start_char_pos": 639,
"end_char_pos": 642
},
{
"type": "R",
"before": "non-regulated",
"after": "nonregulated",
"start_char_pos": 900,
"end_char_pos": 913
},
{
"type": "R",
"before": "gene expression",
"after": "gene-expression",
"start_char_pos": 1385,
"end_char_pos": 1400
}
]
| [
0,
170,
265,
348,
494,
623,
721,
805,
970,
1183
]
|
1509.01157 | 1 | Climate change is widely expected to increase weather related damage and the insurance claims that result from it. This has the undesirable consequence of increasing insurance costs , in a way that is independent of a customer's contribution to the causes of climate change. This is unfortunate because insurance provides a financial mechanism that mitigates some of the consequences of climate change, allowing damage from increasingly frequent events to be repaired. We observe that the insurance industry could reclaim any increase in claims due to climate change, by increasing the insurance premiums on energy producers for example, without needing government intervention or a new tax. We argue that this insurance-led levy must acknowledge both present carbon emissions and a modern industry's carbon inheritance, that is, to recognise that fossil-fuel driven industrial growth has provided the innovations and conditions needed for modern civilisation to exist and develop. A tax or levy on energy production is one mechanism that would recognise carbon inheritance through the increased (energy) costs for manufacturing and using modern technology, and can also provide an incentive to minimise carbon emissions, through higher costs for the most polluting industries. The necessary increases in insurance premiums would initially be small, and will require an event attribution (EA) methodology to determine their size. We propose that the levies can be phased in as the science of event attribution becomes sufficiently robust for each claim type, to ultimately provide a global insurance-led response to climate change. | Climate change is widely expected to increase weather related damage and the insurance claims that result from it. This will increase insurance premiums , in a way that is independent of a customer's contribution to the causes of climate change. Insurance provides a financial mechanism that mitigates some of the consequences of climate change, allowing damage from increasingly frequent events to be repaired. We observe that the insurance industry could reclaim any increase in claims due to climate change, by increasing the insurance premiums on energy producers for example, without needing government intervention or a new tax. We argue that this insurance-led levy must acknowledge both present carbon emissions and a modern industry's carbon inheritance, that is, to recognise that fossil-fuel driven industrial growth has provided the innovations and conditions needed for modern civilisation to exist and develop. A tax or levy on energy production is one mechanism that would recognise carbon inheritance through the increased (energy) costs for manufacturing and using modern technology, and can also provide an incentive to minimise carbon emissions, through higher costs for the most polluting industries. The necessary increases in insurance premiums would initially be small, and will require an event attribution (EA) methodology to determine their size. We propose that the levies can be phased in as the science of event attribution becomes sufficiently robust for each claim type, to ultimately provide a global insurance-led response to climate change. | [
{
"type": "R",
"before": "has the undesirable consequence of increasing insurance costs",
"after": "will increase insurance premiums",
"start_char_pos": 120,
"end_char_pos": 181
},
{
"type": "R",
"before": "This is unfortunate because insurance",
"after": "Insurance",
"start_char_pos": 275,
"end_char_pos": 312
}
]
| [
0,
114,
274,
468,
691,
981,
1277,
1429
]
|
1509.01217 | 1 | This paper studies the trading volumes and wealth distribution of a novel agent-based model of an artificial financial market. In this model, heterogeneous agents, behaving according to the Von Neumann and URLenstern utility theory, may mutually interact. A Tobin-like tax on successful investments and a flat tax are compared to assess the effects on the agents' wealth distribution. We carry out extensive numerical simulations in two alternative scenarios: i) a reference scenario, where the agents keep their utility function fixed, and ii) a focal scenario, where the agents are adaptive and URLanize in communities, emulating their neighbours by updating their own utility function. Specifically, the interactions among the agents are modelled through a directed scale-free network to account for the presence of community leaders, and the herding-like effect is tested against the reference scenario. We observe that our model is capable of replicating the benefits and drawbacks of the two taxation systems and that the interactions among the agents strongly affect the wealth distribution across the communities. Remarkably, the communities benefit from the presence of leaders with successful trading strategies, and are more likely to increase their average wealth. Moreover, this emulation mechanism mitigates the decrease in trading volumes, which is a typical drawback of Tobin-like taxes . | This paper studies the trading volumes and wealth distribution of a novel agent-based model of an artificial financial market. In this model, heterogeneous agents, behaving according to the Von Neumann and URLenstern utility theory, may mutually interact. A Tobin-like tax (TT) on successful investments and a flat tax are compared to assess the effects on the agents' wealth distribution. We carry out extensive numerical simulations in two alternative scenarios: i) a reference scenario, where the agents keep their utility function fixed, and ii) a focal scenario, where the agents are adaptive and URLanize in communities, emulating their neighbours by updating their own utility function. Specifically, the interactions among the agents are modelled through a directed scale-free network to account for the presence of community leaders, and the herding-like effect is tested against the reference scenario. We observe that our model is capable of replicating the benefits and drawbacks of the two taxation systems and that the interactions among the agents strongly affect the wealth distribution across the communities. Remarkably, the communities benefit from the presence of leaders with successful trading strategies, and are more likely to increase their average wealth. Moreover, this emulation mechanism mitigates the decrease in trading volumes, which is a typical drawback of TTs . | [
{
"type": "A",
"before": null,
"after": "(TT)",
"start_char_pos": 273,
"end_char_pos": 273
},
{
"type": "R",
"before": "Tobin-like taxes",
"after": "TTs",
"start_char_pos": 1387,
"end_char_pos": 1403
}
]
| [
0,
126,
255,
385,
689,
908,
1122,
1277
]
|
1509.01483 | 1 | Building upon the standard model of monopolistic competition on the market for intermediary goods, we propose a simple dynamical model of the formation of production networks . The model subsumes the standard general equilibrium approach and robustly reproduces key stylized facts of firms' demographics. Firms' growth rates are negatively correlated with size and follow a core double-exponential distribution followed by fat tails. Firms' size and production network are power-law distributed. These properties emerge because continuous inflow of new firms shifts away the model from a steady state to a disequilibrium regime in which firms get scaled according to their resistance to competitive forces . | We propose a simple dynamical model of the formation of production networks among monopolistically competitive firms . The model subsumes the standard general equilibrium approach \`a la Arrow-Debreu but displays a wide set of potential dynamic behaviors. It robustly reproduces key stylized facts of firms' demographics. Our main result is that competition between intermediate good producers generically leads to the emergence of scale-free production networks . | [
{
"type": "R",
"before": "Building upon the standard model of monopolistic competition on the market for intermediary goods, we",
"after": "We",
"start_char_pos": 0,
"end_char_pos": 101
},
{
"type": "A",
"before": null,
"after": "among monopolistically competitive firms",
"start_char_pos": 175,
"end_char_pos": 175
},
{
"type": "R",
"before": "and",
"after": "\\`a la Arrow-Debreu but displays a wide set of potential dynamic behaviors. It",
"start_char_pos": 239,
"end_char_pos": 242
},
{
"type": "R",
"before": "Firms' growth rates are negatively correlated with size and follow a core double-exponential distribution followed by fat tails. Firms' size and production network are power-law distributed. These properties emerge because continuous inflow of new firms shifts away the model from a steady state to a disequilibrium regime in which firms get scaled according to their resistance to competitive forces",
"after": "Our main result is that competition between intermediate good producers generically leads to the emergence of scale-free production networks",
"start_char_pos": 306,
"end_char_pos": 706
}
]
| [
0,
177,
305,
434,
496
]
|
1509.01672 | 1 | We generalize the results of Mostovyi (2015) on the problem of optimal investment with intermediate consumption in a general semimartingale setting. We show that the no unbounded profit with bounded risk condition suffices to establish the key duality relations of utility maximization and , moreover, represents the minimal no-arbitrage-type assumption . | We consider the problem of optimal investment with intermediate consumption in a general semimartingale model of an incomplete market, with preferences being represented by utility stochastic fields. By building on the results of Mostovyi (2015), we show that the key duality relations of the utility maximization theory hold under the minimal assumptions of no unbounded profit with bounded risk (NUPBR) and of the finiteness of both primal and dual value functions . | [
{
"type": "R",
"before": "generalize the results of Mostovyi (2015) on the",
"after": "consider the",
"start_char_pos": 3,
"end_char_pos": 51
},
{
"type": "R",
"before": "setting. We",
"after": "model of an incomplete market, with preferences being represented by utility stochastic fields. By building on the results of Mostovyi (2015), we",
"start_char_pos": 140,
"end_char_pos": 151
},
{
"type": "A",
"before": null,
"after": "key duality relations of the utility maximization theory hold under the minimal assumptions of",
"start_char_pos": 166,
"end_char_pos": 166
},
{
"type": "R",
"before": "condition suffices to establish the key duality relations of utility maximization and , moreover, represents the minimal no-arbitrage-type assumption",
"after": "(NUPBR) and of the finiteness of both primal and dual value functions",
"start_char_pos": 205,
"end_char_pos": 354
}
]
| [
0,
148
]
|
1509.01672 | 2 | We consider the problem of optimal investment with intermediate consumption in a general semimartingale model of an incomplete market, with preferences being represented by utility stochastic fields. By building on the results of Mostovyi (2015), we show that the key duality relations of the utility maximization theory hold under the minimal assumptions of no unbounded profit with bounded risk (NUPBR) and of the finiteness of both primal and dual value functions. | We consider the problem of optimal investment with intermediate consumption in a general semimartingale model of an incomplete market, with preferences being represented by a utility stochastic field. We show that the key conclusions of the utility maximization theory hold under the assumptions of no unbounded profit with bounded risk (NUPBR) and of the finiteness of both primal and dual value functions. | [
{
"type": "R",
"before": "utility stochastic fields. By building on the results of Mostovyi (2015), we",
"after": "a utility stochastic field. We",
"start_char_pos": 173,
"end_char_pos": 249
},
{
"type": "R",
"before": "duality relations",
"after": "conclusions",
"start_char_pos": 268,
"end_char_pos": 285
},
{
"type": "D",
"before": "minimal",
"after": null,
"start_char_pos": 336,
"end_char_pos": 343
}
]
| [
0,
199
]
|
1509.01966 | 1 | In this paper we present a regression based model for day-ahead electricity spot prices. We estimate the considered linear regression model by the lasso estimation method. The lasso approach allows for many possible parameters in the model, but also shrinks and sparsifies the parameters automatically to avoid overfitting. Thus, it is able to capture changes in the intraday dependency structure of the electricity price as the estimated model structure can vary over the day . We discuss in detail the estimation results which provide insights to the intraday behavior of electricity prices. We perform an out-of-sample forecasting study for several European electricity markets. The results illustrate well that the efficient lasso based estimation technique can join advantages from two common model approaches. | In this paper we present a regression based model for day-ahead electricity spot prices. We estimate the considered linear regression model by the lasso estimation method. The lasso approach allows for many possible parameters in the model, but also shrinks and sparsifies the parameters automatically to avoid overfitting. Thus, it is able to capture the autoregressive intraday dependency structure of the electricity price well . We discuss in detail the estimation results which provide insights to the intraday behavior of electricity prices. We perform an out-of-sample forecasting study for several European electricity markets. The results illustrate well that the efficient lasso based estimation technique can exhibit advantages from two popular model approaches. | [
{
"type": "R",
"before": "changes in the",
"after": "the autoregressive",
"start_char_pos": 352,
"end_char_pos": 366
},
{
"type": "R",
"before": "as the estimated model structure can vary over the day",
"after": "well",
"start_char_pos": 422,
"end_char_pos": 476
},
{
"type": "R",
"before": "join",
"after": "exhibit",
"start_char_pos": 766,
"end_char_pos": 770
},
{
"type": "R",
"before": "common",
"after": "popular",
"start_char_pos": 791,
"end_char_pos": 797
}
]
| [
0,
88,
171,
323,
478,
593,
681
]
|
1509.02640 | 1 | The total estimated energy bill for data centers in 2010 was \11.5 billion, and experts estimate that the energy cost of a typical data center doubles every five years. On the other hand, storage advancements have started to lag behind computational developments , therein becoming a bottleneck for the ongoing data growth which already approaches Exascale levels. We investigate the relationship among data throughput and energy footprint on a large storage cluster, with the goal of formalizing it as a metric that reflects the trading among consistency and energy. Employing a client-centric consistency approach, and while honouring ACID properties of the chosen columnar store for the case study (Apache HBase), we present the factors involved in the energy consumption of the system as well as lessons learned to underpin further design of energy-efficient cluster scale storage systems. | The total estimated energy bill for data centers in 2010 was \11.5 billion, and experts estimate that the energy cost of a typical data center doubles every five years. On the other hand, computational developments have started to lag behind storage advancements , therein becoming a future bottleneck for the ongoing data growth which already approaches Exascale levels. We investigate the relationship among data throughput and energy footprint on a large storage cluster, with the goal of formalizing it as a metric that reflects the trading among consistency and energy. Employing a client-centric consistency approach, and while honouring ACID properties of the chosen columnar store for the case study (Apache HBase), we present the factors involved in the energy consumption of the system as well as lessons learned to underpin further design of energy-efficient cluster scale storage systems. | [
{
"type": "R",
"before": "storage advancements",
"after": "computational developments",
"start_char_pos": 188,
"end_char_pos": 208
},
{
"type": "R",
"before": "computational developments",
"after": "storage advancements",
"start_char_pos": 236,
"end_char_pos": 262
},
{
"type": "A",
"before": null,
"after": "future",
"start_char_pos": 284,
"end_char_pos": 284
}
]
| [
0,
168,
365,
568
]
|
1509.02727 | 1 | We consider utility maximisation problem for exponential Levy models and HARA utilities in presence of illiquid asset . This illiquid asset is modelled by an option of European type on another risky asset which is correlated with the first one. Under some hypothesis on Levy processes, we give the expressions for information processes figured in maximum utility formula. As applications, we consider Black-Scholes models with correlated Brownian Motions, and also Black-Scholes models with jump part represented by Poisson process. | We consider expected utility maximisation problem for exponential Levy models and HARA utilities in presence of illiquid asset in portfolio . This illiquid asset is modelled by an option of European type on another risky asset which is correlated with the first one. Under some hypothesis on Levy processes, we give the expressions of information processes figured in maximum utility formula. As applications, we consider Black-Scholes models with correlated Brownian Motions, and also Black-Scholes models with jump part represented by Poisson process. | [
{
"type": "A",
"before": null,
"after": "expected",
"start_char_pos": 12,
"end_char_pos": 12
},
{
"type": "A",
"before": null,
"after": "in portfolio",
"start_char_pos": 119,
"end_char_pos": 119
},
{
"type": "R",
"before": "for",
"after": "of",
"start_char_pos": 312,
"end_char_pos": 315
}
]
| [
0,
121,
246,
373
]
|
1509.03153 | 1 | We give a graphically based procedure to reduce a reaction network to a smaller reaction network with fewer species after linear elimination of a set of noninteracting species. We give a description of the reduced reaction network, its kinetics and conservations laws, and explore properties of the network and its kinetics. We conclude by comparing our approach to an older similar approach by Temkin and co-workers. Finally, we apply the procedure to biological examples such as substrate mechanisms, post-translational modification systems and networks with intermediates (transient) steps. | The quasi-steady state approximation and time-scale separation are commonly applied methods to simplify models of biochemical reaction networks based on ordinary differential equations (ODEs). The concentrations of the "fast" species are assumed effectively to be at steady state with respect to the "slow" species. Under this assumption the steady state equations can be used to eliminate the "fast" variables and a new ODE system with only the slow species can be obtained. We interpret a reduced system obtained by time-scale separation as the ODE system arising from a unique reaction network, by identification of a set of reactions and the corresponding rate functions. The procedure is graphically based and can easily be worked out by hand for small networks. For larger networks, we provide a pseudo-algorithm. We study properties of the reduced network, its kinetics and conservation laws, and show that the kinetics of the reduced network fulfil realistic assumptions, provided the original network does. We illustrate our results using biological examples such as substrate mechanisms, post-translational modification systems and networks with intermediates (transient) steps. | [
{
"type": "R",
"before": "We give a graphically based procedure to reduce a reaction network to a smaller reaction network with fewer species after linear elimination",
"after": "The quasi-steady state approximation and time-scale separation are commonly applied methods to simplify models of biochemical reaction networks based on ordinary differential equations (ODEs). The concentrations of the \"fast\" species are assumed effectively to be at steady state with respect to the \"slow\" species. Under this assumption the steady state equations can be used to eliminate the \"fast\" variables and a new ODE system with only the slow species can be obtained. We interpret a reduced system obtained by time-scale separation as the ODE system arising from a unique reaction network, by identification",
"start_char_pos": 0,
"end_char_pos": 140
},
{
"type": "R",
"before": "noninteracting species. We give a description",
"after": "reactions and the corresponding rate functions. The procedure is graphically based and can easily be worked out by hand for small networks. For larger networks, we provide a pseudo-algorithm. We study properties",
"start_char_pos": 153,
"end_char_pos": 198
},
{
"type": "D",
"before": "reaction",
"after": null,
"start_char_pos": 214,
"end_char_pos": 222
},
{
"type": "R",
"before": "conservations",
"after": "conservation",
"start_char_pos": 249,
"end_char_pos": 262
},
{
"type": "R",
"before": "explore properties of the network and its kinetics. We conclude by comparing our approach to an older similar approach by Temkin and co-workers. Finally, we apply the procedure to",
"after": "show that the kinetics of the reduced network fulfil realistic assumptions, provided the original network does. We illustrate our results using",
"start_char_pos": 273,
"end_char_pos": 452
}
]
| [
0,
176,
324,
417
]
|
1509.03264 | 1 | Geometric Arbitrage Theory reformulates a generic asset model possibly allowing for arbitrage by packaging all assets and their forwards dynamics into a stochastic principal fibre bundle, with a connection whose parallel transport encodes discounting and portfolio rebalancing, and whose curvature measures, in this geometric language, the " instantaneous arbitrage capability " generated by the market itself. The cashflow bundle is the vector bundle associated to this stochastic principal fibre bundle for the natural choice of the vector space fibre. The cashflow bundle carries a stochastic covariant differentiation induced by the connection on the principal fibre bundle. The link between arbitrage theory and spectral theory of the connection Laplacian on the vector bundle is given by the zero eigenspace resulting in a parametrization of all risk neutral measures equivalent to the statistical one. This indicates that a market satisfies the no-free-lunch-with vanishing-risk condition if it is only if 0 is in the spectrum . | Geometric Arbitrage Theory reformulates a generic asset model possibly allowing for arbitrage by packaging all assets and their forwards dynamics into a stochastic principal fibre bundle, with a connection whose parallel transport encodes discounting and portfolio rebalancing, and whose curvature measures, in this geometric language, the ' instantaneous arbitrage capability ' generated by the market itself. The cashflow bundle is the vector bundle associated to this stochastic principal fibre bundle for the natural choice of the vector space fibre. The cashflow bundle carries a stochastic covariant differentiation induced by the connection on the principal fibre bundle. The link between arbitrage theory and spectral theory of the connection Laplacian on the vector bundle is given by the zero eigenspace resulting in a parametrization of all risk neutral measures equivalent to the statistical one. This indicates that a market satisfies the (NFLVR) condition if and only if 0 is in the discrete spectrum of the connection Laplacian on the cash flow bundle or of the Dirac Laplacian of the twisted cash flow bundle with the exterior algebra bundle. We apply this result by extending Jarrow-Protter-Shimbo theory of asset bubbles for complete arbitrage free markets to markets not satisfying the (NFLVR). Moreover, by means of the Atiyah-Singer index theorem, we prove that the Euler characteristic of the asset nominal space is a topological obstruction to the the (NFLVR) condition, and, by means of the Bochner-Weitzenb\"ock formula, the non vanishing of the homology group of the cash flow bundle is revealed to be a topological obstruction to (NFLVR), too. Asset bubbles are defined, classified and decomposed for markets allowing arbitrage . | [
{
"type": "R",
"before": "\"",
"after": "'",
"start_char_pos": 340,
"end_char_pos": 341
},
{
"type": "R",
"before": "\"",
"after": "'",
"start_char_pos": 377,
"end_char_pos": 378
},
{
"type": "R",
"before": "no-free-lunch-with vanishing-risk condition if it is",
"after": "(NFLVR) condition if and",
"start_char_pos": 952,
"end_char_pos": 1004
},
{
"type": "R",
"before": "spectrum",
"after": "discrete spectrum of the connection Laplacian on the cash flow bundle or of the Dirac Laplacian of the twisted cash flow bundle with the exterior algebra bundle. We apply this result by extending Jarrow-Protter-Shimbo theory of asset bubbles for complete arbitrage free markets to markets not satisfying the (NFLVR). Moreover, by means of the Atiyah-Singer index theorem, we prove that the Euler characteristic of the asset nominal space is a topological obstruction to the the (NFLVR) condition, and, by means of the Bochner-Weitzenb\\\"ock formula, the non vanishing of the homology group of the cash flow bundle is revealed to be a topological obstruction to (NFLVR), too. Asset bubbles are defined, classified and decomposed for markets allowing arbitrage",
"start_char_pos": 1025,
"end_char_pos": 1033
}
]
| [
0,
410,
554,
678,
908
]
|
1509.03781 | 1 | This study proposes axioms for inconsistency indicators in pairwise comparisons. The new observation (by Szybowski), that " no PC submatrix may have a worse inconsistency indicator than the given PC matrix " is an essential simplification of the axiomatization. The goal of formulating axioms for all future definitions of new inconsistency indicators is difficult and as illusive as the inconsistency concept itself . This study improves the axiomatization proposed by Koczkodaj and Szwarc in 2014. As a side product, the new axiom allows to prevent approximation error aberrations of an arbitrarily large value in the input data . | This study proposes revised axioms for defining inconsistency indicators in pairwise comparisons. It is based on the new findings that " PC submatrix cannot have a worse inconsistency indicator than the PC matrix containing it " and that there must be a PC submatrix with the same inconsistency as the given PC matrix . This study also provides better reasoning for the need of normalization. It is a revision of axiomatization by Koczkodaj and Szwarc , 2014 which proposed axioms expressed informally with some deficiencies addressed in this study . | [
{
"type": "R",
"before": "axioms for",
"after": "revised axioms for defining",
"start_char_pos": 20,
"end_char_pos": 30
},
{
"type": "R",
"before": "The new observation (by Szybowski),",
"after": "It is based on the new findings",
"start_char_pos": 81,
"end_char_pos": 116
},
{
"type": "R",
"before": "no PC submatrix may",
"after": "PC submatrix cannot",
"start_char_pos": 124,
"end_char_pos": 143
},
{
"type": "R",
"before": "given PC matrix",
"after": "PC matrix containing it",
"start_char_pos": 190,
"end_char_pos": 205
},
{
"type": "R",
"before": "is an essential simplification of the axiomatization. The goal of formulating axioms for all future definitions of new inconsistency indicators is difficult and as illusive as the inconsistency concept itself",
"after": "and that there must be a PC submatrix with the same inconsistency as the given PC matrix",
"start_char_pos": 208,
"end_char_pos": 416
},
{
"type": "R",
"before": "improves the axiomatization proposed",
"after": "also provides better reasoning for the need of normalization. It is a revision of axiomatization",
"start_char_pos": 430,
"end_char_pos": 466
},
{
"type": "R",
"before": "in 2014. As a side product, the new axiom allows to prevent approximation error aberrations of an arbitrarily large value in the input data",
"after": ", 2014 which proposed axioms expressed informally with some deficiencies addressed in this study",
"start_char_pos": 491,
"end_char_pos": 630
}
]
| [
0,
80,
261,
418,
499
]
|
1509.04135 | 1 | We derive the optimal investment decision in a project where both demand and investment costs are stochastic processes, eventually subject to shocks. We extend the approach used in \mbox{%DIFAUXCMD Dixit:Pindyck:94 , chapter 6.5, to deal with two sources of uncertainty, but assuming that the underlying processes are no longer geometric Brownian diffusions but rather jump diffusion processes. For the class of isoelastic functions that we address in this paper, it is still possible to derive a closed expression for the value of the firm. We prove formally that the result we get is indeed the solution of the optimization problem. | We derive the optimal investment decision in a project where both demand and investment costs are stochastic processes, eventually subject to shocks. We extend the approach used in Dixit and Pindyck (1994) , chapter 6.5, to deal with two sources of uncertainty, but assuming that the underlying processes are no longer geometric Brownian diffusions but rather jump diffusion processes. For the class of isoelastic functions that we address in this paper, it is still possible to derive a closed expression for the value of the firm. We prove formally that the result we get is indeed the solution of the optimization problem. | [
{
"type": "R",
"before": "\\mbox{%DIFAUXCMD Dixit:Pindyck:94",
"after": "Dixit and Pindyck (1994)",
"start_char_pos": 181,
"end_char_pos": 214
}
]
| [
0,
149,
394,
541
]
|
1509.04145 | 1 | In URLanisms, all cells share the same genome, but every cell expresses only a limited and specific set of genes that defines the cell type. During cell division, not only the genome, but also the cell type is inherited by the daughter cells. This intriguing phenomenon is achieved by a variety of processes that have been collectively termed epigenetics: the stable and inheritable changes in gene expression patterns. This article reviews the extremely rich and exquisitely multi-scale physical mechanisms that govern the biological processes behind the initiation, spreading and inheritance of epigenetic states. These include not only the change in the molecular properties associated with the chemical modifications of DNA and histone proteins - such as methylation and acetylation - but also less conventional ones, such as the physics that governs the URLanization of the genome in cell nuclei. Strikingly, to achieve stability and heritability of epigenetic states, cells take advantage of many different physical principles, such as the universal behavior of polymers and copolymers, the general features of non-equilibrium dynamical systems, and the electrostatic and mechanical properties related to chemical modifications of DNA and histones. By putting the complex biological literature under this new light, the emerging picture is that a limited set of general physical rules play a key role in initiating, shaping and transmitting this crucial "epigenetic landscape". This new perspective not only allows to rationalize the normal cellular functions, but also helps to understand the emergence of pathological states, in which the epigenetic landscape becomes dysfunctional. | In URLanisms, all cells share the same genome, but every cell expresses only a limited and specific set of genes that defines the cell type. During cell division, not only the genome, but also the cell type is inherited by the daughter cells. This intriguing phenomenon is achieved by a variety of processes that have been collectively termed epigenetics: the stable and inheritable changes in gene expression patterns. This article reviews the extremely rich and exquisitely multi-scale physical mechanisms that govern the biological processes behind the initiation, spreading and inheritance of epigenetic states. These include not only the changes in the molecular properties associated with the chemical modifications of DNA and histone proteins , such as methylation and acetylation , but also less conventional ones, such as the physics that governs the URLanization of the genome in cell nuclei. Strikingly, to achieve stability and heritability of epigenetic states, cells take advantage of many different physical principles, such as the universal behavior of polymers and copolymers, the general features of non-equilibrium dynamical systems, and the electrostatic and mechanical properties related to chemical modifications of DNA and histones. By putting the complex biological literature under this new light, the emerging picture is that a limited set of general physical rules play a key role in initiating, shaping and transmitting this crucial "epigenetic landscape". This new perspective not only allows to rationalize the normal cellular functions, but also helps to understand the emergence of pathological states, in which the epigenetic landscape becomes dysfunctional. | [
{
"type": "R",
"before": "change",
"after": "changes",
"start_char_pos": 643,
"end_char_pos": 649
},
{
"type": "R",
"before": "-",
"after": ",",
"start_char_pos": 749,
"end_char_pos": 750
},
{
"type": "R",
"before": "-",
"after": ",",
"start_char_pos": 787,
"end_char_pos": 788
}
]
| [
0,
140,
242,
419,
615,
901,
1254,
1483
]
|
1509.04564 | 1 | There are many constraints on population growth or decay in a country : several are of socio-economic origins. Sometimes cultual constraintsalso exist: sexual intercourse is banned in various religions, during Nativity and Lent fasting periods. We analyzed data consisting of registered daily birth records for very long (35,429 points) time series and many (24,947,061) babies in Romania between 1905 and 2001 (97 years) . The data was obtained from the 1992 and 2002 censuses . We grouped the population into two categories (Eastern Orthodox and Non-Orthodox) in order to distinguish cultual constraints . We performed extensive data analysis in a comparative manner for both groups. From such a long time series data analysis, it seems that the Lent fast has a more drastic effect than the Nativity fast over baby conception within the Eastern Orthodox population, thereby differently increasing the population ratio. Thereafter, we developed and tested econometric models where the dependent variable is the baby conception deduced day, while the independent variables are: (i) religious affiliation; (ii) Nativity and Lent fast time intervals; (iii) rurality; (iv) day length; (v) weekend, and (vi) a trend background. Our findings are concordant with other papers, proving differences between religious groups on conception, - although reaching different conclusions regarding the influence of weather on fertility. The approach seems a useful hint for developing econometric-like models in other sociophysics prone cases. | Population growth (or decay) in a country can be due to various f socio-economic constraints, as demonstrated in this paper. For example, sexual intercourse is banned in various religions, during Nativity and Lent fasting periods. Data consisting of registered daily birth records for very long (35,429 points) time series and many (24,947,061) babies in Romania between 1905 and 2001 (97 years) is analyzed . The data was obtained from the 1992 and 2002 censuses , thus on persons alive at that time . We grouped the population into two categories (Eastern Orthodox and Non-Orthodox) in order to distinguish religious constraints and performed extensive data analysis in a comparative manner for both groups. From such a long time series data analysis, it seems that the Lent fast has a more drastic effect than the Nativity fast over baby conception within the Eastern Orthodox population, thereby differently increasing the population ratio. Thereafter, we developed and tested econometric models where the dependent variable is the baby conception deduced day, while the independent variables are: (i) religious affiliation; (ii) Nativity and Lent fast time intervals; (iii) rurality; (iv) day length; (v) weekend, and (vi) a trend background. Our findings are concordant with other papers, proving differences between religious groups on conception, - although reaching different conclusions regarding the influence of weather on fertility. The approach seems a useful hint for developing econometric-like models in other sociophysics prone cases. | [
{
"type": "R",
"before": "There are many constraints on population growth or decay",
"after": "Population growth (or decay)",
"start_char_pos": 0,
"end_char_pos": 56
},
{
"type": "R",
"before": ": several are of",
"after": "can be due to various f",
"start_char_pos": 70,
"end_char_pos": 86
},
{
"type": "R",
"before": "origins. Sometimes cultual constraintsalso exist:",
"after": "constraints, as demonstrated in this paper. For example,",
"start_char_pos": 102,
"end_char_pos": 151
},
{
"type": "R",
"before": "We analyzed data",
"after": "Data",
"start_char_pos": 245,
"end_char_pos": 261
},
{
"type": "A",
"before": null,
"after": "is analyzed",
"start_char_pos": 422,
"end_char_pos": 422
},
{
"type": "A",
"before": null,
"after": ", thus on persons alive at that time",
"start_char_pos": 479,
"end_char_pos": 479
},
{
"type": "R",
"before": "cultual constraints . We",
"after": "religious constraints and",
"start_char_pos": 588,
"end_char_pos": 612
}
]
| [
0,
110,
244,
424,
481,
609,
687,
922,
1106,
1150,
1166,
1183,
1225,
1423
]
|
1509.06034 | 1 | For dynamical systems arising from chemical reaction networks, persistence is the property that each species concentration remains positively bounded away from zero, as long as species concentrations were all positive in the beginning. We describe two graphical procedures for simplifying reaction networks without breaking known necessary or sufficient conditions for persistence, by iteratively removing so-called intermediates and catalysts from the network. The procedures are easy to apply and, in many cases, lead to highly simplified network structures, such as monomolecular networks. For specific classes of reaction networks, we show that these conditions are equivalent to one another and, thus, necessary and sufficient for persistence . Furthermore, they can also be characterized by easily checkable strong connectivity properties of the underlying graph. In particular, this is the case for (conservative) monomolecular networks, as well as cascades of a large class of post-translational modification systems (of which the MAPK cascade and the n-site futile cycle are prominent examples). Since the aforementioned sufficient conditions for persistence preclude the existence of boundary steady states, our method also provides a graphical tool to check for that. | For dynamical systems arising from chemical reaction networks, persistence is the property that each species concentration remains positively bounded away from zero, as long as species concentrations were all positive in the beginning. We describe two graphical procedures for simplifying reaction networks without breaking known necessary or sufficient conditions for persistence, by iteratively removing so-called intermediates and catalysts from the network. The procedures are easy to apply and, in many cases, lead to highly simplified network structures, such as monomolecular networks. For specific classes of reaction networks, we show that these conditions for persistence are equivalent to one another . Furthermore, they can also be characterized by easily checkable strong connectivity properties of a related graph. In particular, this is the case for (conservative) monomolecular networks, as well as cascades of a large class of post-translational modification systems (of which the MAPK cascade and the n-site futile cycle are prominent examples). Since one of the aforementioned sufficient conditions for persistence precludes the existence of boundary steady states, our method also provides a graphical tool to check for that. | [
{
"type": "A",
"before": null,
"after": "for persistence",
"start_char_pos": 666,
"end_char_pos": 666
},
{
"type": "D",
"before": "and, thus, necessary and sufficient for persistence",
"after": null,
"start_char_pos": 697,
"end_char_pos": 748
},
{
"type": "R",
"before": "the underlying",
"after": "a related",
"start_char_pos": 849,
"end_char_pos": 863
},
{
"type": "A",
"before": null,
"after": "one of",
"start_char_pos": 1112,
"end_char_pos": 1112
},
{
"type": "R",
"before": "preclude",
"after": "precludes",
"start_char_pos": 1170,
"end_char_pos": 1178
}
]
| [
0,
235,
461,
592,
750,
870,
1105
]
|
1509.06225 | 1 | In this paper an algorithm is given to determine all possible structurally different linearly conjugate realizations of a given kinetic polynomial system. The solution is based on the iterative search for constrained dense realizations using linear programming. Since there might exist exponentially many different reaction graph structures, we cannot expect to have a polynomial-time algorithm, but it can be shown that polynomial time is elapsed between displaying any two consecutive realizations. The correctness of the algorithm is proved, and the possibilities of parallel implementation are outlined . The operation of the method is shown on two illustrative examples. | In this paper an algorithm is given to determine all possible structurally different linearly conjugate realizations of a given kinetic polynomial system. The solution is based on the iterative search for constrained dense realizations using linear programming. Since there might exist exponentially many different reaction graph structures, we cannot expect to have a polynomial-time algorithm, but we URLanize the computation in such a way that polynomial time is elapsed between displaying any two consecutive realizations. The correctness of the algorithm is proved, and possibilities of a parallel implementation are discussed . The operation of the method is shown on two illustrative examples. | [
{
"type": "R",
"before": "it can be shown",
"after": "we URLanize the computation in such a way",
"start_char_pos": 400,
"end_char_pos": 415
},
{
"type": "R",
"before": "the possibilities of",
"after": "possibilities of a",
"start_char_pos": 549,
"end_char_pos": 569
},
{
"type": "R",
"before": "outlined",
"after": "discussed",
"start_char_pos": 598,
"end_char_pos": 606
}
]
| [
0,
154,
261,
500,
608
]
|
1509.06472 | 1 | We consider a market with fractional Brownian motion with stochastic integrals generated by the Riemann sums. We found that this market is arbitrage free if admissible strategies that are using observations with an arbitrarily small delay. Moreover, we found that this approach eliminates the discontinuity with respect to the Hurst parameter H at H=1/ 2 of the expectations of stochastic integrals. | We consider a market with fractional Brownian motion with stochastic integrals generated by the Riemann sums. We found that this market is arbitrage free if admissible strategies that are using observations with an arbitrarily small delay. Moreover, we found that this approach eliminates the discontinuity of the stochastic integrals with respect to the Hurst parameter H at H=1/ 2. | [
{
"type": "A",
"before": null,
"after": "of the stochastic integrals",
"start_char_pos": 307,
"end_char_pos": 307
},
{
"type": "R",
"before": "2 of the expectations of stochastic integrals.",
"after": "2.",
"start_char_pos": 354,
"end_char_pos": 400
}
]
| [
0,
109,
239
]
|
1509.06612 | 1 | Historical economic growth is analysedusing the method of reciprocal values . Included in the analysis is the world and regional economic growth. The analysis demonstrates that the natural tendency for the historical economic growth was to increase hyperbolically . | Data describing historical economic growth are analysed . Included in the analysis is the world and regional economic growth. The analysis demonstrates that historical economic growth had a natural tendency to follow hyperbolic distributions. Parameters describing hyperbolic distributions have been determined. A search for takeoffs from stagnation to growth produced negative results. This analysis throws a new light on the interpretation of the mechanism of the historical economic growth and suggests new lines of research . | [
{
"type": "R",
"before": "Historical economic growth is analysedusing the method of reciprocal values",
"after": "Data describing historical economic growth are analysed",
"start_char_pos": 0,
"end_char_pos": 75
},
{
"type": "R",
"before": "the natural tendency for the",
"after": "historical economic growth had a natural tendency to follow hyperbolic distributions. Parameters describing hyperbolic distributions have been determined. A search for takeoffs from stagnation to growth produced negative results. This analysis throws a new light on the interpretation of the mechanism of the",
"start_char_pos": 177,
"end_char_pos": 205
},
{
"type": "R",
"before": "was to increase hyperbolically",
"after": "and suggests new lines of research",
"start_char_pos": 233,
"end_char_pos": 263
}
]
| [
0,
77,
145
]
|
1509.07219 | 1 | Biopolymers serve as one-dimensional tracks on which motor proteins move to perform their biological roles. Motor protein phenomena have inspired theoretical models of one-dimensional transport, crowding, and jamming. Experiments studying the motion of Xklp1 motors on reconstituted antiparallel microtubule overlaps demonstrated that motors recruited to the overlap walk toward the plus end of individual microtubules and frequently switch between filaments. We study a model of this system that couples the totally asymmetric simple exclusion process (TASEP) for motor motion with switches between antiparallel filaments and binding kinetics. We determine steady-state motor density profiles for fixed-length overlaps using exact and approximate solutions of the continuum differential equations and compare to kinetic Monte Carlo simulations. The center region, far from the overlap ends, has a constant motor density as one would na\"ively expect. However, rather than following a simple binding equilibrium, the center motor density depends on total overlap length, motor speed, and motor switching rate. The size of the crowded boundary layer near the overlap ends is also dependent on the overlap length and switching rate in addition to the motor speed and bulk concentration. The antiparallel microtubule overlap geometry may offer a novel mechanism for biological regulation of protein concentration and consequent activity. | Biopolymers serve as one-dimensional tracks on which motor proteins move to perform their biological roles. Motor protein phenomena have inspired theoretical models of one-dimensional transport, crowding, and jamming. Experiments studying the motion of Xklp1 motors on reconstituted antiparallel microtubule overlaps demonstrated that motors recruited to the overlap walk toward the plus end of individual microtubules and frequently switch between filaments. We study a model of this system that couples the totally asymmetric simple exclusion process (TASEP) for motor motion with switches between antiparallel filaments and binding kinetics. We determine steady-state motor density profiles for fixed-length overlaps using exact and approximate solutions of the continuum differential equations and compare to kinetic Monte Carlo simulations. Overlap motor density profiles and motor trajectories resemble experimental measurements. The phase diagram of the model is similar to the single-filament case for low switching rate, while for high switching rate we find a new low density-high density-low density-high density phase. The overlap center region, far from the overlap ends, has a constant motor density as one would naively expect. However, rather than following a simple binding equilibrium, the center motor density depends on total overlap length, motor speed, and motor switching rate. The size of the crowded boundary layer near the overlap ends is also dependent on the overlap length and switching rate in addition to the motor speed and bulk concentration. The antiparallel microtubule overlap geometry may offer a previously unrecognized mechanism for biological regulation of protein concentration and consequent activity. | [
{
"type": "R",
"before": "The",
"after": "Overlap motor density profiles and motor trajectories resemble experimental measurements. The phase diagram of the model is similar to the single-filament case for low switching rate, while for high switching rate we find a new low density-high density-low density-high density phase. The overlap",
"start_char_pos": 846,
"end_char_pos": 849
},
{
"type": "R",
"before": "na\\\"ively",
"after": "naively",
"start_char_pos": 934,
"end_char_pos": 943
},
{
"type": "R",
"before": "novel",
"after": "previously unrecognized",
"start_char_pos": 1343,
"end_char_pos": 1348
}
]
| [
0,
107,
217,
459,
644,
845,
951,
1109,
1284
]
|
1509.07719 | 1 | We perform a geometric study of the equilibrium locus of the Ribosome Flow Model on a Ring . We prove that when considering the set of all possible values of the parameters, the equilibrium locus is a smooth manifold with boundaries , while for a given value of the parameters, it is an embedded smooth and connected curve. For different values of the parameters, the curves are all isomorphic. Moreover, we show how to build a homotopy between different curves obtained for different values of the parameter set. This procedure allows the efficient computation of the equilibrium point for each value of some first integral of the system. This point would have been otherwise difficult to be computed for higher dimensions. We illustrate this construction by some numerical experiments . | We perform a geometric study of the equilibrium locus of the flow that models the diffusion process over a circular network of cells . We prove that when considering the set of all possible values of the parameters, the equilibrium locus is a smooth manifold with corners , while for a given value of the parameters, it is an embedded smooth and connected curve. For different values of the parameters, the curves are all isomorphic. Moreover, we show how to build a homotopy between different curves obtained for different values of the parameter set. This procedure allows the efficient computation of the equilibrium point for each value of some first integral of the system. This point would have been otherwise difficult to be computed for higher dimensions. We illustrate this construction by some numerical experiments . Eventually, we show that when considering the parameters as inputs, one can easily bring the system asymptotically to any equilibrium point in the reachable set, which we also easily characterize . | [
{
"type": "R",
"before": "Ribosome Flow Model on a Ring",
"after": "flow that models the diffusion process over a circular network of cells",
"start_char_pos": 61,
"end_char_pos": 90
},
{
"type": "R",
"before": "boundaries",
"after": "corners",
"start_char_pos": 222,
"end_char_pos": 232
},
{
"type": "A",
"before": null,
"after": ". Eventually, we show that when considering the parameters as inputs, one can easily bring the system asymptotically to any equilibrium point in the reachable set, which we also easily characterize",
"start_char_pos": 787,
"end_char_pos": 787
}
]
| [
0,
92,
323,
394,
513,
639,
724
]
|
1509.07982 | 1 | We consider the problem of jointly estimating multiple precision matrices from (aggregated) high-dimensional data consisting of distinct classes. An \ell_2-penalized maximum-likelihood approach is employed. The suggested approach is flexible and generic, incorporating several other \ell_2-penalized estimators as special cases. In addition, the approach allows for the specification of target matrices through which prior knowledge may be incorporated and which can stabilize the estimation procedure in high-dimensional settings. The result is a targeted fused ridge estimator that is of use when the precision matrices of the constituent classes are believed to chiefly share the same structure while potentially differing in a number of locations of interest. It has many applications in (multi)factorial study designs. We focus on the graphical interpretation of precision matrices with the proposed estimator then serving as a basis for integrative or meta-analytic Gaussian graphical modeling. Situations are considered in which the classes are defined by data sets and /or (subtypes of ) diseases. The performance of the proposed estimator in the graphical modeling setting is assessed through extensive simulation experiments. Its practical usability is illustrated by the differential network modeling of 11 large-scale diffuse large B-cell lymphoma gene expression data sets . The estimator and its related procedures are incorporated into the R-package rags2ridges. | We consider the problem of jointly estimating multiple inverse covariance matrices from high-dimensional data consisting of distinct classes. An \ell_2-penalized maximum likelihood approach is employed. The suggested approach is flexible and generic, incorporating several other \ell_2-penalized estimators as special cases. In addition, the approach allows specification of target matrices through which prior knowledge may be incorporated and which can stabilize the estimation procedure in high-dimensional settings. The result is a targeted fused ridge estimator that is of use when the precision matrices of the constituent classes are believed to chiefly share the same structure while potentially differing in a number of locations of interest. It has many applications in (multi)factorial study designs. We focus on the graphical interpretation of precision matrices with the proposed estimator then serving as a basis for integrative or meta-analytic Gaussian graphical modeling. Situations are considered in which the classes are defined by data sets and subtypes of diseases. The performance of the proposed estimator in the graphical modeling setting is assessed through extensive simulation experiments. Its practical usability is illustrated by the differential network modeling of 12 large-scale gene expression data sets of diffuse large B-cell lymphoma subtypes . The estimator and its related procedures are incorporated into the R-package rags2ridges. | [
{
"type": "R",
"before": "precision matrices from (aggregated)",
"after": "inverse covariance matrices from",
"start_char_pos": 55,
"end_char_pos": 91
},
{
"type": "R",
"before": "maximum-likelihood",
"after": "maximum likelihood",
"start_char_pos": 166,
"end_char_pos": 184
},
{
"type": "D",
"before": "for the",
"after": null,
"start_char_pos": 362,
"end_char_pos": 369
},
{
"type": "R",
"before": "/or (subtypes of )",
"after": "subtypes of",
"start_char_pos": 1077,
"end_char_pos": 1095
},
{
"type": "R",
"before": "11",
"after": "12",
"start_char_pos": 1315,
"end_char_pos": 1317
},
{
"type": "D",
"before": "diffuse large B-cell lymphoma",
"after": null,
"start_char_pos": 1330,
"end_char_pos": 1359
},
{
"type": "A",
"before": null,
"after": "of diffuse large B-cell lymphoma subtypes",
"start_char_pos": 1386,
"end_char_pos": 1386
}
]
| [
0,
145,
206,
328,
531,
763,
823,
1000,
1105,
1235,
1388
]
|
1509.08281 | 1 | We study the high-frequency limits of strategies and costs in a Nash equilibrium for two agents that are competing to minimize liquidation costs in a discrete-time market impact model with exponentially decaying price impact and quadratic transaction costs of size \theta\ge0. We show that, for \theta=0, equilibrium strategies and costs will oscillate indefinitely between two accumulation points. For \theta>0, however, both strategiesand costs will converge towards limits that are independent of \theta. We then show that the limiting strategies form a Nash equilibrium for a continuous-time version of the model with \theta equal to a certain critical value \theta^*>0, and that the corresponding expected costs coincide with the high-frequency limits of the discrete-time equilibrium costs. For \theta\neq\theta^*, however, continuous-time Nash equilibria will typically not exist. Our results permit us to give mathematically rigorous proofs of numerical observations made in Schied and Zhang arXiv:1305.4013, 2013 . In particular, we provide a range of model parameters for which the limiting expected costs of both agents are decreasing functions of \theta. That is, for sufficiently high trading speed, raising additional transaction costs can reduce the expected costs of all agents. | We study the high-frequency limits of strategies and costs in a Nash equilibrium for two agents that are competing to minimize liquidation costs in a discrete-time market impact model with exponentially decaying price impact and quadratic transaction costs of size \theta\ge0. We show that, for \theta=0, equilibrium strategies and costs will oscillate indefinitely between two accumulation points. For \theta>0, however, strategies, costs, and total transaction costs will converge towards limits that are independent of \theta. We then show that the limiting strategies form a Nash equilibrium for a continuous-time version of the model with \theta equal to a certain critical value \theta^*>0, and that the corresponding expected costs coincide with the high-frequency limits of the discrete-time equilibrium costs. For \theta\neq\theta^*, however, continuous-time Nash equilibria will typically not exist. Our results permit us to give mathematically rigorous proofs of numerical observations made in Schied and Zhang ( 2013 ) . In particular, we provide a range of model parameters for which the limiting expected costs of both agents are decreasing functions of \theta. That is, for sufficiently high trading speed, raising additional transaction costs can reduce the expected costs of all agents. | [
{
"type": "R",
"before": "both strategiesand",
"after": "strategies, costs, and total transaction",
"start_char_pos": 422,
"end_char_pos": 440
},
{
"type": "R",
"before": "arXiv:1305.4013,",
"after": "(",
"start_char_pos": 1000,
"end_char_pos": 1016
},
{
"type": "A",
"before": null,
"after": ")",
"start_char_pos": 1022,
"end_char_pos": 1022
}
]
| [
0,
276,
398,
507,
796,
887,
1167
]
|
1509.08869 | 1 | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations of the price process together with its jump part. We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and non-normal asymptotic behavior. We also present some simulations to illustrate our results. | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations of the price process together with its jump part. We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and mixed normal (but non-normal ) asymptotic behavior. We also present some simulations to illustrate our results. | [
{
"type": "A",
"before": null,
"after": "mixed normal (but",
"start_char_pos": 342,
"end_char_pos": 342
},
{
"type": "A",
"before": null,
"after": ")",
"start_char_pos": 354,
"end_char_pos": 354
}
]
| [
0,
200,
375
]
|
1509.08869 | 2 | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations of the price process together with its jump part . We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and mixed normal (but non-normal) asymptotic behavior. We also present some simulations to illustrate our results. | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations . We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and mixed normal (but non-normal) asymptotic behavior. We also present some numerical illustrations to confirm our results. | [
{
"type": "D",
"before": "of the price process together with its jump part",
"after": null,
"start_char_pos": 151,
"end_char_pos": 199
},
{
"type": "R",
"before": "simulations to illustrate",
"after": "numerical illustrations to confirm",
"start_char_pos": 415,
"end_char_pos": 440
}
]
| [
0,
201,
393
]
|
1509.08869 | 3 | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations . We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and mixed normal (but non-normal) asymptotic behavior . We also present some numerical illustrations to confirm our results. | We study asymptotic properties of maximum likelihood estimators of drift parameters for a jump-type Heston model based on continuous time observations , where the jump process can be any purely non-Gaussian L\'evy process of not necessarily bounded variation with a L\'evy measure concentrated on (-1,\infty) . We prove strong consistency and asymptotic normality for all admissible parameter values except one, where we show only weak consistency and mixed normal (but non-normal) asymptotic behavior . It turns out that the volatility of the price process is a measurable function of the price process . We also present some numerical illustrations to confirm our results. | [
{
"type": "A",
"before": null,
"after": ", where the jump process can be any purely non-Gaussian L\\'evy process of not necessarily bounded variation with a L\\'evy measure concentrated on (-1,\\infty)",
"start_char_pos": 151,
"end_char_pos": 151
},
{
"type": "A",
"before": null,
"after": ". It turns out that the volatility of the price process is a measurable function of the price process",
"start_char_pos": 345,
"end_char_pos": 345
}
]
| [
0,
153,
347
]
|
1509.09174 | 1 | Computational models of complex systems are usually elaborate and sensitive to implementation details, characteristics which often affect model verification and validation. Model replication is a possible solution to this problem, as it bypasses the biases associated with the language or toolkit used to develop the original model, promoting model verification , model validation, and improved modelunderstanding. Some argue that a computational model is untrustworthy until it has been successfully replicated. However, most models have only been implemented by the original developer, and thus, have never been replicated. Several reasons for this problem have been identified, namely: a) lack of incentive; b) below par modelcommunication; c) insufficient knowledge of how to replicate; and, d) level of difficulty of the replication task. In this paper, we present a model comparison technique, which uses principal component analysis to convert simulation output into a set of linearly uncorrelated statistical measures, analyzable in a consistent, model-independent fashion. It is appropriate for ascertaining distributional equivalence of a model replication with its original implementation. Besides model-independence, this technique has three other desirable properties: a) it automatically selects output features that best explain implementation differences; b) it does not depend on the distributional properties of simulation output; and, c) it simplifies the modelers' work, as it can be used directly on simulation outputs. The proposed technique is shown to produce similar results to classic comparison methods when applied to a well-studied reference model. | Computational models of complex systems are usually elaborate and sensitive to implementation details, characteristics which often affect their verification and validation. Model replication is a possible solution to this issue. It avoids biases associated with the language or toolkit used to develop the original model, not only promoting its verification and validation, but also fostering the credibility of the underlying conceptual model. However, different model implementations must be compared to assess their equivalence. The problem is, given two or more implementations of a stochastic model, how to prove that they display similar behavior? In this paper, we present a model comparison technique, which uses principal component analysis to convert simulation output into a set of linearly uncorrelated statistical measures, analyzable in a consistent, model-independent fashion. It is appropriate for ascertaining distributional equivalence of a model replication with its original implementation. Besides model-independence, this technique has three other desirable properties: a) it automatically selects output features that best explain implementation differences; b) it does not depend on the distributional properties of simulation output; and, c) it simplifies the modelers' work, as it can be used directly on simulation outputs. The proposed technique is shown to produce similar results to classic comparison methods when applied to a well-studied reference model. | [
{
"type": "R",
"before": "model",
"after": "their",
"start_char_pos": 138,
"end_char_pos": 143
},
{
"type": "R",
"before": "problem, as it bypasses the",
"after": "issue. It avoids",
"start_char_pos": 222,
"end_char_pos": 249
},
{
"type": "R",
"before": "promoting model verification , model validation, and improved modelunderstanding. Some argue that a computational model is untrustworthy until it has been successfully replicated. However, most models have only been implemented by the original developer, and thus, have never been replicated. Several reasons for this problem have been identified, namely: a) lack of incentive; b) below par modelcommunication; c) insufficient knowledge of how to replicate; and, d) level of difficulty of the replication task.",
"after": "not only promoting its verification and validation, but also fostering the credibility of the underlying conceptual model. However, different model implementations must be compared to assess their equivalence. The problem is, given two or more implementations of a stochastic model, how to prove that they display similar behavior?",
"start_char_pos": 333,
"end_char_pos": 843
}
]
| [
0,
172,
414,
512,
625,
710,
743,
790,
843,
1081,
1200,
1371,
1448,
1540
]
|
1510.01197 | 1 | Theory of chemical reaction network recent developedand application of the frame work for informatics has been aimed. Here, we hypothesized chemical reaction network that obeys Tsallis q-statistics. We applied the Crooks fluctuation theorem for analysis of analyzed an idealized coding way on a simple chemical reaction cascade from perspectives of information conveyed along the signaling pathways . As a result, the information could be quantitatively calculated using Tsallis q-statistics. This mathematically formulating provides a general quantitative viewpoint of biological cellular signaling suitable to evaluate redundancies in actual signaling cascades . | The field of information science has greatly developed, and applications in various fields have emerged. In this paper, we evaluated the coding system in the theory of Tsallis entropy for transmission of messages and aimed to formulate the channel capacity by maximization of the Tsallis entropy within a given condition of code length . As a result, we obtained a simple relational expression between code length and code appearance probability and, additionally, a generalized formula of the channel capacity on the basis of Tsallis entropy statistics. This theoretical framework may contribute to data processing techniques and other applications . | [
{
"type": "R",
"before": "Theory of chemical reaction network recent developedand application of the frame work for informatics has been aimed. Here, we hypothesized chemical reaction network that obeys Tsallis q-statistics. We applied the Crooks fluctuation theorem for analysis of analyzed an idealized coding way on a simple chemical reaction cascade from perspectives of information conveyed along the signaling pathways",
"after": "The field of information science has greatly developed, and applications in various fields have emerged. In this paper, we evaluated the coding system in the theory of Tsallis entropy for transmission of messages and aimed to formulate the channel capacity by maximization of the Tsallis entropy within a given condition of code length",
"start_char_pos": 0,
"end_char_pos": 398
},
{
"type": "R",
"before": "the information could be quantitatively calculated using Tsallis q-statistics. This mathematically formulating provides a general quantitative viewpoint of biological cellular signaling suitable to evaluate redundancies in actual signaling cascades",
"after": "we obtained a simple relational expression between code length and code appearance probability and, additionally, a generalized formula of the channel capacity on the basis of Tsallis entropy statistics. This theoretical framework may contribute to data processing techniques and other applications",
"start_char_pos": 414,
"end_char_pos": 662
}
]
| [
0,
117,
198,
400,
492
]
|
1510.01420 | 1 | RNA genes are ubiquitous in cell physiology, with a diverse repertoire of known functions. In fact, the majority of the eukaryotic genome does not code for proteins, and thousands of conserved RNAs of currently unknown function have been identified . Knowledge of 3D structure could can help elucidate the function of these RNAs but despite outstanding word using X-ray crystallography, NMR and cryoEM, structure determination remains low-throughput. RNA structure prediction in silico is a promising alternative. However, 3D structure prediction for large RNAs requires tertiary contacts between distant secondary structural elements that are difficult to infer with existing methods. Here, based only on sequences, we use a global statistical probability model of co-variation to detect 3D contacts, in analogy to recently developed breakthrough methods for computational protein folding. In blinded tests on 22 known RNA structures ranging in size from 65 to 1800 nucleotides, the predicted contacts matched physical interactions with 65-95\% prediction accuracy. Importantly, we infer many long-range tertiary contacts, including non-Watson-Crick interactions. When used as restraints in molecular dynamics simulations, the inferred contacts improve RNA 3D structure prediction to a coordinate error as low as 6 to 10 Angstrom rmsd with potential for use with other constraints. These contacts include functionally important interactions, such as those that distinguish the active and inactive conformations of four riboswitches. In blind prediction mode, we present evolutionary couplings for 180 RNAs of unknown structure (available at URL We anticipate that this approach will shed light on the structure and function of as yet less known RNA genes. | RNA genes are ubiquitous in cell physiology, but the vast majority of non-coding RNAs are poorly understood . Knowledge of 3D structure can help elucidate the function , but in silico 3D structure prediction for large RNAs requires tertiary contacts between distant secondary structural elements that are difficult to infer with existing methods. Using a global probability model of sequence co-variation , we predict contact with 65-90\% precision and capture many long-range tertiary contacts, including non-Watson-Crick interactions. These contacts allow all-atom blinded structure prediction with an accuracy of 6-10 rmsd. We present evolutionary couplings for 160 RNAs of unknown structure (available at URL and use these predictions to shed light on the mechanism of tRNA sensing by the T-box riboswitch, as well as the evolutionary history of the ribozyme RNase P. We anticipate that this approach will shed light on the structure and function of as yet less known RNA genes. | [
{
"type": "R",
"before": "with a diverse repertoire of known functions. In fact, the majority of the eukaryotic genome does not code for proteins, and thousands of conserved RNAs of currently unknown function have been identified",
"after": "but the vast majority of non-coding RNAs are poorly understood",
"start_char_pos": 45,
"end_char_pos": 248
},
{
"type": "D",
"before": "could",
"after": null,
"start_char_pos": 277,
"end_char_pos": 282
},
{
"type": "R",
"before": "of these RNAs but despite outstanding word using X-ray crystallography, NMR and cryoEM, structure determination remains low-throughput. RNA structure prediction in silico is a promising alternative. However,",
"after": ", but in silico",
"start_char_pos": 315,
"end_char_pos": 522
},
{
"type": "R",
"before": "Here, based only on sequences, we use a global statistical",
"after": "Using a global",
"start_char_pos": 686,
"end_char_pos": 744
},
{
"type": "A",
"before": null,
"after": "sequence",
"start_char_pos": 766,
"end_char_pos": 766
},
{
"type": "R",
"before": "to detect 3D contacts, in analogy to recently developed breakthrough methods for computational protein folding. In blinded tests on 22 known RNA structures ranging in size from 65 to 1800 nucleotides, the predicted contacts matched physical interactions with 65-95\\% prediction accuracy. Importantly, we infer",
"after": ", we predict contact with 65-90\\% precision and capture",
"start_char_pos": 780,
"end_char_pos": 1089
},
{
"type": "R",
"before": "When used as restraints in molecular dynamics simulations, the inferred contacts improve RNA 3D structure prediction to a coordinate error as low as 6 to 10 Angstrom rmsd with potential for use with other constraints. These contacts include functionally important interactions, such as those that distinguish the active and inactive conformations of four riboswitches. In blind prediction mode, we",
"after": "These contacts allow all-atom blinded structure prediction with an accuracy of 6-10 rmsd. We",
"start_char_pos": 1166,
"end_char_pos": 1563
},
{
"type": "R",
"before": "180",
"after": "160",
"start_char_pos": 1599,
"end_char_pos": 1602
},
{
"type": "A",
"before": null,
"after": "and use these predictions to shed light on the mechanism of tRNA sensing by the T-box riboswitch, as well as the evolutionary history of the ribozyme RNase P.",
"start_char_pos": 1647,
"end_char_pos": 1647
}
]
| [
0,
90,
450,
513,
685,
891,
1067,
1165,
1383,
1534
]
|
1510.01420 | 2 | RNA genes are ubiquitousin cell physiology , but the vast majority of non-coding RNAs are poorly understood. Knowledge of 3D structure can help elucidate the function, but in silico 3D structure prediction for large RNAs requires tertiary contacts between distant secondary structural elements that are difficult to infer with existing methods. Using a global probability model of sequence co-variation , we predict contact with 65-90\% precision and capture many long-range tertiary contacts, including non-Watson-Crick interactions . These contacts allow all-atom blinded structure prediction with an accuracy of 6-10 rmsd. We present evolutionary couplings for 160 RNAs of unknown structure (available at URL and use these predictions to shed light on the mechanism of tRNA sensing by the T-box riboswitch, as well as the evolutionary history of the ribozyme RNase P. We anticipate that this approach will shed light on the structure and function of as yet less known RNAgenes . | Non-coding RNAs are ubiquitous , but the discovery of new RNA gene sequences far outpaces research on their structure and functional interactions. We mine the evolutionary sequence record to derive precise information about function and structure of RNAs and RNA-protein complexes. As in protein structure prediction, we use maximum entropy global probability models of sequence co-variation to infer evolutionarily constrained nucleotide-nucleotide interactions within RNA molecules, and nucleotide-amino acid interactions in RNA-protein complexes. The predicted contacts allow all-atom blinded 3D structure prediction at good accuracy for several known RNA structures and RNA-protein complexes. For unknown structures, we predict contacts in 160 non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings help identify important functional interactions, e.g., at switch points in riboswitches and at a complex nucleation site in HIV. Aided by accelerating sequence accumulation, evolutionary coupling analysis can accelerate the discovery of functional interactions and 3D structures involving RNA . | [
{
"type": "R",
"before": "RNA genes are ubiquitousin cell physiology",
"after": "Non-coding RNAs are ubiquitous",
"start_char_pos": 0,
"end_char_pos": 42
},
{
"type": "R",
"before": "vast majority of non-coding RNAs are poorly understood. Knowledge of 3D structure can help elucidate the function, but in silico 3D structure prediction for large RNAs requires tertiary contacts between distant secondary structural elements that are difficult to infer with existing methods. Using a global probability model",
"after": "discovery of new RNA gene sequences far outpaces research on their structure and functional interactions. We mine the evolutionary sequence record to derive precise information about function and structure of RNAs and RNA-protein complexes. As in protein structure prediction, we use maximum entropy global probability models",
"start_char_pos": 53,
"end_char_pos": 377
},
{
"type": "R",
"before": ", we predict contact with 65-90\\% precision and capture many long-range tertiary contacts, including non-Watson-Crick interactions . These",
"after": "to infer evolutionarily constrained nucleotide-nucleotide interactions within RNA molecules, and nucleotide-amino acid interactions in RNA-protein complexes. The predicted",
"start_char_pos": 403,
"end_char_pos": 541
},
{
"type": "R",
"before": "structure prediction with an accuracy of 6-10 rmsd. We present evolutionary couplings for",
"after": "3D structure prediction at good accuracy for several known RNA structures and RNA-protein complexes. For unknown structures, we predict contacts in",
"start_char_pos": 574,
"end_char_pos": 663
},
{
"type": "R",
"before": "RNAs of unknown structure (available at URL and use these predictions to shed light on the mechanism of tRNA sensing by the T-box riboswitch, as well as the evolutionary history of the ribozyme RNase P. We anticipate that this approach will shed light on the structure and function of as yet less known RNAgenes",
"after": "non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings help identify important functional interactions, e.g., at switch points in riboswitches and at a complex nucleation site in HIV. Aided by accelerating sequence accumulation, evolutionary coupling analysis can accelerate the discovery of functional interactions and 3D structures involving RNA",
"start_char_pos": 668,
"end_char_pos": 979
}
]
| [
0,
108,
344,
535,
625,
870
]
|
1510.01679 | 1 | We study several aspects of the so-called low-vol and low-\beta anomalies, some already documented (such as the universality of the effect over different geographical zones), others hitherto not clearly discussed in the literature. Our most significant message is that the low-vol anomaly is the result of two independent effects. One is the striking negative correlation between past realized volatility and dividend yield. Second is the fact that ex-dividend returns themselves are weakly dependent on the volatility level, leading to better risk-adjusted returns for low-vol stocks. This effect is further amplified by compounding. We find that the low-vol strategy is not associated to short term reversals, nor does it qualify as a Risk-Premium strategy, since its overall skewness is slightly positive. For practical purposes, the strong dividend bias and the resulting correlation with other valuation metrics (such as Earnings to Price or Book to Price) does make the low-vol strategies to some extent redundant, at least for equities. | We study several aspects of the so-called low-vol and low-beta anomalies, some already documented (such as the universality of the effect over different geographical zones), others hitherto not clearly discussed in the literature. Our most significant message is that the low-vol anomaly is the result of two independent effects. One is the striking negative correlation between past realized volatility and dividend yield. Second is the fact that ex-dividend returns themselves are weakly dependent on the volatility level, leading to better risk-adjusted returns for low-vol stocks. This effect is further amplified by compounding. We find that the low-vol strategy is not associated to short term reversals, nor does it qualify as a Risk-Premium strategy, since its overall skewness is slightly positive. For practical purposes, the strong dividend bias and the resulting correlation with other valuation metrics (such as Earnings to Price or Book to Price) does make the low-vol strategies to some extent redundant, at least for equities. | [
{
"type": "R",
"before": "low-\\beta",
"after": "low-beta",
"start_char_pos": 54,
"end_char_pos": 63
}
]
| [
0,
231,
330,
424,
585,
634,
808
]
|
1510.01890 | 1 | We consider a continuous-time financial market that consists of securities available for dynamic trading, and securities only available for static trading. We work in a robust framework where a set of non-dominated models is given. The concept of semi-static completeness is introduced: it corresponds to having exact replication by means of semi-static strategies. We show that semi-static completeness is equivalent to an extremality property, and give a characterization of the induced filtration structure. Finally , we consider investors with additional information and, for specific types of extra information, we characterize the models that are semi-statically complete for the informed investors . | We consider a continuous-time financial market that consists of securities available for dynamic trading, and securities only available for static trading. We work in a robust framework where a set of non-dominated models is given. The concept of semi-static completeness is introduced: it corresponds to having exact replication by means of semi-static strategies. We show that semi-static completeness is equivalent to an extremality property, and give a characterization of the induced filtration structure. Furthermore , we consider investors with additional information and, for specific types of extra information, we characterize the models that are semi-statically complete for the informed investors . Finally, we provide some examples where robust pricing for informed and uninformed agents can be done over semi-statically complete models . | [
{
"type": "R",
"before": "Finally",
"after": "Furthermore",
"start_char_pos": 511,
"end_char_pos": 518
},
{
"type": "A",
"before": null,
"after": ". Finally, we provide some examples where robust pricing for informed and uninformed agents can be done over semi-statically complete models",
"start_char_pos": 705,
"end_char_pos": 705
}
]
| [
0,
155,
231,
365,
510
]
|
1510.02013 | 1 | High order discretization schemes of SDEs by using free Lia algebra valued random variables are introduced by Kusuoka, Lyons-Victoir, Ninomiya-Victoir and Ninomiya-Ninomiya etc . These schemes are called KLNV method. These scheme involves solving flow of vector fields usually by numerical method . The authors found the special Lie algebraic structure on the vector fields in the measure financial diffusion models. Using this structure, the flow associated with vector fields can be solved analytically , and enable the high speed computation . | High order discretization schemes of SDEs by using free Lie algebra valued random variables are introduced by Kusuoka, Lyons-Victoir, Ninomiya-Victoir and Ninomiya-Ninomiya . These schemes are called KLNV methods. They involve solving the flows of vector fields associated with SDEs and it is usually done by numerical methods . The authors found a special Lie algebraic structure on the vector fields in the major financial diffusion models. Using this structure, we can solve the flows associated with vector fields analytically and efficiently. Numerical examples show that our method saves the computation time drastically . | [
{
"type": "R",
"before": "Lia",
"after": "Lie",
"start_char_pos": 56,
"end_char_pos": 59
},
{
"type": "D",
"before": "etc",
"after": null,
"start_char_pos": 173,
"end_char_pos": 176
},
{
"type": "R",
"before": "method. These scheme involves solving flow",
"after": "methods. They involve solving the flows",
"start_char_pos": 209,
"end_char_pos": 251
},
{
"type": "R",
"before": "usually by numerical method",
"after": "associated with SDEs and it is usually done by numerical methods",
"start_char_pos": 269,
"end_char_pos": 296
},
{
"type": "R",
"before": "the",
"after": "a",
"start_char_pos": 317,
"end_char_pos": 320
},
{
"type": "R",
"before": "measure",
"after": "major",
"start_char_pos": 381,
"end_char_pos": 388
},
{
"type": "R",
"before": "the flow",
"after": "we can solve the flows",
"start_char_pos": 439,
"end_char_pos": 447
},
{
"type": "R",
"before": "can be solved analytically , and enable the high speed computation",
"after": "analytically and efficiently. Numerical examples show that our method saves the computation time drastically",
"start_char_pos": 478,
"end_char_pos": 544
}
]
| [
0,
178,
216,
298,
416
]
|
1510.02510 | 1 | Gene coexpression is a common feature employed in predicting buffering relationships that explain genetic interactions , which constitute an important mechanism behind the robustness of cells to genetic perturbations. The complete removal of such buffering connections impacts the entire molecular circuitry, ultimately leading to cellular death. Coexpression is commonly measured through Pearson correlation coefficients. However, Pearson correlation values are sensitive to indirect effects and often partial correlations are used instead. Yet, partial correlation values convey no information on the (linear) influence of the association within the entire multivariate system or, in other words, of the represented edge within the entire network. Jones and West (2005) showed that covariance can be decomposed into the weights of the paths that connect two variables within the corresponding undirected network. Here we provide a precise interpretation of path weights and show that, in the particular case of single-edge paths, this interpretation leads to a quantity we call networked partial correlation whose value depends on both the partial correlation between the intervening variables and their association with the rest of the multivariate system. We show that this new quantity correlates better with quantitative genetic interactions in yeast than classical coexpression measures . | Genetic interactions confer robustness on cells in response to genetic perturbations. This often occurs through molecular buffering mechanisms that can be predicted using, among other features, the degree of coexpression between genes, commonly estimated through marginal measures of association such as Pearson or Spearman correlation coefficients. However, marginal correlations are sensitive to indirect effects and often partial correlations are used instead. Yet, partial correlations convey no information about the (linear) influence of the coexpressed genes on the entire multivariate system , which may be crucial to discriminate functional associations from genetic interactions. To address these two shortcomings, here we propose to use the edge weight derived from the covariance decomposition over the paths of the associated gene network. We call this new quantity the networked partial correlation and use it to analyze genetic interactions in yeast . | [
{
"type": "R",
"before": "Gene coexpression is a common feature employed in predicting buffering relationships that explain genetic interactions , which constitute an important mechanism behind the robustness of cells",
"after": "Genetic interactions confer robustness on cells in response",
"start_char_pos": 0,
"end_char_pos": 191
},
{
"type": "R",
"before": "The complete removal of such buffering connections impacts the entire molecular circuitry, ultimately leading to cellular death. Coexpression is commonly measured through Pearson",
"after": "This often occurs through molecular buffering mechanisms that can be predicted using, among other features, the degree of coexpression between genes, commonly estimated through marginal measures of association such as Pearson or Spearman",
"start_char_pos": 218,
"end_char_pos": 396
},
{
"type": "R",
"before": "Pearson correlation values",
"after": "marginal correlations",
"start_char_pos": 432,
"end_char_pos": 458
},
{
"type": "R",
"before": "correlation values",
"after": "correlations",
"start_char_pos": 555,
"end_char_pos": 573
},
{
"type": "R",
"before": "on",
"after": "about",
"start_char_pos": 596,
"end_char_pos": 598
},
{
"type": "R",
"before": "association within",
"after": "coexpressed genes on",
"start_char_pos": 629,
"end_char_pos": 647
},
{
"type": "R",
"before": "or, in other words, of the represented edge within the entire network. Jones and West (2005) showed that covariance can be decomposed into the weights of the paths that connect two variables within the corresponding undirected network. Here we provide a precise interpretation of path weights and show that, in the particular case of single-edge paths, this interpretation leads to a quantity we call",
"after": ", which may be crucial to discriminate functional associations from genetic interactions. To address these two shortcomings, here we propose to use the edge weight derived from the covariance decomposition over the paths of the associated gene network. We call this new quantity the",
"start_char_pos": 679,
"end_char_pos": 1079
},
{
"type": "R",
"before": "whose value depends on both the partial correlation between the intervening variables and their association with the rest of the multivariate system. We show that this new quantity correlates better with quantitative",
"after": "and use it to analyze",
"start_char_pos": 1110,
"end_char_pos": 1326
},
{
"type": "D",
"before": "than classical coexpression measures",
"after": null,
"start_char_pos": 1357,
"end_char_pos": 1393
}
]
| [
0,
217,
346,
422,
541,
749,
914,
1259
]
|
1510.02630 | 1 | The possible recognition between double stranded (ds) DNA with homologous sequences was many times invoked to explain different biological observations. Direct pairing was considered among other possibilities, but it seems hardly compatible with the DNA structure . Using quantum chemistry, molecular mechanics, and hints from recent genetics experiments (Nat.Commun.5,3509,2014) here it is shown that direct recognition between homologous dsDNA is possible through formation of short quadruplexes by complementary hydrogen bonding of the major grooves . The constraints imposed by the predicted structures of the recognition units determine the mechanism of complexation between long dsDNA. This mechanism agrees with experimental data and explains several puzzling observations on the sequence dependence and the involvement of topoisomerases in the recognition. | Molecular recognition between two double stranded (ds) DNA with homologous sequences may not seem compatible with the B-DNA structure because the sequence information is hidden when it is used for joining the two strands. Nevertheless, it has to be invoked to account for various biological data. Using quantum chemistry, molecular mechanics, and hints from recent genetics experiments I show here that direct recognition between homologous dsDNA is possible through formation of short quadruplexes due to direct complementary hydrogen bonding of major groove surfaces in parallel alignment . The constraints imposed by the predicted structures of the recognition units determine the mechanism of complexation between long dsDNA. This mechanism and concomitant predictions agree with available experimental data and shed light upon the sequence effects and the possible involvement of topoisomerase II in the recognition. | [
{
"type": "R",
"before": "The possible recognition between",
"after": "Molecular recognition between two",
"start_char_pos": 0,
"end_char_pos": 32
},
{
"type": "R",
"before": "was many times invoked to explain different biological observations. Direct pairing was considered among other possibilities, but it seems hardly",
"after": "may not seem",
"start_char_pos": 84,
"end_char_pos": 229
},
{
"type": "R",
"before": "DNA structure .",
"after": "B-DNA structure because the sequence information is hidden when it is used for joining the two strands. Nevertheless, it has to be invoked to account for various biological data.",
"start_char_pos": 250,
"end_char_pos": 265
},
{
"type": "R",
"before": "(Nat.Commun.5,3509,2014) here it is shown",
"after": "I show here",
"start_char_pos": 355,
"end_char_pos": 396
},
{
"type": "R",
"before": "by",
"after": "due to direct",
"start_char_pos": 498,
"end_char_pos": 500
},
{
"type": "R",
"before": "the major grooves",
"after": "major groove surfaces in parallel alignment",
"start_char_pos": 535,
"end_char_pos": 552
},
{
"type": "R",
"before": "agrees with",
"after": "and concomitant predictions agree with available",
"start_char_pos": 707,
"end_char_pos": 718
},
{
"type": "R",
"before": "explains several puzzling observations on the sequence dependence and the involvement of topoisomerases",
"after": "shed light upon the sequence effects and the possible involvement of topoisomerase II",
"start_char_pos": 741,
"end_char_pos": 844
}
]
| [
0,
152,
367,
554,
691
]
|
1510.02808 | 1 | Consider a family of portfolio strategies with the aim of achieving the asymptotic growth rate of the best one. Cover's solution is to build a wealth-weighted average which can be regarded as a buy-and-hold portfolio of portfolios. When an optimal portfolio exists, the wealth-weighted average converges to it by concentration of wealth. Under suitable conditions , we show that the distribution of wealth in the family satisfies a pathwise large deviation principle as time tends to infinity. In particular, we study Cover's portfolio for the nonparametric family of functionally generated portfolios in stochastic portfolio theory and establish its asymptotic universality. | Consider a family of portfolio strategies with the aim of achieving the asymptotic growth rate of the best one. The idea behind Cover's universal portfolio is to build a wealth-weighted average which can be viewed as a buy-and-hold portfolio of portfolios. When an optimal portfolio exists, the wealth-weighted average converges to it by concentration of wealth. Working under a discrete time and pathwise setup , we show under suitable conditions that the distribution of wealth in the family satisfies a pathwise large deviation principle as time tends to infinity. Our main result extends Cover's portfolio to the nonparametric family of functionally generated portfolios in stochastic portfolio theory and establishes its asymptotic universality. | [
{
"type": "A",
"before": null,
"after": "The idea behind",
"start_char_pos": 112,
"end_char_pos": 112
},
{
"type": "R",
"before": "solution",
"after": "universal portfolio",
"start_char_pos": 121,
"end_char_pos": 129
},
{
"type": "R",
"before": "regarded",
"after": "viewed",
"start_char_pos": 181,
"end_char_pos": 189
},
{
"type": "R",
"before": "Under suitable conditions",
"after": "Working under a discrete time and pathwise setup",
"start_char_pos": 339,
"end_char_pos": 364
},
{
"type": "A",
"before": null,
"after": "under suitable conditions",
"start_char_pos": 375,
"end_char_pos": 375
},
{
"type": "R",
"before": "In particular, we study",
"after": "Our main result extends",
"start_char_pos": 496,
"end_char_pos": 519
},
{
"type": "R",
"before": "for",
"after": "to",
"start_char_pos": 538,
"end_char_pos": 541
},
{
"type": "R",
"before": "establish",
"after": "establishes",
"start_char_pos": 639,
"end_char_pos": 648
}
]
| [
0,
111,
232,
338,
495
]
|
1510.03220 | 1 | The paper develops an asymptotic expansion method for forward-backward SDEs driven by the random Poisson measures with sigma-finite compensators. The expansion is performed around the small-variance limit of the forward SDE and does not necessarily require a small size of the non-linearity in the BSDE's driver, which was actually the case for the linearization method proposed by the current authors before in a Brownian setup . A solution technique, which only requires a system of ODEs (one is non-linear and the others are linear) to be solved, as well as its error estimate are provided. In the case of a finite jump measure with a bounded intensity, one can also handle a state-dependent intensity process, which is quite relevant for many practical applications . | The paper develops an asymptotic expansion method for forward-backward SDEs (FBSDEs) driven by the random Poisson measures with sigma-finite compensators. The expansion is performed around the small-variance limit of the forward SDE and does not necessarily require a small size of the non-linearity in the BSDE's driver, which was actually the case for the linearization method proposed by the current authors in a Brownian setup before . A solution technique, which only requires a system of ODEs (one is non-linear and the others are linear) to be solved, as well as its error estimate are provided. In the case of a finite jump measure with a bounded intensity, the method can also handle sate-dependent (and hence non-Poissonian) jumps, which are quite relevant for many practical applications . Based on the stability result, we also provide a rigorous justification to use arbitrarily smooth coefficients in FBSDEs for any approximation purpose whenever rather mild conditions are satisfied . | [
{
"type": "A",
"before": null,
"after": "(FBSDEs)",
"start_char_pos": 76,
"end_char_pos": 76
},
{
"type": "D",
"before": "before",
"after": null,
"start_char_pos": 403,
"end_char_pos": 409
},
{
"type": "A",
"before": null,
"after": "before",
"start_char_pos": 430,
"end_char_pos": 430
},
{
"type": "R",
"before": "one",
"after": "the method",
"start_char_pos": 659,
"end_char_pos": 662
},
{
"type": "R",
"before": "a state-dependent intensity process, which is",
"after": "sate-dependent (and hence non-Poissonian) jumps, which are",
"start_char_pos": 679,
"end_char_pos": 724
},
{
"type": "A",
"before": null,
"after": ". Based on the stability result, we also provide a rigorous justification to use arbitrarily smooth coefficients in FBSDEs for any approximation purpose whenever rather mild conditions are satisfied",
"start_char_pos": 772,
"end_char_pos": 772
}
]
| [
0,
146,
432,
595
]
|
1510.03220 | 2 | The paper develops an asymptotic expansion method for forward-backward SDEs (FBSDEs) driven by the random Poisson measures with sigma-finite compensators . The expansion is performed around the small-variance limit of the forward SDE and does not necessarily require a small size of the non-linearity in the BSDE's driver, which was actually the case for the linearization method proposed by the current authors in a Brownian setup before. A semi-analytic solution technique , which only requires a system of ODEs (one is non-linear and the others are linear) to be solved, as well as its error estimate are provided . In the case of a finite jump measure with a bounded intensity, the method can also handle sate-dependent ( and hence non-Poissonian ) jumps, which are quite relevant for many practical applications . Based on the stability result, we also provide a rigorous justification to use arbitrarily smooth coefficients in FBSDEs for any approximation purpose whenever rather mild conditions are satisfied . | This work provides a semi-analytic approximation method for decoupled forwardbackward SDEs (FBSDEs) with jumps. In particular, we construct an asymptotic expansion method for FBSDEs driven by the random Poisson measures with \sigma -finite compensators as well as the standard Brownian motions around the small-variance limit of the forward SDE . We provide a semi-analytic solution technique as well as its error estimate for which we only need to solve essentially a system of linear ODEs . In the case of a finite jump measure with a bounded intensity, the method can also handle state-dependent and hence non-Poissonian jumps, which are quite relevant for many practical applications . | [
{
"type": "R",
"before": "The paper develops an asymptotic expansion method for forward-backward",
"after": "This work provides a semi-analytic approximation method for decoupled forwardbackward",
"start_char_pos": 0,
"end_char_pos": 70
},
{
"type": "A",
"before": null,
"after": "with jumps. In particular, we construct an asymptotic expansion method for FBSDEs",
"start_char_pos": 85,
"end_char_pos": 85
},
{
"type": "R",
"before": "sigma-finite compensators . The expansion is performed",
"after": "\\sigma",
"start_char_pos": 129,
"end_char_pos": 183
},
{
"type": "A",
"before": null,
"after": "-finite compensators as well as the standard Brownian motions",
"start_char_pos": 184,
"end_char_pos": 184
},
{
"type": "R",
"before": "and does not necessarily require a small size of the non-linearity in the BSDE's driver, which was actually the case for the linearization method proposed by the current authors in a Brownian setup before. A",
"after": ". We provide a",
"start_char_pos": 236,
"end_char_pos": 443
},
{
"type": "D",
"before": ", which only requires a system of ODEs (one is non-linear and the others are linear) to be solved,",
"after": null,
"start_char_pos": 477,
"end_char_pos": 575
},
{
"type": "R",
"before": "are provided",
"after": "for which we only need to solve essentially a system of linear ODEs",
"start_char_pos": 606,
"end_char_pos": 618
},
{
"type": "R",
"before": "sate-dependent (",
"after": "state-dependent",
"start_char_pos": 711,
"end_char_pos": 727
},
{
"type": "D",
"before": ")",
"after": null,
"start_char_pos": 753,
"end_char_pos": 754
},
{
"type": "D",
"before": ". Based on the stability result, we also provide a rigorous justification to use arbitrarily smooth coefficients in FBSDEs for any approximation purpose whenever rather mild conditions are satisfied",
"after": null,
"start_char_pos": 819,
"end_char_pos": 1017
}
]
| [
0,
156,
441,
620
]
|
1510.03550 | 1 | We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index perform much better than the other stocks in the index. While randomly selecting a subset of securities from the index increases the chance of outperforming the index, it also increases the chance of underperforming the index , with the frequency of underperformance being larger than the frequency of overperformance . The relative likelihood of underperformance by investors choosing active management likely is much more important than the loss to those same investors of the higher fees for active management relative to passive index investing. Thus, the stakes for finding the best active managers may be larger than previously assumed. | We develop a simple stock selection model to explain why active equity managers tend to underperform a benchmark index. We motivate our model with the empirical observation that the best performing stocks in a broad market index often perform much better than the other stocks in the index. Randomly selecting a subset of securities from the index may dramatically increase the chance of underperforming the index . The relative likelihood of underperformance by investors choosing active management likely is much more important than the loss to those same investors from the higher fees for active management relative to passive index investing. Thus, active management may be even more challenging than previously believed, and the stakes for finding the best active managers may be larger than previously assumed. | [
{
"type": "A",
"before": null,
"after": "often",
"start_char_pos": 229,
"end_char_pos": 229
},
{
"type": "R",
"before": "While randomly",
"after": "Randomly",
"start_char_pos": 286,
"end_char_pos": 300
},
{
"type": "R",
"before": "increases",
"after": "may dramatically increase",
"start_char_pos": 349,
"end_char_pos": 358
},
{
"type": "D",
"before": "outperforming the index, it also increases the chance of",
"after": null,
"start_char_pos": 373,
"end_char_pos": 429
},
{
"type": "D",
"before": ", with the frequency of underperformance being larger than the frequency of overperformance",
"after": null,
"start_char_pos": 456,
"end_char_pos": 547
},
{
"type": "R",
"before": "of",
"after": "from",
"start_char_pos": 702,
"end_char_pos": 704
},
{
"type": "A",
"before": null,
"after": "active management may be even more challenging than previously believed, and",
"start_char_pos": 786,
"end_char_pos": 786
}
]
| [
0,
119,
285,
549,
779
]
|
1510.03638 | 1 | Coverage prediction is one of the most important aspects of cellular network optimization for a mobile operator . Spatial statistics can be used for coverage prediction. This approach is based on the collected geo-located measurements performed usually by drive test campaigns. Notice that uncertainty in reporting the location can result in inaccurate coverage prediction . In urban environments the location error can reach important levels up to 30 m using the Global Positioning System (GPS) . In this paper, we propose to consider the location uncertainty in the spatial prediction technique. We focus also on the complexity problem. We therefore propose to use the Fixed Rank Kriging (FRK) as spatial prediction technique . We validate the model using a field-like dataset obtained from a sophisticated simulation/planning tool. With the location uncertainty , the FRK proves to reduce the computational complexity of the spatial interpolation while keeping an acceptable prediction error . | Coverage optimization is an important process for the operator as it is a crucial prerequisite towards offering a satisfactory quality of service to the end-users. The first step of this process is coverage prediction, which can be performed by interpolating geo-located measurements reported to the network by mobile users' equipments. In previous works, we proposed a low complexity coverage prediction algorithm based on the adaptation of the Geo-statistics Fixed Rank Kriging (FRK) algorithm. We supposed that the geo-location information reported with the radio measurements was perfect, which is not the case in reality. In this paper, we study the impact of location uncertainty on the coverage prediction accuracy and we extend the previously proposed algorithm to include geo-location error in the prediction model . We validate the proposed algorithm using both simulated and real field measurements. The FRK extended to take into account the location uncertainty proves to enhance the prediction accuracy while keeping a reasonable computational complexity . | [
{
"type": "R",
"before": "prediction is one of the most important aspects of cellular network optimization for a mobile operator . Spatial statistics can be used for coverage prediction. This approach is based on the collected",
"after": "optimization is an important process for the operator as it is a crucial prerequisite towards offering a satisfactory quality of service to the end-users. The first step of this process is coverage prediction, which can be performed by interpolating",
"start_char_pos": 9,
"end_char_pos": 209
},
{
"type": "R",
"before": "performed usually by drive test campaigns. Notice that uncertainty in reporting the location can result in inaccurate coverage prediction . In urban environments the location error can reach important levels up to 30 m using the Global Positioning System (GPS) .",
"after": "reported to the network by mobile users' equipments. In previous works, we proposed a low complexity coverage prediction algorithm based on the adaptation of the Geo-statistics Fixed Rank Kriging (FRK) algorithm. We supposed that the geo-location information reported with the radio measurements was perfect, which is not the case in reality.",
"start_char_pos": 235,
"end_char_pos": 497
},
{
"type": "R",
"before": "propose to consider the location uncertainty in the spatial prediction technique. We focus also on the complexity problem. We therefore propose to use the Fixed Rank Kriging (FRK) as spatial prediction technique",
"after": "study the impact of location uncertainty on the coverage prediction accuracy and we extend the previously proposed algorithm to include geo-location error in the prediction model",
"start_char_pos": 516,
"end_char_pos": 727
},
{
"type": "R",
"before": "model using a field-like dataset obtained from a sophisticated simulation/planning tool. With",
"after": "proposed algorithm using both simulated and real field measurements. The FRK extended to take into account",
"start_char_pos": 746,
"end_char_pos": 839
},
{
"type": "R",
"before": ", the FRK proves to reduce the computational complexity of the spatial interpolation while keeping an acceptable prediction error",
"after": "proves to enhance the prediction accuracy while keeping a reasonable computational complexity",
"start_char_pos": 865,
"end_char_pos": 994
}
]
| [
0,
169,
277,
374,
597,
638,
729,
834
]
|
1510.04061 | 1 | Fractional processes have gained popularity in financial modeling due to the dependence structure of their increments and the roughness of their sample paths. The non-Markovianity of these processes gives, however, rise to conceptual and practical difficulties in computation and calibration. To address these issues, we show that a certain class of fractional processes can be represented as linear functionals of an infinite dimensional affine process. We demonstrate by means of several examples that the affine structure allows one to construct tractable financial models with fractional features. | Fractional processes have gained popularity in financial modeling due to the dependence structure of their increments and the roughness of their sample paths. The non-Markovianity of these processes gives, however, rise to conceptual and practical difficulties in computation and calibration. To address these issues, we show that a certain class of fractional processes can be represented as linear functionals of an infinite dimensional affine process. This can be derived from integral representations similar to those of Carmona, Coutin, Montseny, and Muravlev. We demonstrate by means of several examples that this allows one to construct tractable financial models with fractional features. | [
{
"type": "A",
"before": null,
"after": "This can be derived from integral representations similar to those of Carmona, Coutin, Montseny, and Muravlev.",
"start_char_pos": 455,
"end_char_pos": 455
},
{
"type": "R",
"before": "the affine structure",
"after": "this",
"start_char_pos": 505,
"end_char_pos": 525
}
]
| [
0,
158,
292,
454
]
|
1510.04350 | 1 | A great variety of biologically relevant monolayers present phase coexistence characterized by domains formed by lipids in a long-range ordered phase state dispersed in a continuous, disordered phase. Because of the difference in surface densities the domains possess an excess dipolar density with respect to the surrounding liquid phase . In this work we propose an alternative method to measure the dipolar repulsion for neutral lipid monolayers. The procedure is based on the comparison of the radial distribution function , g(r), from experiments and Brownian dynamic (BD) simulations . The domains were modeled as disks with surface dipolar density, whose strength was varied to best describe the experimentally determined monolayer structure. For comparison, the point dipole approximation was also studied. As an example, we applied the method for mixed monolayers with different proportions of distearoylphosphatidylcholine (DSPC) and dimyristoylphosphatidylcholine (DMPC ) and obtained the excess dipolar density, which were in agreement with those obtained from surface potential measurements . A systematic analysis for experimentally relevant parameter range is given, which may be used as a working curve for obtaining the dipolar repulsion in different systems provided that the experimental g(r) can be calculated from a statistically relevant amount of images . | A great variety of biologically relevant monolayers present phase coexistence characterized by domains formed by lipids in an ordered phase state dispersed in a continuous, disordered phase. The difference in surface densities between these phases originates inter-domain dipolar interactions, which are relevant for the determination of the spacial distribution of domains, as well as their dynamics . In this work , we propose a novel manner of estimating the dipolar repulsion using a passive method that involves the analysis of images of the monolayer with phase coexistence. The method is based on the comparison of the pair correlation function obtained from experiments with that obtained from Brownian dynamics simulations of a model system. As an example, we determined the difference in dipolar density of a binary monolayer of DSPC/DMPC at the air-water interface from the analysis of the radial distribution of domains, and the results are compared with those obtained by surface potential determinations . A systematic analysis for experimentally relevant parameter range is given, which may be used as a working curve for obtaining the dipolar repulsion in different systems . | [
{
"type": "R",
"before": "a long-range",
"after": "an",
"start_char_pos": 123,
"end_char_pos": 135
},
{
"type": "R",
"before": "Because of the",
"after": "The",
"start_char_pos": 201,
"end_char_pos": 215
},
{
"type": "R",
"before": "the domains possess an excess dipolar density with respect to the surrounding liquid phase",
"after": "between these phases originates inter-domain dipolar interactions, which are relevant for the determination of the spacial distribution of domains, as well as their dynamics",
"start_char_pos": 248,
"end_char_pos": 338
},
{
"type": "R",
"before": "we propose an alternative method to measure",
"after": ", we propose a novel manner of estimating",
"start_char_pos": 354,
"end_char_pos": 397
},
{
"type": "R",
"before": "for neutral lipid monolayers. The procedure",
"after": "using a passive method that involves the analysis of images of the monolayer with phase coexistence. The method",
"start_char_pos": 420,
"end_char_pos": 463
},
{
"type": "R",
"before": "radial distribution function , g(r), from experiments and Brownian dynamic (BD) simulations . The domains were modeled as disks with surface dipolar density, whose strength was varied to best describe the experimentally determined monolayer structure. For comparison, the point dipole approximation was also studied.",
"after": "pair correlation function obtained from experiments with that obtained from Brownian dynamics simulations of a model system.",
"start_char_pos": 498,
"end_char_pos": 814
},
{
"type": "R",
"before": "applied the method for mixed monolayers with different proportions of distearoylphosphatidylcholine (DSPC) and dimyristoylphosphatidylcholine (DMPC ) and obtained the excess dipolar density, which were in agreement",
"after": "determined the difference in dipolar density of a binary monolayer of DSPC/DMPC at the air-water interface from the analysis of the radial distribution of domains, and the results are compared",
"start_char_pos": 833,
"end_char_pos": 1047
},
{
"type": "R",
"before": "from surface potential measurements",
"after": "by surface potential determinations",
"start_char_pos": 1068,
"end_char_pos": 1103
},
{
"type": "D",
"before": "provided that the experimental g(r) can be calculated from a statistically relevant amount of images",
"after": null,
"start_char_pos": 1276,
"end_char_pos": 1376
}
]
| [
0,
200,
340,
449,
591,
749,
814,
1105
]
|
1510.04488 | 1 | In this paper, we analyze a scheduling algorithm which is suitable for the heterogeneous traffic network. In the large deviation setting, we are interested to see, how the asymptotic decay rate of maximum queue overflow probabilityachieved by this algorithm . We first derive an upper bound on the decay rate of the queue overflow probability as the queue overflow threshold approaches infinity. Then, we study several structural properties of the minimum cost path of the maximum queue length . Given these properties, we prove that the maximum queue length follows a sample path with linear increment. For certain parameter values , the scheduling algorithm is asymptotically optimal in reducing the maximum queue length. Through numerical results, we have show the large deviation properties of the queue length typically used in practice . | In this paper, we study the stability of light traffic achieved by a scheduling algorithm which is suitable for heterogeneous traffic networks. Since analyzing a scheduling algorithm is intractable using the conventional mathematical tool, our goal is to minimize the largest queue-overflow probability achieved by the algorithm. In the large deviation setting, this problem is equivalent to maximizing the asymptotic decay rate of the largest queue-overflow probability . We first derive an upper bound on the decay rate of the queue overflow probability as the queue overflow threshold approaches infinity. Then, we study several structural properties of the minimum-cost-path to overflow of the queue with the largest length, which is basically equivalent to the decay rate of the largest queue-overflow probability . Given these properties, we prove that the queue with the largest length follows a sample path with linear increment. For certain parameter value , the scheduling algorithm is asymptotically optimal in reducing the largest queue length. Through numerical results, we have shown the large deviation properties of the queue length typically used in practice while varying one parameter of the algorithm . | [
{
"type": "R",
"before": "analyze",
"after": "study the stability of light traffic achieved by",
"start_char_pos": 18,
"end_char_pos": 25
},
{
"type": "R",
"before": "the heterogeneous traffic network.",
"after": "heterogeneous traffic networks. Since analyzing a scheduling algorithm is intractable using the conventional mathematical tool, our goal is to minimize the largest queue-overflow probability achieved by the algorithm.",
"start_char_pos": 71,
"end_char_pos": 105
},
{
"type": "R",
"before": "we are interested to see, how",
"after": "this problem is equivalent to maximizing",
"start_char_pos": 138,
"end_char_pos": 167
},
{
"type": "R",
"before": "maximum queue overflow probabilityachieved by this algorithm",
"after": "the largest queue-overflow probability",
"start_char_pos": 197,
"end_char_pos": 257
},
{
"type": "R",
"before": "minimum cost path of the maximum queue length",
"after": "minimum-cost-path to overflow of the queue with the largest length, which is basically equivalent to the decay rate of the largest queue-overflow probability",
"start_char_pos": 448,
"end_char_pos": 493
},
{
"type": "R",
"before": "maximum queue",
"after": "queue with the largest",
"start_char_pos": 538,
"end_char_pos": 551
},
{
"type": "R",
"before": "values",
"after": "value",
"start_char_pos": 626,
"end_char_pos": 632
},
{
"type": "R",
"before": "maximum",
"after": "largest",
"start_char_pos": 702,
"end_char_pos": 709
},
{
"type": "R",
"before": "show",
"after": "shown",
"start_char_pos": 759,
"end_char_pos": 763
},
{
"type": "A",
"before": null,
"after": "while varying one parameter of the algorithm",
"start_char_pos": 842,
"end_char_pos": 842
}
]
| [
0,
105,
259,
395,
603,
723
]
|
1510.05118 | 1 | In this paper, we define weighted directed networks for large panels of financial time series where the edges and the associated weights are reflecting the dynamic conditional correlation structure of the panel. Those networks produce a most informative picture of the interconnections among the various series in the panel . In particular, we are combining this network-based analysis and a general dynamic factor decomposition in a study of the volatilities of the stocks of the Standard \&Poor's 100 index over the period 2000-2013. This approach allows us to decompose the panel into two components which represent the two main sources of variation of financial time series: common or market shocks, and the stock-specific or idiosyncratic ones. While the common components, driven by market shocks, are related to the non-diversifiable or {\it systematic } components of risk, the idiosyncratic components show important interdependencies which are nicely described through network structures. Those networks shed some light on the contagion phenomenons associated with financial crises, and help assessing how {\it systemic} a given firm islikely to be.We show how to estimate them by combining dynamic principal components and sparse VAR techniques. The results provide evidence of high positive intra-sectoral and lower, but nevertheless quite important, negative inter-sectoral, dependencies, the Energy and Financials sectors being the most interconnected ones. In particular, the Financials stocks appear to be the most central vertices in the network, making them the main source of contagion . | We consider weighted directed networks for analysing, over the period 2000-2013, the interdependencies between volatilities of a large panel of stocks belonging to the S\&P100 index . In particular, we focus on the so-called {\it Long-Run Variance Decomposition Network } (LVDN), where the nodes are stocks, and the weight associated with edge (i,j) represents the proportion of h-step-ahead forecast error variance of variable i accounted for by variable j's innovations. To overcome the curse of dimensionality, we decompose the panel into a component driven by few global, market-wide, factors, and an idiosyncratic one modelled by means of a sparse vector autoregression (VAR) model. Inversion of the VAR together with suitable identification restrictions, produces the estimated network, by means of which we can assess how {\it systemic} each firm is.~Our analysis demonstrates the prominent role of financial firms as sources of contagion, especially during the~2007-2008 crisis . | [
{
"type": "R",
"before": "In this paper, we define",
"after": "We consider",
"start_char_pos": 0,
"end_char_pos": 24
},
{
"type": "R",
"before": "large panels of financial time series where the edges and the associated weights are reflecting the dynamic conditional correlation structure of the panel. Those networks produce a most informative picture of the interconnections among the various series in the panel",
"after": "analysing, over the period 2000-2013, the interdependencies between volatilities of a large panel of stocks belonging to the S\\&P100 index",
"start_char_pos": 56,
"end_char_pos": 323
},
{
"type": "R",
"before": "are combining this network-based analysis and a general dynamic factor decomposition in a study of the volatilities of the stocks of the Standard \\&Poor's 100 index over the period 2000-2013. This approach allows us to decompose the panel into two components which represent the two main sources of variation of financial time series: common or market shocks, and the stock-specific or idiosyncratic ones. While the common components, driven by market shocks, are related to the non-diversifiable or",
"after": "focus on the so-called",
"start_char_pos": 344,
"end_char_pos": 843
},
{
"type": "R",
"before": "systematic",
"after": "Long-Run Variance Decomposition Network",
"start_char_pos": 849,
"end_char_pos": 859
},
{
"type": "R",
"before": "components of risk, the idiosyncratic components show important interdependencies which are nicely described through network structures. Those networks shed some light on the contagion phenomenons associated with financial crises, and help assessing",
"after": "(LVDN), where the nodes are stocks, and the weight associated with edge (i,j) represents the proportion of h-step-ahead forecast error variance of variable i accounted for by variable j's innovations. To overcome the curse of dimensionality, we decompose the panel into a component driven by few global, market-wide, factors, and an idiosyncratic one modelled by means of a sparse vector autoregression (VAR) model. Inversion of the VAR together with suitable identification restrictions, produces the estimated network, by means of which we can assess",
"start_char_pos": 862,
"end_char_pos": 1111
},
{
"type": "R",
"before": "a given firm islikely to be.We show how to estimate them by combining dynamic principal components and sparse VAR techniques. The results provide evidence of high positive intra-sectoral and lower, but nevertheless quite important, negative inter-sectoral, dependencies, the Energy and Financials sectors being the most interconnected ones. In particular, the Financials stocks appear to be the most central vertices in the network, making them the main source of contagion",
"after": "each firm is.~Our analysis demonstrates the prominent role of financial firms as sources of contagion, especially during the~2007-2008 crisis",
"start_char_pos": 1131,
"end_char_pos": 1604
}
]
| [
0,
211,
325,
535,
749,
998,
1159,
1256,
1471
]
|
1510.05123 | 1 | We consider the problem of finding investment strategies that maximize the average growth-rate of the capital of an investor. This is usually achieved through the so-called Kelly criterion, which in a dynamic setting where investment decisions are adjusted over time, prescribes a constant optimal fraction of capital that should be re-invested at each time, i.e. the investor's optimal leverage . We generalize this problem by accounting for the effects of market impact, that is the fact that prices respond to trading activity. In particular, we assume that the value of an investment portfolio should be measured in terms of the cash-flow that can be generated by liquidating the portfolio, rather than by its mark-to-market value . We formulate the problem in terms of a stochastic process with multiplicative noise and a non-linear drift term that is determined by the specific functional form of market-impact . We solve the stochastic equation for two classes of market-impact functions (power laws and logarithmic), and in both cases we compute optimal leverage trajectories . We further test numerically the validity of our analytical result . | We consider the problem of finding optimal strategies that maximize the average growth-rate of multiplicative stochastic processes. For a geometric Brownian motion the problem is solved through the so-called Kelly criterion, according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics . We generalize these finding to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance . We formulate the problem in terms of a stochastic process with multiplicative noise and a non-linear drift term that is determined by the specific functional form of carrying capacity . We solve the stochastic equation for two classes of carrying capacity functions (power laws and logarithmic), and in both cases compute optimal trajectories of the control parameter . We further test the validity of our analytical results using numerical simulations . | [
{
"type": "R",
"before": "investment",
"after": "optimal",
"start_char_pos": 35,
"end_char_pos": 45
},
{
"type": "R",
"before": "the capital of an investor. This is usually achieved",
"after": "multiplicative stochastic processes. For a geometric Brownian motion the problem is solved",
"start_char_pos": 98,
"end_char_pos": 150
},
{
"type": "R",
"before": "which in a dynamic setting where investment decisions are adjusted over time, prescribes a constant optimal fraction of capital that should be re-invested at each time, i.e. the investor's optimal leverage",
"after": "according to which the optimal growth rate is achieved by investing a constant given fraction of resources at any step of the dynamics",
"start_char_pos": 190,
"end_char_pos": 395
},
{
"type": "R",
"before": "this problem by accounting for the effects of market impact, that is the fact that prices respond to trading activity. In particular, we assume that the value of an investment portfolio should be measured in terms of the cash-flow that can be generated by liquidating the portfolio, rather than by its mark-to-market value",
"after": "these finding to the case of dynamical equations with finite carrying capacity, which can find applications in biology, mathematical ecology, and finance",
"start_char_pos": 412,
"end_char_pos": 734
},
{
"type": "R",
"before": "market-impact",
"after": "carrying capacity",
"start_char_pos": 903,
"end_char_pos": 916
},
{
"type": "R",
"before": "market-impact",
"after": "carrying capacity",
"start_char_pos": 971,
"end_char_pos": 984
},
{
"type": "R",
"before": "we compute optimal leverage trajectories",
"after": "compute optimal trajectories of the control parameter",
"start_char_pos": 1043,
"end_char_pos": 1083
},
{
"type": "D",
"before": "numerically",
"after": null,
"start_char_pos": 1102,
"end_char_pos": 1113
},
{
"type": "R",
"before": "result",
"after": "results using numerical simulations",
"start_char_pos": 1145,
"end_char_pos": 1151
}
]
| [
0,
125,
397,
530,
736,
918,
1085
]
|
1510.05790 | 1 | We prove that the Omega measure, which considers all moments when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under a jointly normal distribution . Portfolio optimization of the Sharpe ratio is explored, with a novel active-set algorithm presented for markets prohibiting short sales. Experimental results show an improvement in average solution time of over an order of magnitude when compared to standard optimization techniques . | We prove that the Omega measure, which considers all moments when assessing portfolio performance, is equivalent to the widely used Sharpe ratio under jointly elliptic distributions of returns . Portfolio optimization of the Sharpe ratio is then explored, with an active-set algorithm presented for markets prohibiting short sales. When asymmetric returns are considered we show that the Omega measure and Sharpe ratio lead to different optimal portfolios . | [
{
"type": "R",
"before": "a jointly normal distribution",
"after": "jointly elliptic distributions of returns",
"start_char_pos": 151,
"end_char_pos": 180
},
{
"type": "A",
"before": null,
"after": "then",
"start_char_pos": 229,
"end_char_pos": 229
},
{
"type": "R",
"before": "a novel",
"after": "an",
"start_char_pos": 245,
"end_char_pos": 252
},
{
"type": "R",
"before": "Experimental results show an improvement in average solution time of over an order of magnitude when compared to standard optimization techniques",
"after": "When asymmetric returns are considered we show that the Omega measure and Sharpe ratio lead to different optimal portfolios",
"start_char_pos": 321,
"end_char_pos": 466
}
]
| [
0,
320
]
|
1510.05875 | 1 | Our goal here is to study American options in discrete time without using probability and stochastic process . Using the binomial model we compute the fair price of European and American options. We explain the notion of Arbitrage and the notion of the fair price of an option using common sense. Finally, we give a criterion that the holder can use to decide when it is appropriate to exercise the option . | Our goal here is to discuss the pricing problem of European and American options in discrete time using elementary calculus . Using the binomial model we compute the fair price of European and American options. We explain the notion of Arbitrage and the notion of the fair price of an option using common sense. We give a criterion that the holder can use to decide when it is appropriate to exercise the option . We also discuss the portfolio's optimization problem . | [
{
"type": "R",
"before": "study",
"after": "discuss the pricing problem of European and",
"start_char_pos": 20,
"end_char_pos": 25
},
{
"type": "R",
"before": "without using probability and stochastic process",
"after": "using elementary calculus",
"start_char_pos": 60,
"end_char_pos": 108
},
{
"type": "R",
"before": "Finally, we",
"after": "We",
"start_char_pos": 297,
"end_char_pos": 308
},
{
"type": "A",
"before": null,
"after": ". We also discuss the portfolio's optimization problem",
"start_char_pos": 406,
"end_char_pos": 406
}
]
| [
0,
195,
296
]
|
1510.05875 | 2 | Our goal here is to discuss the pricing problem of European and American options in discrete time using elementary calculus . Using the binomial model we compute the fair price of European and American options. We explain the notion of Arbitrage and the notion of the fair price of an option using common sense. We give a criterion that the holder can use to decide when it is appropriate to exercise the option. We also discuss the portfolio's optimization problem . | Our goal here is to discuss the pricing problem of European and American options in discrete time using elementary calculus so as to be an easy reference for first year undergraduate students . Using the binomial model we compute the fair price of European and American options. We explain the notion of Arbitrage and the notion of the fair price of an option using common sense. We give a criterion that the holder can use to decide when it is appropriate to exercise the option. We prove the put-call parity formulas for both European and American options and we discuss the relation between American and European options. We also discuss the portfolio's optimization problem and the fair value in the case where the holder can not produce the opposite portfolio . | [
{
"type": "A",
"before": null,
"after": "so as to be an easy reference for first year undergraduate students",
"start_char_pos": 124,
"end_char_pos": 124
},
{
"type": "A",
"before": null,
"after": "prove the put-call parity formulas for both European and American options and we discuss the relation between American and European options. We",
"start_char_pos": 417,
"end_char_pos": 417
},
{
"type": "A",
"before": null,
"after": "and the fair value in the case where the holder can not produce the opposite portfolio",
"start_char_pos": 468,
"end_char_pos": 468
}
]
| [
0,
126,
211,
312,
413
]
|
1510.06794 | 1 | The direct-coupling analysis is a powerful method for protein contact prediction, and enables us to extract ``direct'' correlations between distant sites that are latent in ``indirect'' correlations observed in a protein multiple-sequence alignment. I show that the direct correlation can be obtained by using a formulation analogous to the Ornstein-Zernike integral equation in liquid theory. This formulation intuitively illustrates how the indirect or apparent correlation arises from an infinite series of direct correlations, and provides interesting insights into protein structure prediction. | The direct-coupling analysis is a powerful method for protein contact prediction, and enables us to extract "direct" correlations between distant sites that are latent in "indirect" correlations observed in a protein multiple-sequence alignment. I show that the direct correlation can be obtained by using a formulation analogous to the Ornstein-Zernike integral equation in liquid theory. This formulation intuitively illustrates how the indirect or apparent correlation arises from an infinite series of direct correlations, and provides interesting insights into protein structure prediction. | [
{
"type": "R",
"before": "``direct''",
"after": "\"direct\"",
"start_char_pos": 108,
"end_char_pos": 118
},
{
"type": "R",
"before": "``indirect''",
"after": "\"indirect\"",
"start_char_pos": 173,
"end_char_pos": 185
}
]
| [
0,
249,
393
]
|
1510.06946 | 1 | In this paper we introduce quantile cross-spectral analysis of multiple time series which is designed to detect general dependence structures emerging in quantiles of the joint distribution in the frequency domain . We argue that this type of dependence is natural for economic time series but remains invisible when the traditional analysis is employed. To illustrate how such dependence structures can arise between variables in different parts of the joint distribution and across frequencies, we consider quantile vector autoregression processes. We define new estimators which capture the general dependence structure, provide a detailed analysis of their asymptotic properties and discuss how to conduct inference for a general class of possibly nonlinear processes. In an empirical illustration we examine one of the most prominent time series in economics and shed new light on the dependence of bivariate stock market returns . | In this paper , we introduce quantile coherency to measure general dependence structures emerging in the joint distribution in the frequency domain and argue that this type of dependence is natural for economic time series but remains invisible when only the traditional analysis is employed. We define estimators which capture the general dependence structure, provide a detailed analysis of their asymptotic properties and discuss how to conduct inference for a general class of possibly nonlinear processes. In an empirical illustration we examine the dependence of bivariate stock market returns and shed new light on measurement of tail risk in financial markets. We also provide a modelling exercise to illustrate how applied researchers can benefit from using quantile coherency when assessing time series models . | [
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 14,
"end_char_pos": 14
},
{
"type": "R",
"before": "cross-spectral analysis of multiple time series which is designed to detect",
"after": "coherency to measure",
"start_char_pos": 37,
"end_char_pos": 112
},
{
"type": "D",
"before": "quantiles of",
"after": null,
"start_char_pos": 155,
"end_char_pos": 167
},
{
"type": "R",
"before": ". We",
"after": "and",
"start_char_pos": 215,
"end_char_pos": 219
},
{
"type": "A",
"before": null,
"after": "only",
"start_char_pos": 318,
"end_char_pos": 318
},
{
"type": "R",
"before": "To illustrate how such dependence structures can arise between variables in different parts of the joint distribution and across frequencies, we consider quantile vector autoregression processes. We define new",
"after": "We define",
"start_char_pos": 357,
"end_char_pos": 566
},
{
"type": "R",
"before": "one of the most prominent time series in economics and shed new light on the",
"after": "the",
"start_char_pos": 815,
"end_char_pos": 891
},
{
"type": "A",
"before": null,
"after": "and shed new light on measurement of tail risk in financial markets. We also provide a modelling exercise to illustrate how applied researchers can benefit from using quantile coherency when assessing time series models",
"start_char_pos": 937,
"end_char_pos": 937
}
]
| [
0,
216,
356,
552,
774
]
|
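For reference, the quantile coherency introduced in the 1510.06946 record above is commonly defined (generic notation, not necessarily the authors') as the normalised quantile cross-spectrum

\[ \mathfrak{R}^{\tau_1,\tau_2}(\omega) \;=\; \frac{\mathfrak{f}_{X,Y}^{\tau_1,\tau_2}(\omega)}{\sqrt{\mathfrak{f}_{X,X}^{\tau_1,\tau_1}(\omega)\,\mathfrak{f}_{Y,Y}^{\tau_2,\tau_2}(\omega)}}, \]

where \mathfrak{f}_{X,Y}^{\tau_1,\tau_2}(\omega) is the quantile cross-spectral density, i.e. the Fourier transform over lags k of the covariances of the indicator processes 1\{X_{t+k} \le q_X(\tau_1)\} and 1\{Y_{t} \le q_Y(\tau_2)\}. It therefore measures frequency-domain dependence between the \tau_1-quantile of one series and the \tau_2-quantile of the other, which is the "general dependence structure" the revised abstract refers to.
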
1510.07418 | 1 | Cells transmit information via signaling pathways , using temporal dynamical patterns . As optimality with respect to environments is the universal principle in biological URLanisms have acquired an optimal way of transmitting information. Here we obtain optimal dynamical signal patterns which can transmit information efficiently (low power) and reliably (high accuracy) using the optimal control theory . Adopting an activation-inactivation decoding network, we reproduced several dynamical patterns found in actual signals, such as steep, gradual and overshooting dynamics. Notably, when minimizing the power of the input signal, optimal signals exhibit the overshootingpattern , which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns as it can be generated by an incoherent feed-forward loop (FFL), a common motif in biochemical networks . We also identified conditions when the three patterns , steep, gradual and overshooting, confer advantages. | Cells use temporal dynamical patterns to transmit information via signaling pathways . As optimality with respect to the environment is a universal principle in biological URLanisms have evolved optimal ways to transmit information. Here , we use optimal control theory to obtain optimal dynamical signal patterns that can transmit information efficiently (low power) and reliably (high accuracy) . Adopting an activation-inactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual , and overshooting dynamics. Notably, when minimizing the power of the input signal, the optimal signals exhibit overshooting , which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns . We also identify conditions when these three patterns ( steep, gradual , and overshooting) confer advantages. | [
{
"type": "A",
"before": null,
"after": "use temporal dynamical patterns to",
"start_char_pos": 6,
"end_char_pos": 6
},
{
"type": "D",
"before": ", using temporal dynamical patterns",
"after": null,
"start_char_pos": 51,
"end_char_pos": 86
},
{
"type": "R",
"before": "environments is the",
"after": "the environment is a",
"start_char_pos": 119,
"end_char_pos": 138
},
{
"type": "R",
"before": "acquired an optimal way of transmitting",
"after": "evolved optimal ways to transmit",
"start_char_pos": 188,
"end_char_pos": 227
},
{
"type": "R",
"before": "we",
"after": ", we use optimal control theory to",
"start_char_pos": 246,
"end_char_pos": 248
},
{
"type": "R",
"before": "which",
"after": "that",
"start_char_pos": 290,
"end_char_pos": 295
},
{
"type": "D",
"before": "using the optimal control theory",
"after": null,
"start_char_pos": 374,
"end_char_pos": 406
},
{
"type": "R",
"before": "reproduced",
"after": "reproduce",
"start_char_pos": 466,
"end_char_pos": 476
},
{
"type": "A",
"before": null,
"after": ",",
"start_char_pos": 552,
"end_char_pos": 552
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 636,
"end_char_pos": 636
},
{
"type": "R",
"before": "the overshootingpattern",
"after": "overshooting",
"start_char_pos": 661,
"end_char_pos": 684
},
{
"type": "D",
"before": "as it can be generated by an incoherent feed-forward loop (FFL), a common motif in biochemical networks",
"after": null,
"start_char_pos": 804,
"end_char_pos": 907
},
{
"type": "R",
"before": "identified conditions when the three patterns ,",
"after": "identify conditions when these three patterns (",
"start_char_pos": 918,
"end_char_pos": 965
},
{
"type": "R",
"before": "and overshooting,",
"after": ", and overshooting)",
"start_char_pos": 981,
"end_char_pos": 998
}
]
| [
0,
88,
240,
408,
579,
748,
909
]
|
1510.07418 | 2 | Cells use temporal dynamical patterns to transmit information via signaling pathways. As optimality with respect to the environment is a universal principle in biological URLanisms have evolved optimal ways to transmit information. Here, we use optimal control theory to obtain optimal dynamical signal patterns that can transmit information efficiently (low power ) and reliably (high accuracy). Adopting an activation-inactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual, and overshooting dynamics. Notably, when minimizing the power of the input signal, the optimal signals exhibit overshooting, which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns. We also identify conditions when these three patterns (steep, gradual, and overshooting) confer advantages. | Cells use temporal dynamical patterns to transmit information via signaling pathways. As optimality with respect to the environment plays a fundamental role in biological URLanisms have evolved optimal ways to transmit information. Here, we use optimal control theory to obtain the optimal dynamical signal patterns that can transmit information efficiently (low energy ) and reliably (high accuracy). Adopting an activation-inactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual, and overshooting dynamics. Notably, when minimizing the energy of the input signal, the optimal signals exhibit overshooting, which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns. We also identify conditions when these three patterns (steep, gradual, and overshooting) confer advantages. | [
{
"type": "R",
"before": "is a universal principle",
"after": "plays a fundamental role",
"start_char_pos": 132,
"end_char_pos": 156
},
{
"type": "A",
"before": null,
"after": "the",
"start_char_pos": 278,
"end_char_pos": 278
},
{
"type": "R",
"before": "power",
"after": "energy",
"start_char_pos": 360,
"end_char_pos": 365
},
{
"type": "R",
"before": "power",
"after": "energy",
"start_char_pos": 597,
"end_char_pos": 602
}
]
| [
0,
85,
231,
397,
567,
727,
783
]
|
1510.07418 | 3 | Cells use temporal dynamical patterns to transmit information via signaling pathways. As optimality with respect to the environment plays a fundamental role in biological URLanisms have evolved optimal ways to transmit information. Here, we use optimal control theory to obtain the optimal dynamical signal patterns that can transmit informationefficiently (low energy) and reliably (high accuracy ). Adopting an activation-inactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual, and overshooting dynamics. Notably, when minimizing the energy of the input signal, the optimal signals exhibit overshooting, which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns. We also identify conditions when these three patterns (steep, gradual, and overshooting) confer advantages . | Cells use temporal dynamical patterns to transmit information via signaling pathways. As optimality with respect to the environment plays a fundamental role in biological URLanisms have evolved optimal ways to transmit information. Here, we use optimal control theory to obtain the dynamical signal patterns for the optimal transmission of information, in terms of efficiency (low energy) and reliability (low uncertainty ). Adopting an activation-deactivation decoding network, we reproduce several dynamical patterns found in actual signals, such as steep, gradual, and overshooting dynamics. Notably, when minimizing the energy of the input signal, the optimal signals exhibit overshooting, which is a biphasic pattern with transient and steady phases; this pattern is prevalent in actual dynamical patterns. We also identify conditions in which these three patterns (steep, gradual, and overshooting) confer advantages . Our study shows that cellular signal transduction is governed by the principle of minimizing free energy dissipation and uncertainty; these constraints serve as selective pressures when designing dynamical signaling patterns . | [
{
"type": "D",
"before": "optimal",
"after": null,
"start_char_pos": 282,
"end_char_pos": 289
},
{
"type": "R",
"before": "that can transmit informationefficiently",
"after": "for the optimal transmission of information, in terms of efficiency",
"start_char_pos": 316,
"end_char_pos": 356
},
{
"type": "R",
"before": "reliably (high accuracy",
"after": "reliability (low uncertainty",
"start_char_pos": 374,
"end_char_pos": 397
},
{
"type": "R",
"before": "activation-inactivation",
"after": "activation-deactivation",
"start_char_pos": 413,
"end_char_pos": 436
},
{
"type": "R",
"before": "when",
"after": "in which",
"start_char_pos": 816,
"end_char_pos": 820
},
{
"type": "A",
"before": null,
"after": ". Our study shows that cellular signal transduction is governed by the principle of minimizing free energy dissipation and uncertainty; these constraints serve as selective pressures when designing dynamical signaling patterns",
"start_char_pos": 895,
"end_char_pos": 895
}
]
| [
0,
85,
231,
400,
570,
731,
787
]
|
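The activation-deactivation decoding network mentioned in the three 1510.07418 records above can be illustrated with a minimal simulation: a downstream species y is activated in proportion to the input signal u(t) and deactivates at a constant rate. The ODE form, the three input shapes and every rate constant below are assumptions made purely for illustration; they are not the model or the parameters of the paper.

import numpy as np

# Minimal activation-deactivation decoder: y is activated by the input u(t)
# and deactivates at a constant rate.  All forms and rates are illustrative.
def decode(u, k_act=1.0, k_deact=0.5, dt=0.01, T=20.0):
    t = np.arange(0.0, T, dt)
    y = np.zeros_like(t)
    for i in range(1, len(t)):
        dy = k_act * u(t[i - 1]) * (1.0 - y[i - 1]) - k_deact * y[i - 1]
        y[i] = y[i - 1] + dt * dy
    return t, y

# Three schematic input shapes: steep (step), gradual (ramp) and overshooting
# (biphasic: a transient peak relaxing to a steady plateau).
inputs = {
    "steep":        lambda t: 1.0,
    "gradual":      lambda t: min(t / 5.0, 1.0),
    "overshooting": lambda t: 1.0 + 1.5 * np.exp(-t / 2.0),
}

for name, u in inputs.items():
    t, y = decode(u)
    print(f"{name:>12}: response at t=2 is {y[200]:.3f}, steady level is {y[-1]:.3f}")

Running the sketch shows the qualitative point of the abstract: the overshooting input drives the decoder to its working level quickly during the transient phase while settling to the same steady output as the cheaper steady-state inputs.
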
1510.07430 | 1 | While polymerizing a RNA molecule, a RNA polymerase (RNAP) walks step-by-step on the corresponding single-stranded DNA (ssDNA) template in a specific direction. Thus, a RNAP can be regarded as a molecular motor for which the ssDNA template serves as the track. The sites of start and stop of its walk on the DNA mark the two ends of the genetic message that it transcribes into RNA. Interference of transcription of two overlapping genes can strongly influence the levels of their expression, i. e., the overall rate of the synthesis of the corresponding full-length RNA molecules, through suppressive effect of one on the other. Here we model this process as a mixed traffic of two groups of RNAP motors that are characterized by two distinct pairs of on- and off-ramps. Each group polymerizes identical copies of a RNA while the RNAs polymerized by the two groups are different. These models, which may also be viewed as two interfering totally asymmetric simple exclusion processes, account for all modes of transcriptional interference in spite of their extreme simplicity. We study both co-directional and contra-directional traffic of the two groups of RNAP motors. Two special cases of the general model correspond to traffic of bacteriophage RNAP motors and that of non-phage RNAP motors. By a combination of mean-field theory and computer simulation of these models we establish the conditions under which increasing rate of initiation of transcription of one gene can switch off another. However, the mechanisms of switching observed in the traffic of phage-RNAP and non-phage RNAP motors are different. Some of our new predictions can be tested experimentally by correlating the rate of RNA synthesis with the RNAP footprints on the respective DNA templates . | We introduce exclusion models of two distinguishable species of hard rods with their distinct sites of entry and exit under open boundary conditions. In the first model both species of rods move in the same direction whereas in the other two models they move in the opposite direction. These models are motivated by the biological phenomenon known as Transcriptional Interference. Therefore, the rules for the kinetics of the models, particularly the rules for the outcome of the encounter of the rods, are also formulated to mimic those observed in Transcriptional Interference. By a combination of mean-field theory and computer simulation of these models we demonstrate how the flux of one species of rods is completely switched off by the other. Exploring the parameter space of the model we also establish the conditions under which switch-like regulation of two fluxes is possible; from the extensive analysis we discover more than one possible mechanism of this phenomenon . | [
{
"type": "R",
"before": "While polymerizing a RNA molecule, a RNA polymerase (RNAP) walks step-by-step on the corresponding single-stranded DNA (ssDNA) template in a specific direction. Thus, a RNAP can be regarded as a molecular motor for which the ssDNA template serves as the track. The sites of start and stop of its walk on the DNA mark the two ends of the genetic message that it transcribes into RNA. Interference of transcription of two overlapping genes can strongly influence the levels of their expression, i. e., the overall rate of the synthesis of the corresponding full-length RNA molecules, through suppressive effect of one on the other. Here we model this process as a mixed traffic of two groups of RNAP motors that are characterized by two distinct pairs of on- and off-ramps. Each group polymerizes identical copies of a RNA while the RNAs polymerized by the two groups are different. These models, which may also be viewed as two interfering totally asymmetric simple exclusion processes, account for all modes of transcriptional interference in spite of their extreme simplicity. We study both co-directional and contra-directional traffic of the two groups of RNAP motors. Two special cases of the general model correspond to traffic of bacteriophage RNAP motors and that of non-phage RNAP motors.",
"after": "We introduce exclusion models of two distinguishable species of hard rods with their distinct sites of entry and exit under open boundary conditions. In the first model both species of rods move in the same direction whereas in the other two models they move in the opposite direction. These models are motivated by the biological phenomenon known as Transcriptional Interference. Therefore, the rules for the kinetics of the models, particularly the rules for the outcome of the encounter of the rods, are also formulated to mimic those observed in Transcriptional Interference.",
"start_char_pos": 0,
"end_char_pos": 1296
},
{
"type": "R",
"before": "establish the conditions under which increasing rate of initiation of transcription of one gene can switch off another. However, the mechanisms of switching observed in the traffic of phage-RNAP and non-phage RNAP motors are different. Some of our new predictions can be tested experimentally by correlating the rate of RNA synthesis with the RNAP footprints on the respective DNA templates",
"after": "demonstrate how the flux of one species of rods is completely switched off by the other. Exploring the parameter space of the model we also establish the conditions under which switch-like regulation of two fluxes is possible; from the extensive analysis we discover more than one possible mechanism of this phenomenon",
"start_char_pos": 1378,
"end_char_pos": 1768
}
]
| [
0,
160,
260,
629,
771,
880,
1077,
1171,
1296,
1497,
1613
]
|
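The exclusion models in the 1510.07430 record above build on the totally asymmetric simple exclusion process (TASEP) with open boundaries. The sketch below simulates only the single-species building block (random-sequential updates, entry rate alpha, exit rate beta) and measures the particle current; the paper's two-species variants with distinct entry/exit sites and encounter rules are extensions of this scheme. Lattice size, rates and sweep counts are illustrative choices.

import random

def tasep_current(L=200, alpha=0.3, beta=0.6, sweeps=20000, warmup=5000):
    """Random-sequential TASEP with open boundaries; returns exits per sweep."""
    lattice = [0] * L            # 0 = empty site, 1 = occupied site
    exits = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):
            i = random.randint(0, L)
            if i == 0:                                     # injection at the left boundary
                if lattice[0] == 0 and random.random() < alpha:
                    lattice[0] = 1
            elif i == L:                                   # extraction at the right boundary
                if lattice[-1] == 1 and random.random() < beta:
                    lattice[-1] = 0
                    if sweep >= warmup:
                        exits += 1
            elif lattice[i - 1] == 1 and lattice[i] == 0:  # hard-core hop in the bulk
                lattice[i - 1], lattice[i] = 0, 1
    return exits / (sweeps - warmup)

# In the low-density phase (alpha < beta, alpha < 1/2) the exact steady-state
# current is alpha * (1 - alpha) = 0.21 for the rates chosen above.
print("measured current per sweep:", round(tasep_current(), 3))
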
1510.07986 | 1 | Mobile offloading is an effective way that migrates computation-intensive parts of applications from resource-constrained mobile devices onto remote resource-rich servers . Application partitioning plays a critical role in high-performance offloading systems, which involves splitting the execution of applications between the mobile side and cloud side so that the total execution cost is minimized. Through partitioning, the mobile device can have the most benefit from offloading the application to a remote cloud. In this paper, we study how to effectively and dynamically partition a given application into local and remote parts while keeping the total cost as small as possible. For general tasks (i.e., arbitrary topological consumption graphs), we propose a new min-cost offloading partitioning (MCOP) algorithm that aims at finding the optimal application partitioning (determining which portions of the application to run on mobile devices and which portions on cloud servers) under different partitioning cost models and mobile environments. The simulation results show that the proposed algorithm provides a stably low time complexity method and can significantly reduce execution time and energy consumption by optimally distributing tasks between mobile devices and cloud servers, and in the meantime, it can well adapt to environment changes. | Mobile cloud offloading that migrates computation-intensive parts of applications from resource-constrained mobile devices onto remote resource-rich servers , is an effective way to shorten response time and extend battery life of mobile devices . Application partitioning plays a critical role in high-performance offloading systems, which involves splitting the execution of applications between the mobile side and cloud side so that the total execution cost is minimized. Through partitioning, the mobile device can have the most benefit from offloading the application to a remote cloud. In this paper, we study how to effectively and dynamically partition a given application into local and remote parts while keeping the total cost as small as possible. For general tasks (i.e., arbitrary topological consumption graphs), we propose a novel min-cost offloading partitioning (MCOP) algorithm that aims at finding the optimal partitioning plan (determining which portions of the application to run on mobile devices and which portions on cloud servers) under different cost models and mobile environments. The simulation results show that the proposed algorithm provides a stably low time complexity method and can significantly reduce execution time and energy consumption by optimally distributing tasks between mobile devices and cloud servers, and in the meantime, it can well adapt to environment changes. | [
{
"type": "R",
"before": "offloading is an effective way",
"after": "cloud offloading",
"start_char_pos": 7,
"end_char_pos": 37
},
{
"type": "A",
"before": null,
"after": ", is an effective way to shorten response time and extend battery life of mobile devices",
"start_char_pos": 171,
"end_char_pos": 171
},
{
"type": "R",
"before": "new",
"after": "novel",
"start_char_pos": 768,
"end_char_pos": 771
},
{
"type": "R",
"before": "application partitioning",
"after": "partitioning plan",
"start_char_pos": 855,
"end_char_pos": 879
},
{
"type": "D",
"before": "partitioning",
"after": null,
"start_char_pos": 1005,
"end_char_pos": 1017
}
]
| [
0,
401,
518,
686,
1054
]
|
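The partitioning problem in the 1510.07986 record above — deciding which parts of an application run on the device and which are offloaded — is often cast as a minimum s-t cut on the application's consumption graph. The sketch below uses that textbook reduction (via networkx), not a reimplementation of the paper's MCOP algorithm; the task names, execution costs and transfer costs are made up for illustration.

import networkx as nx

# Tasks: (cost if run on the mobile device, cost if offloaded to the cloud).
# A very large offload cost pins a task to the device (e.g. UI, rendering).
PIN = 1e9
tasks = {
    "ui":       (2.0, PIN),
    "parse":    (5.0, 1.0),
    "feature":  (9.0, 2.0),
    "classify": (12.0, 3.0),
    "render":   (3.0, PIN),
}
# Data-transfer cost, paid only if the two tasks end up on different sides.
comm = {
    ("ui", "parse"): 1.0,
    ("parse", "feature"): 4.0,
    ("feature", "classify"): 2.0,
    ("classify", "render"): 1.5,
}

G = nx.DiGraph()
for v, (local_cost, remote_cost) in tasks.items():
    G.add_edge("MOBILE", v, capacity=remote_cost)   # this edge is cut iff v is offloaded
    G.add_edge(v, "CLOUD", capacity=local_cost)     # this edge is cut iff v stays local
for (u, v), c in comm.items():
    G.add_edge(u, v, capacity=c)                    # one direction is cut iff u and v
    G.add_edge(v, u, capacity=c)                    # are assigned to different sides

total_cost, (mobile_side, cloud_side) = nx.minimum_cut(G, "MOBILE", "CLOUD")
print("minimal total cost:", total_cost)
print("keep on device:  ", sorted(mobile_side - {"MOBILE"}))
print("offload to cloud:", sorted(cloud_side - {"CLOUD"}))

The cut value equals the total execution-plus-communication cost, and the source side of the cut is exactly the set of tasks kept on the device, which is the sense in which a partitioning plan "minimizes the total execution cost" in the abstract.
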
1510.08161 | 1 | We study the price of Asian options with floating-strike when the underlying asset price follows a regime-switching geometric Brownian motion . We propose an iterative procedure to compute the option prices without recourse to solving a PDE system. Our approach makes use of the scaling property of Brownian motion and the Fixed-Point Theorem. | We study the price of Asian options with floating-strike when the underlying asset price follows a Markov-modulated (or regime-switching ) geometric Brownian motion , where both the drift and diffusion coefficients depend on an independent continuous-time finite-state Markov chain . We propose an iterative procedure that converges to the option prices without recourse to solving a coupled PDE system. Our approach makes use of path properties of Brownian motion and the Fixed-Point Theorem. | [
{
"type": "A",
"before": null,
"after": "Markov-modulated (or",
"start_char_pos": 99,
"end_char_pos": 99
},
{
"type": "A",
"before": null,
"after": ")",
"start_char_pos": 117,
"end_char_pos": 117
},
{
"type": "A",
"before": null,
"after": ", where both the drift and diffusion coefficients depend on an independent continuous-time finite-state Markov chain",
"start_char_pos": 144,
"end_char_pos": 144
},
{
"type": "R",
"before": "to compute",
"after": "that converges to",
"start_char_pos": 181,
"end_char_pos": 191
},
{
"type": "A",
"before": null,
"after": "coupled",
"start_char_pos": 240,
"end_char_pos": 240
},
{
"type": "R",
"before": "the scaling property",
"after": "path properties",
"start_char_pos": 279,
"end_char_pos": 299
}
]
| [
0,
146,
252
]
|
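As a reference point for the 1510.08161 record above, a floating-strike Asian call under a two-regime Markov-modulated geometric Brownian motion can also be priced by plain Monte Carlo. The sketch below does exactly that — it is not the fixed-point iteration proposed in the paper — and the rate, volatilities and regime-switching generator are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

S0, r, T = 100.0, 0.03, 1.0
sigma = np.array([0.15, 0.35])                 # volatility in regime 0 / regime 1
Q = np.array([[-1.0, 1.0],                     # generator of the regime chain
              [ 2.0, -2.0]])
n_steps, n_paths = 252, 50_000
dt = T / n_steps

s = np.full(n_paths, S0)
regime = np.zeros(n_paths, dtype=int)
running_sum = np.zeros(n_paths)
for _ in range(n_steps):
    # first-order approximation of the chain: switch with probability -Q[i,i]*dt
    switch = rng.random(n_paths) < -Q[regime, regime] * dt
    regime = np.where(switch, 1 - regime, regime)
    z = rng.standard_normal(n_paths)
    s *= np.exp((r - 0.5 * sigma[regime] ** 2) * dt + sigma[regime] * np.sqrt(dt) * z)
    running_sum += s

average = running_sum / n_steps                # discrete average of the price path
payoff = np.maximum(s - average, 0.0)          # floating-strike Asian call payoff
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Monte Carlo price: {price:.3f} +/- {1.96 * stderr:.3f}")
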