Dataset columns:
  doc_id: string (2 to 10 characters)
  revision_depth: string (5 distinct values)
  before_revision: string (3 to 309k characters)
  after_revision: string (5 to 309k characters)
  edit_actions: list
  sents_char_pos: list

Each record below gives these six fields in order, one field per line.
doc_id: 1308.6759
revision_depth: 1
A microeconomic approach is proposed to derive the fluctuations of risky asset price, where the market participants are modeled as prospect trading agents. As asset price is generated by the temporary equilibrium between demand and supply, the agents' trading behaviors can affect the price process , which is called the feedback effect. The prospect agents make actions based on their reactions to gains and losses, and as a consequence of the feedback effect, a relationship between the agents' trading behavior and the price fluctuations is constructed, which explains the implied volatility smile and skewness phenomena observed in actual market.
A microeconomic approach is proposed to derive the fluctuations of risky asset price, where the market participants are modeled as prospect trading agents. As asset price is generated by the temporary equilibrium between demand and supply, the agents' trading behaviors can affect the price process in turn , which is called the feedback effect. The prospect agents make actions based on their reactions to gains and losses, and as a consequence of the feedback effect, a relationship between the agents' trading behavior and the price fluctuations is constructed, which explains the implied volatility skew and smile observed in actual market.
[ { "type": "A", "before": null, "after": "in turn", "start_char_pos": 299, "end_char_pos": 299 }, { "type": "R", "before": "smile and skewness phenomena", "after": "skew and smile", "start_char_pos": 596, "end_char_pos": 624 } ]
[ 0, 155, 338 ]
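The first record above shows the format in full. Each edit_actions entry appears to encode one span edit against before_revision over the half-open range [start_char_pos, end_char_pos): type "R" replaces before with after, "A" inserts (before is null and the span is empty), and "D" deletes (after is null). The sketch below, with hypothetical helper names apply_edit_actions and split_sentences, replays such an edit list and cuts a text at the sents_char_pos offsets; it assumes non-overlapping spans and offsets indexed into before_revision, which matches the records shown here but is an inference, not a documented guarantee.

```python
def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Replay an edit_actions list against a before_revision string.

    Assumptions (inferred from the records above): each action edits the
    half-open span [start_char_pos, end_char_pos) of before_revision;
    spans do not overlap; "A" actions insert (before is null, empty span);
    "D" actions delete (after is null). Applying the edits from right to
    left keeps the earlier character offsets valid.
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # a null "after" means deletion
        text = text[: act["start_char_pos"]] + replacement + text[act["end_char_pos"] :]
    return text


def split_sentences(text: str, sents_char_pos: list) -> list:
    """Cut a revision at the sents_char_pos offsets.

    Assumes the offsets mark sentence boundaries in before_revision
    (the list starts at 0), with the final sentence running to the end.
    """
    bounds = list(sents_char_pos) + [len(text)]
    return [text[a:b].strip() for a, b in zip(bounds, bounds[1:]) if text[a:b].strip()]


# Example against the first record above:
#   revised = apply_edit_actions(before_revision, edit_actions)
# should reproduce after_revision up to whitespace around pure insertions
# (the "in turn" action lands next to an existing space), so a final pass
# normalizing doubled spaces may be needed.
```

The right-to-left application order is the one design choice worth noting: applying edits in ascending order would shift every later offset by the length difference of each replacement, while descending order leaves all unapplied offsets untouched.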
doc_id: 1309.0046
revision_depth: 1
We consider the stochastic solution to a Cauchy problem corresponding to a nonnegative diffusion with zero drift, which represents a price process under some risk-neutral measure . When the diffusion coefficient is locally Holder continuous with some exponent in (0, 1], the stochastic solution is shown to be a classical solution . A comparison theorem for the Cauchy problem is also proved, without the\ge linear growth condition on the diffusion coefficient. Moreover, we establish the equivalence: the stochastic solution is the unique classical solution to the Cauchy problem if, and only if, a comparison theorem holds. For the case where the stochastic solution may not be smooth, we characterize it as a limit of smooth stochastic solutions associated with some approximating Cauchy problems .
We study the stochastic solution to a Cauchy problem for a degenerate parabolic equation arising from option pricing . When the diffusion coefficient of the underlying price process is locally H\"older continuous with exponent \delta\in (0, 1], the stochastic solution , which represents the price of a European option, is shown to be a classical solution to the Cauchy problem . This improves the standard requirement \delta\ge 1/2. Uniqueness results, including a Feynman-Kac formula and a comparison theorem, are established without assuming the usual linear growth condition on the diffusion coefficient. When the stochastic solution is not smooth , it is characterized as the limit of an approximating smooth stochastic solutions . In deriving the main results, we discover a new, probabilistic proof of Kotani's criterion for martingality of a one-dimensional diffusion in natural scale .
[ { "type": "R", "before": "consider", "after": "study", "start_char_pos": 3, "end_char_pos": 11 }, { "type": "R", "before": "corresponding to a nonnegative diffusion with zero drift, which represents a price process under some risk-neutral measure", "after": "for a degenerate parabolic equation arising from option pricing", "start_char_pos": 56, "end_char_pos": 178 }, { "type": "R", "before": "is locally Holder continuous with some exponent in", "after": "of the underlying price process is locally H\\\"older continuous with exponent \\delta\\in", "start_char_pos": 212, "end_char_pos": 262 }, { "type": "A", "before": null, "after": ", which represents the price of a European option,", "start_char_pos": 295, "end_char_pos": 295 }, { "type": "R", "before": ". A comparison theorem for", "after": "to", "start_char_pos": 332, "end_char_pos": 358 }, { "type": "R", "before": "is also proved, without the", "after": ". This improves the standard requirement \\delta", "start_char_pos": 378, "end_char_pos": 405 }, { "type": "A", "before": null, "after": "1/2. Uniqueness results, including a Feynman-Kac formula and a comparison theorem, are established without assuming the usual", "start_char_pos": 409, "end_char_pos": 409 }, { "type": "R", "before": "Moreover, we establish the equivalence: the", "after": "When the", "start_char_pos": 464, "end_char_pos": 507 }, { "type": "R", "before": "the unique classical solution to the Cauchy problem if, and only if, a comparison theorem holds. For the case where the stochastic solution may not be smooth, we characterize it as a limit of", "after": "not", "start_char_pos": 531, "end_char_pos": 722 }, { "type": "A", "before": null, "after": ", it is characterized as the limit of an approximating smooth", "start_char_pos": 730, "end_char_pos": 730 }, { "type": "R", "before": "associated with some approximating Cauchy problems", "after": ". In deriving the main results, we discover a new, probabilistic proof of Kotani's criterion for martingality of a one-dimensional diffusion in natural scale", "start_char_pos": 752, "end_char_pos": 802 } ]
[ 0, 180, 333, 463, 627 ]
doc_id: 1309.0260
revision_depth: 1
Regression analysis aims to use observational data from multiple observations to develop a functional relationship relating explanatory variables to response variables, which is important for much of modern statistics, and econometrics, and also the field of machine learning. In this paper, we consider the special case where the explanatory variable is a stream of information, and the response is also potentially a stream. We provide an approach based on identifying carefully chosen features of the stream which allows linear regression to be used to characterise the functional relationship between explanatory variables and the conditional distribution of the response; the methods used to develop and justify this approach, such as the signature of a stream and the shuffle product of tensors, are standard tools in the theory of rough paths and seem appropriate in this context of regression as well and provide a surprisingly unified and non-parametric approach. To illustrate the approach we consider the problem of using datato predict the conditional distribution of the near future of a stationary, ergodic time series and compare it with probabilistic approaches based on first fitting a model. We believe our reduction of this regression problem for streams to a linear problem is clean, systematic, and efficient in minimizing the effective dimensionality. The clear gradation of finite dimensional approximations increases its usefulness. Although the approach is non-parametric, it presents itself in computationally tractable and flexible restricted forms in examples we considered . Popular techniques in time series analysis such as AR, ARCH and GARCH can be seen to be special cases of our approach, but it is not clear if they are always the best or most informative choices .
Regression analysis aims to use observational data from multiple observations to develop a functional relationship relating explanatory variables to response variables, which is important for much of modern statistics, and econometrics, and also the field of machine learning. In this paper, we consider the special case where the explanatory variable is a stream of information, and the response is also potentially a stream. We provide an approach based on identifying carefully chosen features of the stream which allows linear regression to be used to characterise the functional relationship between explanatory variables and the conditional distribution of the response; the methods used to develop and justify this approach, such as the signature of a stream and the shuffle product of tensors, are standard tools in the theory of rough paths and seem appropriate in this context of regression as well and provide a surprisingly unified and non-parametric approach. We believe that the insight provided by this paper will provide additional tool in the toolbox for studying sequential data. Our reduction of this regression problem for streams to a linear problem is clean, systematic, and efficient in minimizing the effective dimensionality. The clear gradation of finite dimensional approximations increases its usefulness. In examples we considered, we use the autoregressive calibration (AR approach) and Gaussian processes regression (GP approach) as two benchmarks, our approach presents itself in a more robust and flexible restricted form compared with the AR approach, while as a non-parametric approach, it achieves similar accuracy to the GP approach with much lower computational cost especially when the sample size is large . Popular techniques in time series analysis such as AR, ARCH and GARCH can be incorporated to our model .
[ { "type": "R", "before": "To illustrate the approach we consider the problem of using datato predict the conditional distribution of the near future of a stationary, ergodic time series and compare it with probabilistic approaches based on first fitting a model. We believe our", "after": "We believe that the insight provided by this paper will provide additional tool in the toolbox for studying sequential data. Our", "start_char_pos": 973, "end_char_pos": 1224 }, { "type": "R", "before": "Although the approach is non-parametric, it", "after": "In examples we considered, we use the autoregressive calibration (AR approach) and Gaussian processes regression (GP approach) as two benchmarks, our approach", "start_char_pos": 1457, "end_char_pos": 1500 }, { "type": "R", "before": "computationally tractable", "after": "a more robust", "start_char_pos": 1520, "end_char_pos": 1545 }, { "type": "R", "before": "forms in examples we considered", "after": "form compared with the AR approach, while as a non-parametric approach, it achieves similar accuracy to the GP approach with much lower computational cost especially when the sample size is large", "start_char_pos": 1570, "end_char_pos": 1601 }, { "type": "R", "before": "seen to be special cases of our approach, but it is not clear if they are always the best or most informative choices", "after": "incorporated to our model", "start_char_pos": 1681, "end_char_pos": 1798 } ]
[ 0, 276, 426, 676, 972, 1209, 1373, 1456 ]
doc_id: 1309.0260
revision_depth: 2
Regression analysis aims to use observational data from multiple observations to develop a functional relationship relating explanatory variables to response variables, which is important for much of modern statistics, and econometrics, and also the field of machine learning. In this paper, we consider the special case where the explanatory variable is a stream of information, and the response is also potentially a stream. We provide an approach based on identifying carefully chosen features of the streamwhich allows linear regression to be used to characterise the functional relationship between explanatory variables and the conditional distribution of the response; the methods used to develop and justify this approach, such as the signature of a streamand the shuffle product of tensors, are standard tools in the theory of rough pathsand seem appropriate in this context of regression as well and provide a surprisingly unified and non-parametric approach. We believe that the insight provided by this paper will provide additional tool in the toolbox for studying sequential data. Our reductionof this regression problem for streams to a linear problem is clean, systematic, and efficient in minimizing the effective dimensionality. The clear gradation of finite dimensional approximations increases its usefulness. In examples we considered, we use the autoregressive calibration (AR approach) and Gaussian processes regression (GP approach) as two benchmarks, our approach presents itself in a more robust and flexible restricted form compared with the AR approach, while as a non-parametric approach, it achieves similar accuracy to the GP approach with much lower computational cost especially when the sample size is large . Popular techniques in time series analysis such as AR, ARCH and GARCH can be incorporated to our model .
We bring the theory of rough paths to the study of non-parametric statistics on streamed data. We discuss the problem of regression where the input variable is a stream of information, and the dependent response is also (potentially) a stream. A certain graded feature set of a stream, known in the rough path literature as the signature, has a universality that allows formally, linear regression to be used to characterise the functional relationship between independent explanatory variables and the conditional distribution of the dependent response. This approach, via linear regression on the signature of the stream, is almost totally general, and yet it still allows explicit computation. The grading allows truncation of the feature set and so leads to an efficient local description for streams (rough paths). In the statistical context this method offers potentially significant, even transformational dimension reduction. By way of illustration, our approach is applied to stationary time series including the familiar AR model and ARCH model. In the numerical examples we examined, our predictions achieve similar accuracy to the Gaussian Process (GP) approach with much lower computational cost especially when the sample size is large .
[ { "type": "R", "before": "Regression analysis aims to use observational data from multiple observations to develop a functional relationship relating explanatory variables to response variables, which is important for much of modern statistics, and econometrics, and also the field of machine learning. In this paper, we consider the special case where the explanatory", "after": "We bring the theory of rough paths to the study of non-parametric statistics on streamed data. We discuss the problem of regression where the input", "start_char_pos": 0, "end_char_pos": 342 }, { "type": "A", "before": null, "after": "dependent", "start_char_pos": 388, "end_char_pos": 388 }, { "type": "R", "before": "potentially", "after": "(potentially)", "start_char_pos": 406, "end_char_pos": 417 }, { "type": "R", "before": "We provide an approach based on identifying carefully chosen features of the streamwhich allows", "after": "A certain graded feature set of a stream, known in the rough path literature as the signature, has a universality that allows formally,", "start_char_pos": 428, "end_char_pos": 523 }, { "type": "A", "before": null, "after": "independent", "start_char_pos": 605, "end_char_pos": 605 }, { "type": "R", "before": "response; the methods used to develop and justify this approach, such as", "after": "dependent response. This approach, via linear regression on", "start_char_pos": 668, "end_char_pos": 740 }, { "type": "R", "before": "a streamand the shuffle product of tensors, are standard tools in the theory of rough pathsand seem appropriate in this context of regression as well and provide a surprisingly unified and non-parametric approach. We believe that the insight provided by this paper will provide additional tool in the toolbox for studying sequential data. Our reductionof this regression problem for streams to a linear problem is clean, systematic, and efficient in minimizing the effective dimensionality. The clear gradation of finite dimensional approximations increases its usefulness. In examples we considered, we use the autoregressive calibration (AR approach) and Gaussian processes regression (GP approach) as two benchmarks, our approach presents itself in a more robust and flexible restricted form compared with the AR approach, while as a non-parametric approach, it achieves", "after": "the stream, is almost totally general, and yet it still allows explicit computation. The grading allows truncation of the feature set and so leads to an efficient local description for streams (rough paths). In the statistical context this method offers potentially significant, even transformational dimension reduction. By way of illustration, our approach is applied to stationary time series including the familiar AR model and ARCH model. In the numerical examples we examined, our predictions achieve", "start_char_pos": 758, "end_char_pos": 1631 }, { "type": "R", "before": "GP", "after": "Gaussian Process (GP)", "start_char_pos": 1656, "end_char_pos": 1658 }, { "type": "D", "before": ". Popular techniques in time series analysis such as AR, ARCH and GARCH can be incorporated to our model", "after": null, "start_char_pos": 1744, "end_char_pos": 1848 } ]
[ 0, 276, 427, 677, 971, 1096, 1248, 1331, 1410 ]
doc_id: 1309.0474
revision_depth: 1
We establish existence and uniqueness of a classical solution to a semilinear parabolic partial differential equation with singular initial condition. This equation describes the value function of the control problem of a financial trader that needs to unwind a large asset portfolio within a short period of time. The trader can simultaneously submit active orders to a primary market and passive orders to a dark pool. Our framework is flexible enough to allow for price dependent impact functions describing the trading costs in the primary market and price dependent adverse selection costs associated with dark pool trading. We establish the explicit asymptotic behavior of the value function at the terminal time and give the optimal trading strategy in feedback form.
We consider the stochastic control problem of a financial trader that needs to unwind a large asset portfolio within a short period of time. The trader can simultaneously submit active orders to a primary market and passive orders to a dark pool. Our framework is flexible enough to allow for price-dependent impact functions describing the trading costs in the primary market and price-dependent adverse selection costs associated with dark pool trading. We prove that the value function can be characterized in terms of the unique smooth solution to a PDE with singular terminal value, establish its explicit asymptotic behavior at the terminal time , and give the optimal trading strategy in feedback form.
[ { "type": "R", "before": "establish existence and uniqueness of a classical solution to a semilinear parabolic partial differential equation with singular initial condition. This equation describes the value function of the", "after": "consider the stochastic", "start_char_pos": 3, "end_char_pos": 200 }, { "type": "R", "before": "price dependent", "after": "price-dependent", "start_char_pos": 467, "end_char_pos": 482 }, { "type": "R", "before": "price dependent", "after": "price-dependent", "start_char_pos": 555, "end_char_pos": 570 }, { "type": "R", "before": "establish the explicit asymptotic behavior of the value function", "after": "prove that the value function can be characterized in terms of the unique smooth solution to a PDE with singular terminal value, establish its explicit asymptotic behavior", "start_char_pos": 633, "end_char_pos": 697 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 719, "end_char_pos": 719 } ]
[ 0, 150, 314, 420, 629 ]
doc_id: 1309.0599
revision_depth: 1
The DevR-DevS two component system of Mycobacterium tuberculosis is responsible for its dormancy in host and becomes operative under hypoxic condition. It is experimentally known that phosphorylated DevR controls the expression of several downstream genes in a complex manner. In the present work we have developed a mathematical model to show the role of binding sites in the DevR mediated gene expression. Through modeling it has been shown the individual and collective role of the binding sites in regulating the DevR mediated gene expression . The objective of the present work is two fold. First, to describe qualitatively the temporal dynamics of wild type genes and their known mutants. Based on these results we propose that DevR controlled gene expression follows a specific pattern which is efficient in describing other DevR mediated gene expression. Second, to analyze the behavior of the system from the information theoretical point of view. Using the tools of information theory we have calculated the molecular efficiency of the system and have shown that it is close to the maximum limit of isothermal efficiency.
The DevRS two component system of Mycobacterium tuberculosis is responsible for its dormancy in host and becomes operative under hypoxic condition. It is experimentally known that phosphorylated DevR controls the expression of several downstream genes in a complex manner. In the present work we propose a theoretical model to show role of binding sites in DevR mediated gene expression. Individual and collective role of binding sites in regulating DevR mediated gene expression has been shown via modeling. Objective of the present work is two fold. First, to describe qualitatively the temporal dynamics of wild type genes and their known mutants. Based on these results we propose that DevR controlled gene expression follows a specific pattern which is efficient in describing other DevR mediated gene expression. Second, to analyze behavior of the system from information theoretical point of view. Using the tools of information theory we have calculated molecular efficiency of the system and have shown that it is close to the maximum limit of isothermal efficiency.
[ { "type": "R", "before": "DevR-DevS", "after": "DevRS", "start_char_pos": 4, "end_char_pos": 13 }, { "type": "R", "before": "have developed a mathematical", "after": "propose a theoretical", "start_char_pos": 300, "end_char_pos": 329 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 344, "end_char_pos": 347 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 373, "end_char_pos": 376 }, { "type": "R", "before": "Through modeling it has been shown the individual", "after": "Individual", "start_char_pos": 408, "end_char_pos": 457 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 481, "end_char_pos": 484 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 513, "end_char_pos": 516 }, { "type": "R", "before": ". The objective", "after": "has been shown via modeling. Objective", "start_char_pos": 547, "end_char_pos": 562 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 882, "end_char_pos": 885 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 914, "end_char_pos": 917 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 1014, "end_char_pos": 1017 } ]
[ 0, 151, 276, 407, 548, 595, 694, 862, 956 ]
doc_id: 1309.0765
revision_depth: 1
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network provided the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily complicated systems by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) the time evolution can be easily discretized, rendering the dynamics suitable for computer simulation in a simple and direct way. Finally, it is shown that one obtains the classical rate equations form the stochastic models by averaging over an ensemble .
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) the time evolution can be easily discretized, rendering the dynamics suitable for computer simulation in a simple and direct way. Finally, it is shown that one obtains the classical rate equations form the corresponding stochastic versions as the equations satisfied by the mean values of the random variables .
[ { "type": "R", "before": "provided", "after": "given", "start_char_pos": 209, "end_char_pos": 217 }, { "type": "R", "before": "complicated systems", "after": "large networks", "start_char_pos": 370, "end_char_pos": 389 }, { "type": "R", "before": "stochastic models by averaging over an ensemble", "after": "corresponding stochastic versions as the equations satisfied by the mean values of the random variables", "start_char_pos": 1093, "end_char_pos": 1140 } ]
[ 0, 485, 634, 881, 1016 ]
doc_id: 1309.0765
revision_depth: 2
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) the time evolution can be easily discretized, rendering the dynamics suitable for computer simulation in a simple and direct way. Finally, it is shown that one obtains the classical rate equations form the corresponding stochastic versions as the equations satisfied by the mean values of the random variables .
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations form the corresponding stochastic version by averaging the dynamic random variables. It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, specially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a notion of coupled random dynamical system. Finally, we illustrate our framework with three simple examples of "single-gene dynamics", which are the build blocks of our networks .
[ { "type": "R", "before": "the time evolution can be easily discretized, rendering the dynamics suitable for computer simulation in a simple and direct way. Finally, it is shown that one obtains", "after": "one may obtain", "start_char_pos": 879, "end_char_pos": 1046 }, { "type": "R", "before": "versions as the equations satisfied by the mean values of the random variables", "after": "version by averaging the dynamic random variables. It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, specially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a notion of coupled random dynamical system. Finally, we illustrate our framework with three simple examples of \"single-gene dynamics\", which are the build blocks of our networks", "start_char_pos": 1110, "end_char_pos": 1188 } ]
[ 0, 477, 626, 873, 1008 ]
doc_id: 1309.0765
revision_depth: 3
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations form the corresponding stochastic version by averaging the dynamic random variables . It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, specially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a notion of coupled random dynamical system . Finally, we illustrate our framework with three simple examples of "single-gene dynamics", which are the build blocks of our networks .
Here we propose a new approach to modeling gene expression based on the theory of random dynamical systems (RDS) that provides a general coupling prescription between the nodes of any given regulatory network given the dynamics of each node is modeled by a RDS. The main virtues of this approach are the following: (i) it provides a natural way to obtain arbitrarily large networks by coupling together simple basic pieces, thus revealing the modularity of regulatory networks; (ii) the assumptions about the stochastic processes used in the modeling are fairly general, in the sense that the only requirement is stationarity; (iii) there is a well developed mathematical theory, which is a blend of smooth dynamical systems theory, ergodic theory and stochastic analysis that allows one to extract relevant dynamical and statistical information without solving the system; (iv) one may obtain the classical rate equations form the corresponding stochastic version by averaging the dynamic random variables (small noise limit) . It is important to emphasize that unlike the deterministic case, where coupling two equations is a trivial matter, coupling two RDS is non-trivial, specially in our case, where the coupling is performed between a state variable of one gene and the switching stochastic process of another gene and, hence, it is not a priori true that the resulting coupled system will satisfy the definition of a random dynamical system. We shall provide the necessary arguments that ensure that our coupling prescription does indeed furnish a coupled regulatory network of random dynamical systems . Finally, the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model .
[ { "type": "A", "before": null, "after": "(small noise limit)", "start_char_pos": 1007, "end_char_pos": 1007 }, { "type": "R", "before": "notion of coupled random dynamical system", "after": "coupled regulatory network of random dynamical systems", "start_char_pos": 1537, "end_char_pos": 1578 }, { "type": "R", "before": "we illustrate our framework with three simple examples of \"single-gene dynamics\", which are the build blocks of our networks", "after": "the fact that classical rate equations are the small noise limit of our stochastic model ensures that any validation or prediction made on the basis of the classical theory is also a validation or prediction of our model", "start_char_pos": 1590, "end_char_pos": 1714 } ]
[ 0, 477, 626, 873, 1009, 1430, 1580 ]
doc_id: 1309.1420
revision_depth: 1
We prove the Fundamental Theorem of Asset Pricing for a discrete time financial market consisting of a money market account and a single stock whose trading is subject to proportional transaction cost and whose price dynamic is modeled by a family of probability measures, possibly non-dominated. Under a continuity assumption, we prove using a backward-forward scheme that the absence of arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of consistent price systems. A parallel statement between robust no-arbitrage and strictly consistent price systems is also obtained .
We prove the Fundamental Theorem of Asset Pricing for a discrete time financial market where trading is subject to proportional transaction cost and the asset price dynamic is modeled by a family of probability measures, possibly non-dominated. Using a backward-forward scheme , we show that when the market consists of a money market account and a single stock, no-arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of consistent price systems. We also show that when the market consists of multiple dynamically traded assets and satisfiesefficient friction, strict no-arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of strictly consistent price systems .
[ { "type": "R", "before": "consisting of a money market account and a single stock whose", "after": "where", "start_char_pos": 87, "end_char_pos": 148 }, { "type": "R", "before": "whose", "after": "the asset", "start_char_pos": 205, "end_char_pos": 210 }, { "type": "R", "before": "Under a continuity assumption, we prove using a", "after": "Using a", "start_char_pos": 297, "end_char_pos": 344 }, { "type": "R", "before": "that the absence of arbitrage", "after": ", we show that when the market consists of a money market account and a single stock, no-arbitrage", "start_char_pos": 369, "end_char_pos": 398 }, { "type": "R", "before": "A parallel statement between robust no-arbitrage and", "after": "We also show that when the market consists of multiple dynamically traded assets and satisfies", "start_char_pos": 502, "end_char_pos": 554 }, { "type": "A", "before": null, "after": "efficient friction", "start_char_pos": 554, "end_char_pos": 554 }, { "type": "A", "before": null, "after": ", strict no-arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of", "start_char_pos": 554, "end_char_pos": 554 }, { "type": "D", "before": "is also obtained", "after": null, "start_char_pos": 589, "end_char_pos": 605 } ]
[ 0, 296, 501 ]
doc_id: 1309.1647
revision_depth: 1
Pricing formulae for defaultable corporate bonds with discrete coupons ( under consideration of the government taxes ) in the united model of structural and reduced form models are provided. The aim of this paper is to generalize the structural model for defaultable corporate discrete coupon bonds (considered in [1]) into the unified model of structural and reduced form models. In our model the bond holders receive the coupon at predetermined coupon dates and the face value (debt) and the coupon at the maturity as well as the effect of government taxes which are paid on the proceeds of an investment in bonds is considered . The expected default event occurs when the equity value is not enough to pay coupon or debt at the coupon dates or maturity and unexpected default event can occur at the first jump time of a Poisson process with the given default intensity provided by a step function of time variable. We consider the model and pricing formula for equity value and using it calculate expected default barrier , Then We provide pricing model and formula for defaultable corporate bonds with discrete coupons and consider its duration . The results can be used in duration analysis of bonds and credit risk management in corporate finance .
Pricing formulae for defaultable corporate bonds with discrete coupons under consideration of the government taxes in the united model of structural and reduced form models are provided. The aim of this paper is to generalize the comprehensive structural model for defaultable fixed income bonds (considered in [1]) into a comprehensive unified model of structural and reduced form models. Here we consider the one factor model and the two factor model. In the one factor model the bond holders receive the deterministic coupon at predetermined coupon dates and the face value (debt) and the coupon at the maturity as well as the effect of government taxes which are paid on the proceeds of an investment in bonds is considered under constant short rate. In the two factor model the bond holders receive the stochastic coupon (discounted value of that at the maturity) at predetermined coupon dates and the face value (debt) and the coupon at the maturity as well as the effect of government taxes which are paid on the proceeds of an investment in bonds is considered under stochastic short rate. The expected default event occurs when the equity value is not enough to pay coupon or debt at the coupon dates or maturity and unexpected default event can occur at the first jump time of a Poisson process with the given default intensity provided by a step function of time variable. We consider the model and pricing formula for equity value and using it calculate expected default barrier . Then we provide pricing model and formula for defaultable corporate bonds with discrete coupons and consider its duration and the effect of the government taxes .
[ { "type": "D", "before": "(", "after": null, "start_char_pos": 71, "end_char_pos": 72 }, { "type": "D", "before": ")", "after": null, "start_char_pos": 117, "end_char_pos": 118 }, { "type": "A", "before": null, "after": "comprehensive", "start_char_pos": 234, "end_char_pos": 234 }, { "type": "R", "before": "corporate discrete coupon", "after": "fixed income", "start_char_pos": 268, "end_char_pos": 293 }, { "type": "R", "before": "the", "after": "a comprehensive", "start_char_pos": 325, "end_char_pos": 328 }, { "type": "R", "before": "In our", "after": "Here we consider the one factor model and the two factor model. In the one factor", "start_char_pos": 382, "end_char_pos": 388 }, { "type": "A", "before": null, "after": "deterministic", "start_char_pos": 424, "end_char_pos": 424 }, { "type": "R", "before": ".", "after": "under constant short rate. In the two factor model the bond holders receive the stochastic coupon (discounted value of that at the maturity) at predetermined coupon dates and the face value (debt) and the coupon at the maturity as well as the effect of government taxes which are paid on the proceeds of an investment in bonds is considered under stochastic short rate.", "start_char_pos": 632, "end_char_pos": 633 }, { "type": "R", "before": ", Then We", "after": ". Then we", "start_char_pos": 1027, "end_char_pos": 1036 }, { "type": "R", "before": ". The results can be used in duration analysis of bonds and credit risk management in corporate finance", "after": "and the effect of the government taxes", "start_char_pos": 1151, "end_char_pos": 1254 } ]
[ 0, 190, 381, 633, 919, 1152 ]
doc_id: 1309.1844
revision_depth: 1
We investigate a randomization procedure undertaken in real option games which can serve as a raw model of regulation in a duopoly model of preemptive investment. We recall the rigorous framework of [ Grasselli, M. R., Lecl\`ere, V. and Ludkovski, M. Priority option : the value of being a leader . Math. and Fin. Econ., 2013] and extend it to the presence of a random regulator. This model generalizes and unifies the different competitive frameworks proposed in the literature, and creates a new one similar to a Stackelberg leadership. We fully characterize strategic interactions in the several situations following from the parametrization of the regulator. Finally, we study the effect of the coordination game and uncertainty of outcome when agents are risk-averse, providing new intuitions for the standard case.
We investigate a randomization procedure undertaken in real option games which can serve as a basic model of regulation in a duopoly model of preemptive investment. We recall the rigorous framework of [ M. Grasselli, V. Lecl\`ere and M. Ludkovsky, Priority Option : the value of being a leader , International Journal of Theoretical and Applied Finance, 16, 2013] , and extend it to a random regulator. This model generalizes and unifies the different competitive frameworks proposed in the literature, and creates a new one similar to a Stackelberg leadership. We fully characterize strategic interactions in the several situations following from the parametrization of the regulator. Finally, we study the effect of the coordination game and uncertainty of outcome when agents are risk-averse, providing new intuitions for the standard case.
[ { "type": "R", "before": "raw", "after": "basic", "start_char_pos": 94, "end_char_pos": 97 }, { "type": "R", "before": "Grasselli, M. R., Lecl\\`ere, V. and Ludkovski, M. Priority option", "after": "M. Grasselli, V. Lecl\\`ere and M. Ludkovsky, Priority Option", "start_char_pos": 201, "end_char_pos": 266 }, { "type": "R", "before": ". Math. and Fin. Econ.,", "after": ", International Journal of Theoretical and Applied Finance, 16,", "start_char_pos": 297, "end_char_pos": 320 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 327, "end_char_pos": 327 }, { "type": "D", "before": "the presence of", "after": null, "start_char_pos": 345, "end_char_pos": 360 } ]
[ 0, 162, 313, 380, 539, 663 ]
doc_id: 1309.2130
revision_depth: 1
Using public data (Forbes Global 2000) we show that the distribution of asset sizes for the largest global firms follows a Pareto distribution in an intermediate range that is "interrupted" by a sharp cutoff in its upper tail, which is totally dominated by financial firms. This contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale : this makes our evidence of an "interrupted" Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an estimate of the size of the shadow banking system. This estimate -- that we propose as a shadow banking index -- compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010.
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range , that is ``interrupted'' by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale . This makes our findings of an ``interrupted'' Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate -- which we propose as a shadow banking index -- compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.
[ { "type": "D", "before": "distribution of", "after": null, "start_char_pos": 56, "end_char_pos": 71 }, { "type": "R", "before": "follows", "after": "follow", "start_char_pos": 113, "end_char_pos": 120 }, { "type": "R", "before": "that is \"interrupted\"", "after": ", that is ``interrupted''", "start_char_pos": 168, "end_char_pos": 189 }, { "type": "R", "before": "cutoff", "after": "cut-off", "start_char_pos": 201, "end_char_pos": 207 }, { "type": "R", "before": "which", "after": "where it", "start_char_pos": 227, "end_char_pos": 232 }, { "type": "A", "before": null, "after": "flattening of the distribution", "start_char_pos": 279, "end_char_pos": 279 }, { "type": "R", "before": ": this makes our evidence of an \"interrupted\"", "after": ". This makes our findings of an ``interrupted''", "start_char_pos": 558, "end_char_pos": 603 }, { "type": "A", "before": null, "after": "should", "start_char_pos": 710, "end_char_pos": 710 }, { "type": "A", "before": null, "after": "(upper)", "start_char_pos": 887, "end_char_pos": 887 }, { "type": "R", "before": "that", "after": "which", "start_char_pos": 956, "end_char_pos": 960 }, { "type": "A", "before": null, "after": "Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.", "start_char_pos": 1142, "end_char_pos": 1142 } ]
[ 0, 273, 417, 736, 938 ]
doc_id: 1309.2131
revision_depth: 1
We compare the order parameters predicted for the hydrocarbon segments in lipid bilayer headgroup region by the Berger molecular dynamics simulation model to those measured by Nuclear Magnetic Resonance (NMR) experiments. We first show resultsfor a fully hydrated POPC bilayer, and then focus on changes of the order parameters as a function of hydration level, NaCl and CaCl2 concentrations, and cholesterol content. The experimental headgroup order parameters are never reproduced. This indicates that under all of these conditions the used model is unable to correctly reproduce the headgroup structure. Consequently, many of the conclusions drawn over the years from this modelmight be erroneous. This manuscript has not beensubmitted to any journal, instead its contents are discussed at nmrlipids.blogspot.fi.
Phospholipids are essential building blocks of biological membranes. Despite of vast amount of accurate experimental data the atomistic resolution structures sampled by the glycerol backbone and choline headgroup in phoshatidylcholine bilayers are not known. Atomistic resolution molecular dynamics simulation model would automatically resolve the structures giving an interpretation of experimental results, if the model would reproduce the experimental data. In this work we compare the C-H bond vector order parameters for glycerol backbone and choline headgroup between 14 different atomistic resolution models and experiments in fully hydrated lipid bilayer. The current models are not accurately enough to resolve the structure. However, closer inspection of three best performing models (CHARMM36, GAFFlipid and MacRog) suggest that improvements in the sampled dihedral angle distributions would potentilly lead to the model which would resolve the structure. Despite of the inaccuracy in the fully hydrated structures, the response to the dehydration, i.e. P-N vector tilting more parallel to membrane normal, is qualitatively correct in all models. The CHARMM36 and MacRog models describe the interactions between lipids and cholesterol better than Berger/H\"oltje model. This work has been, and continues to be, progressed and discussed through the blog: nmrlipids.blogspot.fi. Everyone is invited to join the discussion and make contributions through the blog. The manuscript will be eventually submitted to an appropriate scientific journal. Everyone who has contributed to the work through the blog will be offered coauthorship. For more details see: nmrlipids.blogspot.fi.
[ { "type": "R", "before": "We compare the order parameters predicted for the hydrocarbon segments in lipid bilayer headgroup region by the Berger", "after": "Phospholipids are essential building blocks of biological membranes. Despite of vast amount of accurate experimental data the atomistic resolution structures sampled by the glycerol backbone and choline headgroup in phoshatidylcholine bilayers are not known. Atomistic resolution", "start_char_pos": 0, "end_char_pos": 118 }, { "type": "R", "before": "to those measured by Nuclear Magnetic Resonance (NMR) experiments. We first show resultsfor a fully hydrated POPC bilayer, and then focus on changes of the order parameters as a function of hydration level, NaCl and CaCl2 concentrations, and cholesterol content. The experimental headgroup order parameters are never reproduced. This indicates that under all of these conditions the used model is unable to correctly reproduce the headgroup structure. Consequently, many of the conclusions drawn over the years from this modelmight be erroneous. This manuscript has not beensubmitted to any journal, instead its contents are discussed at", "after": "would automatically resolve the structures giving an interpretation of experimental results, if the model would reproduce the experimental data. In this work we compare the C-H bond vector order parameters for glycerol backbone and choline headgroup between 14 different atomistic resolution models and experiments in fully hydrated lipid bilayer. The current models are not accurately enough to resolve the structure. However, closer inspection of three best performing models (CHARMM36, GAFFlipid and MacRog) suggest that improvements in the sampled dihedral angle distributions would potentilly lead to the model which would resolve the structure. Despite of the inaccuracy in the fully hydrated structures, the response to the dehydration, i.e. P-N vector tilting more parallel to membrane normal, is qualitatively correct in all models. The CHARMM36 and MacRog models describe the interactions between lipids and cholesterol better than Berger/H\\\"oltje model. This work has been, and continues to be, progressed and discussed through the blog: nmrlipids.blogspot.fi. Everyone is invited to join the discussion and make contributions through the blog. The manuscript will be eventually submitted to an appropriate scientific journal. Everyone who has contributed to the work through the blog will be offered coauthorship. For more details see:", "start_char_pos": 155, "end_char_pos": 792 } ]
[ 0, 221, 417, 483, 606, 700 ]
doc_id: 1309.2211
revision_depth: 1
We show how the results on forward-backward SDEs driven by L\'evy processes obtained in our previous paper can be applied to portfolio selection in a L\'evy-type market . Our approach allows to characterize a class L\'evy driven FBSDEs which allows to select an optimal portfolio .
We propose a model for hedging in a market with jumps for a large investor. The dynamics of the stock prices and the value process is governed by forward-backward SDEs driven by Teugels martingales. Unlike known FBSDE market models, ours accounts for jumps in stock prices. Moreover, it allows to find an optimal hedging strategy .
[ { "type": "R", "before": "show how the results on", "after": "propose a model for hedging in a market with jumps for a large investor. The dynamics of the stock prices and the value process is governed by", "start_char_pos": 3, "end_char_pos": 26 }, { "type": "R", "before": "L\\'evy processes obtained in our previous paper can be applied to portfolio selection in a L\\'evy-type market . Our approach allows to characterize a class L\\'evy driven FBSDEs which allows to select an optimal portfolio", "after": "Teugels martingales. Unlike known FBSDE market models, ours accounts for jumps in stock prices. Moreover, it allows to find an optimal hedging strategy", "start_char_pos": 59, "end_char_pos": 279 } ]
[ 0, 170 ]
doc_id: 1309.2728
revision_depth: 1
We show that the results of ArXiv:1305.6008 on the Fundamental Theorem of Asset Pricing and the super-hedging theorem can be easily extended to the case in which the hedging options are quoted with bid-ask spreads. It turns out that the dual elements have to be martingale measures that need to price the non-redundant hedging options correctly .
We show that the results of ArXiv:1305.6008 on the Fundamental Theorem of Asset Pricing and the super-hedging theorem can be extended to the case in which the options available for static hedging ( hedging options ) are quoted with bid-ask spreads. It turns out that the dual elements have to be martingale measures that need to price the non-redundant hedging options correctly . A key result is the closedness of the set of attainable claims, which requires a new proof in our setting .
[ { "type": "D", "before": "easily", "after": null, "start_char_pos": 125, "end_char_pos": 131 }, { "type": "A", "before": null, "after": "options available for static hedging (", "start_char_pos": 166, "end_char_pos": 166 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 183, "end_char_pos": 183 }, { "type": "A", "before": null, "after": ". A key result is the closedness of the set of attainable claims, which requires a new proof in our setting", "start_char_pos": 347, "end_char_pos": 347 } ]
[ 0, 216 ]
doc_id: 1309.2728
revision_depth: 2
We show that the results of ArXiv:1305.6008 on the Fundamental Theorem of Asset Pricing and the super-hedging theorem can be extended to the case in which the options available for static hedging (hedging options) are quoted with bid-ask spreads. It turns out that the dual elements have to be martingale measures that need to price thenon-redundant hedging options correctly . A key result is the closedness of the set of attainable claims, which requires a new proof in our setting.
We show that the results of ArXiv:1305.6008 on the Fundamental Theorem of Asset Pricing and the super-hedging theorem can be extended to the case in which the options available for static hedging (hedging options) are quoted with bid-ask spreads. In this set-up, we need to work with the notion ofrobust no-arbitrage which turns out to be equivalent to no-arbitrage under the additional assumption that hedging options with non-zero spread arenon-redundant . A key result is the closedness of the set of attainable claims, which requires a new proof in our setting.
[ { "type": "D", "before": "It turns out that the dual elements have to be martingale measures that need to price the", "after": null, "start_char_pos": 247, "end_char_pos": 336 }, { "type": "D", "before": "non-redundant", "after": null, "start_char_pos": 336, "end_char_pos": 349 }, { "type": "R", "before": "hedging options correctly", "after": "In this set-up, we need to work with the notion of", "start_char_pos": 350, "end_char_pos": 375 }, { "type": "A", "before": null, "after": "robust no-arbitrage", "start_char_pos": 375, "end_char_pos": 375 }, { "type": "A", "before": null, "after": "which turns out to be equivalent to no-arbitrage under the additional assumption that hedging options with non-zero spread are", "start_char_pos": 376, "end_char_pos": 376 }, { "type": "A", "before": null, "after": "non-redundant", "start_char_pos": 376, "end_char_pos": 376 } ]
[ 0, 246, 378 ]
1309.2982
1
We consider as given a discrete time financial market with a risky asset and options written on that asset and determine the sub-hedging price of an American option in the model independent framework of ArXiv: 1301.5568. We also show that the order of min and max in the dual representation of the price can be exchanged . Our results generalize those of ArXiv:1304.3574 to the case when static positions in (finitely many) European options can be used in the hedging portfolio.
We consider as given a discrete time financial market with a risky asset and options written on that asset and determine both the sub- and super-hedging prices of an American option in the model independent framework of ArXiv: 1305.6008. We obtain the duality of results for the super- and sub-hedging prices. Then assuming that the path space is compact, we construct a discretization of the path space and demonstrate the convergence of the hedging prices at the optimal rate. The latter result would be useful for numerical computation of the hedging prices . Our results generalize those of ArXiv:1304.3574 to the case when static positions in (finitely many) European options can be used in the hedging portfolio.
[ { "type": "R", "before": "the sub-hedging price", "after": "both the sub- and super-hedging prices", "start_char_pos": 121, "end_char_pos": 142 }, { "type": "R", "before": "1301.5568. We also show that the order of min and max in the dual representation of the price can be exchanged", "after": "1305.6008. We obtain the duality of results for the super- and sub-hedging prices. Then assuming that the path space is compact, we construct a discretization of the path space and demonstrate the convergence of the hedging prices at the optimal rate. The latter result would be useful for numerical computation of the hedging prices", "start_char_pos": 210, "end_char_pos": 320 } ]
[ 0, 220, 322 ]
1309.2982
2
We consider as given a discrete time financial market with a risky asset and options written on that asset and determine both the sub- and super-hedging prices of an American option in the model independent framework of ArXiv:1305.6008. We obtain the duality of results for the super- and sub-hedging prices . Then assuming that the path space is compact, we construct a discretization of the path space and demonstrate the convergence of the hedging prices at the optimal rate. The latter result would be useful for numerical computation of the hedging prices. Our results generalize those of ArXiv:1304.3574 to the case when static positions in (finitely many) European options can be used in the hedging portfolio.
We consider as given a discrete time financial market with a risky asset and options written on that asset and determine both the sub- and super-hedging prices of an American option in the model independent framework of ArXiv:1305.6008. We obtain the duality of results for the sub- and super-hedging prices. For the sub-hedging prices we discuss whether the sup and inf in the dual representation can be exchanged (a counter example shows that this is not true in general). For the super-hedging prices we discuss several alternative definitions and argue why our choice is more reasonable . Then assuming that the path space is compact, we construct a discretization of the path space and demonstrate the convergence of the hedging prices at the optimal rate. The latter result would be useful for numerical computation of the hedging prices. Our results generalize those of ArXiv:1304.3574 to the case when static positions in (finitely many) European options can be used in the hedging portfolio.
[ { "type": "R", "before": "super- and", "after": "sub- and super-hedging prices. For the", "start_char_pos": 278, "end_char_pos": 288 }, { "type": "A", "before": null, "after": "we discuss whether the sup and inf in the dual representation can be exchanged (a counter example shows that this is not true in general). For the super-hedging prices we discuss several alternative definitions and argue why our choice is more reasonable", "start_char_pos": 308, "end_char_pos": 308 } ]
[ 0, 236, 310, 479, 562 ]
1309.3057
1
We present sharp tail asymptotics for the density and the distribution function of linear combinations of correlated log-normal random variables, that is, exponentials of components of a correlated Gaussian vector. The asymptotic behavior turns out to be determined by a subset of components of the Gaussian vector, and we identify the relevant components by relating the asymptotics to a tractable quadratic optimization problem. As a corollary , we characterize the limiting behavior of the conditional law of the Gaussian vector, given a linear combination of the exponentials of its components. Our results can be used either to estimate the probability of tail events directly, or to construct efficient variance reduction procedures for precise estimation of these probabilities by Monte Carlo methods. They lead to important insights concerning the behavior of individual stocks and portfolios during market downturns in the multidimensional Black-Scholes model .
We present sharp tail asymptotics for the density and the distribution function of linear combinations of correlated log-normal random variables, that is, exponentials of components of a correlated Gaussian vector. The asymptotic behavior turns out to depend on the correlation between the components, and the explicit solution is found by solving a tractable quadratic optimization problem. These results can be used either to approximate the probability of tail events directly, or to construct variance reduction procedures to estimate these probabilities by Monte Carlo methods. In particular, we propose an efficient importance sampling estimator for the left tail of the distribution function of the sum of log-normal variables. As a corollary of the tail asymptotics, we compute the asymptotics of the conditional law of a Gaussian random vector given a linear combination of exponentials of its components. In risk management applications, this finding can be used for the systematic construction of stress tests, which the financial institutions are required to conduct by the regulators. We also characterize the asymptotic behavior of the Value at Risk for log-normal portfolios in the case where the confidence level tends to one .
[ { "type": "R", "before": "be determined by a subset of components of the Gaussian vector, and we identify the relevant components by relating the asymptotics to", "after": "depend on the correlation between the components, and the explicit solution is found by solving", "start_char_pos": 252, "end_char_pos": 386 }, { "type": "A", "before": null, "after": "These results can be used either to approximate the probability of tail events directly, or to construct variance reduction procedures to estimate these probabilities by Monte Carlo methods. In particular, we propose an efficient importance sampling estimator for the left tail of the distribution function of the sum of log-normal variables.", "start_char_pos": 431, "end_char_pos": 431 }, { "type": "R", "before": ", we characterize the limiting behavior", "after": "of the tail asymptotics, we compute the asymptotics", "start_char_pos": 447, "end_char_pos": 486 }, { "type": "R", "before": "the Gaussian vector,", "after": "a Gaussian random vector", "start_char_pos": 513, "end_char_pos": 533 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 564, "end_char_pos": 567 }, { "type": "R", "before": "Our results", "after": "In risk management applications, this finding", "start_char_pos": 600, "end_char_pos": 611 }, { "type": "R", "before": "either to estimate the probability of tail events directly, or to construct efficient variance reduction procedures for precise estimation of these probabilities by Monte Carlo methods. They lead to important insights concerning the behavior of individual stocks and portfolios during market downturns in the multidimensional Black-Scholes model", "after": "for the systematic construction of stress tests, which the financial institutions are required to conduct by the regulators. We also characterize the asymptotic behavior of the Value at Risk for log-normal portfolios in the case where the confidence level tends to one", "start_char_pos": 624, "end_char_pos": 969 } ]
[ 0, 214, 430, 599, 809 ]
1309.3639
1
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanches dynamics in financial markets. We consider a community of interacting investors, distributed on a small world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market compared to the day before, following the S&P500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of trader s , randomly distributed inside the network, who adopt a random investment strategy. These results suggest a promising strategy to limit the size of financial bubbles and crashes. We also find that the final wealth distribution of all traders corresponds to the well-known Pareto power law, while that one of random traders only is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.
Building on similarities between earthquakes and extreme financial events, we use a self-organized criticality-generating model to study herding and avalanche dynamics in financial markets. We consider a community of interacting investors, distributed on a small-world network, who bet on the bullish (increasing) or bearish (decreasing) behavior of the market which has been specified according to the S&P500 historical time series. Remarkably, we find that the size of herding-related avalanches in the community can be strongly reduced by the presence of a relatively small percentage of traders , randomly distributed inside the network, who adopt a random investment strategy. Our findings suggest a promising strategy to limit the size of financial bubbles and crashes. We also obtain that the resulting wealth distribution of all traders corresponds to the well-known Pareto power law, while the one of random traders is exponential. In other words, for technical traders, the risk of losses is much greater than the probability of gains compared to those of random traders.
[ { "type": "R", "before": "avalanches", "after": "avalanche", "start_char_pos": 144, "end_char_pos": 154 }, { "type": "R", "before": "small world", "after": "small-world", "start_char_pos": 253, "end_char_pos": 264 }, { "type": "R", "before": "compared to the day before, following the", "after": "which has been specified according to the", "start_char_pos": 357, "end_char_pos": 398 }, { "type": "R", "before": "trader s", "after": "traders", "start_char_pos": 587, "end_char_pos": 595 }, { "type": "R", "before": "These results", "after": "Our findings", "start_char_pos": 679, "end_char_pos": 692 }, { "type": "R", "before": "find that the final", "after": "obtain that the resulting", "start_char_pos": 782, "end_char_pos": 801 }, { "type": "R", "before": "that", "after": "the", "start_char_pos": 891, "end_char_pos": 895 }, { "type": "D", "before": "only", "after": null, "start_char_pos": 918, "end_char_pos": 922 } ]
[ 0, 185, 429, 678, 773, 938 ]
1309.3832
1
We propose a new approach to solve optimal stopping problems via simulation. Working within the backward dynamic programming/Snell envelope framework, we augment the methodology of Longstaff-Schwartz that focuses on approximating the stopping strategy. We reinterpret the corresponding partitions of the state space into the continuation and stopping regions as statistical classification problems with noisy observations. Accordingly, a key new objective that we pursue is efficient design of the stochastic grids formed by the simulated sample paths of the underlying state process. To this end, we introduce active learning schemes that adaptively place new grid points close to the stopping boundaries. We then discuss dynamic regression algorithms that can implement such recursive estimation and local refinement of the classifiers. The new algorithm is illustrated with a variety of numerical experiments, showing that an order of magnitude savings in terms of total grid size can be achieved. We also compare with existing benchmarks in the context of pricing multi-dimensional Bermudan options.
We propose a new approach to solve optimal stopping problems via simulation. Working within the backward dynamic programming/Snell envelope framework, we augment the methodology of Longstaff-Schwartz that focuses on approximating the stopping strategy. Namely, we introduce adaptive generation of the stochastic grids anchoring the simulated sample paths of the underlying state process. This allows for active learning of the classifiers partitioning the state space into the continuation and stopping regions. To this end, we examine sequential design schemes that adaptively place new design points close to the stopping boundaries. We then discuss dynamic regression algorithms that can implement such recursive estimation and local refinement of the classifiers. The new algorithm is illustrated with a variety of numerical experiments, showing that an order of magnitude savings in terms of design size can be achieved. We also compare with existing benchmarks in the context of pricing multi-dimensional Bermudan options.
[ { "type": "R", "before": "We reinterpret the corresponding partitions of the state space into the continuation and stopping regions as statistical classification problems with noisy observations. Accordingly, a key new objective that we pursue is efficient design", "after": "Namely, we introduce adaptive generation", "start_char_pos": 253, "end_char_pos": 490 }, { "type": "R", "before": "formed by", "after": "anchoring", "start_char_pos": 515, "end_char_pos": 524 }, { "type": "A", "before": null, "after": "This allows for active learning of the classifiers partitioning the state space into the continuation and stopping regions.", "start_char_pos": 585, "end_char_pos": 585 }, { "type": "R", "before": "introduce active learning", "after": "examine sequential design", "start_char_pos": 602, "end_char_pos": 627 }, { "type": "R", "before": "grid", "after": "design", "start_char_pos": 662, "end_char_pos": 666 }, { "type": "R", "before": "total grid", "after": "design", "start_char_pos": 969, "end_char_pos": 979 } ]
[ 0, 76, 252, 422, 584, 707, 839, 1001 ]
1309.3957
1
We explore the combinatorics of reaction networks , with a view towards the Global Attractor Conjecture. We prove that full-rank matrices with positive off-diagonal and negative diagonal entries permit a positive linear combination of the rows so that all coordinates have the same sign. Using this, we show that a reaction network has critical siphons iff it has "drainable" or "self-replicable" siphons. Further, if the minimal siphons of a reaction network are not drainable , then the dynamics is persistent. Consequently, we obtain a new, elementary proof for the persistence of non-catalytic weakly-reversible chemical reaction networks . Our results clarify that the difficulties in proving the Global Attractor Conjecture are essentially due to competition between extinction and autocatalytic growth .
The persistence conjecture is a long-standing open problem in chemical reaction network theory. It concerns the behavior of solutions to coupled ODE systems that arise from applying mass-action kinetics to a network of chemical reactions. The idea is that if all reactions are reversible in a weak sense, then no species can go extinct. A notion that has been found useful in thinking about persistence is that of "critical siphon." We explore the combinatorics of critical siphons , with a view towards the persistence conjecture. We introduce the notions of "drainable" and "self-replicable" (or autocatalytic) siphons. We show that: every minimal critical siphon is either drainable or self-replicable; reaction networks without drainable siphons are persistent; and non-autocatalytic weakly-reversible networks are persistent . Our results clarify that the difficulties in proving the persistence conjecture are essentially due to competition between drainable and self-replicable siphons .
[ { "type": "A", "before": null, "after": "The persistence conjecture is a long-standing open problem in chemical reaction network theory. It concerns the behavior of solutions to coupled ODE systems that arise from applying mass-action kinetics to a network of chemical reactions. The idea is that if all reactions are reversible in a weak sense, then no species can go extinct. A notion that has been found useful in thinking about persistence is that of \"critical siphon.\"", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "reaction networks", "after": "critical siphons", "start_char_pos": 33, "end_char_pos": 50 }, { "type": "R", "before": "Global Attractor Conjecture. We prove that full-rank matrices with positive off-diagonal and negative diagonal entries permit a positive linear combination of the rows so that all coordinates have the same sign. Using this, we show that a reaction network has critical siphons iff it has", "after": "persistence conjecture. We introduce the notions of", "start_char_pos": 77, "end_char_pos": 364 }, { "type": "R", "before": "or", "after": "and", "start_char_pos": 377, "end_char_pos": 379 }, { "type": "R", "before": "siphons. Further, if the minimal siphons of a reaction network are not drainable , then the dynamics is persistent. Consequently, we obtain a new, elementary proof for the persistence of non-catalytic", "after": "(or autocatalytic) siphons. We show that: every minimal critical siphon is either drainable or self-replicable; reaction networks without drainable siphons are persistent; and non-autocatalytic", "start_char_pos": 398, "end_char_pos": 598 }, { "type": "R", "before": "chemical reaction networks", "after": "networks are persistent", "start_char_pos": 617, "end_char_pos": 643 }, { "type": "R", "before": "Global Attractor Conjecture", "after": "persistence conjecture", "start_char_pos": 703, "end_char_pos": 730 }, { "type": "R", "before": "extinction and autocatalytic growth", "after": "drainable and self-replicable siphons", "start_char_pos": 774, "end_char_pos": 809 } ]
[ 0, 105, 288, 406, 513, 645 ]
1309.4050
1
We consider networks with a specific type of nodes that can have either a discrete or continuous set of states. It is shown that no matter how complex the network is, its dynamical response to arbitrary inputs is defined in a simple way by its response to a monotone input. As illustrative applications, we propose and discuss a quasistatic mechanical model with objects interacting via friction forces, and a financial market model with avalanches and critical behavior induced by momentum trading strategies.
We show that for a certain class of dynamics at the nodes the response of a network of any topology to arbitrary inputs is defined in a simple way by its response to a monotone input. The nodes may have either a discrete or continuous set of states and there is no limit on the complexity of the network. The results provide both an efficient numerical method and the potential for accurate analytic approximation of the dynamics on such networks. As illustrative applications, we introduce a quasistatic mechanical model with objects interacting via frictional forces, and a financial market model with avalanches and critical behavior that are generated by momentum trading strategies.
[ { "type": "R", "before": "consider networks with a specific type of nodes that can have either a discrete or continuous set of states. It is shown that no matter how complex the network is, its dynamical response", "after": "show that for a certain class of dynamics at the nodes the response of a network of any topology", "start_char_pos": 3, "end_char_pos": 189 }, { "type": "A", "before": null, "after": "The nodes may have either a discrete or continuous set of states and there is no limit on the complexity of the network. The results provide both an efficient numerical method and the potential for accurate analytic approximation of the dynamics on such networks.", "start_char_pos": 274, "end_char_pos": 274 }, { "type": "R", "before": "propose and discuss", "after": "introduce", "start_char_pos": 308, "end_char_pos": 327 }, { "type": "R", "before": "friction", "after": "frictional", "start_char_pos": 388, "end_char_pos": 396 }, { "type": "R", "before": "induced", "after": "that are generated", "start_char_pos": 472, "end_char_pos": 479 } ]
[ 0, 111, 273 ]
1309.4662
1
Self-assembly of DNA molecules by origami folding involves finding a route for the scaffolding strand through the desired structure. When the target structure is a 1-complex (or the geometric realization of a graph), an optimal route corresponds to an Eulerian circuit through the graph with minimum turning cost. By showing that it leads to a solution to the 3-SAT problem, we prove that the general problem of finding an optimal route for a scaffolding strand for such structures is NP-Hard . We then show that the problem may readily be transformed into a Traveling Salesman Problem (TSP), so that the machinery that has been developed for the TSP may be applied to find optimal routes for the scaffolding strand in a DNA origami self-assembly process. We give results for a few special cases, showing for example that the problem remains intractable for graphs with maximum degree 8, but is polynomial time for 4-regular plane graphs if the circuit is restricted to following faces. We conclude with implications of these results for related problems, such as biomolecular computing and mill routing problems.
Building a structure using self-assembly of DNA molecules by origami folding requires finding a route for the scaffolding strand through the desired structure. When the target structure is a 1-complex (or the geometric realization of a graph), an optimal route corresponds to an Eulerian circuit through the graph with minimum turning cost. By showing that it leads to a solution to the 3-SAT problem, we prove that the general problem of finding an optimal route for a scaffolding strand for such structures is NP-hard . We then show that the problem may readily be transformed into a Traveling Salesman Problem (TSP), so that machinery that has been developed for the TSP may be applied to find optimal routes for the scaffolding strand in a DNA origami self-assembly process. We give results for a few special cases, showing for example that the problem remains intractable for graphs with maximum degree 8, but is polynomial time for 4-regular plane graphs if the circuit is restricted to following faces. We conclude with some implications of these results for related problems, such as biomolecular computing and mill routing problems.
[ { "type": "R", "before": "Self-assembly", "after": "Building a structure using self-assembly", "start_char_pos": 0, "end_char_pos": 13 }, { "type": "R", "before": "involves", "after": "requires", "start_char_pos": 50, "end_char_pos": 58 }, { "type": "R", "before": "NP-Hard", "after": "NP-hard", "start_char_pos": 485, "end_char_pos": 492 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 601, "end_char_pos": 604 }, { "type": "A", "before": null, "after": "some", "start_char_pos": 1004, "end_char_pos": 1004 } ]
[ 0, 132, 313, 494, 755, 986 ]
1309.4722
1
The "developmental hourglass" describes a pattern of increasing morphological divergence towards earlier and later embryonic development, separated by a period of significant conservation across distant species (the "phylotypic stage"). Recent studies have also found evidence in support of the hourglass effect at the genomic level. For instance, the phylotypic stage expresses the oldest and most conserved transcriptomes. However, the regulatory mechanism that causes the hourglass pattern remains an open question. Here, we propose an abstract model of regulatory gene interactions during development , and of their evolution . The model captures how the "functional state" of genes change as development progresses in the form of a hierarchical network. It also captures the evolution of a population under random perturbations in the structure of this regulatory network. The model predicts, under fairly general assumptions, the emergence of an hourglass pattern in terms of the number of state-transitioning genes during development. Additionally, the evolutionary age of those genes also follows an hourglass pattern, with the oldest genes concentrated at the hourglass waist. The key condition behind the hourglass effect is that developmental regulators should have an increasingly specific function as development progresses. We have confirmed the theoretical predictions of the model with gene expression profiles from Drosophila melanogaster and Arabidopsis thaliana .
The "developmental hourglass" describes a pattern of increasing morphological divergence towards earlier and later embryonic development, separated by a period of significant conservation across distant species (the "phylotypic stage"). Recent studies have found evidence in support of the hourglass effect at the genomic level. For instance, the phylotypic stage expresses the oldest and most conserved transcriptomes. However, the regulatory mechanism that causes the hourglass pattern remains an open question. Here, we use an evolutionary model of regulatory gene interactions during development to identify the conditions under which the hourglass effect can emerge in a general setting . The model focuses on the hierarchical gene regulatory network that controls the developmental process, and on the evolution of a population under random perturbations in the structure of that network. The model predicts, under fairly general assumptions, the emergence of an hourglass pattern in the structure of a temporal representation of the underlying gene regulatory network. The evolutionary age of the corresponding genes also follows an hourglass pattern, with the oldest genes concentrated at the hourglass waist. The key behind the hourglass effect is that developmental regulators should have an increasingly specific function as development progresses. Analysis of developmental gene expression profiles from Drosophila melanogaster and Arabidopsis thaliana provide consistent results with our theoretical predictions .
[ { "type": "D", "before": "also", "after": null, "start_char_pos": 257, "end_char_pos": 261 }, { "type": "R", "before": "propose an abstract", "after": "use an evolutionary", "start_char_pos": 528, "end_char_pos": 547 }, { "type": "R", "before": ", and of their evolution", "after": "to identify the conditions under which the hourglass effect can emerge in a general setting", "start_char_pos": 605, "end_char_pos": 629 }, { "type": "R", "before": "captures how the \"functional state\" of genes change as development progresses in the form of a hierarchical network. It also captures the", "after": "focuses on the hierarchical gene regulatory network that controls the developmental process, and on the", "start_char_pos": 642, "end_char_pos": 779 }, { "type": "R", "before": "this regulatory", "after": "that", "start_char_pos": 853, "end_char_pos": 868 }, { "type": "R", "before": "terms of the number of state-transitioning genes during development. Additionally, the", "after": "the structure of a temporal representation of the underlying gene regulatory network. The", "start_char_pos": 973, "end_char_pos": 1059 }, { "type": "R", "before": "those", "after": "the corresponding", "start_char_pos": 1080, "end_char_pos": 1085 }, { "type": "D", "before": "condition", "after": null, "start_char_pos": 1194, "end_char_pos": 1203 }, { "type": "R", "before": "We have confirmed the theoretical predictions of the model with", "after": "Analysis of developmental", "start_char_pos": 1338, "end_char_pos": 1401 }, { "type": "A", "before": null, "after": "provide consistent results with our theoretical predictions", "start_char_pos": 1481, "end_char_pos": 1481 } ]
[ 0, 236, 333, 424, 518, 631, 758, 877, 1041, 1185, 1337 ]
1309.4936
1
Adjusting the metabolic organization to the environment by tuning enzyme expression levels is crucial for cellular growth , in particular in a changing environment or during metabolic adaptation . Metabolic networks are often studied with optimization methods applied to constraint-based steady state models. But, a corresponding dynamic modeling framework including a tight interplay between metabolic fluxes and gene expression is currently lacking. Due to that, the cost of producing enzymes so far could not be taken into account in dynamic optimization of metabolic fluxes. Here , we present a modeling framework combining the metabolic network and the enzyme production costs. A rigorous mathematical approximation by a timescale separation yields a coupled model of quasi steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. Based on this model, we propose a dynamic optimization approach to determine reaction fluxes, explicitly taking production costs for enzymes and enzymatic capacity into account . In contrast to the established dynamic flux balance analysis, the proposed approach thereby allows to analyse dynamic changes in both the metabolic fluxes and the detailed biomass composition in situations of metabolic adaptation .
The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments . Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition, explicitly taking into account enzyme production costs and enzymatic capacity . In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle .
[ { "type": "R", "before": "Adjusting the metabolic URLanization to the environment", "after": "The regulation of metabolic activity", "start_char_pos": 0, "end_char_pos": 55 }, { "type": "R", "before": "for cellular growth , in particular in a changing environment or during metabolic adaptation", "after": "to sustain cellular growth in changing environments", "start_char_pos": 102, "end_char_pos": 194 }, { "type": "R", "before": "with optimization methods applied to constraint-based steady state models. But, a corresponding dynamic modeling framework including a tight interplay between metabolic fluxes and gene expression is currently lacking. Due to that, the cost of producing enzymes so far could not be taken into account in dynamic optimization of metabolic fluxes. Here ,", "after": "at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here", "start_char_pos": 234, "end_char_pos": 585 }, { "type": "R", "before": "modeling framework combining", "after": "dynamic optimization framework that integrates", "start_char_pos": 599, "end_char_pos": 627 }, { "type": "R", "before": "and the enzyme production costs. A rigorous mathematical approximation by a timescale separation yields a coupled model of quasi steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. Based on this model, we propose a dynamic optimization approach to determine reaction fluxes, explicitly taking production costs for enzymes", "after": "with the dynamics of biomass production and composition, explicitly taking into account enzyme production costs", "start_char_pos": 650, "end_char_pos": 1057 }, { "type": "D", "before": "into account", "after": null, "start_char_pos": 1081, "end_char_pos": 1093 }, { "type": "R", "before": "the proposed approach thereby allows to analyse", "after": "our approach allows predicting", "start_char_pos": 1158, "end_char_pos": 1205 }, { "type": "R", "before": "detailed biomass composition in situations of metabolic adaptation", "after": "biomass composition during metabolic adaptations. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle", "start_char_pos": 1259, "end_char_pos": 1325 } ]
[ 0, 196, 308, 451, 578, 682, 916, 1095 ]
1309.5033
1
Bacterial spores in a metabolically dormant state can survive long periods without nutrients under extreme environmental conditions. The molecular basis of spore dormancy is not well understood, but the distribution and physical state of water within the spore is thought to play an important role. Two scenarios have been proposed for the spore's core region, containing the DNA and most enzymes. In the gel scenario, the core is a structured macromolecular framework permeated by mobile water. In the glass scenario, the entire core, including the water, is an amorphous solid and the quenched molecular diffusion accounts for the spore's dormancy and thermal stability. Here, we use 2H magnetic relaxation dispersion to selectively monitor water mobility in the core of Bacillus subtilis spores in the presence and absence of core Mn2+ ions. We also report and analyze the solid-state 2H NMR spectrum from these spores. Our NMR data clearly support the gel scenario with highly mobile core water (~ 25 ps average rotational correlation time). Furthermore, we find that the large depot of manganese in the core is nearly anhydrous, with merely 1.7 \% on average of the maximum sixfold water coordination.
Bacterial spores in a metabolically dormant state can survive long periods without nutrients under extreme environmental conditions. The molecular basis of spore dormancy is not well understood, but the distribution and physical state of water within the spore is thought to play an important role. Two scenarios have been proposed for the spore's core region, containing the DNA and most enzymes. In the gel scenario, the core is a structured macromolecular framework permeated by mobile water. In the glass scenario, the entire core, including the water, is an amorphous solid and the quenched molecular diffusion accounts for the spore's dormancy and thermal stability. Here, we use ^2H magnetic relaxation dispersion to selectively monitor water mobility in the core of Bacillus subtilis spores in the presence and absence of core Mn^{2+} ions. We also report and analyze the solid-state ^2H NMR spectrum from these spores. Our NMR data clearly support the gel scenario with highly mobile core water (~ 25 ps average rotational correlation time). Furthermore, we find that the large depot of manganese in the core is nearly anhydrous, with merely 1.7 \% on average of the maximum sixfold water coordination.
[ { "type": "R", "before": "2H", "after": "^2H", "start_char_pos": 686, "end_char_pos": 688 }, { "type": "R", "before": "Mn2+", "after": "Mn^{2+", "start_char_pos": 834, "end_char_pos": 838 }, { "type": "R", "before": "2H", "after": "^2H", "start_char_pos": 888, "end_char_pos": 890 } ]
[ 0, 132, 298, 397, 495, 672, 844, 922, 1045 ]
1309.5209
1
We unravel how functional plasticity and redundancy are essential mechanisms underlying the ability to survive of metabolic networks. For that, we perform an exhaustive computational screening of synthetic lethal reaction pairs in Escherichia coli in minimal medium and find that synthetic lethals divide in two different groups depending on whether the synthetic lethal interaction works as a back up or as a parallel use , the first corresponding to essential plasticity and the second to essential redundancy. In E. coli, the analysis of how pathways are entangled through essential plasticity and redundancy supports the view that synthetic lethality affects preferentially a single function or pathway , although with a major exception which unveils Cell Envelope Biosysthesis as an essential backup to Membrane Lipid Metabolism. When comparing E. coli and Mycoplasma pneumoniae, we find that the metabolic networks of the organisms exhibit opposite relationships between the relative importance of plasticity and redundancy, consistent with the conjecture that plasticity is a more sophisticated mechanism that requires a more complex organization. Finally, coessential reaction pairs are explored in different environmental conditions to uncover the interplay between the two mechanisms. We find that synthetic lethal interactions and their classification in plasticity and redundancy are basically insensitive to minimal medium composition, and are highly conserved even when the environment is enriched with nonessential compounds .
We unravel how functional plasticity and redundancy are essential mechanisms underlying the ability to survive of metabolic networks. For that, we perform an exhaustive computational screening of synthetic lethal reaction pairs in Escherichia coli in minimal medium and find that synthetic lethals divide in two different groups depending on whether the synthetic lethal interaction works as a back up or as a parallel use mechanism , the first corresponding to essential plasticity and the second to essential redundancy. In E. coli, the analysis of how pathways are entangled through essential redundancy supports the view that synthetic lethality affects preferentially a single function or pathway . In contrast, essential plasticity, the dominant class, tends to be inter-pathway but concentrated and unveils Cell Envelope Biosynthesis as an essential backup to Membrane Lipid Metabolism. When comparing E. coli and Mycoplasma pneumoniae, we find that the metabolic networks of the organisms exhibit opposite relationships between the relative importance of plasticity and redundancy, consistent with the conjecture that plasticity is a more sophisticated mechanism that requires a more complex organization. Finally, coessential reaction pairs are explored in different environmental conditions to uncover the interplay between the two mechanisms. We find that synthetic lethal interactions and their classification in plasticity and redundancy are basically insensitive to minimal medium composition, and are highly conserved even when the environment is enriched with nonessential compounds or overconstrained to decrease maximum biomass formation .
[ { "type": "A", "before": null, "after": "mechanism", "start_char_pos": 423, "end_char_pos": 423 }, { "type": "D", "before": "plasticity and", "after": null, "start_char_pos": 587, "end_char_pos": 601 }, { "type": "R", "before": ", although with a major exception which", "after": ". In contrast, essential plasticity, the dominant class, tends to be inter-pathway but concentrated and", "start_char_pos": 708, "end_char_pos": 747 }, { "type": "R", "before": "Biosysthesis", "after": "Biosynthesis", "start_char_pos": 770, "end_char_pos": 782 }, { "type": "A", "before": null, "after": "or overconstrained to decrease maximum biomass formation", "start_char_pos": 1533, "end_char_pos": 1533 } ]
[ 0, 133, 513, 835, 1147, 1287 ]
1309.5778
1
Collective dynamics and force generation by cytoskeletal filaments are crucial in many cellular processes. Investigating growth dynamics of a bundle of N independent cytoskeletal filaments pushing against a wall, we show that ATP/GTP hydrolysis leads to a collective phenomena that is currently unknown. Obtaining force-velocity relations for different models that capture chemical switching, we show, analytically and numerically, that the collective stall force of N filaments is greater than N times the stall force of a single filament. Simulating growing actin and microtubule bundles, considering both sequential and random hydrolysis, we make quantitative predictions of the excess forces .
Collective dynamics and force generation by cytoskeletal filaments are crucial in many cellular processes. Investigating growth dynamics of a bundle of N independent cytoskeletal filaments pushing against a wall, we show that chemical switching ( ATP/GTP hydrolysis ) leads to a collective phenomenon that is currently unknown. Obtaining force-velocity relations for different models that capture chemical switching, we show, analytically and numerically, that the collective stall force of N filaments is greater than N times the stall force of a single filament. Employing an exactly solvable toy model, we analytically prove the above result for N=2. We, further, numerically show the existence of this collective phenomenon, for N>=2, in realistic models (with random and sequential hydrolysis) that simulate actin and microtubule bundle growth. We make quantitative predictions for the excess forces , and argue that this collective effect is related to the non-equilibrium nature of chemical switching .
[ { "type": "A", "before": null, "after": "chemical switching (", "start_char_pos": 226, "end_char_pos": 226 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 246, "end_char_pos": 246 }, { "type": "R", "before": "phenomena", "after": "phenomenon", "start_char_pos": 269, "end_char_pos": 278 }, { "type": "R", "before": "Simulating growing actin and microtubule bundles, considering both sequential and random hydrolysis, we", "after": "Employing an exactly solvable toy model, we analytically prove the above result for N=2. We, further, numerically show the existence of this collective phenomenon, for N>=2, in realistic models (with random and sequential hydrolysis) that simulate actin and microtubule bundle growth. We", "start_char_pos": 543, "end_char_pos": 646 }, { "type": "R", "before": "of", "after": "for", "start_char_pos": 677, "end_char_pos": 679 }, { "type": "A", "before": null, "after": ", and argue that this collective effect is related to the non-equilibrium nature of chemical switching", "start_char_pos": 698, "end_char_pos": 698 } ]
[ 0, 106, 305, 542 ]
1309.5806
1
We decompose, within an ARCH framework, the daily volatility of stocks into overnight and intraday contributions. We find, as perhaps expected, that the overnight and intraday returns behave completely differently. For example, while past intraday returns affect equally the future intraday and overnight volatilities, past overnight returns have a weak effect on future intraday volatilities (except for the very next one) but impact substantially future overnight volatilities. The exogenous component of overnight volatilities is found to be close to zero, which means that the lion's share of overnight volatility comes from feedback effects. The residual kurtosis of returns is small for intraday returns but infinite for overnight returns. We provide a plausible interpretation for these findings, and show that our IntraDay /Overnight model significantly outperforms the standard ARCH framework based on daily returns for Out-of-Sample predictions.
We decompose, within an ARCH framework, the daily volatility of stocks into overnight and intra-day contributions. We find, as perhaps expected, that the overnight and intra-day returns behave completely differently. For example, while past intra-day returns affect equally the future intra-day and overnight volatilities, past overnight returns have a weak effect on future intra-day volatilities (except for the very next one) but impact substantially future overnight volatilities. The exogenous component of overnight volatilities is found to be close to zero, which means that the lion's share of overnight volatility comes from feedback effects. The residual kurtosis of returns is small for intra-day returns but infinite for overnight returns. We provide a plausible interpretation for these findings, and show that our Intra-Day /Overnight model significantly outperforms the standard ARCH framework based on daily returns for Out-of-Sample predictions.
[ { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 90, "end_char_pos": 98 }, { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 167, "end_char_pos": 175 }, { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 239, "end_char_pos": 247 }, { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 282, "end_char_pos": 290 }, { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 371, "end_char_pos": 379 }, { "type": "R", "before": "intraday", "after": "intra-day", "start_char_pos": 693, "end_char_pos": 701 }, { "type": "R", "before": "IntraDay", "after": "Intra-Day", "start_char_pos": 822, "end_char_pos": 830 } ]
[ 0, 113, 214, 479, 646, 745 ]
1309.6066
1
Both physiological response and evolutionary adaptation modify the phenotype, but they act at different time scales. Because gene regulatory networks (GRN) govern phenotypic adaptations , they reflect the trade-offs between these different forces . To identify patterns of molecular function and genetic diversity in GRNs, we studied the drought response of the common sunflower, Helianthus annuus, and how the underlying GRN has influenced its evolution. We examined the responses of 32,423 expressed sequences to drought and to the hormone abscisic acid and selected 145 co-expressed transcripts. We characterized their regulatory relationships in nine kinetic studies based on different hormones. From this, we inferred a GRN by meta-analyses of a Gaussian Graphical model and a Random Forest algorithm and studied the genetic diversity of its nodes. We identified two main hubs in the network that transport nitrate in guard cells. This suggests that this function is key in sunflower physiological response to drought. Among Helianthus populations, we observed that more highly connected nodes in the GRN had lower genetic diversity . This systems biology approach combined molecular data at different time scales and identified important physiological processes. At the evolutionary level, we propose that network topology constrained adaptation to dry environment and thus speciation .
Gene regulatory networks (GRN) govern phenotypic adaptations and reflect the trade-offs between physiological responses and evolutionary adaptation that act at different time scales . To identify patterns of molecular function and genetic diversity in GRNs, we studied the drought response of the common sunflower, Helianthus annuus, and how the underlying GRN is related to its evolution. We examined the responses of 32,423 expressed sequences to drought and to abscisic acid and selected 145 co-expressed transcripts. We characterized their regulatory relationships in nine kinetic studies based on different hormones. From this, we inferred a GRN by meta-analyses of a Gaussian graphical model and a random forest algorithm and studied the genetic differentiation among populations (FST) at nodes. We identified two main hubs in the network that transport nitrate in guard cells. This suggests that nitrate transport is a critical aspect of sunflower physiological response to drought. We observed that differentiation of the network genes in elite sunflower cultivars is correlated with their position and connectivity . This systems biology approach combined molecular data at different time scales and identified important physiological processes. At the evolutionary level, we propose that network topology could influence responses to human selection and possibly adaptation to dry environments .
[ { "type": "R", "before": "Both physiological response and evolutionary adaptation modify the phenotype, but they act at different time scales. Because gene", "after": "Gene", "start_char_pos": 0, "end_char_pos": 129 }, { "type": "R", "before": ", they", "after": "and", "start_char_pos": 186, "end_char_pos": 192 }, { "type": "R", "before": "these different forces", "after": "physiological responses and evolutionary adaptation that act at different time scales", "start_char_pos": 224, "end_char_pos": 246 }, { "type": "R", "before": "has influenced", "after": "is related to", "start_char_pos": 426, "end_char_pos": 440 }, { "type": "D", "before": "the hormone", "after": null, "start_char_pos": 530, "end_char_pos": 541 }, { "type": "R", "before": "Graphical", "after": "graphical", "start_char_pos": 760, "end_char_pos": 769 }, { "type": "R", "before": "Random Forest", "after": "random forest", "start_char_pos": 782, "end_char_pos": 795 }, { "type": "R", "before": "diversity of its", "after": "differentiation among populations (FST) at", "start_char_pos": 830, "end_char_pos": 846 }, { "type": "R", "before": "this function is key in", "after": "nitrate transport is a critical aspect of", "start_char_pos": 955, "end_char_pos": 978 }, { "type": "R", "before": "Among Helianthus populations, we observed that more highly connected nodes in the GRN had lower genetic diversity", "after": "We observed that differentiation of the network genes in elite sunflower cultivars is correlated with their position and connectivity", "start_char_pos": 1024, "end_char_pos": 1137 }, { "type": "R", "before": "constrained", "after": "could influence responses to human selection and possibly", "start_char_pos": 1329, "end_char_pos": 1340 }, { "type": "R", "before": "environment and thus speciation", "after": "environments", "start_char_pos": 1359, "end_char_pos": 1390 } ]
[ 0, 116, 248, 455, 598, 699, 853, 935, 1023, 1139, 1268 ]
1309.6141
1
This paper extends results from Mortimer and Williams (1991) about changes of probability measure up to random times. Many new classes of examples involving honest times and pseudo-stopping times are provided . Furthermore, we discuss the question of market viability up to a random time .
This paper extends results of Mortimer and Williams (1991) about changes of probability measure up to a random time under the assumptions that all martingales are continuous and that the random time avoids stopping times. We consider locally absolutely continuous measure changes up to a random time, changes of probability measure up to and after an honest time, and changes of probability measure up to a pseudo-stopping time. Moreover, we apply our results to construct a change of probability measure that is equivalent to the enlargement formula and to build for a certain class of pseudo-stopping times a class of measure changes that preserve the pseudo-stopping time property . Furthermore, we study for a price process modeled by a continuous semimartingale the stability of the No Free Lunch with Vanishing Risk (NFLVR) property up to a random time , that avoids stopping times, in the progressively enlarged filtration and provide sufficient conditions for this stability in terms of the Az\'ema supermartingale .
[ { "type": "R", "before": "from", "after": "of", "start_char_pos": 27, "end_char_pos": 31 }, { "type": "R", "before": "random times. Many new classes of examples involving honest times and", "after": "a random time under the assumptions that all martingales are continuous and that the random time avoids stopping times. We consider locally absolutely continuous measure changes up to a random time, changes of probability measure up to and after an honest time, and changes of probability measure up to a", "start_char_pos": 104, "end_char_pos": 173 }, { "type": "R", "before": "times are provided", "after": "time. Moreover, we apply our results to construct a change of probability measure that is equivalent to the enlargement formula and to build for a certain class of pseudo-stopping times a class of measure changes that preserve the pseudo-stopping time property", "start_char_pos": 190, "end_char_pos": 208 }, { "type": "R", "before": "discuss the question of market viability", "after": "study for a price process modeled by a continuous semimartingale the stability of the No Free Lunch with Vanishing Risk (NFLVR) property", "start_char_pos": 227, "end_char_pos": 267 }, { "type": "A", "before": null, "after": ", that avoids stopping times, in the progressively enlarged filtration and provide sufficient conditions for this stability in terms of the Az\\'ema supermartingale", "start_char_pos": 288, "end_char_pos": 288 } ]
[ 0, 117, 210 ]
1309.7119
1
The prediction of a stock market direction may serve as an early recommendation system for short-term investors and as an early financial distress warning system for long-term shareholders. In this paper, we propose an empirical study on the Korean and Hong Kong stock market with an integrated machine learning framework that employs Principal Component Analysis (PCA) and Support Vector Machine (SVM). We try to predict the upward or downward direction of stock market index and stock price . In the proposed framework, PCA, as a feature selection method, identifies principal components in the stock market movement and SVM, as a classifier for future stock market movement, processes them along with other economic factors in training and forecasting. We present the results of an extensive empirical study of the proposed method on the Korean composite stock price index (KOSPI) and Hangseng index (HSI), as well as the individual constituents included in the indices. In our experiment, ten years data (from January 1st, 2002 to January 1st, 2012) are collected and schemed by rolling windows to predict one-day-ahead directions. The experimental results show notably high hit ratios in predicting the movements of the individual constituents in the KOSPI and HSI. The results also varify the co-movement effect between the Korean (Hong Kong) stock market and the American stock market .
The prediction of a stock market direction may serve as an early recommendation system for short-term investors and as an early financial distress warning system for long-term shareholders. Many stock prediction studies focus on using macroeconomic indicators, such as CPI and GDP, to train the prediction model. However, daily data of the macroeconomic indicators are almost impossible to obtain. Thus, those methods are difficult to be employed in practice. In this paper, we propose a method that directly uses prices data to predict market index direction and stock price direction. An extensive empirical study of the proposed method is presented on the Korean Composite Stock Price Index (KOSPI) and Hang Seng Index (HSI), as well as the individual constituents included in the indices. The experimental results show notably high hit ratios in predicting the movements of the individual constituents in the KOSPI and HIS .
[ { "type": "A", "before": null, "after": "Many stock prediction studies focus on using macroeconomic indicators, such as CPI and GDP, to train the prediction model. However, daily data of the macroeconomic indicators are almost impossible to obtain. Thus, those methods are difficult to be employed in practice.", "start_char_pos": 190, "end_char_pos": 190 }, { "type": "R", "before": "an empirical study on the Korean and Hong Kong stock market with an integrated machine learning framework that employs Principal Component Analysis (PCA) and Support Vector Machine (SVM). We try to predict the upward or downward direction of stock market index", "after": "a method that directly uses prices data to predict market index direction", "start_char_pos": 217, "end_char_pos": 477 }, { "type": "R", "before": ". In the proposed framework, PCA, as a feature selection method, identifies principal components in the stock market movement and SVM, as a classifier for future stock market movement, processes them along with other economic factors in training and forecasting. We present the results of an", "after": "direction. An", "start_char_pos": 494, "end_char_pos": 785 }, { "type": "A", "before": null, "after": "is presented", "start_char_pos": 835, "end_char_pos": 835 }, { "type": "R", "before": "composite stock price index", "after": "Composite Stock Price Index", "start_char_pos": 850, "end_char_pos": 877 }, { "type": "R", "before": "Hangseng index", "after": "Hang Seng Index", "start_char_pos": 890, "end_char_pos": 904 }, { "type": "D", "before": "In our experiment, ten years data (from January 1st, 2002 to January 1st, 2012) are collected and schemed by rolling windows to predict one-day-ahead directions.", "after": null, "start_char_pos": 976, "end_char_pos": 1137 }, { "type": "D", "before": "HSI. The results also varify the", "after": null, "start_char_pos": 1268, "end_char_pos": 1300 }, { "type": "D", "before": "co-movement", "after": null, "start_char_pos": 1300, "end_char_pos": 1311 }, { "type": "R", "before": "effect between the Korean (Hong Kong) stock market and the American stock market", "after": "HIS", "start_char_pos": 1312, "end_char_pos": 1392 } ]
[ 0, 189, 404, 495, 756, 975, 1137, 1272 ]
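The record above describes an integrated PCA-plus-SVM framework evaluated with rolling windows and one-day-ahead hit ratios. A minimal sketch of that scheme, assuming scikit-learn and purely synthetic stand-in features (the paper's actual inputs are not specified here), could look like this:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_days, n_features = 500, 20
X = rng.normal(size=(n_days, n_features))     # stand-in daily feature vectors
# y[t] plays the role of the next day's up/down direction, made to depend
# on day-t features so the classifier has something to learn
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_days) > 0).astype(int)

model = Pipeline([
    ("scale", StandardScaler()),        # put all inputs on a common scale
    ("pca", PCA(n_components=5)),       # keep the leading principal components
    ("svm", SVC(kernel="rbf", C=1.0)),  # classify the direction
])

window, hits = 250, 0
for t in range(window, n_days):
    model.fit(X[t - window:t], y[t - window:t])   # rolling training window
    hits += int(model.predict(X[t:t + 1])[0] == y[t])
print(f"one-day-ahead hit ratio: {hits / (n_days - window):.3f}")
```

On real data the feature vectors would be built from lagged prices and the other economic factors the record mentions; the 250-day window above is an arbitrary illustrative choice.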
1309.7643
1
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of Cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering and MSA/MRA 2D classification .
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of Cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering , MSA/MRA 2D classification , and their modern approximations .
[ { "type": "R", "before": "and", "after": ",", "start_char_pos": 1227, "end_char_pos": 1230 }, { "type": "A", "before": null, "after": ", and their modern approximations", "start_char_pos": 1257, "end_char_pos": 1257 } ]
[ 0, 200, 265, 442, 531, 718, 801, 894, 1034 ]
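The key invariance claimed above is easiest to check in one dimension, where a cyclic shift plays the role of an in-plane rotation: shifting phase-multiplies the Fourier coefficients, and those phases cancel exactly in the bispectrum. A toy numpy verification (our 1-D simplification, not the authors' 2-D steerable-PCA pipeline):

```python
import numpy as np

def bispectrum(x):
    f = np.fft.fft(x)
    n = len(x)
    k1, k2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    # B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)): the shift phases cancel
    return f[k1] * f[k2] * np.conj(f[(k1 + k2) % n])

rng = np.random.default_rng(1)
x = rng.normal(size=64)
x_shifted = np.roll(x, 17)              # the 1-D analogue of a rotation

diff = np.max(np.abs(bispectrum(x) - bispectrum(x_shifted)))
print(f"max |B(x) - B(shifted x)| = {diff:.2e}")   # ~1e-12, i.e. invariant
```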
1310.1020
1
We study the shapes of the implied volatility when the underlying distribution has an atom at zero. We show that the behaviour at small strikes is uniquely determined by the mass of the atom at least up to the third asymptotic order, regardless of the properties of the remaining (absolutely continuous, or singular) distribution on the positive real line. We investigate the structural difference with the no-mass-at-zero case, showing how one can-a priori-distinguish between mass at the origin and a heavy-left-tailed distribution. An atom at zero is found in stochastic models with absorption at the boundary, such as the CEV process, and can be used to model default events, as in the class of jump-to-default structural models of credit risk. We numerically test our model-free result in such examples. Note that while Lee's moment formula tells that implied variance is at most asymptotically linear in log-strike, other celebrated results for exact smile asymptotics such as Benaim and Friz (09) or Gulisashvili (10) do not apply in this setting-essentially due to the breakdown of Put-Call symmetry-and we rely here on an alternative treatment of the problem.
We study the shapes of the implied volatility when the underlying distribution has an atom at zero. We show that the behaviour at small strikes is uniquely determined by the mass of the atom up to the third asymptotic order, under mild assumptions on the remaining distribution on the positive real line. We investigate the structural difference with the no-mass-at-zero case, showing how one can--a priori--distinguish between mass at the origin and a heavy-left-tailed distribution. An atom at zero is found in stochastic models with absorption at the boundary, such as the CEV process, and can be used to model default events, as in the class of jump-to-default structural models of credit risk. We numerically test our model-free result in such examples. Note that while Lee's moment formula tells that implied variance is at most asymptotically linear in log-strike, other celebrated results for exact smile asymptotics such as Benaim and Friz (09) or Gulisashvili (10) do not apply in this setting--essentially due to the breakdown of Put-Call symmetry--and one has to rely on a new treatment of the problem.
[ { "type": "D", "before": "at least", "after": null, "start_char_pos": 191, "end_char_pos": 199 }, { "type": "R", "before": "regardless of the properties of the remaining (absolutely continuous, or singular)", "after": "under mild assumptions on the remaining", "start_char_pos": 234, "end_char_pos": 316 }, { "type": "R", "before": "can-a priori-distinguish", "after": "can--a priori--distinguish", "start_char_pos": 445, "end_char_pos": 469 }, { "type": "R", "before": "setting-essentially", "after": "setting--essentially", "start_char_pos": 1046, "end_char_pos": 1065 }, { "type": "R", "before": "symmetry-and we rely here on an alternative", "after": "symmetry--and one has to rely on a new", "start_char_pos": 1099, "end_char_pos": 1142 } ]
[ 0, 99, 356, 534, 748, 808 ]
1310.1020
2
We study the shapes of the implied volatility when the underlying distribution has an atom at zero . We show that the behaviour at small strikes is uniquely determined by the mass of the atom up to the third asymptotic order, under mild assumptions on the remaining distribution on the positive real line. We investigate the structural difference with the no-mass-at-zero case, showing how one can--a priori--distinguish between mass at the origin and a heavy-left-tailed distribution. An atom at zero is found in stochastic models with absorption at the boundary, such as the CEV process, and can be used to model default events, as in the class of jump-to-default structural models of credit risk. We numerically test our model-free result in such examples. Note that while Lee's moment formula tells that implied variance is at most asymptotically linear in log-strike, other celebrated results for exact smile asymptotics such as Benaim and Friz (09) or Gulisashvili (10) do not apply in this setting--essentially due to the breakdown of Put-Call symmetry--and one has to rely on a new treatment of the problem .
We study the shapes of the implied volatility when the underlying distribution has an atom at zero and analyse the impact of a mass at zero on at-the-money implied volatility and the overall level of the smile. We further show that the behaviour at small strikes is uniquely determined by the mass of the atom up to high asymptotic order, under mild assumptions on the remaining distribution on the positive real line. We investigate the structural difference with the no-mass-at-zero case, showing how one can--theoretically--distinguish between mass at the origin and a heavy-left-tailed distribution. We numerically test our model-free results in stochastic models with absorption at the boundary, such as the CEV process, and in jump-to-default models. Note that while Lee's moment formula tells that implied variance is at most asymptotically linear in log-strike, other celebrated results for exact smile asymptotics such as Benaim and Friz (09) or Gulisashvili (10) do not apply in this setting--essentially due to the breakdown of Put-Call duality .
[ { "type": "R", "before": ". We", "after": "and analyse the impact of a mass at zero on at-the-money implied volatility and the overall level of the smile. We further", "start_char_pos": 99, "end_char_pos": 103 }, { "type": "R", "before": "the third", "after": "high", "start_char_pos": 198, "end_char_pos": 207 }, { "type": "R", "before": "can--a priori--distinguish", "after": "can--theoretically--distinguish", "start_char_pos": 394, "end_char_pos": 420 }, { "type": "R", "before": "An atom at zero is found", "after": "We numerically test our model-free results", "start_char_pos": 486, "end_char_pos": 510 }, { "type": "R", "before": "can be used to model default events, as in the class of", "after": "in", "start_char_pos": 594, "end_char_pos": 649 }, { "type": "R", "before": "structural modelsof credit risk. We numerically test our model-free result in such examples.", "after": "models.", "start_char_pos": 666, "end_char_pos": 758 }, { "type": "R", "before": "at most", "after": "at most", "start_char_pos": 827, "end_char_pos": 834 }, { "type": "R", "before": "symmetry--and one has to rely on a new treatment of the problem", "after": "duality", "start_char_pos": 1050, "end_char_pos": 1113 } ]
[ 0, 100, 305, 485, 698, 758 ]
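A quick way to see the left-wing steepening described in this pair of records is to price puts under a toy mixture law, an atom at zero of mass p plus a lognormal component, and invert Black-Scholes. The mixture itself is our illustrative assumption (not the paper's CEV or jump-to-default examples); only p, S0, sigma and T below are free parameters:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

S0, T, sigma, p = 1.0, 0.5, 0.2, 0.03   # p = mass at zero (default probability)

def bs_put(vol, K, F):
    # Black-Scholes put on forward F with zero rates
    d1 = (np.log(F / K) + 0.5 * vol**2 * T) / (vol * np.sqrt(T))
    d2 = d1 - vol * np.sqrt(T)
    return K * norm.cdf(-d2) - F * norm.cdf(-d1)

def implied_vol(price, K):
    return brentq(lambda v: bs_put(v, K, S0) - price, 1e-8, 5.0)

for K in [0.5, 0.7, 0.9, 1.0, 1.2]:
    # S_T = 0 with probability p; otherwise lognormal with forward
    # S0 / (1 - p), so the mixture stays a martingale with E[S_T] = S0
    price = p * K + (1 - p) * bs_put(sigma, K, S0 / (1 - p))
    print(f"K = {K:.1f}   implied vol = {implied_vol(price, K):.3f}")
```

Running this shows implied volatilities blowing up as K decreases, the signature of the mass at zero that the records analyse asymptotically.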
1310.1142
1
This paper addresses the question of how an arbitrage-free semimartingale model is affected when stopped at a random horizon or when an honest time is incorporated. Precisely, we focus on No-Unbounded-Profit-with-Bounded-Risk (called NUPBR hereafter) concept, which is also known in the literature as the first kind of non-arbitrage. Herein, we prove that any quasi-left-continuous process satisfying NUPBR, will preserve the NUPBR when one stops it at any random horizon or when one incorporates a specific honest time. For the general case of semimartingale, we provide necessary and sufficient conditions for the NUPBR to be preserved. Precisely, we elaborate two types of results. For a fixed semimartingale and a random time, we provide necessary and sufficient conditions on the process and the random time for which the non-arbitrage concept remains valid . The second type of results consists of giving necessary and sufficient conditions on the random time for which the non-arbitrage is preserved for any process. Our class of honest times ---that we consider in the paper--- is much larger than the class of all stopping times, and plays an important r\^ole in classifying random times. The crucial stochastic tool that drives our analysis lies in the optional stochastic integral (or compensated stochastic integral) that was introduced in early eighties .
This paper addresses the question of how an arbitrage-free semimartingale model is affected when stopped at a random horizon . We focus on No-Unbounded-Profit-with-Bounded-Risk (called NUPBR hereafter) concept, which is also known in the literature as the first kind of non-arbitrage. For this non-arbitrage notion, we obtain two principal results. The first result lies in describing the pairs of market model and random time for which the resulting stopped model fulfills NUPBR condition . The second main result characterises the random time models that preserve the NUPBR property after stopping for any market model. These results are elaborated in a very general market model, and we also pay attention to some particular and practical models. The analysis that drives these results is based on new stochastic developments in semimartingale theory with progressive enlargement. Furthermore, we construct explicit martingale densities (deflators) for some classes of local martingales when stopped at random time .
[ { "type": "R", "before": "or when an honest time is incorporated. Precisely, we", "after": ". We", "start_char_pos": 125, "end_char_pos": 178 }, { "type": "D", "before": "Herein, we prove that any quasi-left-continuous process satisfying NUPBR, will preserve the NUPBR when one stops it at any random horizon or when one incorporates a", "after": null, "start_char_pos": 334, "end_char_pos": 498 }, { "type": "D", "before": "specific", "after": null, "start_char_pos": 520, "end_char_pos": 528 }, { "type": "R", "before": "honest time. For the general case of semimartingale, we provide necessary and sufficient conditions for the NUPBR to be preserved. Precisely, we elaborate two types of results. For a fixed semimartingale and a random time, we provide necessary and sufficient conditions on the process and the", "after": "For this non-arbitrage notion, we obtain two principal results. The first result lies in describing the pairs of market model and", "start_char_pos": 529, "end_char_pos": 821 }, { "type": "R", "before": "non-arbitrage concept remains valid", "after": "resulting stopped model fulfills NUPBR condition", "start_char_pos": 848, "end_char_pos": 883 }, { "type": "R", "before": "type of results consists of giving necessary and sufficient conditions on", "after": "main result characterises", "start_char_pos": 897, "end_char_pos": 970 }, { "type": "D", "before": "for which the non-arbitrage is preserved for any process. Our class of honest times ---that we consider in the paper--- is much larger than the class of all stopping times, and plays an important r\\^ole in classifying random times. The crucial stochastic tool that drives our analysis lies in the", "after": null, "start_char_pos": 987, "end_char_pos": 1283 }, { "type": "D", "before": "optional stochastic integral", "after": null, "start_char_pos": 1305, "end_char_pos": 1333 }, { "type": "D", "before": "(or", "after": null, "start_char_pos": 1334, "end_char_pos": 1337 }, { "type": "D", "before": "compensated stochastic integral", "after": null, "start_char_pos": 1359, "end_char_pos": 1390 }, { "type": "R", "before": ") that was introduced in early eighties", "after": "models that preserve the NUPBR property after stopping for any market model. These results are elaborated in a very general market model, and we also pay attention to some particular and practical models. The analysis that drives these results is based on new stochastic developments in semimartingale theory with progressive enlargement. Furthermore, we construct explicit martingale densities (deflators) for some classes of local martingales when stopped at random time", "start_char_pos": 1391, "end_char_pos": 1430 } ]
[ 0, 164, 333, 541, 659, 705, 885, 1044, 1218 ]
1310.2033
1
Because of their tractability and their natural interpretations in term of market quantities, Hawkes processes are nowadays widely used in high frequency finance. However, in practice, the statistical estimation results seem to show that very often, only nearly unstable Hawkes processes are able to fit the data properly. By nearly unstable, we mean that the L1 norm of their kernel is close to unity. We study in this work such processes for which the stability condition is almost violated. Our main result states that after suitable rescaling, they asymptotically behave like integrated Cox Ingersoll Ross models. Thus, modeling financial order flows as nearly unstable Hawkes processes may be a good way to reproduce both their high and low frequency stylized facts. We then extend this result to the Hawkes based price model introduced by Bacry et al. We show that under a similar criticality condition, this process converges to a Heston model. Again, we recover well known stylized facts of prices, both at the microstructure level and at the macroscopic scale.
Because of their tractability and their natural interpretations in term of market quantities, Hawkes processes are nowadays widely used in high-frequency finance. However, in practice, the statistical estimation results seem to show that very often, only nearly unstable Hawkes processes are able to fit the data properly. By nearly unstable, we mean that the L^1 norm of their kernel is close to unity. We study in this work such processes for which the stability condition is almost violated. Our main result states that after suitable rescaling, they asymptotically behave like integrated Cox-Ingersoll-Ross models. Thus, modeling financial order flows as nearly unstable Hawkes processes may be a good way to reproduce both their high and low frequency stylized facts. We then extend this result to the Hawkes-based price model introduced by Bacry et al. Quant. Finance 13 (2013) 65-77 . We show that under a similar criticality condition, this process converges to a Heston model. Again, we recover well-known stylized facts of prices, both at the microstructure level and at the macroscopic scale.
[ { "type": "R", "before": "high frequency", "after": "high-frequency", "start_char_pos": 139, "end_char_pos": 153 }, { "type": "R", "before": "L1", "after": "L^1", "start_char_pos": 360, "end_char_pos": 362 }, { "type": "R", "before": "Cox Ingersoll Ross", "after": "Cox-Ingersoll-Ross", "start_char_pos": 591, "end_char_pos": 609 }, { "type": "R", "before": "Hawkes based", "after": "Hawkes-based", "start_char_pos": 806, "end_char_pos": 818 }, { "type": "A", "before": null, "after": "Quant. Finance 13 (2013) 65-77", "start_char_pos": 858, "end_char_pos": 858 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 859, "end_char_pos": 859 }, { "type": "R", "before": "well known", "after": "well-known", "start_char_pos": 972, "end_char_pos": 982 } ]
[ 0, 162, 322, 402, 493, 617, 771, 857, 953 ]
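For readers who want to reproduce the nearly unstable regime above, the exponential-kernel Hawkes process is Markovian and can be simulated exactly by thinning. A short sketch (parameter values are illustrative; the kernel L^1 norm alpha/beta is set close to 1):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Exact thinning for a Hawkes process with kernel alpha * exp(-beta * t)."""
    rng = np.random.default_rng(seed)
    t, excess, events = 0.0, 0.0, []
    while True:
        lam_bar = mu + excess                # intensity only decays between events
        w = rng.exponential(1.0 / lam_bar)
        t += w
        if t >= horizon:
            return np.array(events)
        excess *= np.exp(-beta * w)          # decayed self-excitation at time t
        if rng.uniform() * lam_bar <= mu + excess:
            events.append(t)
            excess += alpha                  # each event kicks the intensity by alpha

ev = simulate_hawkes(mu=0.5, alpha=1.9, beta=2.0, horizon=1000.0)
print(f"kernel L1 norm = {1.9 / 2.0:.2f} (nearly unstable)")
print(f"empirical rate = {len(ev) / 1000.0:.2f}, theory mu/(1 - a/b) = {0.5 / (1 - 0.95):.2f}")
```

Binning the event times and plotting counts per unit time shows the long bursts that, after the rescaling discussed above, resemble an integrated CIR variance path.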
1310.2100
1
Solutes added to solutions often dramatically impact molecular processes ranging from the suspension or precipitation of colloids to biomolecular associations and protein folding. Here we revisit the origins of the effective attractive interactions that emerge between and within macromolecules immersed in solutions containing cosolutes that are preferentially excluded from the macromolecular interfaces. Until recently, these depletion forces were considered to be entropic in nature, resulting primarily from the tendency to increase the space available to the cosolute. However, recent experimental evidence indicates the existence of energetically-dominated mechanisms. In this review we follow the emerging characteristics of the different observed mechanisms. By compiling a set of available thermodynamic data for processes ranging from protein folding to protein-protein interactions, we show that excluded cosolutes can act through different mechanisms that correlate to a large extent with their molecular properties. For many polymers at low to moderate concentrations the steric interactions and molecular crowding effects dominate, and the mechanism is entropic. To contrast, for many small excluded solutes, such as naturally occurring osmolytes, the mechanism is dominated by favorable enthalpy, whereas the entropic contribution is typically unfavorable. We review the available models for these effects , and comment on the need for new models that would be able to explain the full range of observed depletion forces.
Solutes added to solutions often dramatically impact molecular processes ranging from the suspension or precipitation of colloids to biomolecular associations and protein folding. Here we revisit the origins of the effective attractive interactions that emerge between and within macromolecules immersed in solutions containing cosolutes that are preferentially excluded from the macromolecular interfaces. Until recently, these depletion forces were considered to be entropic in nature, resulting primarily from the tendency to increase the space available to the cosolute. However, recent experimental evidence indicates the existence of additional, energetically-dominated mechanisms. In this review we follow the emerging characteristics of these different mechanisms. By compiling a set of available thermodynamic data for processes ranging from protein folding to protein-protein interactions, we show that excluded cosolutes can act through two distinct mechanisms that correlate to a large extent with their molecular properties. For many polymers at low to moderate concentrations the steric interactions and molecular crowding effects dominate, and the mechanism is entropic. To contrast, for many small excluded solutes, such as naturally occurring osmolytes, the mechanism is dominated by favorable enthalpy, whereas the entropic contribution is typically unfavorable. We review the available models for these thermodynamic mechanisms , and comment on the need for new models that would be able to explain the full range of observed depletion forces.
[ { "type": "A", "before": null, "after": "additional,", "start_char_pos": 640, "end_char_pos": 640 }, { "type": "R", "before": "the different observed", "after": "these different", "start_char_pos": 734, "end_char_pos": 756 }, { "type": "R", "before": "different", "after": "two distinct", "start_char_pos": 944, "end_char_pos": 953 }, { "type": "R", "before": "effects", "after": "thermodynamic mechanisms", "start_char_pos": 1415, "end_char_pos": 1422 } ]
[ 0, 179, 406, 574, 676, 768, 1030, 1178, 1373 ]
1310.2391
1
In the presence of ATP, molecular motors generate active force dipoles that drive suspensions of protein filaments far from thermodynamic equilibrium, leading to exotic dynamics and pattern formation. Microscopic modelling can help to quantify the relationship between individual motors plus filaments to the large-wavelength properties represented by "hydrodynamic" models . Here we present results of extensive numerical simulations of active gels where the motors and filaments are confined between two infinite parallel plates. Thermal fluctuations and excluded-volume interactions between filaments are included. A systematic variation of rates for motor motion, attachment and detachment, including a differential detachment rate from filament ends, reveals a range of non-equilibrium behaviour. Strong motor binding produces structured filament aggregates that we refer to as asters, bundles or layers, whose stability depends on motor speed and differential end-detachment. The gross features of the dependence of the observed structures on the motor rate and the filament concentration can be captured by a simple one-filament model. Reducing motor binding produces super-diffusive mass transport, where filament translocation scales with lag time with non-unique exponents that depend on motor kinetics. An empirical data collapse of filament speed as a function of motor speed and end-detachment is found, suggesting a dimensional reduction of the relevant parameter space. We conclude by discussing the perspectives of microscopic modelling in the field of active gels.
In the presence of ATP, molecular motors generate active force dipoles that drive suspensions of protein filaments far from thermodynamic equilibrium, leading to exotic dynamics and pattern formation. Microscopic modelling can help to quantify the relationship between individual motors plus filaments URLanisation and dynamics on molecular and supra-molecular length scales . Here we present results of extensive numerical simulations of active gels where the motors and filaments are confined between two infinite parallel plates. Thermal fluctuations and excluded-volume interactions between filaments are included. A systematic variation of rates for motor motion, attachment and detachment, including a differential detachment rate from filament ends, reveals a range of non-equilibrium behaviour. Strong motor binding produces structured filament aggregates that we refer to as asters, bundles or layers, whose stability depends on motor speed and differential end-detachment. The gross features of the dependence of the observed structures on the motor rate and the filament concentration can be captured by a simple one-filament model. Loosely bound aggregates exhibit super-diffusive mass transport, where filament translocation scales with lag time with non-unique exponents that depend on motor kinetics. An empirical data collapse of filament speed as a function of motor speed and end-detachment is found, suggesting a dimensional reduction of the relevant parameter space. We conclude by discussing the perspectives of microscopic modelling in the field of active gels.
[ { "type": "R", "before": "to the large-wavelength properties represented by \"hydrodynamic\" models", "after": "URLanisation and dynamics on molecular and supra-molecular length scales", "start_char_pos": 302, "end_char_pos": 373 }, { "type": "R", "before": "Reducing motor binding produces", "after": "Loosely bound aggregates exhibit", "start_char_pos": 1143, "end_char_pos": 1174 } ]
[ 0, 200, 375, 531, 617, 801, 981, 1142, 1313, 1484 ]
1310.2623
1
Control of complex processes is a major goal of network analyses. Unfortunately, deriving models accurate enough to be used for control is extremely difficult, especially for large networks of nonlinearly coupled nodes. However , system responses to perturbations are often easily measured. We show that the collection of such responses -a response surface- can be used for control. Analysis of model systems shows that response surfaces are smooth and can be approximated using data on a small set of perturbations. The methodology, here validated on nonlinear electrical circuits , can prove useful in many contexts including in reprogramming cellular states and in the design of therapies for genetic diseases .
Control of complex processes is a major goal of network analyses. Most approaches to control nonlinearly coupled systems require the network topology and/or network dynamics. Unfortunately, neither the full set of participating nodes nor the network topology is known for many important systems. On the other hand , system responses to perturbations are often easily measured. We show how the collection of such responses (a response surface) can be used for network control. Analyses of model systems show that response surfaces are smooth and hence can be approximated using low order polynomials. Importantly, these approximations are largely insensitive to stochastic fluctuations in data or measurement errors. They can be used to compute how a small set of nodes need to be altered in order to direct the network close to a pre-specified target state. These ideas, illustrated on a nonlinear electrical circuit , can prove useful in many contexts including in reprogramming cellular states .
[ { "type": "R", "before": "Unfortunately, deriving models accurate enough to be used for control is extremely difficult, especially for large networks of nonlinearly coupled nodes. However", "after": "Most approaches to control nonlinearly coupled systems require the network topology and/or network dynamics. Unfortunately, neither the full set of participating nodes nor the network topology is known for many important systems. On the other hand", "start_char_pos": 66, "end_char_pos": 227 }, { "type": "R", "before": "that", "after": "how", "start_char_pos": 299, "end_char_pos": 303 }, { "type": "R", "before": "-a response surface-", "after": "(a response surface)", "start_char_pos": 337, "end_char_pos": 357 }, { "type": "R", "before": "control. Analysis", "after": "network control. Analyses", "start_char_pos": 374, "end_char_pos": 391 }, { "type": "R", "before": "shows", "after": "show", "start_char_pos": 409, "end_char_pos": 414 }, { "type": "A", "before": null, "after": "hence", "start_char_pos": 453, "end_char_pos": 453 }, { "type": "R", "before": "data on", "after": "low order polynomials. Importantly, these approximations are largely insensitive to stochastic fluctuations in data or measurement errors. They can be used to compute how", "start_char_pos": 480, "end_char_pos": 487 }, { "type": "R", "before": "perturbations. The methodology, here validated on nonlinear electrical circuits", "after": "nodes need to be altered in order to direct the network close to a pre-specified target state. These ideas, illustrated on a nonlinear electrical circuit", "start_char_pos": 503, "end_char_pos": 582 }, { "type": "D", "before": "and in the design of therapies for genetic diseases", "after": null, "start_char_pos": 662, "end_char_pos": 713 } ]
[ 0, 65, 219, 290, 382, 517 ]
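The response-surface idea in this record can be sketched in a few lines: probe an unknown plant with random perturbations, fit a low-order polynomial to the measured responses, then invert the fit to pick a control input. The plant, basis and target below are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def plant(u):                         # "unknown" two-input nonlinear system
    return np.tanh(u[0]) + 0.5 * u[1] ** 2 + 0.3 * u[0] * u[1]

def basis(u):                         # quadratic polynomial features
    u1, u2 = u
    return np.array([1.0, u1, u2, u1 * u2, u1 ** 2, u2 ** 2])

rng = np.random.default_rng(2)
U = rng.uniform(-1, 1, size=(40, 2))               # probing perturbations
y = np.array([plant(u) for u in U])                # measured responses
coef, *_ = np.linalg.lstsq(np.array([basis(u) for u in U]), y, rcond=None)

target = 0.8                                        # desired output level
res = minimize(lambda u: (basis(u) @ coef - target) ** 2, x0=np.zeros(2))
print(f"control u = {res.x}, plant output = {plant(res.x):.3f} (target {target})")
```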
1310.3061
1
We consider the at-the-money strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the growth of the slope for infinite activity exponential L\'evy models. As auxiliary results, we obtain the limiting values of short maturity digital call options . Finally, we discuss when the at-the-money slope is consistent with the steepness of the smile wings, as given by Lee's moment formula.
We consider the at-the-money strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the growth of the slope for infinite activity exponential Levy models. As auxiliary results, we obtain the limiting values of short maturity digital call options , using Mellin transform asymptotics . Finally, we discuss when the at-the-money slope is consistent with the steepness of the smile wings, as given by Lee's moment formula.
[ { "type": "R", "before": "L\\'evy", "after": "Levy", "start_char_pos": 184, "end_char_pos": 190 }, { "type": "A", "before": null, "after": ", using Mellin transform asymptotics", "start_char_pos": 290, "end_char_pos": 290 } ]
[ 0, 99, 198, 292 ]
1310.3061
2
We consider the at-the-money strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the growth of the slope for infinite activity exponential Levy models . As auxiliary results, we obtain the limiting values of short maturity digital call options, using Mellin transform asymptotics. Finally, we discuss when the at-the-money slope is consistent with the steepness of the smile wings, as given by Lee's moment formula.
We consider the at-the-money strike derivative of implied volatility as the maturity tends to zero. Our main results quantify the behavior of the slope for infinite activity exponential L\'evy models including a Brownian component . As auxiliary results, we obtain asymptotic expansions of short maturity at-the-money digital call options, using Mellin transform asymptotics. Finally, we discuss when the at-the-money slope is consistent with the steepness of the smile wings, as given by Lee's moment formula.
[ { "type": "R", "before": "growth", "after": "behavior", "start_char_pos": 130, "end_char_pos": 136 }, { "type": "R", "before": "Levy models", "after": "L\\'evy models including a Brownian component", "start_char_pos": 184, "end_char_pos": 195 }, { "type": "R", "before": "the limiting values", "after": "asymptotic expansions", "start_char_pos": 230, "end_char_pos": 249 }, { "type": "A", "before": null, "after": "at-the-money", "start_char_pos": 268, "end_char_pos": 268 } ]
[ 0, 99, 197, 326 ]
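Since both versions of this record lean on Lee's moment formula without stating it, it may help to record the standard right-wing formulation (Lee, 2004), with k the log-strike and sigma(k,T) the Black-Scholes implied volatility:

```latex
% Right-wing moment formula (Lee, 2004): k is the log-strike, sigma(k,T)
% the Black-Scholes implied volatility, S_T the underlying at maturity.
\[
  \beta_R := \limsup_{k \to \infty} \frac{\sigma^2(k,T)\,T}{k} \in [0,2],
  \qquad
  \beta_R = \psi(p^*), \quad \psi(u) = 2 - 4\left(\sqrt{u^2 + u} - u\right),
\]
\[
  \text{where } p^* = \sup\{\, p \ge 0 : \mathbb{E}\!\left[S_T^{1+p}\right] < \infty \,\}.
\]
```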
1310.3761
1
It has recently been shown that structural conditions on the reaction network, rather than a ` fine-tuning' of system parameters, often suffice to impart ` absolute concentration robustness' on a wide class of biologically relevant, deterministically modeled mass-action systems [Shinar and Feinberg, Science, 2010]. Many biochemical networks, however, operate on a scale insufficient to justify the assumptions of the deterministic mass-action model, which raises the question of whether the long-term dynamics of the systems are being accurately captured when the deterministic model predicts stability. We show here that fundamentally different conclusions about the long-term behavior of such systems are reached if the systems are instead modeled with stochastic dynamics and a discrete state space. Specifically we characterize a large class of models which exhibit convergence to a positive robust equilibrium in the deterministic setting, whereas trajectories of the corresponding stochastic models are necessarily absorbed by a set of states that reside on the boundary of the state space . If the time to absorption is large relative to the relevant time-scales of the system, the process will very likely seem to settle down to an equilibrium long before the resulting instability will appear . This quasi-stationary distribution is considered for two systems taken from the literature, and results consistent with absolute concentration robustness are recovered by characterizing the discrepancy between the quasi-stationary distribution for the robust species and a Poisson distribution.
It has recently been shown that structural conditions on the reaction network, rather than a ' fine-tuning' of system parameters, often suffice to impart ' absolute concentration robustness' on a wide class of biologically relevant, deterministically modeled mass-action systems [Shinar and Feinberg, Science, 2010]. We show here that fundamentally different conclusions about the long-term behavior of such systems are reached if the systems are instead modeled with stochastic dynamics and a discrete state space. Specifically , we characterize a large class of models that exhibit convergence to a positive robust equilibrium in the deterministic setting, whereas trajectories of the corresponding stochastic models are necessarily absorbed by a set of states that reside on the boundary of the state space , i.e. the system undergoes an extinction event. If the time to extinction is large relative to the relevant time-scales of the system, the process will appear to settle down to a stationary distribution long before the inevitable extinction will occur . This quasi-stationary distribution is considered for two systems taken from the literature, and results consistent with absolute concentration robustness are recovered by showing that the quasi-stationary distribution of the robust species approaches a Poisson distribution.
[ { "type": "R", "before": "`", "after": "'", "start_char_pos": 93, "end_char_pos": 94 }, { "type": "R", "before": "`", "after": "'", "start_char_pos": 154, "end_char_pos": 155 }, { "type": "D", "before": "Many biochemical networks, however, operate on a scale insufficient to justify the assumptions of the deterministic mass-action model, which raises the question of whether the long-term dynamics of the systems are being accurately captured when the deterministic model predicts stability.", "after": null, "start_char_pos": 317, "end_char_pos": 605 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 818, "end_char_pos": 818 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 859, "end_char_pos": 864 }, { "type": "R", "before": ".", "after": ", i.e. the system undergoes an extinction event.", "start_char_pos": 1099, "end_char_pos": 1100 }, { "type": "R", "before": "absorption", "after": "extinction", "start_char_pos": 1116, "end_char_pos": 1126 }, { "type": "R", "before": "very likely seem", "after": "appear", "start_char_pos": 1205, "end_char_pos": 1221 }, { "type": "R", "before": "an equilibrium", "after": "a stationary distribution", "start_char_pos": 1240, "end_char_pos": 1254 }, { "type": "R", "before": "resulting instability will appear", "after": "inevitable extinction will occur", "start_char_pos": 1271, "end_char_pos": 1304 }, { "type": "R", "before": "characterizing the discrepancy between the", "after": "showing that the", "start_char_pos": 1478, "end_char_pos": 1520 }, { "type": "R", "before": "for", "after": "of", "start_char_pos": 1551, "end_char_pos": 1554 }, { "type": "R", "before": "and", "after": "approaches", "start_char_pos": 1574, "end_char_pos": 1577 } ]
[ 0, 316, 605, 804, 1100, 1306 ]
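A concrete way to observe the extinction-versus-quasi-stationarity dichotomy above is a Gillespie simulation of the textbook absolute-concentration-robustness module A + B -> 2B, B -> A (our choice of network; the paper's examples are not named here). Deterministically A settles at k2/k1 for any total mass, while stochastically B = 0 is absorbing and the quasi-stationary law of A is approximately Poisson(k2/k1):

```python
import numpy as np

def gillespie(a, b, k1, k2, t_max, seed=3):
    rng = np.random.default_rng(seed)
    t = 0.0
    while t < t_max and b > 0:            # b = 0 is the absorbing boundary
        r1 = k1 * a * b                   # propensity of A + B -> 2B
        r2 = k2 * b                       # propensity of B -> A
        t += rng.exponential(1.0 / (r1 + r2))
        if rng.uniform() * (r1 + r2) < r1:
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b, t

a, b, t = gillespie(a=50, b=50, k1=1.0, k2=25.0, t_max=100.0)
print(f"state at t = {t:.1f}: A = {a}, B = {b}; deterministic ACR value k2/k1 = 25")
```

Sampling A over a long pre-extinction window (rather than just the final state) is how one would compare the quasi-stationary distribution against the Poisson law discussed in the record.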
1310.3985
1
The denaturation of the double helix is a template for fundamental biological functions such as replication and transcription involving the formation of local fluctuational openings. The denaturation transition is studied , for heterogeneous short sequences of DNA, in the framework of a mesoscopic Hamiltonian model which accounts for the helicoidal geometry of the molecule. The model is reviewed together with the path integral method which has been developed to describe the molecule thermodynamics . The base pair displacements with respect to the ground state are treated as paths whose temperature dependent amplitudes are governed by the thermal wavelength. The ensemble of base pairs paths is selected, at any temperature, consistently with both the model potential and the second law of thermodynamics. The partition function incorporates the effects of the base pair thermal fluctuations which become stronger close to the denaturation. The transition appears as a gradual phenomenon starting from the molecule segments rich in adenine-thymine base pairs. Computing the melting profiles, I discuss the relation between nonlinear character of the base pair interactions and twisting geometry .
The denaturation of the double helix is a template for fundamental biological functions such as replication and transcription involving the formation of local fluctuational openings. The denaturation transition is studied for heterogeneous short sequences of DNA, i.e. \sim 100 base pairs, in the framework of a mesoscopic Hamiltonian model which accounts for the helicoidal geometry of the molecule. The theoretical background for the application of the path integral formalism to predictive analysis of the molecule thermodynamical properties is discussed . The base pair displacements with respect to the ground state are treated as paths whose temperature dependent amplitudes are governed by the thermal wavelength. The ensemble of base pairs paths is selected, at any temperature, consistently with both the model potential and the second law of thermodynamics. The partition function incorporates the effects of the base pair thermal fluctuations which become stronger close to the denaturation. The transition appears as a gradual phenomenon starting from the molecule segments rich in adenine-thymine base pairs. Computing the equilibrium thermodynamics, we focus on the interplay between twisting of the complementary strands around the molecule axis and nonlinear stacking potential: it is shown that the latter affects the melting profiles only if the rotational degrees of freedom are included in the Hamiltonian. The use of ladder Hamiltonian models for the DNA complementary strands in the pre-melting regime is questioned .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 222, "end_char_pos": 223 }, { "type": "A", "before": null, "after": "i.e. \\sim 100 base pairs,", "start_char_pos": 266, "end_char_pos": 266 }, { "type": "R", "before": "model is reviewed together with the path integral method which has been developed to describe the molecule thermodynamics", "after": "theoretical background for the application of the path integral formalism to predictive analysis of the molecule thermodynamical properties is discussed", "start_char_pos": 382, "end_char_pos": 503 }, { "type": "R", "before": "melting profiles, I discuss the relation between nonlinear character of the base pair interactions and twisting geometry", "after": "equilibrium thermodynamics, we focus on the interplay between twisting of the complementary strands around the molecule axis and nonlinear stacking potential: it is shown that the latter affects the melting profiles only if the rotational degrees of freedom are included in the Hamiltonian. The use of ladder Hamiltonian models for the DNA complementary strands in the pre-melting regime is questioned", "start_char_pos": 1082, "end_char_pos": 1202 } ]
[ 0, 182, 377, 505, 666, 813, 948, 1067 ]
1310.4403
1
Conventional economic analyses of stringent climate change mitigation have generally concluded that economic austerity would result from carbon austerity. These analyses however rely critically on the assumption of an economic equilibrium, which dismisses established notions on behavioural heterogeneity, path dependence and technology transitions. Here we show that on the contrary, the decarbonisation of the electricity sector globally can lead to improvements in economic performance. By modelling the process of innovation-diffusion and non-equilibrium dynamics, we establish how climate policy instruments for emissions reductions alter economic activity through energy prices, government spending, enhanced investment and tax revenues. While higher electricity prices reduce income and output, this is over-compensated by enhanced employment generated by investments in new technology. We stress that the current dialogue on the impacts of climate policies must be revisited to reflect the real complex dynamics involved in the global economy, not captured by conventional models .
Conventional economic analysis of stringent climate change mitigation policy generally concludes various levels of economic slowdown as a result of substantial spending on low carbon technology. Equilibrium economics however could not explain or predict the current economic crisis, which is of financial nature. Meanwhile the economic impacts of climate policy find their source through investments for the diffusion of environmental innovations, in parts a financial problem. Here, we expose how results of economic analysis of climate change mitigation policy depend entirely on assumptions and theory concerning the finance of the diffusion of innovations, and that in many cases, results are simply re-iterations of model assumptions. We show that, while equilibrium economics always predict economic slowdown, methods using non-equilibrium approaches suggest the opposite could occur. We show that the solution to understanding the economic impacts of reducing greenhouse gas emissions lies with research on the dynamics of the financial sector interacting with innovation and technology developments, economic history providing powerful insights through important analogies with previous historical waves of innovation .
[ { "type": "R", "before": "analyses", "after": "analysis", "start_char_pos": 22, "end_char_pos": 30 }, { "type": "R", "before": "have generally concluded that economic austerity would result from carbon austerity. These analyses however rely critically on the assumption of an economic equilibrium, which dismisses established notions on behavioural heterogeneity, path dependence and technology transitions. Here we show that on the contrary, the decarbonisation of the electricity sector globally can lead to improvements in economic performance. By modelling the process of innovation-diffusion", "after": "policy generally concludes various levels of economic slowdown as a result of substantial spending on low carbon technology. Equilibrium economics however could not explain or predict the current economic crisis, which is of financial nature. Meanwhile the economic impacts of climate policy find their source through investments for the diffusion of environmental innovations, in parts a financial problem. Here, we expose how results of economic analysis of climate change mitigation policy depend entirely on assumptions and theory concerning the finance of the diffusion of innovations,", "start_char_pos": 70, "end_char_pos": 538 }, { "type": "A", "before": null, "after": "that in many cases, results are simply re-iterations of model assumptions. We show that, while equilibrium economics always predict economic slowdown, methods using", "start_char_pos": 543, "end_char_pos": 543 }, { "type": "R", "before": "dynamics, we establish how climate policy instruments for emissions reductions alter economic activity through energy prices, government spending, enhanced investment and tax revenues. While higher electricity prices reduce income and output, this is over-compensated by enhanced employment generated by investments in new technology. We stress that the current dialogue on the impacts of climate policies must be revisited to reflect the real complex dynamics involved in the global economy, not captured by conventional models", "after": "approaches suggest the opposite could occur. We show that the solution to understanding the economic impacts of reducing greenhouse gas emissions lies with research on the dynamics of the financial sector interacting with innovation and technology developments, economic history providing powerful insights through important analogies with previous historical waves of innovation", "start_char_pos": 560, "end_char_pos": 1088 } ]
[ 0, 154, 349, 489, 744, 894 ]
1310.4441
1
A classical analogue of the quantum geometric phase may be realised in a given biological oscillator and require the cell to correct for an additional quantity added to the phase of oscillation upon every repetition of the cell cycle .
Many intracellular processes continue to oscillate during the cell cycle, although it is not understood how they are affected by discontinuities caused in the cellular environment. It is generally assumed that oscillations remain robust provided the period of cell divisions is much larger than the period of the oscillator. Here I will show that under these conditions, a cell will in fact have to correct for an additional quantity added to the phase of oscillation upon each repetition of the cell cycle . The resulting phase shift is an analogue of the geometric phase, an abstract entity first discovered in quantum mechanics. In this letter I will discuss the theory of the geometric phase shift, and demonstrate its relevance to biological oscillations .
[ { "type": "R", "before": "A classical analogue of the quantum geometric phase may be realised in a given biological oscillator and require the cell", "after": "Many intracellular processes continue to oscillate during the cell cycle, although it is not understood how they are affected by discontinuities caused in the cellular environment. It is generally assumed that oscillations remain robust provided the period of cell divisions is much larger than the period of the oscillator. Here I will show that under these conditions, a cell will in fact have", "start_char_pos": 0, "end_char_pos": 121 }, { "type": "R", "before": "every", "after": "each", "start_char_pos": 199, "end_char_pos": 204 }, { "type": "A", "before": null, "after": ". The resulting phase shift is an analogue of the geometric phase, an abstract entity first discovered in quantum mechanics. In this letter I will discuss the theory of the geometric phase shift, and demonstrate its relevance to biological oscillations", "start_char_pos": 234, "end_char_pos": 234 } ]
[ 0 ]
1310.4441
2
Many intracellular processes continue to oscillate during the cell cycle , although it is not understood how they are affected by discontinuities caused in the cellular environment . It is generally assumed that oscillations remain robust provided the period of cell divisions is much larger than the period of the oscillator. Here I will show that under these conditions , a cell will in fact have to correct for an additional quantity added to the phase of oscillation upon each repetition of the cell cycle. The resulting phase shift is an analogue of the geometric phase, an abstract entity first discovered in quantum mechanics. In this letter I will discuss the theory of the geometric phase shift , and demonstrate its relevance to biological oscillations.
Many intracellular processes continue to oscillate during the cell cycle . Although it is not well-understood how they are affected by discontinuities in the cellular environment , the general assumption is that oscillations remain robust provided the period of cell divisions is much larger than the period of the oscillator. Here , I will show that under these conditions a cell will in fact have to correct for an additional quantity added to the phase of oscillation upon every repetition of the cell cycle. The resulting phase shift is an analogue of the geometric phase, a curious entity first discovered in quantum mechanics. In this Letter, I will discuss the theory of the geometric phase shift and demonstrate its relevance to biological oscillations.
[ { "type": "R", "before": ", although", "after": ". Although", "start_char_pos": 73, "end_char_pos": 83 }, { "type": "R", "before": "understood", "after": "well-understood", "start_char_pos": 94, "end_char_pos": 104 }, { "type": "D", "before": "caused", "after": null, "start_char_pos": 146, "end_char_pos": 152 }, { "type": "R", "before": ". It is generally assumed", "after": ", the general assumption is", "start_char_pos": 181, "end_char_pos": 206 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 332, "end_char_pos": 332 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 373, "end_char_pos": 374 }, { "type": "R", "before": "each", "after": "every", "start_char_pos": 477, "end_char_pos": 481 }, { "type": "R", "before": "an abstract", "after": "a curious", "start_char_pos": 577, "end_char_pos": 588 }, { "type": "R", "before": "letter", "after": "Letter,", "start_char_pos": 643, "end_char_pos": 649 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 705, "end_char_pos": 706 } ]
[ 0, 182, 326, 511, 634 ]
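The classical geometric phase invoked in these two records can be illustrated, in a generic way that is not specific to the paper's cell-cycle oscillator, by the standard holonomy toy model: parallel-transporting a tangent vector around a closed loop on the unit sphere rotates it by the enclosed solid angle. A numerical check:

```python
import numpy as np

theta = np.pi / 4                        # colatitude of the transport loop
phis = np.linspace(0.0, 2.0 * np.pi, 20001)

v = np.array([np.cos(theta), 0.0, -np.sin(theta)])   # e_theta at phi = 0
for phi in phis[1:]:
    # unit normal = position on the unit sphere at (theta, phi)
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    v -= np.dot(v, n) * n                # project back onto the tangent plane
    v /= np.linalg.norm(v)               # first-order parallel transport step

e_th = np.array([np.cos(theta), 0.0, -np.sin(theta)])   # frame at phi = 0
e_ph = np.array([0.0, 1.0, 0.0])
holonomy = abs(np.arctan2(np.dot(v, e_ph), np.dot(v, e_th)))
print(f"numerical holonomy:   {holonomy:.4f} rad")
print(f"enclosed solid angle: {2 * np.pi * (1 - np.cos(theta)):.4f} rad")
```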
1310.4783
1
We study asymptotic properties of maximum likelihood estimators for Heston models based on continuous time observations . We distinguish three cases: subcritical (also called ergodic), critical and supercritical .
We study asymptotic properties of maximum likelihood estimators for Heston models based on continuous time observations of the log-price process . We distinguish three cases: subcritical (also called ergodic), critical and supercritical . In the subcritical case, asymptotic normality is proved for all the parameters, while in the critical and supercritical cases, non-standard asymptotic behavior is described .
[ { "type": "A", "before": null, "after": "of the log-price process", "start_char_pos": 120, "end_char_pos": 120 }, { "type": "A", "before": null, "after": ". In the subcritical case, asymptotic normality is proved for all the parameters, while in the critical and supercritical cases, non-standard asymptotic behavior is described", "start_char_pos": 213, "end_char_pos": 213 } ]
[ 0, 122 ]
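As a rough companion to this record, the continuous-record MLE is easy to test on the CIR variance leg of the Heston model, dV = (a - bV) dt + sigma sqrt(V) dW: the score equations for (a, b) reduce to a 2x2 linear system in pathwise integrals, and sigma^2 comes from the quadratic variation. The sketch below observes V directly, a simplifying assumption relative to observing only the log-price as in the record:

```python
import numpy as np

rng = np.random.default_rng(4)
a, b, sigma = 0.4, 2.0, 0.3            # true parameters (subcritical regime)
T, n = 200.0, 200_000
dt = T / n

V = np.empty(n + 1)
V[0] = a / b                           # start at the stationary mean
for i in range(n):                     # Euler scheme, reflected to stay > 0
    dW = np.sqrt(dt) * rng.normal()
    V[i + 1] = abs(V[i] + (a - b * V[i]) * dt + sigma * np.sqrt(V[i]) * dW)

dV = np.diff(V)
I1 = np.sum(dt / V[:-1])               # int dt / V
I2 = np.sum(V[:-1] * dt)               # int V dt
I3 = np.sum(dV / V[:-1])               # int dV / V
# score equations:  a * I1 - b * T = I3   and   a * T - b * I2 = V_T - V_0
a_hat, b_hat = np.linalg.solve([[I1, -T], [T, -I2]], [I3, V[-1] - V[0]])
sigma2_hat = np.sum(dV ** 2) / I2      # quadratic variation / int V dt
print(f"a_hat = {a_hat:.3f} (true {a}), b_hat = {b_hat:.3f} (true {b}), "
      f"sigma^2_hat = {sigma2_hat:.3f} (true {sigma ** 2:.3f})")
```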
1310.5091
1
Different models such as diffusion-collision and nucleation-condensation have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provide an accurate description of kinetics and thermodynamics for such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the amino acids vibrations by the cooperative movements of the peptide planes throughout the backbone may play a key role in folding as a new mechanism. Based on that, we draw a parallel between the folding process and the dynamics for a network of coupled oscillators described by the Kuramoto model . The amino acid coupling would explain the mean-field character of the force that propels an amino acid sequence into a structure through URLanization.
Different models such as diffusion-collision and nucleation-condensation have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provide an accurate description of kinetics and thermodynamics for such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the peptide plane oscillatory movements throughout the backbone may play a key role in folding as a new mechanism. Based on that, we draw a parallel between the folding process and the dynamics for a network of coupled oscillators described by the Kuramoto model . The pattern of synchronized cluster formation, growing and assembling helps to solve the Levinthal's paradox . The amino acid coupling would explain the mean-field character of the force that propels an amino acid sequence into a structure through URLanization.
[ { "type": "R", "before": "amino acids vibrations by the cooperative movements of the peptide planes", "after": "peptide plane oscillatory movements", "start_char_pos": 408, "end_char_pos": 481 }, { "type": "A", "before": null, "after": ". The pattern of synchronized cluster formation, growing and assembling helps to solve the Levinthal's paradox", "start_char_pos": 705, "end_char_pos": 705 } ]
[ 0, 165, 338, 556, 707 ]
1310.5091
2
Different models such as diffusion-collision and nucleation-condensation have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provide an accurate description of kinetics and thermodynamics for such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the peptide plane oscillatory movements throughout the backbone may play a key role in folding as a new mechanism. Based on that, we draw a parallel between the folding process and the dynamics for a network of coupled oscillators described by the Kuramoto model. The pattern of synchronized cluster formation, growing and assembling helps to solve the Levinthal's paradox. The amino acid coupling would explain the mean-field character of the force that propels an amino acid sequence into a structure through URLanization .
Different models such as diffusion-collision and nucleation-condensation have been used to unravel how secondary and tertiary structures form during protein folding. However, a simple mechanism based on physical principles that provide an accurate description of kinetics and thermodynamics for such phenomena has not yet been identified. This study introduces the hypothesis that the synchronization of the peptide plane oscillatory movements throughout the backbone must also play a key role in the folding mechanism. Based on that, we draw a parallel between the folding process and the dynamics for a network of coupled oscillators described by the Kuramoto model. The amino acid coupling may explain the mean-field character of the force that propels an amino acid sequence into a structure through URLanization . Thus, the pattern of synchronized cluster formation and growing helps to solve the Levinthal's paradox .
[ { "type": "R", "before": "may", "after": "must also", "start_char_pos": 468, "end_char_pos": 471 }, { "type": "R", "before": "folding as a new", "after": "the folding", "start_char_pos": 491, "end_char_pos": 507 }, { "type": "D", "before": "pattern of synchronized cluster formation, growing and assembling helps to solve the Levinthal's paradox. The", "after": null, "start_char_pos": 672, "end_char_pos": 781 }, { "type": "R", "before": "would", "after": "may", "start_char_pos": 802, "end_char_pos": 807 }, { "type": "A", "before": null, "after": ". Thus, the pattern of synchronized cluster formation and growing helps to solve the Levinthal's paradox", "start_char_pos": 928, "end_char_pos": 928 } ]
[ 0, 165, 338, 518, 667, 777 ]
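Both versions of this record appeal to the Kuramoto model, whose mean-field form takes only a few lines to simulate. Parameters below are generic, with the coupling K chosen above the critical value so that a synchronized cluster emerges:

```python
import numpy as np

rng = np.random.default_rng(5)
N, K, dt, steps = 200, 1.5, 0.01, 5000
omega = rng.normal(0.0, 0.5, size=N)         # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

for _ in range(steps):
    z = np.mean(np.exp(1j * theta))          # mean field r * exp(i * psi)
    # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

r = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r:.3f} (r near 1 means a synchronized cluster)")
```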
1310.5540
1
We propose that predictability is a prerequisite for profitability on financial markets. We look at ways to measure predictability of price changes using information theoretic approach and employ them on all historical data available for Warsaw Stock Exchange . This allows us to determine whether frequency of sampling price changes affects the predictability of those. We also study the time evolution of the predictability of price changes on the sample of 20 biggest companies on Warsaw's market and investigate the relationships inside this group, as well as the time evolution of the predictability of those price changes . We also briefly comment on the complicated relationship between predictability of price changes and the profitability of algorithmic trading.
We propose that predictability is a prerequisite for profitability on financial markets. We look at ways to measure predictability of price changes using information theoretic approach and employ them on all historical data available for NYSE 100 stocks . This allows us to determine whether frequency of sampling price changes affects the predictability of those. We also study relations between price changes predictability and the deviation of the price formation processes from iid as well as the stock's sector . We also briefly comment on the complicated relationship between predictability of price changes and the profitability of algorithmic trading.
[ { "type": "R", "before": "Warsaw Stock Exchange", "after": "NYSE 100 stocks", "start_char_pos": 238, "end_char_pos": 259 }, { "type": "R", "before": "study the time evolution of the predictability of price changes on the sample of 20 biggest companies on Warsaw's market and investigate the relationships inside this group,", "after": "relations between price changes predictability and the deviation of the price formation processes from iid", "start_char_pos": 379, "end_char_pos": 552 }, { "type": "R", "before": "time evolution of the predictability of those price changes", "after": "stock's sector", "start_char_pos": 568, "end_char_pos": 627 } ]
[ 0, 88, 261, 370, 629 ]
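As a concrete illustration of an information-theoretic predictability measure of the kind this abstract refers to, the sketch below estimates the entropy rate of binarized price changes via plug-in block entropies; the data are random placeholders, and this is not necessarily the exact estimator used in the paper.

    import numpy as np
    from collections import Counter

    def block_entropy(symbols, k):
        # Plug-in Shannon entropy (in bits) of length-k blocks of the sequence.
        blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
        counts = np.array(list(Counter(blocks).values()), dtype=float)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    rng = np.random.default_rng(1)
    returns = rng.normal(size=10_000)    # placeholder for observed price changes
    symbols = (returns > 0).astype(int)  # binarize: up/down moves
    h = block_entropy(symbols, 3) - block_entropy(symbols, 2)  # entropy-rate estimate
    print(f"entropy rate ~ {h:.3f} bits/step (1.0 means maximally unpredictable)")

Lower entropy rates indicate more predictable price changes; the sampling frequency enters through how the returns are constructed.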
1310.6873
1
In the aftermath of the interbank market collapse of 2007-08, the traditional idea that systemic risk is primarily the risk of cascading bank defaults has evolved into the view that it involves both cascading bank defaults as well as funding liquidity shocks, and that both types of shocks impair the functioning of the remaining undefaulted banks . In current models of systemic risk, these two facets , namely funding illiquidity and insolvency, are treated as two separate phenomena. Our paper introduces a deliberately simplified model which integrates insolvency and illiquidity in financial networks and that can provide answers to the question of how illiquidity or default of one bank can influence the overall level of liquidity stress and default in the network. First, this paper proposes a stylized model of individual bank balance sheets that builds in regulatory constraints. Secondly, three different possible states of a bank, namely the normal state, the stressed state and the insolvent state, are identified with conditions on the bank's balance sheet. Thirdly, the paper models the behavioural response of a bank when it finds itself in the stressed or insolvent states. Importantly, a stressed bank seeks to protect itself from the default of its counterparties, but creates stress in the network by forcing its debtor banks to raise cash. Versions of these proposed models can be solved by large-network asymptotic cascade formulas. Details of numerical experiments are given that verify that these asymptotic formulas yield the expected quantitative agreement with Monte Carlo results for large finite networks . These experiments illustrate clearly our main conclusion that in financial networks, the average default probability is inversely related to strength of banks' stress response and therefore to the overall level of stress in the network.
In the aftermath of the interbank market collapse of 2007-08, the scope of systemic risk research has broadened to encompass a wide range of channels, notably asset correlations, default contagion, illiquidity contagion, and asset firesales . In current models of systemic risk, two facets of contagion , namely funding illiquidity and insolvency, are treated as two distinct and separate phenomena. The main goal of the double cascade model we introduce is to integrate these two facets. In a default cascade, insolvency of a given bank will create a shock to the asset side of the balance sheet of each of its creditor banks. Under some circumstances, such "downstream" shocks can cause further insolvencies that may build up to create a global insolvency cascade. On the other hand, in a stress cascade, illiquidity that hits a given bank will create a shock to the liability side of the balance sheet of each of its debtor banks. Such "upstream" shocks can cause further illiquidity stresses that may build up to create a global illiquidity cascade. Our paper introduces a deliberately simplified network model of insolvency and illiquidity that can quantify how illiquidity or default of one bank influences the overall level of liquidity stress and default in the network. Under an assumption we call "locally tree-like independence", we derive large-network asymptotic cascade formulas. Results of numerical experiments then demonstrate that these asymptotic formulas agree qualitatively with Monte Carlo results for large finite networks , and quantitatively except when the system is placed in an exceptional "knife-edge" configuration . These experiments illustrate clearly our main conclusion that in financial networks, the average default probability is inversely related to strength of banks' stress response and therefore to the overall level of stress in the network.
[ { "type": "R", "before": "traditional idea that systemic risk is primarily the risk of cascading bank defaults has evolved into the view that it involves both cascading bank defaults as well as funding liquidity shocks, and that both types of shocks impair the functioning of the remaining undefaulted banks", "after": "scope of systemic risk research has broadened to encompass a wide range of channels, notably asset correlations, default contagion, illiquidity contagion, and asset firesales", "start_char_pos": 66, "end_char_pos": 347 }, { "type": "R", "before": "these two facets", "after": "two facets of contagion", "start_char_pos": 386, "end_char_pos": 402 }, { "type": "A", "before": null, "after": "distinct and", "start_char_pos": 467, "end_char_pos": 467 }, { "type": "A", "before": null, "after": "The main goal of the double cascade model we introduce is to integrate these two facets. In a default cascade, insolvency of a given bank will create a shock to the asset side of the balance sheet of each of its creditor banks. Under some circumstances, such \"downstream\" shocks can cause further insolvencies that may build up to create a global insolvency cascade. On the other hand, in a stress cascade, illiquidity that hits a given bank will create a shock to the liability side of the balance sheet of each of its debtor banks. Such \"upstream\" shocks can cause further illiquidity stresses that may build up to create a global illiquidity cascade.", "start_char_pos": 488, "end_char_pos": 488 }, { "type": "R", "before": "model which integrates", "after": "network model of", "start_char_pos": 536, "end_char_pos": 558 }, { "type": "R", "before": "in financial networks and that can provide answers to the question of", "after": "that can quantify", "start_char_pos": 586, "end_char_pos": 655 }, { "type": "R", "before": "can influence", "after": "influences", "start_char_pos": 695, "end_char_pos": 708 }, { "type": "R", "before": "First, this paper proposes a stylized model of individual bank balance sheets that builds in regulatory constraints. Secondly, three different possible states of a bank, namely the normal state, the stressed state and the insolvent state, are identified with conditions on the bank's balance sheet. Thirdly, the paper models the behavioural response of a bank when it finds itself in the stressed or insolvent states. Importantly, a stressed bank seeks to protect itself from the default of its counterparties, but creates stress in the network by forcing its debtor banks to raise cash. Versions of these proposed models can be solved by", "after": "Under an assumption we call \"locally tree-like independence\", we derive", "start_char_pos": 775, "end_char_pos": 1413 }, { "type": "R", "before": "Details", "after": "Results", "start_char_pos": 1457, "end_char_pos": 1464 }, { "type": "R", "before": "are given that verify that", "after": "then demonstrate that", "start_char_pos": 1490, "end_char_pos": 1516 }, { "type": "R", "before": "yield the expected quantitative agreement", "after": "agree qualitatively", "start_char_pos": 1543, "end_char_pos": 1584 }, { "type": "A", "before": null, "after": ", and quantitatively except when the system is placed in an exceptional \"knife-edge\" configuration", "start_char_pos": 1636, "end_char_pos": 1636 } ]
[ 0, 349, 487, 774, 891, 1073, 1192, 1362, 1456, 1638 ]
1310.6873
3
The scope of financial systemic risk research encompasses a wide range of channels and effects, including asset correlation shocks, default contagion, illiquidity contagion, and asset firesales. For example, insolvency of a given bank will create a shock to the asset side of the balance sheet of each of its creditor banks and under some circumstances, such "downstream" shocks can cause further insolvencies that may build up to create what is called an insolvency or default cascade. On the other hand, funding illiquidity that hits a given bank will create a shock to the liability side of the balance sheet of each of its debtor banks. Under some circumstances, such "upstream" shocks can cause illiquidity in further banks that may build up to create an illiquidity cascade. This paper introduces a deliberately simplified financial network model that combines the default and liquidity stress mechanisms into a "double cascade mapping". The progress and eventual result of the crisis is obtained by iterating this mapping to its fixed point. Unlike simpler models, this model can therefore quantify how illiquidity or default of one bank influences the eventual overall level of liquidity stress and default in the system. Large-network asymptotic cascade mapping formulas are derived that can be used for efficient network computations of the double cascade. Numerical experiments then demonstrate that these asymptotic formulas agree qualitatively with Monte Carlo results for large finite networks, and quantitatively except when the initial system is placed in an exceptional "knife-edge" configuration. The experiments clearly support the main conclusion that in the absence of fire sales, the average eventual level of defaults in a financial network is negatively related to the strength of banks' liquidity stress response and the eventual level of stress in the network.
The scope of financial systemic risk research encompasses a wide range of interbank channels and effects, including asset correlation shocks, default contagion, illiquidity contagion, and asset fire sales. This paper introduces a financial network model that combines the default and liquidity stress mechanisms into a "double cascade mapping". The progress and eventual result of the crisis is obtained by iterating this mapping to its fixed point. Unlike simpler models, this model can therefore quantify how illiquidity or default of one bank influences the overall level of liquidity stress and default in the system. Large-network asymptotic cascade mapping formulas are derived that can be used for efficient network computations of the double cascade. Numerical experiments then demonstrate that these asymptotic formulas agree qualitatively with Monte Carlo results for large finite networks, and quantitatively except when the initial system is placed in an exceptional "knife-edge" configuration. The experiments clearly support the main conclusion that when banks respond to liquidity stress by hoarding liquidity, then in the absence of asset fire sales, the level of defaults in a financial network is negatively related to the strength of bank liquidity hoarding and the eventual level of stress in the network.
[ { "type": "A", "before": null, "after": "interbank", "start_char_pos": 74, "end_char_pos": 74 }, { "type": "R", "before": "firesales. For example, insolvency of a given bank will create a shock to the asset side of the balance sheet of each of its creditor banks and under some circumstances, such \"downstream\" shocks can cause further insolvencies that may build up to create what is called an insolvency or default cascade. On the other hand, funding illiquidity that hits a given bank will create a shock to the liability side of the balance sheet of each of its debtor banks. Under some circumstances, such \"upstream\" shocks can cause illiquidity in further banks that may build up to create an illiquidity cascade.", "after": "fire sales.", "start_char_pos": 185, "end_char_pos": 781 }, { "type": "D", "before": "deliberately simplified", "after": null, "start_char_pos": 806, "end_char_pos": 829 }, { "type": "D", "before": "eventual", "after": null, "start_char_pos": 1161, "end_char_pos": 1169 }, { "type": "A", "before": null, "after": "when banks respond to liquidity stress by hoarding liquidity, then", "start_char_pos": 1673, "end_char_pos": 1673 }, { "type": "A", "before": null, "after": "asset", "start_char_pos": 1692, "end_char_pos": 1692 }, { "type": "D", "before": "average eventual", "after": null, "start_char_pos": 1709, "end_char_pos": 1725 }, { "type": "R", "before": "banks' liquidity stress response", "after": "bank liquidity hoarding", "start_char_pos": 1808, "end_char_pos": 1840 } ]
[ 0, 195, 487, 641, 781, 944, 1049, 1230, 1367, 1615 ]
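A toy version of one half of the mechanism described above, the downstream insolvency channel only, can be written as a fixed-point iteration on a random interbank exposure network; all sizes and thresholds below are hypothetical, and the paper's double cascade additionally propagates liquidity stress upstream.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 200
    exposure = (rng.random((N, N)) < 0.03) * rng.uniform(0.5, 1.5, (N, N))
    np.fill_diagonal(exposure, 0.0)        # exposure[i, j]: bank i's claim on bank j
    capital = rng.uniform(1.0, 3.0, N)     # capital buffers
    defaulted = np.zeros(N, dtype=bool)
    defaulted[rng.choice(N, 3, replace=False)] = True   # initial shock

    while True:                            # iterate the cascade mapping to its fixed point
        losses = exposure @ defaulted      # losses from defaulted counterparties
        newly = (losses >= capital) & ~defaulted
        if not newly.any():
            break
        defaulted |= newly
    print(f"eventual default fraction: {defaulted.mean():.2f}")

The locally tree-like independence assumption mentioned in the earlier revision is what lets cascades of this kind be analyzed with asymptotic formulas instead of Monte Carlo.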
1310.7225
1
The hydration thermodynamics of the GXG tripeptide relative to the reference GGG defines the conditional hydration contribution of X. This quantity or the hydration thermodynamics of a small molecule analog of the side-chain or some combination of such estimates, have anchored the interpretation of many of the seminal experiments on protein stability and folding and in the genesis of the current views on dominant interactions stabilizing proteins . We show that such procedures to model protein hydration have significant limitations. We study the conditional hydration thermodynamics of the isoleucine side-chain in an extended pentapeptide and in helical deca-peptides, using as appropriate an extended penta-glycine or appropriate helical deca-peptides as reference. Hydration of butane in the gauche conformation provides a small molecule reference for the side-chain. We use the quasichemical theory to parse the hydration thermodynamics into chemical, packing, and long-range interaction contributions. The chemical contribution reflects the contribution of solvent clustering within the defined inner-shell of the solute; the chemical contribution of g-butane is substantially more negative than the conditional chemical contribution of isoleucine. The packing contribution gives the work required to create a cavity in the solvent, a quantity of interest in understanding hydrophobic hydration. The packing contribution for g-butane substantially overestimates the conditional packing of isoleucine. The net of such compensating contributions still disagrees with the conditional free energy of isoleucine but by a lesser magnitude. The hydration thermodynamics of g-butane or the conditional hydration thermodynamics of isoleucine from the GGIGG pentapeptide proves unsatisfactory in predicting the properties of either IGGGG or the isoleucine-substituted helical deca-peptides .
The hydration thermodynamics of the GXG tripeptide relative to the reference GGG is often used to define the conditional hydration contribution of X. This quantity or the hydration thermodynamics of a small molecule analog of the side-chain or some combination of such estimates, have anchored the interpretation of seminal experiments on protein stability and folding . We show that such procedures to model protein hydration have significant limitations. We study the conditional hydration thermodynamics of the isoleucine side-chain in an extended pentapeptide and in helical deca-peptides, using as appropriate an extended penta-glycine or appropriate helical deca-peptides as reference. Hydration of butane in the gauche conformation provides a small molecule reference for the side-chain. We use the quasichemical theory to parse the hydration thermodynamics into chemical, packing, and long-range interaction contributions. The chemical contribution reflects the contribution of solvent clustering within the defined inner-shell of the solute; the chemical contribution of g-butane is substantially more negative than the conditional chemical contribution of isoleucine. The packing contribution gives the work required to create a cavity in the solvent, a quantity of interest in understanding hydrophobic hydration. The packing contribution for g-butane substantially overestimates the conditional packing of isoleucine. The net of such compensating contributions still disagrees with the conditional free energy of isoleucine but by a lesser magnitude. The excess enthalpy and entropy of hydration of g-butane model are also more negative than the corresponding conditional quantities for the side-chain. The conditional solvation of isoleucine in GGIGG also proves unsatisfactory in describing the conditional solvation of isoleucine in the helical peptides .
[ { "type": "D", "before": "defines the", "after": null, "start_char_pos": 81, "end_char_pos": 92 }, { "type": "R", "before": "conditional", "after": "is often used to define the conditional", "start_char_pos": 92, "end_char_pos": 103 }, { "type": "D", "before": "many of the", "after": null, "start_char_pos": 299, "end_char_pos": 310 }, { "type": "D", "before": "and in the genesis of the current views on dominant interactions stabilizing proteins", "after": null, "start_char_pos": 364, "end_char_pos": 449 }, { "type": "R", "before": "hydration thermodynamics", "after": "excess enthalpy and entropy of hydration", "start_char_pos": 1648, "end_char_pos": 1672 }, { "type": "R", "before": "or the conditional hydration thermodynamics of isoleucine from the GGIGG pentapeptide", "after": "model are also more negative than the corresponding conditional quantities for the side-chain. The conditional solvation of isoleucine in GGIGG also", "start_char_pos": 1685, "end_char_pos": 1770 }, { "type": "R", "before": "predicting the properties of either IGGGG or the isoleucine-substituted helical deca-peptides", "after": "describing the conditional solvation of isoleucine in the helical peptides", "start_char_pos": 1796, "end_char_pos": 1889 } ]
[ 0, 132, 451, 537, 772, 875, 1011, 1131, 1258, 1405, 1510, 1643 ]
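For reference, the quasichemical parsing used in this abstract is commonly written as follows (a standard-form sketch with signs per the usual convention; the symbols x_0 and p_0 are defined below rather than taken from the paper):

    \mu^{\mathrm{ex}} \;=\; \underbrace{k_B T \ln x_0}_{\text{chemical}} \;-\; \underbrace{k_B T \ln p_0}_{\text{packing}} \;+\; \mu^{\mathrm{ex}}_{\text{long-range}},

where x_0 is the probability that the solute's defined inner shell is free of solvent molecules, and p_0 is the probability of spontaneously observing such an empty region (a cavity) in the pure solvent. The packing term -k_B T \ln p_0 is the positive cavity-opening work, while k_B T \ln x_0 is negative and grows in magnitude with stronger solvent clustering, matching the sign statements in the abstract.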
1310.7527
1
Among network modeling tasks, identifying the rewiring of network structure is particularly instrumental in revealing and pinpointing the molecular cause of a disease. Effective incorporation of biological prior knowledge into network learning algorithms can leverage domain knowledge and make data driven inference more robust and biologically relevant. We formulate the inference of condition specific network structures that incorporates relevant prior knowledge as a convex optimization problem, and develop an efficient learning algorithm to jointly infer the biological networks as well as their changes. We test the proposed method on simulation data sets and demonstrate the effectiveness of this method . We then apply our method to yeast cell line data and breast cancer microarray data and obtain biologically plausible results.
Modeling biological networks serves as both a major goal and an effective tool of systems biology in studying mechanisms that orchestrate the activities of gene products in cells. Biological networks are context specific and dynamic in nature. To systematically characterize the selectively activated regulatory components and mechanisms, the modeling tools must be able to effectively distinguish significant rewiring from random background fluctuations. We formulated the inference of differential dependency networks that incorporates both conditional data and prior knowledge as a convex optimization problem, and developed an efficient learning algorithm to jointly infer the conserved biological network and the significant rewiring across different conditions. We used a novel sampling scheme to estimate the expected error rate due to random knowledge and based on which, developed a strategy that fully exploits the benefit of this data-knowledge integrated approach. We demonstrated and validated the principle and performance of our method using synthetic datasets . We then applied our method to yeast cell line and breast cancer microarray data and obtained biologically plausible results.
[ { "type": "R", "before": "Among network modeling tasks, identifying the rewiring of network structure is particularly instrumental in revealing and pinpointing the molecular cause of a disease. Effective incorporation of biological prior knowledge into network learning algorithms can leverage domain knowledge and make data driven inference more robust and biologically relevant. We formulate", "after": "Modeling biological networks serves as both a major goal and an effective tool of systems biology in studying mechanisms that orchestrate the activities of gene products in cells. Biological networks are context specific and dynamic in nature. To systematically characterize the selectively activated regulatory components and mechanisms, the modeling tools must be able to effectively distinguish significant rewiring from random background fluctuations. We formulated", "start_char_pos": 0, "end_char_pos": 367 }, { "type": "R", "before": "condition specific network structures that incorporates relevant", "after": "differential dependency networks that incorporates both conditional data and", "start_char_pos": 385, "end_char_pos": 449 }, { "type": "R", "before": "develop", "after": "developed", "start_char_pos": 504, "end_char_pos": 511 }, { "type": "R", "before": "biological networks as well as their changes. We test the proposed method on simulation data sets and demonstrate the effectiveness of this method", "after": "conserved biological network and the significant rewiring across different conditions. We used a novel sampling scheme to estimate the expected error rate due to random knowledge and based on which, developed a strategy that fully exploits the benefit of this data-knowledge integrated approach. We demonstrated and validated the principle and performance of our method using synthetic datasets", "start_char_pos": 565, "end_char_pos": 711 }, { "type": "R", "before": "apply", "after": "applied", "start_char_pos": 722, "end_char_pos": 727 }, { "type": "D", "before": "data", "after": null, "start_char_pos": 758, "end_char_pos": 762 }, { "type": "R", "before": "obtain", "after": "obtained", "start_char_pos": 801, "end_char_pos": 807 } ]
[ 0, 167, 354, 610, 713 ]
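A schematic of the kind of convex program described here, for two conditions c = 1, 2 (our notation, not the paper's exact objective):

    \min_{\theta^{(1)},\,\theta^{(2)}} \;\sum_{c=1}^{2}\big\|X^{(c)}-X^{(c)}\theta^{(c)}\big\|_2^2 \;+\;\lambda_1\sum_{c=1}^{2}\big\|W\odot\theta^{(c)}\big\|_1 \;+\;\lambda_2\big\|\theta^{(1)}-\theta^{(2)}\big\|_1,

where X^{(c)} is the expression data under condition c, the weighted \ell_1 term enforces sparsity with the weights W discounting edges supported by prior knowledge, and the last term penalizes differences between conditions so that only significant rewiring is reported.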
1310.7857
1
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under arbitrarily small proportional transaction costs, if it satisfies a natural multi-dimensional generalization of the stickiness condition introduced by Guasoni [Math. Finance 16( 2), 469-588 (2006)].
Under proportional transaction costs, a price process is said to have a consistent price system, if there is a semimartingale with an equivalent martingale measure that evolves within the bid-ask spread. We show that a continuous, multi-asset price process has a consistent price system, under arbitrarily small proportional transaction costs, if it satisfies a natural multi-dimensional generalization of the stickiness condition introduced by Guasoni [Math. Finance 16( 3), 569-582 (2006)].
[ { "type": "R", "before": "2), 469-588", "after": "3), 569-582", "start_char_pos": 472, "end_char_pos": 483 } ]
[ 0, 203, 459 ]
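In symbols: for a transaction cost level \varepsilon > 0, an \varepsilon-consistent price system for the price process S is a pair (\tilde S, Q) with Q \sim P and \tilde S a Q-martingale satisfying

    (1-\varepsilon)\,S_t \;\le\; \tilde S_t \;\le\; (1+\varepsilon)\,S_t \qquad \text{for all } t,

componentwise in the multi-asset case; the theorem stated above produces such a pair for every \varepsilon > 0 under the stickiness condition.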
1310.8169
1
Collective behaviors taking place in financial markets reveal strongly correlated states especially during a crisis period. A natural hypothesis is that trend reversals are also driven by mutual influences between the different stock exchanges. Using a maximum entropy approach, we find coordinated behavior during trend reversals dominated by the pairwise component. In particular, these events are predicted with high significant accuracy by the ensemble's instantaneous state.
Collective behaviours taking place in financial markets reveal strongly correlated states especially during a crisis period. A natural hypothesis is that trend reversals are also driven by mutual influences between the different stock exchanges. Using a maximum entropy approach, we find coordinated behaviour during trend reversals dominated by the pairwise component. In particular, these events are predicted with high significant accuracy by the ensemble's instantaneous state.
[ { "type": "R", "before": "behaviors", "after": "behaviours", "start_char_pos": 11, "end_char_pos": 20 }, { "type": "R", "before": "behavior", "after": "behaviour", "start_char_pos": 299, "end_char_pos": 307 } ]
[ 0, 123, 244, 367 ]
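The pairwise maximum entropy model referred to above is the Ising form: for binary states s_i = \pm 1 (e.g., rise or fall of exchange i),

    P(s) \;=\; \frac{1}{Z}\,\exp\Big(\sum_i h_i s_i \;+\; \sum_{i<j} J_{ij}\,s_i s_j\Big),

with the fields h_i and couplings J_{ij} fitted so that the model reproduces the observed means \langle s_i \rangle and pairwise correlations \langle s_i s_j \rangle; the "pairwise component" is the J_{ij} term.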
1310.8341
2
Gene regulatory networks are commonly used for modeling biological processes and revealing underlying molecular mechanisms. The reconstruction of gene regulatory networks from observational data is a challenging task , especially considering the large number of players (e.g. genes) involved and the small number of biological replicates available for analysis. Herein, we propose a new statistical method for estimating the number of erroneous edges in reconstructed networks that strongly enhances commonly used inference approaches. This method is based on a special relationship between correlation and causality , and allows for the identification and to removal of approximately half of all erroneous edges. Using the mathematical model of Bayesian networks and positive correlation inequalities we establish a mathematical foundation for our method. Analyzing existing biological datasets, we find a strong correlation between the results of our method and the commonly used false discovery rate (FDR) technique . Furthermore, simulation analysis demonstrates that with large networks our new method provides a more accurate estimate of network error than FDR.
Gene covariation networks are commonly used to study biological processes. The inference of gene covariation networks from observational data can be challenging , especially considering the large number of players involved and the small number of biological replicates available for analysis. We propose a new statistical method for estimating the number of erroneous edges in reconstructed networks that strongly enhances commonly used inference approaches. This method is based on a special relationship between sign of correlation (positive/negative) and directionality (up/down) of gene regulation , and allows for the identification and removal of approximately half of all erroneous edges. Using the mathematical model of Bayesian networks and positive correlation inequalities we establish a mathematical foundation for our method. Analyzing existing biological datasets, we find a strong correlation between the results of our method and false discovery rate (FDR) . Furthermore, simulation analysis demonstrates that our method provides a more accurate estimate of network error than FDR.
[ { "type": "R", "before": "regulatory", "after": "covariation", "start_char_pos": 5, "end_char_pos": 15 }, { "type": "R", "before": "for modeling biological processesand revealing underlying molecular mechanisms. The reconstruction of gene regulatory", "after": "to study biological processes. The inference of gene covariation", "start_char_pos": 43, "end_char_pos": 160 }, { "type": "R", "before": "is a challengingtask", "after": "can be challenging", "start_char_pos": 194, "end_char_pos": 214 }, { "type": "D", "before": "(e.g. genes)", "after": null, "start_char_pos": 268, "end_char_pos": 280 }, { "type": "R", "before": "Herein, we", "after": "We", "start_char_pos": 360, "end_char_pos": 370 }, { "type": "R", "before": "correlation and causality", "after": "sign of correlation (positive/negative) and directionality (up/down) of gene regulation", "start_char_pos": 589, "end_char_pos": 614 }, { "type": "D", "before": "to", "after": null, "start_char_pos": 655, "end_char_pos": 657 }, { "type": "D", "before": "the commonly used", "after": null, "start_char_pos": 962, "end_char_pos": 979 }, { "type": "D", "before": "technique", "after": null, "start_char_pos": 1007, "end_char_pos": 1016 }, { "type": "R", "before": "with large networks our new", "after": "our", "start_char_pos": 1070, "end_char_pos": 1097 } ]
[ 0, 122, 359, 533, 711, 854, 1018 ]
1311.0118
1
Regulators clearly believe that derivatives can never be risk free. Regulators have risk preferences and by imposing costly actions on banks they have made derivatives markets incomplete. These actions have idiosyncratic effects, for example the stress period for Market Risk capital is determined at the bank level, not at desk level. Idiosyncratic effects mean that no single measure makes assets and derivatives martingales for all market participants. Hence the market has no risk-neutral measure and Regulatory-compliant derivatives pricing is not risk-neutral. Market participants have idiosyncratic, multiple, risk-neutral measures but the market does not. Practically, we show that derivatives desks leak PnL (profit-and-loss) even with idealized markets providing credit protection contracts and unlimited liquidity facilities (i.e. repos with zero haircuts). This PnL leak means that derivatives desks are inherently risky as they must rely on competitive advantages to price in the costs of their risks. This strictly positive risk level means that Regulatory-required capital must also have strictly positive costs. Hence Regulatory-compliant derivatives markets are incomplete. If we relax our assumptions by permitting haircuts on repos the situation is qualitatively worse because new Regulatory-driven costs (liquidity buffers) enter the picture. These additional funding costs must be met by desks further stressing their business models. One consequence of Regulatory-driven incomplete-market pricing is that the FVA debate is resolved in favor of both sides: academics on principles (pay for risk); and practitioners on practicalities (desks do pay). As a second consequence we identify appropriate exit prices .
Regulations impose idiosyncratic capital and funding costs for holding derivatives. Capital requirements are costly because derivatives desks are risky businesses; funding is costly in part because regulations increase the minimum funding tenor. Idiosyncratic costs mean no single measure makes derivatives martingales for all market participants. Hence Regulatory-compliant pricing is not risk-neutral. This has implications for exit prices and mark-to-market .
[ { "type": "R", "before": "Regulators clearly believe that derivativescan never be risk free. Regulators have risk preferences and by imposing costly actions on banks they have made derivatives markets incomplete. These actions have idiosyncratic effects, for example the stress period for Market Risk capital is determined at the bank level, not at desk level. Idiosyncratic effects mean that", "after": "Regulations impose idiosyncratic capital and funding costs for holding derivatives. Capital requirements are costly because derivatives desks are risky businesses; funding is costly in part because regulations increase the minimum funding tenor. Idiosyncratic costs mean", "start_char_pos": 0, "end_char_pos": 366 }, { "type": "D", "before": "assets and", "after": null, "start_char_pos": 391, "end_char_pos": 401 }, { "type": "D", "before": "the market has no risk-neutral measure and", "after": null, "start_char_pos": 461, "end_char_pos": 503 }, { "type": "D", "before": "derivatives", "after": null, "start_char_pos": 525, "end_char_pos": 536 }, { "type": "R", "before": "Market participants have idiosyncratic, multiple, risk-neutral measures but the market does not. Practically, we show that derivatives desks leak PnL (profit-and-loss) even with idealized markets providing credit protection contracts and unlimited liquidity facilities (i.e. repos with zero haircuts). This PnL leak means that derivatives desks are inherently risky as they must rely on competitive advantages to price in the costs of their risks. This strictly positive risk level means that Regulatory-required capital must also have strictly positive costs. Hence Regulatory-compliant derivatives markets are incomplete. If we relax our assumptions by permitting haircuts on repos the situation is qualitatively worse because new Regulatory-driven costs (liquidity buffers) enter the picture. These additional funding costs must be met by desks further stressing their business models. One consequence of Regulatory-driven incomplete-market pricing is that the FVA debate is resolved in favor of both sides: academics on principles (pay for risk); and practitioners on practicalities (desks do pay). As a second consequence we identify appropriate exit prices", "after": "This has implications for exit prices and mark-to-market", "start_char_pos": 566, "end_char_pos": 1728 } ]
[ 0, 66, 186, 334, 454, 565, 662, 867, 1013, 1126, 1189, 1361, 1454, 1616, 1668 ]
1311.0675
1
This paper considers binomial approximation of continuous time stochastic processes. It is shown that, under some mild integrability conditions, a process can be approximated in mean square sense and in other strong metrics by adapted binomial processes, i.e., by processes with fixed size binary increments at sampling points . In addition, possibility of approximation of solutions of stochastic differential equations by solutions of ordinary equations with binary noise is established. Some consequences for the financial modelling and options pricing models are discussed.
This paper considers binomial approximation of continuous time stochastic processes. It is shown that, under some mild integrability conditions, a process can be approximated in mean square sense and in other strong metrics by binomial processes, i.e., by processes with fixed size binary increments at sampling points . Moreover, this approximation can be causal, i.e., at every time it requires only past historical values of the underlying process . In addition, possibility of approximation of solutions of stochastic differential equations by solutions of ordinary equations with binary noise is established. Some consequences for the financial modelling and options pricing models are discussed.
[ { "type": "D", "before": "adapted", "after": null, "start_char_pos": 227, "end_char_pos": 234 }, { "type": "A", "before": null, "after": ". Moreover, this approximation can be causal, i.e., at every time it requires only past historical values of the underlying process", "start_char_pos": 327, "end_char_pos": 327 } ]
[ 0, 84, 329, 490 ]
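A minimal illustration of approximation by fixed-size binary increments: replace the Gaussian shocks of an Euler scheme with coin flips of size \sqrt{\Delta t}, applied causally at the sampling points. The drift and diffusion functions below are hypothetical examples, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    T, n = 1.0, 1000
    dt = T / n

    def mu(x):    return 0.05 * x   # example drift
    def sigma(x): return 0.20 * x   # example diffusion coefficient

    x = 1.0
    for _ in range(n):
        xi = rng.choice([-1.0, 1.0])                      # binary noise of fixed size
        x += mu(x) * dt + sigma(x) * np.sqrt(dt) * xi
    print(f"terminal value under binary noise: {x:.4f}")

Each step uses only the current state, so the approximation is causal in the sense described above; refining dt recovers the diffusion limit.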
1311.0684
1
Interacting RNA complexes are studied using a filtration via their topological genus. Our main result is a new bijection for RNA-RNA interaction structures and linear time uniform sampling algorithm for RNA complexes of fixed topological genus. The bijection allows to either reduce the topological genus of an RNA-RNA interaction structure directly, or to loose connectivity by decomposing the complex into a pair of single stranded RNA structures. Our main result is proved bijectively. It provides an explicit algorithm of how to rewire the corresponding complexes . Using the concept of genus induction, we construct RNA-RNA interaction complexes of fixed topological genus g uniformly in linear time .
Interacting RNA complexes are studied via bicellular maps using a filtration via their topological genus. Our main result is a new bijection for RNA-RNA interaction structures and linear time uniform sampling algorithm for RNA complexes of fixed topological genus. The bijection allows to either reduce the topological genus of a bicellular map directly, or to lose connectivity by decomposing the complex into a pair of single stranded RNA structures. Our main result is proved bijectively. It provides an explicit algorithm of how to rewire the corresponding complexes and an unambiguous decomposition grammar . Using the concept of genus induction, we construct bicellular maps of fixed topological genus g uniformly in linear time . We present various statistics on these topological RNA complexes and compare our findings with biological complexes. Furthermore we show how to construct loop-energy based complexes using our decomposition grammar .
[ { "type": "A", "before": null, "after": "via bicellular maps", "start_char_pos": 38, "end_char_pos": 38 }, { "type": "R", "before": "an RNA-RNA interaction structure", "after": "a bicellular map", "start_char_pos": 309, "end_char_pos": 341 }, { "type": "R", "before": "loose", "after": "lose", "start_char_pos": 358, "end_char_pos": 363 }, { "type": "A", "before": null, "after": "and an unambiguous decomposition grammar", "start_char_pos": 569, "end_char_pos": 569 }, { "type": "R", "before": "RNA-RNA interaction complexes", "after": "bicellular maps", "start_char_pos": 623, "end_char_pos": 652 }, { "type": "A", "before": null, "after": ". We present various statistics on these topological RNA complexes and compare our findings with biological complexes. Furthermore we show how to construct loop-energy based complexes using our decomposition grammar", "start_char_pos": 707, "end_char_pos": 707 } ]
[ 0, 86, 245, 450, 489 ]
1311.1154
1
In this paper nonlinear time series models are used to describe volatility in financial time series data. To describe volatility two of the nonlinear time series are combined into TAR (Threshold Auto-Regressive Model) with AARCH (Asymmetric Auto- Regressive Conditional Heteroskedasticity) error term and its parameter estimation is studied.
In this paper , non-linear time series models are used to describe volatility in financial time series data. To describe volatility , two of the non-linear time series are combined into form TAR (Threshold Auto-Regressive Model) with AARCH (Asymmetric Auto-Regressive Conditional Heteroskedasticity) error term and its parameter estimation is studied.
[ { "type": "R", "before": "nonlinear", "after": ", non-linear", "start_char_pos": 14, "end_char_pos": 23 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 129, "end_char_pos": 129 }, { "type": "R", "before": "nonlinear", "after": "non-linear", "start_char_pos": 141, "end_char_pos": 150 }, { "type": "A", "before": null, "after": "form", "start_char_pos": 181, "end_char_pos": 181 }, { "type": "R", "before": "Auto- Regressive", "after": "Auto-Regressive", "start_char_pos": 243, "end_char_pos": 259 } ]
[ 0, 105 ]
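One way to write a two-regime instance of the combined model (our notation; asymmetric ARCH is specified in several ways in the literature, so this is a sketch rather than the paper's exact parameterization):

    y_t = \begin{cases} \phi_0^{(1)} + \phi_1^{(1)} y_{t-1} + \varepsilon_t, & y_{t-d} \le r,\\ \phi_0^{(2)} + \phi_1^{(2)} y_{t-1} + \varepsilon_t, & y_{t-d} > r, \end{cases} \qquad \varepsilon_t = \sigma_t z_t, \quad \sigma_t^2 = \omega + \alpha\,(\varepsilon_{t-1} - \gamma)^2,

where r is the threshold, d the delay, z_t is iid standard noise, and \gamma \neq 0 makes the conditional variance respond asymmetrically to positive and negative shocks.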
1311.1301
1
One of the most challenging and long-standing problems in computational biology is the prediction of three-dimensional protein structure from amino acid sequence. A promising approach to infer spatial proximity between residues is the study of evolutionary covariance from multiple sequence alignments, especially in light of recent algorithmic improvements and the fast growing size of sequence databases. In this paper, we present a simple, fast and accurate algorithm for the prediction of residue-residue contacts based on regularized least squares. The method incorporates in a very natural manner amino acid similarity in the calculation of covariance, and accounts for low number of observations by a regularization parameter that depends on the effective number of sequences in the alignment. Most importantly, inversion of the sample covariance matrix allows the computation of partial correlations between pairs of residues, thereby removing the effect of spurious transitive correlations. When tested on a set of protein families from PFAM , we found the RLS algorithm to have superior performance compared to PSICOV, a state-of-the-art method for contact prediction .
One of the most challenging and long-standing problems in computational biology is the prediction of three-dimensional protein structure from amino acid sequence. A promising approach to infer spatial proximity between residues is the study of evolutionary covariance from multiple sequence alignments, especially in light of recent algorithmic improvements and the fast growing size of sequence databases. In this paper, we present a simple, fast and accurate algorithm for the prediction of residue-residue contacts based on regularized least squares. The basic assumption is that spatially proximal residues in a protein coevolve to maintain the physicochemical complementarity of the amino acids involved in the contact. Our regularized inversion of the sample covariance matrix allows the computation of partial correlations between pairs of residues, thereby removing the effect of spurious transitive correlations. The method also accounts for low number of observations by means of a regularization parameter that depends on the effective number of sequences in the alignment. When tested on a set of protein families from Pfam , we found the RLS algorithm to have performance comparable to state-of-the-art methods for contact prediction , while at the same time being faster and conceptually simpler .
[ { "type": "R", "before": "method incorporates in a very natural manner amino acid similarity in the calculation of covariance, and accounts for low number of observations by a regularization parameter that depends on the effective number of sequences in the alignment. Most importantly,", "after": "basic assumption is that spatially proximal residues in a protein coevolve to maintain the physicochemical complementarity of the amino acids involved in the contact. Our regularized", "start_char_pos": 558, "end_char_pos": 818 }, { "type": "A", "before": null, "after": "The method also accounts for low number of observations by means of a regularization parameter that depends on the effective number of sequences in the alignment.", "start_char_pos": 1000, "end_char_pos": 1000 }, { "type": "R", "before": "PFAM", "after": "Pfam", "start_char_pos": 1047, "end_char_pos": 1051 }, { "type": "R", "before": "superior performance compared to PSICOV, a", "after": "performance comparable to", "start_char_pos": 1089, "end_char_pos": 1131 }, { "type": "R", "before": "method", "after": "methods", "start_char_pos": 1149, "end_char_pos": 1155 }, { "type": "A", "before": null, "after": ", while at the same time being faster and conceptually simpler", "start_char_pos": 1179, "end_char_pos": 1179 } ]
[ 0, 162, 406, 553, 800, 999 ]
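The core computation described in this abstract fits in a few lines: regularize the sample covariance, invert it, and convert the precision matrix to partial correlations. The data below are random placeholders; in the paper the covariance additionally incorporates amino acid similarity, and the regularization depends on the effective number of sequences.

    import numpy as np

    def partial_correlations(X, lam):
        # X: samples x variables (e.g., numerically encoded alignment columns)
        C = np.cov(X, rowvar=False)
        theta = np.linalg.inv(C + lam * np.eye(C.shape[1]))  # regularized inversion
        d = np.sqrt(np.diag(theta))
        pc = -theta / np.outer(d, d)   # rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)
        np.fill_diagonal(pc, 1.0)
        return pc

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 30))     # placeholder for an encoded MSA
    pc = partial_correlations(X, lam=0.1)
    print("strongest off-diagonal coupling:", np.abs(np.triu(pc, 1)).max())

Ranking residue pairs by the magnitude of pc removes much of the transitive-correlation effect that plain covariance-based rankings suffer from.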
1311.1545
1
Motivated by marginals-mimicking results for It\^o processes via SDEs and by their applications to volatility modeling in finance, we discuss the weak convergence of the law of a hypoelliptic diffusions conditioned to belong to a target affine subspace at final time, namely L(Z_t|Y_t = y) if X_{\cdot}=(Y_\cdot,Z_{\cdot}). To do so, we revisit Varadhan-type estimates in a small-noise regime , studying the density of the lower-dimensional component Y. The application to stochastic volatility models include the small-time and, for certain models, the large-strike asymptotics of the Gyongy-Dupire's local volatility function , the final product being asymptotic formulae that can (i) motivate parameterizations of the local volatility surface and (ii) be used to extrapolate local volatilities in a given model.
Motivated by marginals-mimicking results for It\^o processes via SDEs and by their applications to volatility modeling in finance, we discuss the weak convergence of the law of a hypoelliptic diffusions conditioned to belong to a target affine subspace at final time, namely L(Z_t|Y_t = y) if X_{\cdot}=(Y_\cdot,Z_{\cdot}). To do so, we revisit Varadhan-type estimates in a small-noise regime (as opposed to small-time) , studying the density of the lower-dimensional component Y. The application to stochastic volatility models include the small-time and, for certain models, the large-strike asymptotics of the Gyongy-Dupire's local volatility function . The final product are asymptotic formulae that can (i) motivate parameterizations of the local volatility surface and (ii) be used to extrapolate local volatilities in a given model.
[ { "type": "A", "before": null, "after": "(as opposed to small-time)", "start_char_pos": 393, "end_char_pos": 393 }, { "type": "R", "before": ", the final product being", "after": ". The final product are", "start_char_pos": 629, "end_char_pos": 654 } ]
[ 0, 323, 454 ]
1311.1562
1
The existence of stationary Markov perfect equilibria in stochastic games is shown in several contexts under a general condition called " coarser transition kernels". These results include various earlier existence results on correlated equilibria, noisy stochastic games, stochastic games with mixtures of constant transition kernels as special cases. The minimality of the condition is illustrated. The results here also shed some new light on a recent example on the nonexistence of stationary equilibrium. The proofs are remarkably simple via establishing a new connection between stochastic games and conditional expectations of correspondences .
The existence of stationary Markov perfect equilibria in stochastic games is shown under a general condition called " (decomposable) coarser transition kernels". This result includes various earlier existence results on correlated equilibria, noisy stochastic games, and stochastic games with mixtures of constant transition kernels as special cases. A remarkably simple proof is provided via establishing a new connection between stochastic games and conditional expectations of correspondences. The minimality of our condition is demonstrated from a technical point of view. Our result also sheds some light on a recent example about the nonexistence of stationary equilibrium. New applications of stochastic games are presented as illustrative examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model .
[ { "type": "D", "before": "in several contexts", "after": null, "start_char_pos": 83, "end_char_pos": 102 }, { "type": "A", "before": null, "after": "(decomposable)", "start_char_pos": 138, "end_char_pos": 138 }, { "type": "R", "before": "These results include", "after": "This result includes", "start_char_pos": 168, "end_char_pos": 189 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 274, "end_char_pos": 274 }, { "type": "A", "before": null, "after": "A remarkably simple proof is provided via establishing a new connection between stochastic games and conditional expectations of correspondences.", "start_char_pos": 355, "end_char_pos": 355 }, { "type": "R", "before": "the condition is illustrated. The results here also shed some new", "after": "our condition is demonstrated from a technical point of view. Our result also sheds some", "start_char_pos": 374, "end_char_pos": 439 }, { "type": "R", "before": "on", "after": "about", "start_char_pos": 466, "end_char_pos": 468 }, { "type": "R", "before": "The proofs are remarkably simple via establishing a new connection between stochastic games and conditional expectations of correspondences", "after": "New applications of stochastic games are presented as illustrative examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model", "start_char_pos": 513, "end_char_pos": 652 } ]
[ 0, 167, 354, 403, 512 ]
1311.1562
2
The existence of stationary Markov perfect equilibria in stochastic games is shown under a general condition called "(decomposable) coarser transition kernels". This result includes various earlier existence results on correlated equilibria, noisy stochastic games, and stochastic games with mixtures of constant transition kernels as special cases. A remarkably simple proof is provided via establishing a new connection between stochastic games and conditional expectations of correspondences . The minimality of our condition is demonstrated from a technical point of view. Our result also sheds some light on a recent example about the nonexistence of stationary equilibrium . New applications of stochastic games are presented as illustrative examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model.
The existence of stationary Markov perfect equilibria in stochastic games is shown under a general condition called "(decomposable) coarser transition kernels". This result covers various earlier existence results on correlated equilibria, noisy stochastic games, stochastic games with finite actions and state-independent transitions, and stochastic games with mixtures of constant transition kernels as special cases. A remarkably simple proof is provided via establishing a new connection between stochastic games and conditional expectations of correspondences . New applications of stochastic games are presented as illustrative examples, including stochastic games with endogenous shocks and a stochastic dynamic oligopoly model.
[ { "type": "R", "before": "includes", "after": "covers", "start_char_pos": 173, "end_char_pos": 181 }, { "type": "R", "before": "and", "after": "stochastic games with finite actions and state-independent transitions, and", "start_char_pos": 266, "end_char_pos": 269 }, { "type": "D", "before": ". The minimality of our condition is demonstrated from a technical point of view. Our result also sheds some light on a recent example about the nonexistence of stationary equilibrium", "after": null, "start_char_pos": 495, "end_char_pos": 678 } ]
[ 0, 160, 349, 496, 576, 680 ]
1311.1793
1
Currently, XML is a format widely used. In the context of computer science teaching, it is necessary to introduce students to this format and, especially, at its eco-system. We have developed a model to support the teaching of XML. We propose to represent an XML schema as a graph highlighting the structural characteristics of the valide documents. We present in this report different graphic elements of the model .---XML est un format actuellement tr\`es utilis\'e. Dans le cadre des formations en informatique, il est indispensable d'initier les \'etudiants \`a ce format et, surtout, \`a tout son \'eco-syst\`eme. Nous avons donc mis au point un mod\`ele permettant d'appuyer l'enseignement de XML. Ce mod\`ele propose de repr\'esenter un sch\'ema XML sous la forme d'un graphe mettant en valeur les caract\'eristiques structurelles des documents valides. Nous pr\'esentons dans ce rapport les diff\'erents \'el\'ements graphique du mod\`ele .
Currently, XML is a format widely used. In the context of computer science teaching, it is necessary to introduce students to this format and, especially, at its eco-system. We have developed a model to support the teaching of XML. We propose to represent an XML schema as a graph highlighting the structural characteristics of the valide documents. We present in this report different graphic elements of the model and the improvements it brings to data modeling in XML .---XML est un format actuellement tr\`es utilis\'e. Dans le cadre des formations en informatique, il est indispensable d'initier les \'etudiants \`a ce format et, surtout, \`a tout son \'eco-syst\`eme. Nous avons donc mis au point un mod\`ele permettant d'appuyer l'enseignement de XML. Ce mod\`ele propose de repr\'esenter un sch\'ema XML sous la forme d'un graphe mettant en valeur les caract\'eristiques structurelles des documents valides. Nous pr\'esentons dans ce rapport les diff\'erents \'el\'ements graphique du mod\`ele et les am\'eliorations qu'il apporte \`a la mod\'elisation de donn\'ees en XML .
[ { "type": "A", "before": null, "after": "and the improvements it brings to data modeling in XML", "start_char_pos": 416, "end_char_pos": 416 }, { "type": "A", "before": null, "after": "et les am\\'eliorations qu'il apporte \\`a la mod\\'elisation de donn\\'ees en XML", "start_char_pos": 948, "end_char_pos": 948 } ]
[ 0, 39, 173, 349, 469, 619, 861 ]
1311.2216
1
Fluctuating environments pose tremendous challenges to bacterial populations. It is widely observed in numerous bacterial species that individual bacterial cells will stochastically switch among multiple phenotypes to survive in rapidly changing environments. This phenotypic heterogeneity with stochastic phenotypic switching is generally assumed to be an adaptive bet-hedging strategy. To gain a deeper understanding how bet-hedging is achieved and the pattern and information behind experimental data , a mathematical model is needed . Traditional deterministic models cannot provide a correct description of stochastic phenotype switching , and besides, recent research has demonstrated that cellular processes during gene expression are inherently stochastic . In this article, we proposed a unified nonlinear stochastic model of multistable bacterial systems at the molecular level. We presented a mathematical explanation of phenotypic heterogeneity, stochastic phenotype switching , and bet-hedging within isogenic bacterial populations, and thus provided a theoretical framework for the analysis of experiment dataat the cellular or molecular level . In addition, we also provided a quantitative characterization of the critical state during the transition among multiple phenotypes .
Fluctuating environments pose tremendous challenges to bacterial populations. It is widely observed in numerous bacterial species that individual bacterial cells will stochastically switch among multiple phenotypes to survive in rapidly changing environments. This phenotypic heterogeneity with stochastic phenotypic switching is generally assumed to be an adaptive bet-hedging strategy. To gain a deeper understanding how bet-hedging is achieved and the pattern and information behind experimental data , a mathematical model is needed . Traditional deterministic models cannot provide a correct description of stochastic phenotype switching , and besides, recent research has demonstrated that cellular processes during gene expression are inherently stochastic . In this article, we proposed a unified nonlinear stochastic model of multistable bacterial systems at the molecular level. We presented a mathematical explanation of phenotypic heterogeneity, stochastic phenotype switching , and bet-hedging within isogenic bacterial populations, and thus provided a theoretical framework for the analysis of experiment data at the cellular or molecular level . In addition, we also provided a quantitative characterization of the critical state during the transition among multiple phenotypes .
[ { "type": "D", "before": "widely", "after": null, "start_char_pos": 84, "end_char_pos": 90 }, { "type": "R", "before": "bacterial cells will", "after": "cells can", "start_char_pos": 146, "end_char_pos": 166 }, { "type": "A", "before": null, "after": "for the population", "start_char_pos": 215, "end_char_pos": 215 }, { "type": "A", "before": null, "after": "kind of", "start_char_pos": 266, "end_char_pos": 266 }, { "type": "R", "before": "phenotypic", "after": "phenotype", "start_char_pos": 308, "end_char_pos": 318 }, { "type": "R", "before": "assumed", "after": "understood", "start_char_pos": 342, "end_char_pos": 349 }, { "type": "R", "before": "To", "after": "Mathematical models are essential to", "start_char_pos": 390, "end_char_pos": 392 }, { "type": "R", "before": "understanding how", "after": "insight into the principle behind", "start_char_pos": 407, "end_char_pos": 424 }, { "type": "D", "before": "is achieved", "after": null, "start_char_pos": 437, "end_char_pos": 448 }, { "type": "D", "before": "and information", "after": null, "start_char_pos": 465, "end_char_pos": 480 }, { "type": "D", "before": ", a mathematical model is needed", "after": null, "start_char_pos": 506, "end_char_pos": 538 }, { "type": "R", "before": ", and besides, recent research has demonstrated that cellular processes during gene expression are inherently stochastic", "after": "and bet-hedging, and traditional Markov chain models at the cellular level fail to explain their underlying molecular mechanisms", "start_char_pos": 645, "end_char_pos": 765 }, { "type": "R", "before": "article, we proposed a unified", "after": "paper, we propose a", "start_char_pos": 776, "end_char_pos": 806 }, { "type": "R", "before": "We presented a mathematical explanation of phenotypic heterogeneity,", "after": "It turns out that our model not only provides a clear description of", "start_char_pos": 891, "end_char_pos": 959 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 991, "end_char_pos": 992 }, { "type": "R", "before": "and thus provided a theoretical framework for", "after": "but also provides a deeper insight into", "start_char_pos": 1048, "end_char_pos": 1093 }, { "type": "R", "before": "experiment dataat the cellular or molecular level", "after": "multidimensional experimental data. Moreover, we use some deep mathematical theories to show that our stochastic model and traditional Markov chain models are essentially consistent and reflect the dynamic behavior of the bacterial system at two different time scales", "start_char_pos": 1110, "end_char_pos": 1159 }, { "type": "R", "before": "also provided", "after": "provide", "start_char_pos": 1178, "end_char_pos": 1191 }, { "type": "R", "before": "during the transition among multiple phenotypes", "after": "of multistable bacterial systems and develop an effective data-driven method to identify the critical state without resorting to specific mathematical models", "start_char_pos": 1246, "end_char_pos": 1293 } ]
[ 0, 77, 260, 389, 540, 767, 890, 1161 ]
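The traditional cellular-level Markov chain picture that this abstract contrasts with its molecular-level model can be simulated in a few lines; the switching probabilities below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(5)
    p_switch = {"A": 0.01, "B": 0.05}   # per-step probability of leaving each phenotype
    state, count_A = "A", 0
    steps = 100_000
    for _ in range(steps):
        if rng.random() < p_switch[state]:
            state = "B" if state == "A" else "A"
        count_A += state == "A"
    # Stationary fraction of A is p_switch["B"] / (p_switch["A"] + p_switch["B"]) ~ 0.83
    print(f"fraction of time in phenotype A: {count_A / steps:.2f}")

A model of this kind reproduces phenotype proportions but, as the abstract argues, says nothing about the underlying molecular mechanism.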
1311.2550
1
From the Hamilton-Jacobi-Bellman equation for the value function we derive a non-linear partial differential equation for the optimal portfolio strategy (the dynamic control). The equation is general in the sense , that it does not depend on the terminal utility , and provides additional analytical insight for some optimal investment problems with known solution. Furthermore when boundary conditions for the optimal strategy can be established independently, it is considerably simpler than the HJB to solve numerically. Using this method we calculate the Kelly growth optimal strategy subject to a periodically resetting stop-loss rule.
From the Hamilton-Jacobi-Bellman equation for the value function we derive a non-linear partial differential equation for the optimal portfolio strategy (the dynamic control). The equation is general in the sense that it does not depend on the terminal utility and provides additional analytical insight for some optimal investment problems with known solutions. Furthermore, when boundary conditions for the optimal strategy can be established independently, it is considerably simpler than the HJB to solve numerically. Using this method we calculate the Kelly growth optimal strategy subject to a periodically reset stop-loss rule.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 213, "end_char_pos": 214 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 263, "end_char_pos": 264 }, { "type": "R", "before": "solution. Furthermore", "after": "solutions. Furthermore,", "start_char_pos": 356, "end_char_pos": 377 }, { "type": "R", "before": "resetting", "after": "reset", "start_char_pos": 615, "end_char_pos": 624 } ]
[ 0, 175, 365, 523 ]
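A note on the record layout, since every row above repeats it: edit_actions is a list of actions of type "R" (replace), "A" (add) or "D" (delete), each carrying character offsets into before_revision. The Python sketch below shows one plausible way to replay those actions; the function name apply_edit_actions is my own, the right-to-left order is an inferred convention, the actions are assumed already parsed from JSON (so null becomes None), and the stored after_revision appears to normalize whitespace (the two comma deletions in record 1311.2550 above would otherwise leave doubled spaces), so the result is compared modulo spacing.

    def apply_edit_actions(before_revision, edit_actions):
        # Replay actions right-to-left so earlier character offsets stay
        # valid while later spans are rewritten; assumes non-overlapping
        # spans, which holds for the records shown here.
        text = before_revision
        for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
            start, end = act["start_char_pos"], act["end_char_pos"]
            replacement = act["after"] if act["after"] is not None else ""  # "D" rows carry after = null
            text = text[:start] + replacement + text[end:]
        # Deletions leave doubled spaces that the stored after_revision
        # collapses, so normalize whitespace before comparing.
        return " ".join(text.split())

Replaying the four actions of record 1311.2550 this way reproduces its after_revision up to spacing.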
1311.2707
1
Assuming that the effect is a mathematical function of the cause in a causal relationship, FunChisq, a chi-square test defined on a non-parametric representation of interactions, infers network topology considering both interaction directionality and nonlinearity. Here we show that both experimental and in silico biological network data suggest the importance of directionality as evidence for causality. Counter-intuitively, patterns in those interactions effectively revealed by FunChisq enlist a network inference principle of applying perturbations to a biological system such that it transits between linear and nonlinear working zones, instead of operates only at a linear working zone.
With the assumption that the effect is a mathematical function of the cause in a causal relationship, FunChisq, a chi-square test defined on a non-parametric representation of interactions, infers network topology considering both interaction directionality and nonlinearity. Here we show that both experimental and in silico biological network data suggest the importance of directionality as evidence for causality. Counter-intuitively, patterns in those interactions effectively revealed by FunChisq enlist an experimental design principle essential to network inference -- perturbations to a biological system shall make it transits between linear and nonlinear working zones, instead of operating only in a linear working zone.
[ { "type": "R", "before": "Assuming", "after": "With the assumption", "start_char_pos": 0, "end_char_pos": 8 }, { "type": "R", "before": "a network inference principle of applying", "after": "an experimental design principle essential to network inference --", "start_char_pos": 499, "end_char_pos": 540 }, { "type": "R", "before": "such that", "after": "shall make", "start_char_pos": 578, "end_char_pos": 587 }, { "type": "R", "before": "operates only at", "after": "operating only in", "start_char_pos": 655, "end_char_pos": 671 } ]
[ 0, 264, 406 ]
1311.5036
1
We study the probabilistic and statistical properties of the variation based realized third and fourth moments of financial returns . The realized moments of the return are unbiased and relative efficient estimators for the actual moments of the return distribution under a martingale condition in the return process. For the estimation of a stochastic volatility model , we employ a simple method of estimation and a generalized method of moments estimation based on the realized second and third moments . Conditional thin tale property of the return distribution with given quadratic variation of the return is discussed. We explain the structure of moments variation swaps and analyze the thin tale property of the portfolio return hedged by the third moment variation swap .
We discuss the probabilistic properties of the variation based third and fourth moments of financial returns as estimators of the actual moments of the return distributions. The moment variations are defined under non-parametric assumptions with quadratic variation method but for the computational tractability, we use a square root stochastic volatility model for the derivations of moment conditions for estimations. Using the S\&P 500 index high frequency data, the realized versions of the moment variations is used for the estimation of a stochastic volatility model . We propose a simple estimation method of a stochastic volatility model using the sample averages of the variations and ARMA estimation. In addition, we compare the results with a generalized method of moments estimation based on the successive relation between realized moments and their lagged values .
[ { "type": "R", "before": "study the probabilistic and statistical", "after": "discuss the probabilistic", "start_char_pos": 3, "end_char_pos": 42 }, { "type": "D", "before": "realized", "after": null, "start_char_pos": 77, "end_char_pos": 85 }, { "type": "R", "before": ". The realized moments of the return are unbiased and relative efficient estimators for the", "after": "as estimators of the", "start_char_pos": 132, "end_char_pos": 223 }, { "type": "R", "before": "distribution under a martingale condition in the return process. For the", "after": "distributions. The moment variations are defined under non-parametric assumptions with quadratic variation method but for the computational tractability, we use a square root stochastic volatility model for the derivations of moment conditions for estimations. Using the S\\&P 500 index high frequency data, the realized versions of the moment variations is used for the", "start_char_pos": 253, "end_char_pos": 325 }, { "type": "R", "before": ", we employ a simple method of estimation and", "after": ". We propose a simple estimation method of a stochastic volatility model using the sample averages of the variations and ARMA estimation. In addition, we compare the results with", "start_char_pos": 370, "end_char_pos": 415 }, { "type": "R", "before": "realized second and third moments . Conditional thin tale property of the return distribution with given quadratic variation of the return is discussed. We explain the structure of moments variation swaps and analyze the thin tale property of the portfolio return hedged by the third moment variation swap", "after": "successive relation between realized moments and their lagged values", "start_char_pos": 472, "end_char_pos": 777 } ]
[ 0, 133, 317, 624 ]
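sents_char_pos appears to hold the offset at which each sentence of before_revision begins, the first entry always being 0; in record 1311.5036 above, for example, the action starting at offset 253 then falls in the second sentence. A hypothetical helper (the name sentence_of_edit is mine) that maps an action to the sentence containing its start:

    import bisect

    def sentence_of_edit(sents_char_pos, edit_action):
        # bisect_right finds the first sentence boundary strictly past the
        # action's start, so subtracting one yields the zero-based index of
        # the sentence in which the edit begins (an edit may run past it).
        return bisect.bisect_right(sents_char_pos, edit_action["start_char_pos"]) - 1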
1311.5120
1
Various types of pooled annuity funds that enable a group of individuals to pool their mortality risk have been proposed in the literature. We determine the relationship between some of these structures and we show thatthey are not all actuarially fair .
Various types of pooled annuity funds that enable a group of individuals to pool their mortality risk have been proposed in the literature. We discuss the importance of actuarial fairness, defined as the expected benefits equalling the contributions for each member, and whether actuarial unfairness can be seen as solidarity between members. We show that, with a finite number of members in the fund, the group self-annuitization scheme is not actuarially fair: some members subsidise the other members. The implication is that the members who are subsidising the others may obtain a higher expected benefit by joining a fund with a more favourable membership profile. Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile .
[ { "type": "R", "before": "determine the relationship between some of these structures and we show thatthey are not all actuarially fair", "after": "discuss the importance of actuarial fairness, defined as the expected benefits equalling the contributions for each member, and whether actuarial unfairness can be seen as solidarity between members. We show that, with a finite number of members in the fund, the group self-annuitization scheme is not actuarially fair: some members subsidise the other members. The implication is that the members who are subsidising the others may obtain a higher expected benefit by joining a fund with a more favourable membership profile. Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile", "start_char_pos": 143, "end_char_pos": 252 } ]
[ 0, 139 ]
1311.5120
2
Various types of pooled annuity funds that enable a group of individuals to pool their mortality risk have been proposed in the literature. We discuss the importance of actuarial fairness, defined as the expected benefits equalling the contributions for each member, and whether actuarial unfairness can be seen as solidarity between members. We show that, with a finite number of members in the fund, the group self-annuitization scheme is not actuarially fair: some members subsidise the other members. The implication is that the members who are subsidising the others may obtain a higher expected benefit by joining a fund with a more favourable membership profile. Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile .
Various types of structures that enable a group of individuals to pool their mortality risk have been proposed in the literature. Collectively, the structures are called pooled annuity funds. Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile. We discuss the importance of actuarial fairness, defined as the expected benefits equalling the contributions for each member, in the context of pooling mortality risk and comment on whether actuarial unfairness can be seen as solidarity between members. We show that, with a finite number of members in the fund, the group self-annuitization scheme is not actuarially fair: some members subsidize the other members. The implication is that the members who are subsidizing the others may obtain a higher expected benefit by joining a fund with a more favourable membership profile. However, we find that the subsidies are financially significant only for very small or highly heterogeneous membership profiles .
[ { "type": "R", "before": "pooled annuity funds", "after": "structures", "start_char_pos": 17, "end_char_pos": 37 }, { "type": "A", "before": null, "after": "Collectively, the structures are called pooled annuity funds. Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile.", "start_char_pos": 140, "end_char_pos": 140 }, { "type": "R", "before": "and", "after": "in the context of pooling mortality risk and comment on", "start_char_pos": 268, "end_char_pos": 271 }, { "type": "R", "before": "subsidise", "after": "subsidize", "start_char_pos": 477, "end_char_pos": 486 }, { "type": "R", "before": "subsidising", "after": "subsidizing", "start_char_pos": 550, "end_char_pos": 561 }, { "type": "R", "before": "Since the pooled annuity funds propose different methods of pooling mortality risk, we investigate the connections between them and find that they are genuinely different for a finite heterogeneous membership profile", "after": "However, we find that the subsidies are financially significant only for very small or highly heterogeneous membership profiles", "start_char_pos": 671, "end_char_pos": 887 } ]
[ 0, 139, 343, 505, 670 ]
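Several doc_ids in this dump recur at successive revision_depth values, and the depth-1 after_revision of 1311.5120 matches, up to tokenization spacing, its depth-2 before_revision, so the depths chain into successive drafts of one abstract. A hypothetical grouping helper, assuming the rows have been loaded as dicts with the field names used here:

    from collections import defaultdict

    def build_revision_chains(records):
        # Group rows by doc_id and order each group by revision_depth so a
        # chain reads as consecutive drafts of the same abstract.
        chains = defaultdict(list)
        for rec in records:
            chains[rec["doc_id"]].append(rec)
        for chain in chains.values():
            chain.sort(key=lambda r: int(r["revision_depth"]))
        return chains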
1311.5511
1
The fundamental concepts of the Unified Growth Theory , the three stages of growth (Malthusian Regime, Post-Malthusian Regime and Modern Growth Regime) are contradicted by data. The three stages of growth did not exist and the Industrial Revolution had no effect on the world economic growthand on the growth of human population. Whatever the Unified Growth Theory is describing it is not describing the economic growth and the growth of human population .
The Unified Growth Theory is a story based firmly on illusions created by hyperbolic distributions. The three stages of growth (Malthusian Regime, Post-Malthusian Regime and Modern Growth Regime) did not exist . The great divergence and the abrupt take-off never happened. All elaborate explanations revolving around these phantom features represent an interesting story but they are scientifically unacceptable and, therefore, they do not explain the economic growth. The Industrial Revolution had no effect on the economic growth. The data clearly indicate that the economic growth was not as complicated and untidy as incorrectly described by the Unified Growth Theory but elegantly simple .
[ { "type": "D", "before": "fundamental concepts of the", "after": null, "start_char_pos": 4, "end_char_pos": 31 }, { "type": "R", "before": ", the", "after": "is a story based firmly on illusions created by hyperbolic distributions. The", "start_char_pos": 54, "end_char_pos": 59 }, { "type": "D", "before": "are contradicted by data. The three stages of growth", "after": null, "start_char_pos": 152, "end_char_pos": 204 }, { "type": "R", "before": "and the", "after": ". The great divergence and the abrupt take-off never happened. All elaborate explanations revolving around these phantom features represent an interesting story but they are scientifically unacceptable and, therefore, they do not explain the economic growth. The", "start_char_pos": 219, "end_char_pos": 226 }, { "type": "R", "before": "world economic growthand on the growth of human population. Whatever the Unified Growth Theory is describing it is not describing the economic growth and the growth of human population", "after": "economic growth. The data clearly indicate that the economic growth was not as complicated and untidy as incorrectly described by the Unified Growth Theory but elegantly simple", "start_char_pos": 270, "end_char_pos": 454 } ]
[ 0, 177, 329 ]
1311.5511
2
The Unified Growth Theory is a story based firmly on illusions created by hyperbolic distributions. The three stages of growth (Malthusian Regime, Post-Malthusian Regime and Modern Growth Regime) did not exist . The great divergence and the abrupt take-off never happened . All elaborate explanations revolving around these phantom features represent an interesting story but they are scientifically unacceptable and, therefore , they do not explain the economic growth. The Industrial Revolution had no effect on the economic growth. The data clearly indicate that the economic growth was not as complicated and untidy as incorrectly described by the Unified Growth Theory but elegantly simple.
The Unified Growth Theory is a puzzling collection of myths based on illusions created by hyperbolic distributions. Some of these myths are discussed. The examination of data shows that the three stages of growth (Malthusian Regime, Post-Malthusian Regime and Modern Growth Regime) did not exist and that Industrial Revolution had no influence on the economic growth and on the growth of human population . All elaborate explanations revolving around phantom features created by hyperbolic illusions might be fascinating but they are scientifically unacceptable and, consequently , they do not explain the economic growth. The data clearly indicate that the economic growth was not as complicated as described by the Unified Growth Theory but elegantly simple.
[ { "type": "R", "before": "story based firmly", "after": "puzzling collection of myths based", "start_char_pos": 31, "end_char_pos": 49 }, { "type": "R", "before": "The", "after": "Some of these myths are discussed. The examination of data shows that the", "start_char_pos": 100, "end_char_pos": 103 }, { "type": "R", "before": ". The great divergence and the abrupt take-off never happened", "after": "and that Industrial Revolution had no influence on the economic growth and on the growth of human population", "start_char_pos": 210, "end_char_pos": 271 }, { "type": "R", "before": "these phantom features represent an interesting story", "after": "phantom features created by hyperbolic illusions might be fascinating", "start_char_pos": 318, "end_char_pos": 371 }, { "type": "R", "before": "therefore", "after": "consequently", "start_char_pos": 418, "end_char_pos": 427 }, { "type": "D", "before": "Industrial Revolution had no effect on the economic growth. The", "after": null, "start_char_pos": 475, "end_char_pos": 538 }, { "type": "R", "before": "and untidy as incorrectly", "after": "as", "start_char_pos": 609, "end_char_pos": 634 } ]
[ 0, 99, 211, 273, 470, 534 ]
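In the records shown here, every action whose before field is not null quotes exactly the substring of before_revision that its offsets index (in record 1311.2550, for instance, positions 356-377 are precisely "solution. Furthermore"). A small consistency check for that invariant; treating a mismatch as an assertion failure is my own choice:

    def check_action_offsets(before_revision, edit_actions):
        # Each non-null `before` should equal the span it indexes; a
        # mismatch would signal stale offsets or a mangled string.
        for act in edit_actions:
            if act.get("before") is not None:
                span = before_revision[act["start_char_pos"]:act["end_char_pos"]]
                assert span == act["before"], (span, act["before"])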
1311.6080
1
It is well-known that an \R^n-valued random vector (X_1, X_2, \cdots, X_n) is comonotonic if and only if (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) coincide in distribution, for any uniformly distributed random variable U on the unit interval , where Q_k is the quantile function of X_k, k=1,2,\cdots, n. It is natural to ask whether (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) can coincide almost surely for some special U. In this paper, we give a positive answer to this question via construction. We then apply this result to consider a general investment problem with a law-invariant preference measure in a financial market . We show that any optimal output should be anti-comonotonic with the market pricing kernel. Unlike previous studies, our approach avoids making the assumption that the pricing kernel is atomless, and we overcome one of the major difficulties encountered when one considers general economic equilibrium models in which the pricing kernel is a yet-to-be-determined unknown random variable .
It is well-known that an \R^n-valued random vector (X_1, X_2, \cdots, X_n) is comonotonic if and only if (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) coincide in distribution, for any random variable U uniformly distributed on the unit interval (0,1), where Q_k (\cdot) are the quantile functions of X_k, k=1,2,\cdots, n. It is natural to ask whether (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) can coincide almost surely for some special U. In this paper, we give a positive answer to this question by construction. We then apply this result to a general behavioral investment model with a law-invariant preference measure and develop a universal framework to link the problem to its quantile formulation . We show that any optimal investment output should be anti-comonotonic with the market pricing kernel. Unlike previous studies, our approach avoids making the assumption that the pricing kernel is atomless, and consequently, we overcome one of the major difficulties encountered when one considers behavioral economic equilibrium models in which the pricing kernel is a yet-to-be-determined unknown random variable . The method is applicable to many other models such as risk sharing model .
[ { "type": "D", "before": "uniformly distributed", "after": null, "start_char_pos": 199, "end_char_pos": 220 }, { "type": "A", "before": null, "after": "uniformly distributed", "start_char_pos": 239, "end_char_pos": 239 }, { "type": "R", "before": ",", "after": "(0,1),", "start_char_pos": 261, "end_char_pos": 262 }, { "type": "R", "before": "is the quantile function", "after": "(\\cdot) are the quantile functions", "start_char_pos": 273, "end_char_pos": 297 }, { "type": "R", "before": "via", "after": "by", "start_char_pos": 517, "end_char_pos": 520 }, { "type": "R", "before": "consider a general investment problem", "after": "a general behavioral investment model", "start_char_pos": 564, "end_char_pos": 601 }, { "type": "R", "before": "in a financial market", "after": "and develop a universal framework to link the problem to its quantile formulation", "start_char_pos": 642, "end_char_pos": 663 }, { "type": "A", "before": null, "after": "investment", "start_char_pos": 691, "end_char_pos": 691 }, { "type": "A", "before": null, "after": "consequently,", "start_char_pos": 866, "end_char_pos": 866 }, { "type": "R", "before": "general", "after": "behavioral", "start_char_pos": 940, "end_char_pos": 947 }, { "type": "A", "before": null, "after": ". The method is applicable to many other models such as risk sharing model", "start_char_pos": 1054, "end_char_pos": 1054 } ]
[ 0, 458, 534, 665, 757 ]
1311.6080
2
It is well-known that an %DIFDELCMD < \R%%% ^n -valued random vector (X_1, X_2, \cdots, X_n) is comonotonic if and only if (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) coincide in distribution, for any random variable U uniformly distributed on the unit interval (0,1), where Q_k(\cdot) are the quantile functions of X_k, k=1,2,\cdots, n. It is natural to ask whether (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) can coincide almost surely for some special U. In this paper, we give a positive answer to this question by construction. We then apply this result to a general behavioral investment model with a law-invariant preference measure and develop a universal framework to link the problem to its quantile formulation. We show that any optimal investment output should be anti-comonotonic with the market pricing kernel. Unlike previous studies, our approach avoids making the assumption that the pricing kernel is atomless, and consequently, we overcome one of the major difficulties encountered when one considers behavioral economic equilibrium models in which the pricing kernel is a yet-to-be-determined unknown random variable. The method is applicable to many other models such as risk sharing model.
It is well-known that an %DIFDELCMD < \R%%% \mathbb{R -valued random vector (X_1, X_2, \cdots, X_n) is comonotonic if and only if (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) coincide in distribution, for any random variable U uniformly distributed on the unit interval (0,1), where Q_k(\cdot) are the quantile functions of X_k, k=1,2,\cdots, n. It is natural to ask whether (X_1, X_2, \cdots, X_n) and (Q_1(U), Q_2(U),\cdots, Q_n(U)) can coincide almost surely for some special U. In this paper, we give a positive answer to this question by construction. We then apply this result to a general behavioral investment model with a law-invariant preference measure and develop a universal framework to link the problem to its quantile formulation. We show that any optimal investment output should be anti-comonotonic with the market pricing kernel. Unlike previous studies, our approach avoids making the assumption that the pricing kernel is atomless, and consequently, we overcome one of the major difficulties encountered when one considers behavioral economic equilibrium models in which the pricing kernel is a yet-to-be-determined unknown random variable. The method is applicable to many other models such as risk sharing model.
[ { "type": "R", "before": "^n", "after": "\\mathbb{R", "start_char_pos": 44, "end_char_pos": 46 } ]
[ 0, 489, 564, 754, 856, 1169 ]
1311.6187
1
We show that Lyons' rough path integral is a natural tool to use in model free financial mathematics by proving that it is possible to make an arbitrarily large profit by investing in those paths which do not have a rough path associated to them. We also show that in certain situations, the rough path integral can be constructed as a limit of Riemann sums, and not just as a limit of compensated Riemann sums which are usually used to define it. This proves that the rough path integral is really an extension of F\"ollmer's pathwise It\^o integral. Moreover, we construct a "model free It\^o integral" in the spirit of Karandikar .
We present two different approaches to stochastic integration in frictionless model free financial mathematics . The first one is in the spirit of It\^o's integral and based on a certain topology which is induced by the outer measure corresponding to the minimal superhedging price. The second one is based on the controlled rough path integral. We prove that every "typical price path" has a naturally associated It\^o rough path, and justify the application of the controlled rough path integral in finance by showing that it is the limit of non-anticipating Riemann sums, a new result in itself. Compared to the first approach, rough paths have the disadvantage of severely restricting the space of integrands, but the advantage of being a Banach space theory. Both approaches are based entirely on financial arguments and do not require any probabilistic structure .
[ { "type": "R", "before": "show that Lyons' rough path integral is a natural tool to use in", "after": "present two different approaches to stochastic integration in frictionless", "start_char_pos": 3, "end_char_pos": 67 }, { "type": "R", "before": "by proving that it is possible to make an arbitrarily large profit by investing in those paths which do not have a", "after": ". The first one is in the spirit of It\\^o's integral and based on a certain topology which is induced by the outer measure corresponding to the minimal superhedging price. The second one is based on the controlled rough path integral. We prove that every \"typical price path\" has a naturally associated It\\^o rough path, and justify the application of the controlled", "start_char_pos": 101, "end_char_pos": 215 }, { "type": "D", "before": "associated to them. We also show that in certain situations, the rough path", "after": null, "start_char_pos": 227, "end_char_pos": 302 }, { "type": "R", "before": "can be constructed as a limit of", "after": "in finance by showing that it is the limit of non-anticipating", "start_char_pos": 312, "end_char_pos": 344 }, { "type": "R", "before": "and not just as a limit of compensated Riemann sums which are usually used to define it. This proves that the rough path integral is really an extension of F\\\"ollmer's pathwise It\\^o integral. Moreover, we construct a \"model free It\\^o integral\" in the spirit of Karandikar", "after": "a new result in itself. Compared to the first approach, rough paths have the disadvantage of severely restricting the space of integrands, but the advantage of being a Banach space theory. Both approaches are based entirely on financial arguments and do not require any probabilistic structure", "start_char_pos": 359, "end_char_pos": 632 } ]
[ 0, 246, 447, 551 ]
1311.7075
1
A biomimetic minimalist model membrane is used to study the mechanism and kinetics of the in vitro HIV-1 Gag budding from a giant unilamellar vesicle (GUV). The real time interaction of the Gag, RNA and lipid leading to the formation of minivesicles is measured in real time using confocal microscopy. The Gag is found to lead to resolution limited punctae on the lipid membranes of the GUV . The introduction of the Gag to a GUV solution containing RNA led to the budding of minivesicles on the inside surface of the GUV. The diameter of the GUV decreased due to the bud formation. The corresponding rate of decrease of the GUV diameter was found to be linear in time. The bud formation and the decrease in GUV size were found to be proportional to the Gag concentration. The method is promising and will allow the systematic study of the dynamics of assembly of immature HIV and help classify the hierarchy of factors that impact the Gag protein initiated assembly of retroviruses such as HIV . The GUV system might also be a good platform for HIV-1 drug screening .
A biomimetic minimalist model membrane was used to study the mechanism and kinetics of cell-free in vitro HIV-1 Gag budding from a giant unilamellar vesicle (GUV). Real time interaction of Gag, RNA and lipid leading to the formation of mini-vesicles was measured using confocal microscopy. Gag forms resolution limited punctae on the GUV lipid membrane. Introduction of the Gag and urea to a GUV solution containing RNA led to the budding of mini-vesicles on the inside surface of the GUV. The GUV diameter showed a linear decrease in time due to bud formation. Both bud formation and decrease in GUV size were proportional to Gag concentration. In the absence of RNA, addition of urea to GUVs incubated with Gag also resulted in subvesicle formation but exterior to the surface. These observations suggest the possibility that clustering of GAG proteins leads to membrane invagination even in the absence of host cell proteins. The method presented here is promising, and allows for systematic study of the dynamics of assembly of immature HIV and help classify the hierarchy of factors that impact the Gag protein initiated assembly of retroviruses such as HIV .
[ { "type": "R", "before": "is", "after": "was", "start_char_pos": 39, "end_char_pos": 41 }, { "type": "R", "before": "the", "after": "cell-free", "start_char_pos": 86, "end_char_pos": 89 }, { "type": "R", "before": "The real", "after": "Real", "start_char_pos": 157, "end_char_pos": 165 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 186, "end_char_pos": 189 }, { "type": "R", "before": "minivesicles is measured in real time", "after": "mini-vesicles was measured", "start_char_pos": 237, "end_char_pos": 274 }, { "type": "R", "before": "The Gag is found to lead to", "after": "Gag forms", "start_char_pos": 302, "end_char_pos": 329 }, { "type": "R", "before": "lipid membranes of the GUV . The introduction", "after": "GUV lipid membrane. Introduction", "start_char_pos": 364, "end_char_pos": 409 }, { "type": "A", "before": null, "after": "and urea", "start_char_pos": 421, "end_char_pos": 421 }, { "type": "R", "before": "minivesicles", "after": "mini-vesicles", "start_char_pos": 477, "end_char_pos": 489 }, { "type": "R", "before": "diameter of the GUV decreased due to the", "after": "GUV diameter showed a linear decrease in time due to", "start_char_pos": 528, "end_char_pos": 568 }, { "type": "R", "before": "The corresponding rate of decrease of the GUV diameter was found to be linear in time. The", "after": "Both", "start_char_pos": 584, "end_char_pos": 674 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 693, "end_char_pos": 696 }, { "type": "R", "before": "found to be proportional to the", "after": "proportional to", "start_char_pos": 723, "end_char_pos": 754 }, { "type": "R", "before": "The method is promising and will allow the", "after": "In the absence of RNA, addition of urea to GUVs incubated with Gag also resulted in subvesicle formation but exterior to the surface. These observations suggest the possibility that clustering of GAG proteins leads to membrane invagination even in the absence of host cell proteins. The method presented here is promising, and allows for", "start_char_pos": 774, "end_char_pos": 816 }, { "type": "D", "before": ". The GUV system might also be a good platform for HIV-1 drug screening", "after": null, "start_char_pos": 996, "end_char_pos": 1067 } ]
[ 0, 156, 301, 523, 583, 670, 773 ]
1312.0128
1
We present a dialogue on Funding Costs and Counterparty Credit Risk modeling, inclusive of collateral, wrong way risk, gap risk and possible Central Clearing implementation through CCPs. This dialogue is the continuation of the previous FAQ "Counterparty Risk, Collateral and Funding FAQ" by Brigo (2011). In this dialogue we focus more on funding costs for the hedging strategy of a portfolio of trades, on the non-linearities emerging from assuming borrowing and lending rates to be different, on the resulting aggregation-dependent valuation process and its operational challenges, on the closeout boundary conditions at default, on the implications of the onset of central clearing, on the macro and micro effects on valuation and risk of the onset of CCPs, on initial and variation margins impact on valuation and on multiple discount curves. Through questions and answers and by referring to the growing body of literature on the subject we present a unified view of valuation (and risk) that takes all such aspects into account . We argue that the full onset of CCPs will not lead to the end of valuation and risk models but rather to the use of such models to verify and check that CCPs margin pricing is fair and adequate for the risks being covered. The dialogue is in the form of a Q A between a senior expert and a recently hired colleague .
We present a dialogue on Funding Costs and Counterparty Credit Risk modeling, inclusive of collateral, wrong way risk, gap risk and possible Central Clearing implementation through CCPs. This framework is important following the fact that derivatives valuation and risk analysis has moved from exotic derivatives managed on simple single asset classes to simple derivatives embedding the new or previously neglected types of complex and interconnected nonlinear risks we address here. This dialogue is the continuation of the "Counterparty Risk, Collateral and Funding FAQ" by Brigo (2011). In this dialogue we focus more on funding costs for the hedging strategy of a portfolio of trades, on the non-linearities emerging from assuming borrowing and lending rates to be different, on the resulting aggregation-dependent valuation process and its operational challenges, on the implications of the onset of central clearing, on the macro and micro effects on valuation and risk of the onset of CCPs, on initial and variation margins impact on valuation , and on multiple discount curves. Through questions and answers (Q A) between a senior expert and a junior colleague, and by referring to the growing body of literature on the subject , we present a unified view of valuation (and risk) that takes all such aspects into account .
[ { "type": "A", "before": null, "after": "framework is important following the fact that derivatives valuation and risk analysis has moved from exotic derivatives managed on simple single asset classes to simple derivatives embedding the new or previously neglected types of complex and interconnected nonlinear risks we address here. This", "start_char_pos": 192, "end_char_pos": 192 }, { "type": "D", "before": "previous FAQ", "after": null, "start_char_pos": 229, "end_char_pos": 241 }, { "type": "D", "before": "closeout boundary conditions at default, on the", "after": null, "start_char_pos": 593, "end_char_pos": 640 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 816, "end_char_pos": 816 }, { "type": "R", "before": "and", "after": "(Q", "start_char_pos": 880, "end_char_pos": 883 }, { "type": "A", "before": null, "after": "A) between a senior expert and a junior colleague, and", "start_char_pos": 884, "end_char_pos": 884 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 947, "end_char_pos": 947 }, { "type": "D", "before": ". We argue that the full onset of CCPs will not lead to the end of valuation and risk models but rather to the use of such models to verify and check that CCPs margin pricing is fair and adequate for the risks being covered. The dialogue is in the form of a Q", "after": null, "start_char_pos": 1039, "end_char_pos": 1298 }, { "type": "D", "before": "A between a senior expert and a recently hired colleague", "after": null, "start_char_pos": 1299, "end_char_pos": 1355 } ]
[ 0, 186, 306, 405, 849, 1040, 1263 ]
1312.0323
1
I sketch a broad program for a microeconomic theory of the business cycle as a recurring episode of disequilibrium, driven by incompleteness of the financial market and by information asymmetries between borrowers and lenders. This proposal seeks to incorporate five distinct but connected processes that have been discussed at varying lengths in the literature: the leverage cycle, financial panic, debt deflation, debt overhang, and deleveraging of households. In the wake of the 2007 financial crisis, policy responses by central banks have addressed only financial panic and debt deflation. Debt overhang and the slowness of household deleveraging account for the Keynesian "excessive saving" seen in recessions, which raises questions about the suitability of the standard Keynesian remedies.
I sketch a program for a microeconomic theory of the main component of the business cycle as a recurring disequilibrium, driven by incompleteness of the financial market and by information asymmetries between borrowers and lenders. This proposal seeks to incorporate five distinct but connected processes that have been discussed at varying lengths in the literature: the leverage cycle, financial panic, debt deflation, debt overhang, and deleveraging of households. In the wake of the 2007-08 financial crisis, policy responses by central banks have addressed only financial panic and debt deflation. Debt overhang and the slowness of household deleveraging account for the Keynesian "excessive saving" seen in recessions, which raises questions about the suitability of the standard Keynesian remedies.
[ { "type": "D", "before": "broad", "after": null, "start_char_pos": 11, "end_char_pos": 16 }, { "type": "A", "before": null, "after": "main component of the", "start_char_pos": 59, "end_char_pos": 59 }, { "type": "D", "before": "episode of", "after": null, "start_char_pos": 90, "end_char_pos": 100 }, { "type": "R", "before": "2007", "after": "2007-08", "start_char_pos": 483, "end_char_pos": 487 } ]
[ 0, 227, 463, 595 ]
1312.0557
1
The asymptotic distribution of the Markowitz portfolio is derived, for the general case (assuming fourth moments of returns exist), and for the case of multivariate normal returns. The derivation allows for inference which is robust to heteroskedasticity and autocorrelation of moments up to order four. As a side effect, one can estimate the proportion of error in the Markowitz portfolio due to mis-estimation of the covariance matrix. A likelihood ratio test is given which generalizes Dempster's Covariance Selection test to allow inference on linear combinations of the precision matrix and the Markowitz portfolio. Extensions of the main method to deal with hedged portfolios, conditional heteroskedasticity, and conditional expectation are given .
The asymptotic distribution of the Markowitz portfolio is derived, for the general case (assuming fourth moments of returns exist), and for the case of multivariate normal returns. The derivation allows for inference which is robust to heteroskedasticity and autocorrelation of moments up to order four. As a side effect, one can estimate the proportion of error in the Markowitz portfolio due to mis-estimation of the covariance matrix. A likelihood ratio test is given which generalizes Dempster's Covariance Selection test to allow inference on linear combinations of the precision matrix and the Markowitz portfolio. Extensions of the main method to deal with hedged portfolios, conditional heteroskedasticity, conditional expectation, and constrained estimation are given. It is shown that the Hotelling-Lawley statistic generalizes the (squared) Sharpe ratio under the conditional expectation model. Asymptotic distributions of all four of the common `MGLH' statistics are found, assuming random covariates. Examples are given demonstrating the possible uses of these results .
[ { "type": "R", "before": "and conditional expectation are given", "after": "conditional expectation, and constrained estimation are given. It is shown that the Hotelling-Lawley statistic generalizes the (squared) Sharpe ratio under the conditional expectation model. Asymptotic distributions of all four of the common `MGLH' statistics are found, assuming random covariates. Examples are given demonstrating the possible uses of these results", "start_char_pos": 715, "end_char_pos": 752 } ]
[ 0, 180, 303, 437, 620 ]
1312.1195
1
Cytoskeleton is known as an important part of animal cells that supports various cellular functions while maintains the integrity of cells. Due to their diverse functions, different types of cells may have distinct cytoskeleton structures. Recent development in stochastic optical reconstruction microscopy (STORM) revealed the hitherto unknown periodic cytoskeleton structureof axons, which consists of co-axile actin rings connected by multiple parallel spectrins (Xu et al. Science 2013). In this experiment, the average spacing between adjacent actin rings, as well as the variance of the spacing, are measured. In this paper, we model the spectrins in this actin-spectrin network as worm-like chains(WLCs), which are stretched to lengths close to their contour lengths. The result attained shows that the observed variance in the separation between actin rings is consistent with the thermal fluctuation of spectrins predicted by the WLC model . The analytical result can be used as an alternative method to infer the contour length and persistence length of polymers by measuring their average extension and longitudinal fluctuations along the stretching (force ) direction. It also provides an additional criterion to check the region of validity of the WLC model .
The macroscopic properties, the properties of individual components, and how those components arrange themselves are three important aspects of a complex structure. Knowing two of them will provide us information of the third. Here we perform a theoretical study of a composited system that is slender and can be coarse-grained as a simple smooth 3-dimensional curve. Focusing on biological systems, especially the cytoskeletal networks, we show how the combination of the properties of the network and the individual components puts constraints on the local configurations and dynamics. When the network can be modeled as a single linear chain, similar as single polymer chains, its overall conformation is dominated by the competition between the internal energy and the thermal agitations. The conformational fluctuations of a composited chain reveal not only its elastic properties but also the local arrangements and dynamics of its components. We first show a general form of the internal energy of a coarse-grained composited chain, and discuss briefly how it is related to existing models and may contribute to building new models. Under certain limits this general expression of energy is reduced to the worm-like chain (WLC) model. Using this simplified energy in the strong-stretching limit, we obtain analytical solutions for all the cumulants of the end-to-end distance projected to the force direction . Finally we apply our results to recent experimental observations that revealed the hitherto unknown periodic cytoskeleton structure of axons and measured the longitudinal fluctuations, and show how the comparison between our results and experiments limits possible local configurations and dynamics of the spectrin tetramers in the axonal cytoskeleton .
[ { "type": "R", "before": "Cytoskeleton is known as an important part of animal cells that supports various cellular functions while maintains the integrity of cells. Due to their diverse functions, different types of cells may have distinct cytoskeleton structures. Recent development in stochastic optical reconstruction microscopy (STORM) revealed the hitherto unknown periodic cytoskeleton structureof axons, which consists of co-axile actin rings connected by multiple parallel spectrins (Xu et al. Science 2013). In this experiment,", "after": "The macroscopic properties, the properties of individual components, and how those components arrange themselves are three important aspects of a complex structure. Knowing two of them will provide us information of the third. Here we perform a theoretical study of a composited system that is slender and can be coarse-grained as a simple smooth 3-dimensional curve. Focusing on biological systems, especially the cytoskeletal networks, we show how the combination of the properties of the network and the individual components puts constraints on the local configurations and dynamics. When the network can be modeled as a single linear chain, similar as single polymer chains, its overall conformation is dominated by the competition between the internal energy and the thermal agitations. The conformational fluctuations of a composited chain reveal not only its elastic properties but also the local arrangements and dynamics of its components. We first show a general form of the internal energy of a coarse-grained composited chain, and discuss briefly how it is related to existing models and may contribute to building new models. Under certain limits this general expression of energy is reduced to the worm-like chain (WLC) model. Using this simplified energy in the strong-stretching limit, we obtain analytical solutions for all the cumulants of", "start_char_pos": 0, "end_char_pos": 511 }, { "type": "R", "before": "average spacing between adjacent actin rings, as well as the variance of the spacing, are measured. In this paper, we model the spectrins in this actin-spectrin network as worm-like chains(WLCs), which are stretched to lengths close to their contour lengths. The result attained shows that the observed variance in the separation between actin rings is consistent with the thermal fluctuation of spectrins predicted by the WLC model", "after": "end-to-end distance projected to the force direction", "start_char_pos": 516, "end_char_pos": 948 }, { "type": "R", "before": "The analytical result can be used as an alternative method to infer the contour length and persistence length of polymers by measuring their average extension and longitudinal fluctuations along the stretching (force ) direction. It also provides an additional criterion to check the region of validity of the WLC model", "after": "Finally we apply our results to recent experimental observations that revealed the hitherto unknown periodic cytoskeleton structure of axons and measured the longitudinal fluctuations, and show how the comparison between our results and experiments limits possible local configurations and dynamics of the spectrin tetramers in the axonal cytoskeleton", "start_char_pos": 951, "end_char_pos": 1270 } ]
[ 0, 139, 239, 476, 491, 615, 774, 950, 1180 ]
1312.1195
2
The macroscopic properties, the properties of individual components , and how those components arrange themselves are three important aspects of a complex structure. Knowing two of them will provide us information of the third. Here we perform a theoretical study of a composited system that is slender and can be coarse-grained as a simple smooth 3-dimensional curve. Focusing on biological systems, especially the cytoskeletal networks, we show how the combination of the properties of the network and the individual components puts constraints on the local configurations and dynamics. When the network can be modeled as a single linear chain, similar as single polymer chains, its overall conformation is dominated by the competition between the internal energy and the thermal agitations. The conformational fluctuations of a composited chain reveal not only its elastic properties but also the local arrangements and dynamics of its components. We first show a general form of the internal energy of a coarse-grained composited chain, and discuss briefly how it is related to existing models and may contribute to building new models. Under certain limits this general expression of energy is reduced to the worm-like chain (WLC) model . Using this simplified energy in the strong-stretching limit, we obtain analytical solutions for all the cumulants of the end-to-end distance projected to the force direction. Finally we apply our results to recent experimental observations that revealed the hitherto unknown periodic cytoskeleton structure of axons and measured the longitudinal fluctuations , and show how the comparison between our results and experiments limits possible local configurations and dynamics of the spectrin tetramers in the axonal cytoskeleton .
The macroscopic properties, the properties of individual components and how those components interact with each other are three important aspects of a composited structure. An understanding of the interplay between them is essential in the study of complex systems. Using axonal cytoskeleton as an example system, here we perform a theoretical study of slender structures that can be coarse-grained as a simple smooth 3-dimensional curve. We first present a generic model for such systems based on the fundamental theorem of curves. We use this generic model to demonstrate the applicability of the well-known worm-like chain (WLC) model to the network level and investigate the situation when the system is stretched by strong forces (weakly bending limit). We specifically studied recent experimental observations that revealed the hitherto unknown periodic cytoskeleton structure of axons and measured the longitudinal fluctuations . Instead of focusing on single molecules, we apply analytical results from the WLC model to both single molecule and network levels and focus on the relations between extensions and fluctuations. We show how this approach introduces constraints to possible local dynamics of the spectrin tetramers in the axonal cytoskeleton and finally suggests simple but self-consistent dynamics of spectrins in which the spectrins in one spatial period of axons fluctuate in-sync .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 68, "end_char_pos": 69 }, { "type": "R", "before": "arrange themselves", "after": "interact with each other", "start_char_pos": 95, "end_char_pos": 113 }, { "type": "R", "before": "complex structure. Knowing two of them will provide us information of the third. Here", "after": "composited structure. An understanding of the interplay between them is essential in the study of complex systems. Using axonal cytoskeleton as an example system, here", "start_char_pos": 147, "end_char_pos": 232 }, { "type": "R", "before": "a composited system that is slender and", "after": "slender structures that", "start_char_pos": 267, "end_char_pos": 306 }, { "type": "R", "before": "Focusing on biological systems, especially the cytoskeletal networks, we show how the combination of the properties of the network and the individual components puts constraints on the local configurations and dynamics. When the network can be modeled as a single linear chain, similar as single polymer chains, its overall conformation is dominated by the competition between the internal energy and the thermal agitations. The conformational fluctuations of a composited chain reveal not only its elastic properties but also the local arrangements and dynamics of its components. We first show a general form of the internal energy of a coarse-grained composited chain, and discuss briefly how it is related to existing models and may contribute to building new models. Under certain limits this general expression of energy is reduced to the", "after": "We first present a generic model for such systems based on the fundamental theorem of curves. We use this generic model to demonstrate the applicability of the well-known", "start_char_pos": 369, "end_char_pos": 1213 }, { "type": "R", "before": ". Using this simplified energy in the strong-stretching limit, we obtain analytical solutions for all the cumulants of the end-to-end distance projected to the force direction. Finally we apply our results to", "after": "to the network level and investigate the situation when the system is stretched by strong forces (weakly bending limit). We specifically studied", "start_char_pos": 1242, "end_char_pos": 1450 }, { "type": "R", "before": ", and show how the comparison between our results and experiments limits possible local configurations and", "after": ". Instead of focusing on single molecules, we apply analytical results from the WLC model to both single molecule and network levels and focus on the relations between extensions and fluctuations. We show how this approach introduces constraints to possible local", "start_char_pos": 1603, "end_char_pos": 1709 }, { "type": "A", "before": null, "after": "and finally suggests simple but self-consistent dynamics of spectrins in which the spectrins in one spatial period of axons fluctuate in-sync", "start_char_pos": 1772, "end_char_pos": 1772 } ]
[ 0, 165, 227, 368, 588, 793, 950, 1140, 1418 ]
1312.1298
1
Semiflexible polymers characterized by the contour length L and persistent length \ell_p confined in a spatial region D have been described as a series of ``{\em spherical blobs}'' and ``{\em deflecting lines}'' by de Gennes and Odjik for \ell_p < D and \ell_p \gg D respectively. Recently new intermediate regimes ({\em extended de Gennes} and {\em Gauss-de Gennes}) have been investigated by Tree {\em et al.} [Phys. Rev. Lett. {\bf 110}, 208103 (2013)]. In this letter we derive scaling relations to characterize these transitions in terms of universal scaled fluctuations in d-dimension as a function of L,\ell_p, and D, and show that the Gauss-de Gennes regime is absent and extended de Gennes regime is vanishingly small for polymers confined in a 2D strip. We validate our claim by extensive Brownian dynamics (BD) simulation which also reveals that the prefactor A used to describe the chain extension in the Odjik limit is independent of physical dimension d and is the same as previously found by Burkhardt {\em et al.}[ T. W. Burkhardt, Y. Yang, G. Gompper, Phys. Rev. E {\bf 82 }, 041801 (2010 )]. Our studies are relevant for optical maps of DNA stretched inside a nano-strip.
Semiflexible polymers characterized by the contour length L and persistent length \ell_p confined in a spatial region D have been described as a series of ``{\em spherical blobs}'' and ``{\em deflecting lines}'' by de Gennes and Odjik for \ell_p < D and \ell_p \gg D respectively. Recently new intermediate regimes ({\em extended de Gennes} and {\em Gauss-de Gennes}) have been investigated by Tree {\em et al.} [Phys. Rev. Lett. {\bf 110}, 208103 (2013)]. In this letter we derive scaling relations to characterize these transitions in terms of universal scaled fluctuations in d-dimension as a function of L,\ell_p, and D, and show that the Gauss-de Gennes regime is absent and extended de Gennes regime is vanishingly small for polymers confined in a 2D strip. We validate our claim by extensive Brownian dynamics (BD) simulation which also reveals that the prefactor A used to describe the chain extension in the Odjik limit is independent of physical dimension d and is the same as previously found by Yang {\em et al.}[ Y. Yang, T. W. Burkhardt, G. Gompper, Phys. Rev. E {\bf 76 }, 011804 (2007 )]. Our studies are relevant for optical maps of DNA stretched inside a nano-strip.
[ { "type": "R", "before": "Burkhardt", "after": "Yang", "start_char_pos": 1007, "end_char_pos": 1016 }, { "type": "A", "before": null, "after": "Y. Yang,", "start_char_pos": 1031, "end_char_pos": 1031 }, { "type": "D", "before": "Y. Yang,", "after": null, "start_char_pos": 1049, "end_char_pos": 1057 }, { "type": "R", "before": "82", "after": "76", "start_char_pos": 1088, "end_char_pos": 1090 }, { "type": "R", "before": "041801 (2010", "after": "011804 (2007", "start_char_pos": 1094, "end_char_pos": 1106 } ]
[ 0, 280, 410, 429, 456, 763, 1028, 1075, 1110 ]
1312.1401
1
The epidermis renewal and homeostasis is maintained by a multistage process including cell proliferation, differentiation, migration, apoptosis and desquamation. We present a computational model of the spatial-temporal dynamics of the epidermis . The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss and an agent-based cell migration model that propagates cell movements and generates the stratified epidermis. The model visualizes the epidermal renewal by embedding stochastic events of population kinetics into the cell migration events. The model reproduces observed cell density distribution and the epidermal turnover time. We apply the model to study the onset and phototherapy-induced remission of psoriasis. The model considers the psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originated from a shared stem cell niche environment and predicts two steady-state modes of the psoriasis: a disease mode and a quiescent mode. The bimodal psoriasis is established by the interaction between psoriatic stem cells and the immune system . The prediction of a quiescent state potentially explains the efficacy of the multi-episode UVB irradiation therapy and reoccurrence of psoriasis .
We present a computational model of the spatial-temporal dynamics of the epidermis homeostasis . The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss and an agent-based cell migration model that propagates cell movements and generates the stratified epidermis. The model recapitulates observed cell density distribution and the epidermal turnover time. We apply the model to study the onset , recurrence and phototherapy-induced remission of psoriasis. The model considers the psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originated from a shared stem-cell niche environment and predicts two steady-state modes of the psoriasis: a disease mode and a quiescent mode. The interconversion between the two modes is established by the interaction between psoriatic stem cells and the immune system and by the normal and psoriatic stem cells competing for niches . The prediction of a quiescent state potentially explains the efficacy of the multi-episode UVB irradiation therapy and recurrence of psoriasis plaques, which can further guide designs of therapeutics that target the immune system and/or the keratinocytes .
[ { "type": "D", "before": "The epidermis renewal and homeostasis is maintained by a multistage process including cell proliferation, differentiation, migration, apoptosis and desquamation.", "after": null, "start_char_pos": 0, "end_char_pos": 161 }, { "type": "A", "before": null, "after": "homeostasis", "start_char_pos": 245, "end_char_pos": 245 }, { "type": "R", "before": "visualizes the epidermal renewal by embedding stochastic events of population kinetics into the cell migration events. The model reproduces", "after": "recapitulates", "start_char_pos": 509, "end_char_pos": 648 }, { "type": "A", "before": null, "after": ", recurrence", "start_char_pos": 755, "end_char_pos": 755 }, { "type": "R", "before": "stem cell", "after": "stem-cell", "start_char_pos": 928, "end_char_pos": 937 }, { "type": "R", "before": "bimodal psoriasis", "after": "interconversion between the two modes", "start_char_pos": 1051, "end_char_pos": 1068 }, { "type": "A", "before": null, "after": "and by the normal and psoriatic stem cells competing for niches", "start_char_pos": 1154, "end_char_pos": 1154 }, { "type": "R", "before": "reoccurrence of psoriasis", "after": "recurrence of psoriasis plaques, which can further guide designs of therapeutics that target the immune system and/or the keratinocytes", "start_char_pos": 1276, "end_char_pos": 1301 } ]
[ 0, 161, 247, 498, 627, 716, 804, 1046, 1156 ]
1312.1401
2
We present a computational model of the spatial-temporal dynamics of the epidermis homeostasis. The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss and an agent-based cell migration model that propagates cell movements and generates the stratified epidermis. The model recapitulates observed cell density distribution and the epidermal turnover time. We apply the model to study the onset, recurrence and phototherapy-induced remission of psoriasis. The model considers the psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originated from a shared stem-cell niche environment and predicts two steady-state modes of the psoriasis: a disease mode and a quiescent mode. The interconversion between the two modes is established by the interaction between psoriatic stem cells and the immune system and by the normal and psoriatic stem cells competing for niches. The prediction of a quiescent state potentially explains the efficacy of the multi-episode UVB irradiation therapy and recurrence of psoriasis plaques, which can further guide designs of therapeutics that target the immune system and/or the keratinocytes.
We present a computational model to study the spatiotemporal dynamics of the epidermis homeostasis under normal and pathological conditions. The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss and an agent-based model that propagates cell movements and generates the stratified epidermis. The model recapitulates observed homeostatic cell density distribution, the epidermal turnover time and the multilayered tissue structure. We extend the model to study the onset, recurrence and phototherapy-induced remission of psoriasis. The model considers the psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originated from a shared stem-cell niche environment and predicts two homeostatic modes of the psoriasis: a disease mode and a quiescent mode. Interconversion between the two modes can be controlled by interactions between psoriatic stem cells and the immune system and by the normal and psoriatic stem cells competing for growth niches. The prediction of a quiescent state potentially explains the efficacy of the multi-episode UVB irradiation therapy and recurrence of psoriasis plaques, which can further guide designs of therapeutics that specifically target the immune system and/or the keratinocytes.
[ { "type": "R", "before": "of the spatial-temporal", "after": "to study the spatiotemporal", "start_char_pos": 33, "end_char_pos": 56 }, { "type": "A", "before": null, "after": "under normal and pathological conditions", "start_char_pos": 95, "end_char_pos": 95 }, { "type": "D", "before": "cell migration", "after": null, "start_char_pos": 257, "end_char_pos": 271 }, { "type": "A", "before": null, "after": "homeostatic", "start_char_pos": 382, "end_char_pos": 382 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 409, "end_char_pos": 412 }, { "type": "R", "before": ". We apply", "after": "and the multilayered tissue structure. We extend", "start_char_pos": 441, "end_char_pos": 451 }, { "type": "R", "before": "steady-state", "after": "homeostatic", "start_char_pos": 710, "end_char_pos": 722 }, { "type": "R", "before": "The interconversion", "after": "Interconversion", "start_char_pos": 784, "end_char_pos": 803 }, { "type": "R", "before": "is established by the interaction", "after": "can be controlled by interactions", "start_char_pos": 826, "end_char_pos": 859 }, { "type": "A", "before": null, "after": "growth", "start_char_pos": 968, "end_char_pos": 968 }, { "type": "A", "before": null, "after": "specifically", "start_char_pos": 1182, "end_char_pos": 1182 } ]
[ 0, 97, 348, 442, 541, 783, 976 ]
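The two homeostatic modes described in this record (disease versus quiescent) are a bistability phenomenon. As an illustration only, and not the authors' agent-based model, the hypothetical one-variable caricature below shows how logistic expansion of a psoriatic keratinocyte population P combined with saturating immune-mediated clearance yields two stable steady states; all parameter names (r, K, d, h) are invented for this sketch.

import numpy as np

# Toy model: dP/dt = r*P*(1 - P/K) - d*P/(h + P)
# Logistic growth of psoriatic cells P versus saturating immune clearance.
# With d/h > r the state P = 0 (quiescent mode) is locally stable, while for
# these parameters a second stable equilibrium at high P (disease mode)
# coexists (equilibria here: P = 0 stable, P = 0.4 unstable, P = 0.5 stable).

def rhs(P, r=1.0, K=1.0, d=0.3, h=0.1):
    return r * P * (1.0 - P / K) - d * P / (h + P)

def simulate(P0, t_end=100.0, dt=1e-3):
    P = P0
    for _ in range(int(t_end / dt)):   # forward Euler is enough for a sketch
        P = max(P + dt * rhs(P), 0.0)
    return P

if __name__ == "__main__":
    for P0 in (0.05, 0.6):
        print(f"P0 = {P0:4.2f}  ->  P(t_end) = {simulate(P0):.4f}")
    # A small flare (P0 = 0.05) decays back to the quiescent mode P = 0,
    # while a large one (P0 = 0.6) settles at the stable disease mode.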
1312.1645
1
Expected Shortfall (ES) has been widely accepted as a risk measure that is conceptually superior to Value-at-Risk (VaR). At the same time, however, it has been criticised for issues relating to backtesting. In particular, ES has been found not to be elicitable which means that backtesting for ES is less straight-forward than, e.g., backtesting for VaR. Expectiles have been suggested as potentially better alternatives to both ES and VaR. In this paper, we revisit commonly accepted desirable properties of risk measures like coherence, comonotonic additivity, robustness and elicitability. We check VaR, ES and Expectiles with regard to whether or not they enjoy these properties, with particular emphasis on Expectiles. We also consider their impact on capital allocation, an important issue in risk management. We find that, despite the caveats that apply to the estimation and backtesting of ES, it can be considered a good risk measure. In particular, there is no sufficient evidence to justify an all-inclusive replacement of ES by expectiles in applications, especially as we provide an alternative way for backtesting of ES.
Expected Shortfall (ES) has been widely accepted as a risk measure that is conceptually superior to Value-at-Risk (VaR). At the same time, however, it has been criticised for issues relating to backtesting. In particular, ES has been found not to be elicitable, which means that backtesting for ES is less straightforward than, e.g., backtesting for VaR. Expectiles have been suggested as potentially better alternatives to both ES and VaR. In this paper, we revisit commonly accepted desirable properties of risk measures like coherence, comonotonic additivity, robustness and elicitability. We check VaR, ES and Expectiles with regard to whether or not they enjoy these properties, with particular emphasis on Expectiles. We also consider their impact on capital allocation, an important issue in risk management. We find that, despite the caveats that apply to the estimation and backtesting of ES, it can be considered a good risk measure. As a consequence, there is no sufficient evidence to justify an all-inclusive replacement of ES by Expectiles in applications. For backtesting ES, we propose an empirical approach that consists in replacing ES by a set of four quantiles, which should allow one to make use of backtesting methods for VaR.
[ { "type": "R", "before": "straight-forward", "after": "straightforward", "start_char_pos": 305, "end_char_pos": 321 }, { "type": "R", "before": "In particular", "after": "As a consequence", "start_char_pos": 944, "end_char_pos": 957 }, { "type": "R", "before": "expectiles in applications, especially as we provide an alternative way for backtesting of ES", "after": "Expectiles in applications. For backtesting ES, we propose an empirical approach that consists in replacing ES by a set of four quantiles, which should allow to make use of backtesting methods for VaR", "start_char_pos": 1041, "end_char_pos": 1134 } ]
[ 0, 120, 206, 354, 440, 592, 723, 815, 943 ]
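Since this record turns on estimating and backtesting ES, a minimal numerical sketch may help. The code below computes empirical VaR and ES from a loss sample, plus a four-quantile approximation of ES in the spirit of the proposal quoted above; the specific quantile levels (a midpoint rule on the tail integral) are my choice for illustration and need not match the paper's.

import numpy as np

# Losses are positive; VaR_a is the a-quantile of the loss distribution and
# ES_a = (1/(1-a)) * integral_a^1 VaR_u du, the average loss beyond VaR_a.

def var(losses, a):
    return np.quantile(losses, a)

def es(losses, a):
    q = var(losses, a)
    return losses[losses >= q].mean()

def es_four_quantiles(losses, a):
    # Midpoint rule with four nodes on ES_a = (1/(1-a)) int_a^1 VaR_u du:
    # replaces ES by an average of four VaR numbers, each of which can be
    # backtested with standard VaR exception tests.
    levels = a + (np.arange(4) + 0.5) * (1.0 - a) / 4.0
    return np.mean([var(losses, u) for u in levels])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    losses = rng.standard_t(df=4, size=200_000)  # heavy-tailed toy P&L
    a = 0.975
    print("VaR_97.5%          :", round(var(losses, a), 4))
    print("ES_97.5% (tail avg):", round(es(losses, a), 4))
    print("ES_97.5% (4 quant.):", round(es_four_quantiles(losses, a), 4))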
1312.1911
1
Motor enzymes are remarkable molecular machines that use the energy derived from the hydrolysis of a nucleoside triphosphate to generate mechanical movement, achieved through different steps that constitute their kinetic cycle. These macromolecules, nowadays investigated with advanced experimental techniques to unveil their molecular mechanisms and the properties of their kinetic cycles, are implicated in many biological processes, ranging from biopolymerisation (RNA polymerases, ribosomes,...) to intracellular transport (motor proteins such as kinesins or dyneins). Although the kinetics of individual motors is well studied on both theoretical and experimental grounds, the repercussions of their stepping cycle on the collective dynamics is still to be understood. Advances in this direction will improve our comprehension of transport process in the natural intracellular medium, where processive motor enzymes might operate in crowded conditions. In this work, we therefore extend the current statistical kinetic analysis to study collective transport phenomena of motors in terms of lattice gas models belonging to the exclusion process class. Via numerical simulations, we show how to interpret and use the randomness calculated from single particle trajectories in crowded conditions. Importantly, we also show that time fluctuations and non-Poissonian behavior are intrinsically related to spatial correlations and the emergence of large, but finite, clusters of co-moving motors. The properties unveiled by our analysis have important biological implications on the collective transport characteristics of processive motor enzymes in crowded conditions.
Motor enzymes are remarkable molecular machines that use the energy derived from the hydrolysis of a nucleoside triphosphate to generate mechanical movement, achieved through different steps that constitute their kinetic cycle. These macromolecules, nowadays investigated with advanced experimental techniques to unveil their molecular mechanisms and the properties of their kinetic cycles, are implicated in many biological processes, ranging from biopolymerisation (e.g. RNA polymerases and ribosomes) to intracellular transport (motor proteins such as kinesins or dyneins). Although the kinetics of individual motors is well studied on both theoretical and experimental grounds, the repercussions of their stepping cycle on the collective dynamics still remain unclear. Advances in this direction will improve our comprehension of transport process in the natural intracellular medium, where processive motor enzymes might operate in crowded conditions. In this work, we therefore extend the current statistical kinetic analysis to study collective transport phenomena of motors in terms of lattice gas models belonging to the exclusion process class. Via numerical simulations, we show how to interpret and use the randomness calculated from single particle trajectories in crowded conditions. Importantly, we also show that time fluctuations and non-Poissonian behavior are intrinsically related to spatial correlations and the emergence of large, but finite, clusters of co-moving motors. The properties unveiled by our analysis have important biological implications on the collective transport characteristics of processive motor enzymes in crowded conditions.
[ { "type": "R", "before": "RNA polymerases, ribosomes,...", "after": "e.g. RNA polymerases and ribosomes", "start_char_pos": 469, "end_char_pos": 499 }, { "type": "R", "before": "is still to be understood", "after": "still remains unclear", "start_char_pos": 749, "end_char_pos": 774 } ]
[ 0, 227, 574, 776, 960, 1158, 1301, 1498 ]
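This record relates single-motor randomness to crowding in an exclusion process. A minimal TASEP sketch (continuous-time random-sequential updates on a ring, one tagged particle), written for illustration rather than taken from the paper, estimates the randomness-like ratio Var[x(t)]/<x(t)> of the tagged particle's displacement; for an isolated Poisson stepper this ratio equals 1, and exclusion suppresses it.

import numpy as np

def tasep_tagged_displacement(L=200, density=0.3, p=1.0, t_end=200.0, seed=0):
    """One TASEP run on a ring of L sites; returns the displacement of a
    tagged particle after time t_end (Gillespie-style attempted hops)."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=bool)
    pos = rng.choice(L, size=int(density * L), replace=False)
    occ[pos] = True
    particles = list(pos)
    tag = particles[0]    # tag one particle and follow it around the ring
    x = 0                 # unwound displacement of the tagged particle
    t = 0.0
    n = len(particles)
    while t < t_end:
        t += rng.exponential(1.0 / (p * n))   # time to next attempted hop
        i = rng.integers(n)
        site = particles[i]
        nxt = (site + 1) % L
        if not occ[nxt]:                      # exclusion: hop only if empty
            occ[site], occ[nxt] = False, True
            particles[i] = nxt
            if site == tag:
                tag = nxt
                x += 1
    return x

if __name__ == "__main__":
    runs = np.array([tasep_tagged_displacement(seed=s) for s in range(200)])
    print("mean displacement:", runs.mean())
    print("Var/mean (randomness-like ratio):", runs.var() / runs.mean())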
1312.2281
1
We compute a small-time expansion for implied volatility under a general uncorrelated local-stochastic volatility model, with mild linear growth conditions on the drift and vol-of-vol. For this we use the Bellaiche Bel81 heat kernel expansion combined with Laplace's method to integrate over the volatility variable on a compact set, and (after a gauge transformation) we use the Davies Dav88 upper bound for the heat kernel on a manifold with bounded Ricci curvature to deal with the tail integrals. We also consider the case when the correlation \rho \le 0; in this case our approach still works if the drift of the volatility takes a specific functional form and there is no local volatility component, and our results include the SABR model for \beta=1, \rho \le 0, and we verify our results numerically for the SABR model using Monte Carlo simulation and the exact closed-form solution given in Antonov \& Spector AS12 for the case \rho=0.
We compute a sharp small-time estimate for implied volatility under a general uncorrelated local-stochastic volatility model, with mild linear growth conditions on the drift and vol-of-vol. For this we use the Bellaiche Bel81 heat kernel expansion combined with Laplace's method to integrate over the volatility variable on a compact set, and (after a gauge transformation) we use the Davies Dav88 upper bound for the heat kernel on a manifold with bounded Ricci curvature to deal with the tail integrals. For \rho < 0, our approach still works if the drift of the volatility takes a specific functional form and there is no local volatility component, and our results include the SABR model for \beta=1, \rho \le 0. For uncorrelated stochastic volatility models, our results also include a SABR-type model with \beta=1 and an affine mean-reverting drift, and the exponential Ornstein-Uhlenbeck model. We later augment the model with a single jump-to-default with intensity \lm, which produces qualitatively different behaviour for the short-maturity smile; in particular, for \rho=0, log-moneyness x \ne 0, the implied volatility increases by \lm f(x) t + o(t) for some symmetric function f(x) which blows up at x=0, and we see that the jump affects the smile convexity but not the skew at leading order as t \to 0. Finally, we compare our result with the general asymptotic expansion in Lorig, Pagliarani \& Pascucci LPP13, and we verify our results numerically for the SABR model using Monte Carlo simulation and the exact closed-form solution given in Antonov \& Spector AS12 for the case \rho=0.
[ { "type": "A", "before": null, "after": "sharp", "start_char_pos": 13, "end_char_pos": 13 }, { "type": "R", "before": "expansion", "after": "estimate", "start_char_pos": 25, "end_char_pos": 34 }, { "type": "D", "before": "We also consider the case when the correlation \\rho", "after": null, "start_char_pos": 500, "end_char_pos": 551 }, { "type": "A", "before": null, "after": "For \\rho <", "start_char_pos": 572, "end_char_pos": 572 }, { "type": "R", "before": "; in this case", "after": ",", "start_char_pos": 575, "end_char_pos": 589 }, { "type": "A", "before": null, "after": "0. We later augment the model with a single jump-to-default with intensity", "start_char_pos": 784, "end_char_pos": 784 }, { "type": "A", "before": null, "after": ", which produces qualitatively different behaviour for the short-maturity smile; in particular, for \\rho=0, log-moneyness x", "start_char_pos": 787, "end_char_pos": 787 }, { "type": "A", "before": null, "after": "0, the implied volatility increases by", "start_char_pos": 791, "end_char_pos": 791 }, { "type": "A", "before": null, "after": "f(x) t +o(t) for some symmetric function f(x) which blows up at x=0, and we see that the jump affects the smile convexity but not the skew at leading order as t", "start_char_pos": 795, "end_char_pos": 795 }, { "type": "A", "before": null, "after": "0. Finally, we compare our result with the general asymptotic expansion in Lorig,Pagliarani\\&Pascucci\\mbox{%DIFAUXCMD LPP13", "start_char_pos": 799, "end_char_pos": 799 } ]
[ 0, 185, 499, 576 ]
1312.2281
2
We compute a sharp small-time estimate for implied volatility under a general uncorrelated local-stochastic volatility model, with mild linear growth conditions on the drift and vol-of-vol. For this we use the Bellaiche Bel81 heat kernel expansion combined with Laplace's method to integrate over the volatility variable on a compact set, and (after a gauge transformation) we use the Davies Dav88 upper bound for the heat kernel on a manifold with bounded Ricci curvature to deal with the tail integrals. For \rho < 0, our approach still works if the drift of the volatility takes a specific functional form and there is no local volatility component, and our results include the SABR model for \beta=1, \rho \le 0. For uncorrelated stochastic volatility models, our results also include a SABR-type model with \beta=1 and an affine mean-reverting drift, and the exponential Ornstein-Uhlenbeck model. We later augment the model with a single jump-to-default with intensity \lm, which produces qualitatively different behaviour for the short-maturity smile; in particular, for \rho=0, log-moneyness x \ne 0, the implied volatility increases by \lm f(x) t + o(t) for some symmetric function f(x) which blows up at x=0, and we see that the jump affects the smile convexity but not the skew at leading order as t \to 0. Finally, we compare our result with the general asymptotic expansion in Lorig, Pagliarani \& Pascucci LPP13, and we verify our results numerically for the SABR model using Monte Carlo simulation and the exact closed-form solution given in Antonov \& Spector AS12 for the case \rho=0.
We compute a sharp small-time estimate for implied volatility under a general uncorrelated local-stochastic volatility model. For this we use the Bellaiche Bel81 heat kernel expansion combined with Laplace's method to integrate over the volatility variable on a compact set, and (after a gauge transformation) we use the Davies Dav88 upper bound for the heat kernel on a manifold with bounded Ricci curvature to deal with the tail integrals. If the correlation \rho < 0, our approach still works if the drift of the volatility takes a specific functional form and there is no local volatility component, and our results include the SABR model for \beta=1, \rho \le 0. For uncorrelated stochastic volatility models, our results also include a SABR-type model with \beta=1 and an affine mean-reverting drift, and the exponential Ornstein-Uhlenbeck model. We later augment the model with a single jump-to-default with intensity \lm, which produces qualitatively different behaviour for the short-maturity smile; in particular, for \rho=0, log-moneyness x > 0, the implied volatility increases by \lm f(x) t + o(t) for some function f(x) which blows up as x \searrow 0. Finally, we compare our result with the general asymptotic expansion in Lorig, Pagliarani \& Pascucci LPP15, and we verify our results numerically for the SABR model using Monte Carlo simulation and the exact closed-form solution given in Antonov \& Spector AS12 for the case \rho=0.
[ { "type": "D", "before": ", with mild linear growth conditions on the drift and vol-of-vol", "after": null, "start_char_pos": 125, "end_char_pos": 189 }, { "type": "R", "before": "For", "after": "If the correlation", "start_char_pos": 508, "end_char_pos": 511 }, { "type": "A", "before": null, "after": ">", "start_char_pos": 1124, "end_char_pos": 1124 }, { "type": "D", "before": "symmetric", "after": null, "start_char_pos": 1190, "end_char_pos": 1199 }, { "type": "D", "before": "at x=0, and we see that the jump affects the smile convexity but not the skew at leading order as t", "after": null, "start_char_pos": 1229, "end_char_pos": 1328 }, { "type": "A", "before": null, "after": "as x \\searrow", "start_char_pos": 1349, "end_char_pos": 1349 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD LPP13", "after": "\\mbox{%DIFAUXCMD LPP15", "start_char_pos": 1455, "end_char_pos": 1477 } ]
[ 0, 191, 507, 718, 903, 1059, 1172 ]
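To accompany the small-time implied-volatility records above, here is a short sketch of the numerical check they describe: Monte Carlo pricing under an uncorrelated SABR-type model (\beta = 1, \rho = 0) followed by Black-Scholes implied-volatility inversion by bisection. This is a generic illustration, not the authors' code, and all parameter values are arbitrary.

import math
import numpy as np

def bs_call(S0, K, T, sigma):
    # Black-Scholes call price (zero rates)
    if sigma <= 0:
        return max(S0 - K, 0.0)
    d1 = (math.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * N(d2)

def implied_vol(price, S0, K, T, lo=1e-6, hi=5.0, tol=1e-10):
    for _ in range(200):                    # bisection: bs_call is increasing in sigma
        mid = 0.5 * (lo + hi)
        if bs_call(S0, K, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def sabr_mc_call(S0, K, T, sigma0, nu, n_paths=200_000, n_steps=200, seed=1):
    # Uncorrelated log-normal SABR: dS = sigma S dW1, dsigma = nu sigma dW2
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, math.log(S0))
    sig = np.full(n_paths, sigma0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)   # rho = 0: independent drivers
        logS += -0.5 * sig**2 * dt + sig * math.sqrt(dt) * z1
        sig *= np.exp(-0.5 * nu**2 * dt + nu * math.sqrt(dt) * z2)
    return np.maximum(np.exp(logS) - K, 0.0).mean()

if __name__ == "__main__":
    S0, T, sigma0, nu = 1.0, 0.1, 0.2, 0.5  # short maturity
    for K in (0.95, 1.0, 1.05):
        p = sabr_mc_call(S0, K, T, sigma0, nu)
        print(f"K = {K:4.2f}  implied vol = {implied_vol(p, S0, K, T):.4f}")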
1312.2988
1
Protein contacts contain important information for protein structure and functional study, but contact prediction is very challenging especially for protein families without many sequence homologs. Recently evolutionary coupling (EC) analysis, which predicts contacts by analyzing residue co-evolution in a single target family, has made good progress due to better statistical and optimization techniques. Different from these single-family EC methods, this paper presents a joint multi-family EC analysis method that predicts contacts of one target family by jointly modeling residue co-evolution in itself and also (distantly) related families with divergent sequences but similar folds, and enforcing their co-evolution pattern consistency based upon their evolutionary distance. We implement this multi-family strategy by co-estimating their inverse covariance matrices subject to the constraint that these matrices shall have similar patterns to some degree. Experiments show that joint multi-family EC analysis can reveal many more native contacts than single-family analysis even for a target family with 4000 sequence homologs, which makes many more protein families amenable to co-evolution-based structure and function prediction. We also find out that contact prediction may be worsened by merging multiple related families into a single one followed by single-family EC analysis, or by consensus of single-family EC analysis results.
Protein contacts contain important information for protein structure and functional study, but contact prediction is very challenging especially for protein families without many sequence homologs. Recently evolutionary coupling (EC) analysis, which predicts contacts by analyzing residue co-evolution in a single target family, has made good progress due to better statistical and optimization techniques. Different from these single-family EC methods, this paper presents a joint multi-family EC analysis method that predicts contacts of one target family by jointly modeling residue co-evolution in itself and also (distantly) related families with divergent sequences but similar folds, and enforcing their co-evolution pattern consistency based upon their evolutionary distance. To implement this multi-family EC analysis strategy, we use a set of correlated multivariate Gaussian distributions to model the related families and then co-estimate the inverse covariance matrices of the Gaussian distributions subject to the constraint that these matrices shall have similar patterns to some degree. Experiments show that joint multi-family EC analysis can reveal many more native contacts than single-family analysis even for a target family with 4000 sequence homologs, which makes many more protein families amenable to co-evolution-based structure and function prediction. We also find out that contact prediction may be worsened by merging multiple related families into a single one followed by single-family EC analysis, or by consensus of single-family EC analysis results.
[ { "type": "R", "before": "We", "after": "To", "start_char_pos": 784, "end_char_pos": 786 }, { "type": "R", "before": "strategy by co-estimating their", "after": "EC analysis strategy, we use a set of correlated multivariate Gaussian distributions to model the related families and then co-estimate the", "start_char_pos": 815, "end_char_pos": 846 }, { "type": "A", "before": null, "after": "of the Gaussian distributions", "start_char_pos": 875, "end_char_pos": 875 } ]
[ 0, 197, 406, 783, 965, 1242 ]
1312.2988
2
Protein contacts contain important information for protein structure and functional study, but contact prediction is very challenging especially for protein families without many sequence homologs. Recently evolutionary coupling (EC) analysis, which predicts contacts by analyzing residue co-evolution in a single target family, has made good progress due to better statistical and optimization techniques. Different from these single-family EC methods, this paper presents a joint multi-family EC analysis method that predicts contacts of one target family by jointly modeling residue co-evolution in itself and also (distantly) related families with divergent sequences but similar folds, and enforcing their co-evolution pattern consistency based upon their evolutionary distance. To implement this multi-family EC analysis strategy, we use a set of correlated multivariate Gaussian distributions to model the related families and then co-estimate the inverse covariance matrices of the Gaussian distributions subject to the constraint that these matrices shall have similar patterns to some degree. Experiments show that joint multi-family EC analysis can reveal many more native contacts than single-family analysis even for a target family with 4000 sequence homologs, which makes many more protein families amenable to co-evolution-based structure and function prediction. We also find out that contact prediction may be worsened by merging multiple related families into a single one followed by single-family EC analysis, or by consensus of single-family EC analysis results.
Protein contacts contain important information for protein structure and functional study, but contact prediction is very challenging especially for protein families without many sequence homologs. Recently evolutionary coupling (EC) analysis, which predicts contacts by analyzing residue co-evolution in a single target family, has made good progress due to better statistical and optimization techniques. Different from these single-family EC methods that focus on only a single protein family, this paper presents a joint multi-family EC analysis method that predicts contacts of one target family by jointly modeling residue co-evolution in itself and also (distantly) related families with divergent sequences but similar folds, and enforcing their co-evolution pattern consistency based upon their evolutionary distance. To implement this multi-family EC analysis strategy, this paper presents a novel joint graphical lasso method to model a set of related protein families. In particular, we model a set of related families using a set of correlated multivariate Gaussian distributions, the inverse covariance matrix (or precision matrix) of each distribution encoding the contact pattern of one family. Then we co-estimate the precision matrices by maximizing the occurring probability of all the involved sequences, subject to the constraint that the matrices shall share similar patterns. Finally, we solve this optimization problem using the Alternating Direction Method of Multipliers (ADMM). Experiments show that joint multi-family EC analysis can reveal many more native contacts than single-family analysis even for a target family with 4000-5000 non-redundant sequence homologs, which makes many more protein families amenable to co-evolution-based structure and function prediction.
[ { "type": "A", "before": null, "after": "that focus on only a single protein family", "start_char_pos": 453, "end_char_pos": 453 }, { "type": "R", "before": "we use", "after": "this paper presents a novel joint graphical lasso method to model a set of related protein families. In particular, we model", "start_char_pos": 839, "end_char_pos": 845 }, { "type": "A", "before": null, "after": "related families using a set of", "start_char_pos": 855, "end_char_pos": 855 }, { "type": "R", "before": "to model the related families and then", "after": ", the inverse covariance matrix (or precision matrix) of each distribution encoding the contact pattern of one family. Then we", "start_char_pos": 903, "end_char_pos": 941 }, { "type": "R", "before": "the inverse covariance matrices of the Gaussian distributions", "after": "the precision matrices by maximizing the occurring probability of all the involved sequences,", "start_char_pos": 954, "end_char_pos": 1015 }, { "type": "R", "before": "these matrices shall have similar patternsto some degree.", "after": "the matrices shall share similar patterns. Finally we solve this optimization problem using Alternating Direction Method of Multipliers (ADMM).", "start_char_pos": 1047, "end_char_pos": 1104 }, { "type": "R", "before": "4000", "after": "4000-5000 non-redundant", "start_char_pos": 1253, "end_char_pos": 1257 }, { "type": "D", "before": ". We also find out that contact prediction may be worsened by merging multiple related families into a single one followed by single-family EC analysis, or by consensus of single-family EC analysis results", "after": null, "start_char_pos": 1381, "end_char_pos": 1586 } ]
[ 0, 197, 406, 785, 1104, 1382 ]
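The depth-2 record above describes co-estimating several precision matrices under a pattern-similarity constraint, solved by ADMM. Below is a compact group-graphical-lasso sketch in that spirit (closer to Danaher et al.'s GGL penalty than to the paper's exact evolutionary-distance weighting, which is not reproduced here); the penalty couples the (i,j) entries across families so that estimated contact patterns are shared.

import numpy as np

def joint_graphical_lasso(S_list, n_list, lam=0.05, rho=1.0, iters=200):
    """ADMM for K classes: min over Theta_k of
    sum_k n_k * (-logdet Theta_k + tr(S_k Theta_k))
    + lam * sum_{i != j} ||(Theta_1[i,j], ..., Theta_K[i,j])||_2,
    a group penalty encouraging a shared sparsity pattern across classes."""
    K, p = len(S_list), S_list[0].shape[0]
    Theta = np.array([np.eye(p) for _ in range(K)])
    Z = Theta.copy()
    U = np.zeros_like(Theta)
    for _ in range(iters):
        for k in range(K):                      # Theta-update via eigen prox
            A = rho * (Z[k] - U[k]) - n_list[k] * S_list[k]
            w, Q = np.linalg.eigh(A / n_list[k])
            r = rho / n_list[k]                 # solves r*d - 1/d = w
            d = (w + np.sqrt(w**2 + 4.0 * r)) / (2.0 * r)
            Theta[k] = (Q * d) @ Q.T
        B = Theta + U                           # Z-update: group soft-threshold
        norms = np.sqrt((B**2).sum(axis=0))     # p x p norms across classes
        shrink = np.maximum(0.0, 1.0 - lam / (rho * np.maximum(norms, 1e-12)))
        off = ~np.eye(p, dtype=bool)
        Z = B * np.where(off, shrink, 1.0)      # do not penalize diagonals
        U += Theta - Z
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p, n = 20, 500
    Prec = np.eye(p) + np.diag(0.4 * np.ones(p - 1), 1) \
                     + np.diag(0.4 * np.ones(p - 1), -1)
    Cov = np.linalg.inv(Prec)
    X1 = rng.multivariate_normal(np.zeros(p), Cov, size=n)
    X2 = rng.multivariate_normal(np.zeros(p), Cov, size=n)  # related "family"
    Z = joint_graphical_lasso([np.cov(X1.T), np.cov(X2.T)], [n, n])
    print("estimated nonzero off-diagonals in class 1:",
          int((np.abs(Z[0]) > 1e-3).sum() - p))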
1312.3455
1
Datacenters are the cornerstone of the big data infrastructure supporting numerous online services. The demand for interactivity, which significantly impacts user experience and provider revenue, is translated into stringent timing requirements for flows in datacenter networks. Thus low latency networking, not well supported by current network design, is becoming a major concern of both industry and academia. We provide a short survey of recent progress made by the networking community for low latency datacenter networks. We propose a taxonomy to categorize existing works based on three main techniques, reducing queue length, prioritizing mice flows, and exploiting multi-path. Then we review select papers, highlight the principal ideas, and discuss their pros and cons. We also present our perspective of the research challenges and opportunities, hoping to aspire more future work in this space.
Datacenters are the cornerstone of the big data infrastructure supporting numerous online services. The demand for interactivity, which significantly impacts user experience and provider revenue, is translated into stringent timing requirements for flows in datacenter networks. Thus low latency networking is becoming a major concern of both industry and academia. We provide a short survey of recent progress made by the networking community for low latency datacenter networks. We propose a taxonomy to categorize existing work based on four main techniques: reducing queue length, accelerating retransmissions, prioritizing mice flows, and exploiting multi-path. Then we review select papers, highlight the principal ideas, and discuss their pros and cons. We also present our perspectives on the research challenges and opportunities, hoping to inspire more future work in this space.
[ { "type": "D", "before": ", not well supported by current network design,", "after": null, "start_char_pos": 307, "end_char_pos": 354 }, { "type": "R", "before": "works based on three", "after": "work based on four", "start_char_pos": 574, "end_char_pos": 594 }, { "type": "A", "before": null, "after": "accelerating retransmissions,", "start_char_pos": 635, "end_char_pos": 635 }, { "type": "R", "before": "perspective", "after": "perspectives", "start_char_pos": 802, "end_char_pos": 813 } ]
[ 0, 99, 278, 413, 528, 687, 781 ]
1312.3669
1
To provide insight into the early process of degradation often occurring in severely debilitating diseases with myelin pathology an increased level of spatial structural resolution is needed to bear in the biological realm. Although many observations have connected changes in the periodicity of myelin with illness, few information exist about the microscopic process in the early period of damage of the nerve and how these changes time percolate in space. Here we fill this gap by using first, a short time scale for data collection of scanning micro X-ray diffraction microscopy and second, methods of statistical physics for the analysis of time evolution of this non-invasive local structure experimental approach. We have mapped the time evolution of the fluctuations in myelin period in the degradation nerve process in a freshly extracted sciatic nerve of Xenopus laevis with a spatial resolution of 1 micron. We identify the first stage of myelin degradation with the period evolving through a bimodal distribution with a spatial phase separation, and evidence that the orientation of axons in the fresh sample show fractal fluctuations that are reduced with aging.
Degradation of the myelin sheath is a common pathology underlying demyelinating neurological diseases from Multiple Sclerosis to Leukodystrophies. Although large malformations of myelin ultrastructure in the advanced stages of Wallerian degradation are known, its subtle structural variations at early stages of demyelination remain poorly characterized. This is partly due to the lack of suitable and non-invasive experimental probes possessing sufficient resolution to detect the degradation. Here we report the feasibility of the application of an innovative non-invasive local structure experimental approach for imaging the changes of statistical structural fluctuations in the first stage of myelin degeneration. Scanning micro X-ray diffraction (SmXRD), using advances in synchrotron X-ray beam focusing and fast data collection, paired with spatial statistical analysis, has been used to unveil temporal changes in the myelin structure of dissected nerves following extraction of the Xenopus laevis sciatic nerve. The early myelin degeneration is a specific ordered compacted phase preceding the swollen myelin phase of Wallerian degradation. Our demonstration of the feasibility of the statistical analysis of SmXRD measurements using biological tissue paves the way for further structural investigations of degradation and death of neurons and other cells and tissues in diverse pathological states where nanoscale structural changes may be uncovered.
[ { "type": "R", "before": "To provide insight into the early process of degradation often occurring in severely debilitating diseases with myelin pathology an increased level of spatial structural resolution is needed to bear in the biological realm. Although many observations have connected changes in the periodicity of myelin with illness, few information exist about the microscopic process in the early period of damage of the nerve and how these changes time percolate in space", "after": "Degradation of the myelin sheath is a common pathology underlying demyelinating neurological diseases from Multiple Sclerosis to Leukodistrophies. Although large malformations of myelin ultrastructure in the advanced stages of Wallerian degradation is known, its subtle structural variations at early stages of demyelination remains poorly characterized. This is partly due to the lack of suitable and non-invasive experimental probes possessing sufficient resolution to detect the degradation", "start_char_pos": 0, "end_char_pos": 457 }, { "type": "R", "before": "fill this gap by using first, a short time scale for data collection of scanning micro X-ray diffraction microscopy and second, methods of statistical physics for the analysis of time evolution of this", "after": "report the feasibility of the application of an innovative", "start_char_pos": 468, "end_char_pos": 669 }, { "type": "R", "before": ". We have mapped the time evolution of the fluctuations in myelin period in the degradation nerve process in a freshly extracted sciatic nerve of Xenopus laevis with a spatial resolution of 1 micron. We identify the first stage of myelin degradation with the period evolving through a bimodal distribution with a spatial phase separation, and evidence that the orientation of axons in the fresh sample show fractal fluctuations that are reduced with aging", "after": "for imaging the changes of statistical structural fluctuations in the first stage of myelin degeneration. Scanning micro X-ray diffraction, using advances in synchrotron x-ray beam focusing, fast data collection, paired with spatial statistical analysis, has been used to unveil temporal changes in the myelin structure of dissected nerves following extraction of the Xenopus laevis sciatic nerve. The early myelin degeneration is a specific ordered compacted phase preceding the swollen myelin phase of Wallerian degradation. Our demonstration of the feasibility of the statistical analysis of SmXRD measurements using biological tissue paves the way for further structural investigations of degradation and death of neurons and other cells and tissues in diverse pathological states where nanoscale structural changes may be uncovered", "start_char_pos": 721, "end_char_pos": 1176 } ]
[ 0, 223, 459, 722, 920 ]
1312.3917
1
We consider a notion of weak no arbitrage condition commonly known as Robust No Unbounded Profit with Bounded Risk (RNUPBR) in the context of continuous time markets with small proportional transaction costs. We show that the RNUPBR condition on terminal liquidation value holds if and only if there exists a strictly consistent local martingale system (SCLMS). Moreover, we show that RNUPBR condition implies the existence of optimal solution of the utility maximization problem defined on the terminal liquidation value.
This paper studies the market viability with proportional transaction costs in the sense that the utility maximization problems defined on terminal liquidation values admit optimal solutions. Instead of requiring the existence of strictly consistent price systems (SCPS) as in the literature, we show that strictly consistent local martingale systems (SCLMS) can successfully serve as the dual elements such that the market viability can be verified. We introduce two weaker notions of no arbitrage conditions on market models named no unbounded profit with bounded risk (NUPBR) and no local arbitrage with bounded portfolios (NLABP). In particular, we reveal that the NUPBR and NLABP conditions in the robust sense for the smaller bid-ask spreads give an equivalent characterization of the existence of SCLMS for general market models. As a consequence, the relationship between NUPBR and NLABP conditions in the robust sense and the market viability is examined. Moreover, different types of arbitrage opportunities with transaction costs are also discussed, and the comparison between our setting and the frictionless market models is also presented.
[ { "type": "R", "before": "We consider a notion of weak no arbitrage condition commonly known as Robust No Unbounded Profit with Bounded Risk (RNUPBR) in the context of continuous time markets with small", "after": "This paper studies the market viability with", "start_char_pos": 0, "end_char_pos": 176 }, { "type": "R", "before": ". We show that the RNUPBR condition", "after": "in the sense that the utility maximization problems defined", "start_char_pos": 208, "end_char_pos": 243 }, { "type": "R", "before": "value holds if and only if there exists a strictly consistent local martingale system", "after": "values admit optimal solutions. Instead of requiring the existence of strictly consistent price systems (SCPS) as in the literature, we show that strictly consistent local martingale systems", "start_char_pos": 268, "end_char_pos": 353 }, { "type": "R", "before": ". Moreover, we show that RNUPBR condition implies the existence of optimal solution of", "after": "can successfully serve as the dual elements such that the market viability can be verified. We introduce two weaker notions of no arbitrage conditions on market models named no unbounded profit with bounded risk (NUPBR) and no local arbitrage with bounded portfolios (NLABP). In particular, we reveal that", "start_char_pos": 362, "end_char_pos": 448 }, { "type": "R", "before": "utility maximization problem defined on the terminal liquidation value", "after": "NUPBR and NLABP conditions in the robust sense for the smaller bid-ask spreads is the equivalent characterization of the existence of SCLMS for general market models. As a consequence, the relationship between NUPBR and NLABP conditions in the robust sense and the market viability is examined. Moreover, different types of arbitrage opportunities with transaction costs are also discussed and the comparison between our setting and the frictionless market models is also presented", "start_char_pos": 453, "end_char_pos": 523 } ]
[ 0, 209 ]
1312.3917
2
This paper studies the market viability with proportional transaction costs in the sense that the utility maximization problems defined on terminal liquidation values admit optimal solutions. Instead of requiring the existence of strictly consistent price systems (SCPS) as in the literature, we show that strictly consistent local martingale systems (SCLMS) can successfully serve as the dual elements such that the market viability can be verified. We introduce two weaker notions of no arbitrage conditions on market models named no unbounded profit with bounded risk (NUPBR) and no local arbitrage with bounded portfolios (NLABP). In particular, we reveal that the NUPBR and NLABP conditions in the robust sense for the smaller bid-ask spreads is the equivalent characterization of the existence of SCLMS for general market models. As a consequence, the relationship between NUPBR and NLABP conditions in the robust sense and the market viability is examined. Moreover, different types of arbitrage opportunities with transaction costs are discussed and the comparison between our setting and the frictionless market models is also presented.
This paper studies the market viability with proportional transaction costs. Instead of requiring the existence of strictly consistent price systems (SCPS) as in the literature, we show that strictly consistent local martingale systems (SCLMS) can successfully serve as the dual elements such that the market viability can be verified. We introduce two weaker notions of no arbitrage conditions on market models named no unbounded profit with bounded risk (NUPBR) and no local arbitrage with bounded portfolios (NLABP). In particular, we show that the NUPBR and NLABP conditions in the robust sense for the smaller bid-ask spreads give an equivalent characterization of the existence of SCLMS for general market models. We also discuss the implications for the utility maximization problem.
[ { "type": "D", "before": "in the sense that the utility maximization problems defined on terminal liquidation values admit optimal solutions", "after": null, "start_char_pos": 76, "end_char_pos": 190 }, { "type": "R", "before": "reveal", "after": "show", "start_char_pos": 654, "end_char_pos": 660 }, { "type": "R", "before": "As a consequence, the relationship between NUPBR and NLABP conditions in the robust sense and the market viability is examined. Moreover, different types of arbitrage opportunities with transaction costs are discussed and the comparison between our setting and the frictionless market models is also presented", "after": "We also discuss the implications for the utility maximization problem", "start_char_pos": 837, "end_char_pos": 1146 } ]
[ 0, 192, 451, 635, 836, 964 ]
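For orientation, the frictionless version of the no-arbitrage notion underlying the two records above can be stated compactly; the following standard definition is given for context only, since the papers work with robust versions adapted to bid-ask spreads:
\[
\text{NUPBR:} \qquad \{ V_T^{0,H} : H \text{ admissible}, \ V^{0,H} \ge -1 \} \ \text{is bounded in probability,}
\]
where V^{0,H} denotes the value process of the self-financing strategy H started from zero initial wealth; equivalently, \lim_{c \to \infty} \sup_H \mathbb{P}(V_T^{0,H} > c) = 0.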
1312.4196
1
Deterministic detailed balance (DDB) is a property possessed by certain chemical reaction networks whose reaction rate parameters are appropriately constrained. Stochastic detailed balance (SDB) is achieved in Markov chains when the transition rates satisfy certain constraints. We show that if a chemical reaction network modeled with mass-action kinetics possesses DDB, then it also possesses SDB. In order to show that the converse is not true in general, we present examples of networks which have SDB but not DDB. The conditions on rate constants that result in networks with SDB but without DDB are stringent, and thus examples of this phenomenon are rare, a notable exception to this is a network whose stochastic model is a birth and death process. Using the fact that DDB implies SDB, we obtain an explicit formula for the stationary distribution of networks with DDB.
Detailed balance in reversible chemical reaction networks (CRNs) is a property possessed by certain CRNs when modeled as a deterministic dynamical system with mass-action kinetics whose reaction rate parameters are appropriately constrained, the constraints being imposed by the network structure of the CRN. We will refer to this property as reaction network detailed balance (RNDB). Markov chains (whether arising as models of CRNs or otherwise) have their own notion of detailed balance, imposed by the network structure of the graph of the transition matrix of the Markov chain. When considering Markov chains arising from chemical reaction networks with mass-action kinetics, we will refer to this property as Markov chain detailed balance (MCDB). Finally, we refer to the stochastic analog of RNDB as Whittle stochastic detailed balance (WSDB). It is known that RNDB and WSDB are equivalent, in the sense that they require an identical set of conditions on the rate constants. We prove that WSDB and MCDB are also intimately related but are not equivalent, although they are sometimes confused for each other. While RNDB implies MCDB, the converse is not true. The conditions on rate constants that result in networks with MCDB but without RNDB are stringent, and thus examples of this phenomenon are rare; a notable exception is a network whose Markov chain is a birth and death process. Using the fact that RNDB implies MCDB, we give a new algorithm to find conditions on the rate constants that are required for MCDB, and we obtain an explicit formula for the stationary distribution of networks with RNDB.
[ { "type": "R", "before": "Deterministic detailed balance (DDB", "after": "Detailed balance in reversible chemical reaction networks (CRNs", "start_char_pos": 0, "end_char_pos": 35 }, { "type": "R", "before": "re- action networks", "after": "reaction networks (CRNs) when modeled as a deterministic dynamical system taken with mass-action kinetics", "start_char_pos": 82, "end_char_pos": 101 }, { "type": "R", "before": ". Stochastic", "after": ", the constraints being imposed by the network structure of the CRN. We will refer to this property as reaction network", "start_char_pos": 163, "end_char_pos": 175 }, { "type": "R", "before": "SDB)is achieved in Markov chains when the transition rates satisfy certain constraints. We show that if a chemical reaction network modeled", "after": "RNDB). Markov chains (whether arising as models of CRNs or otherwise) have their own notion of detailed balance, imposed by the network structure of the graph of the transition matrix of the Markov chain. When considering Markov chains arising from chemical reaction networks", "start_char_pos": 195, "end_char_pos": 334 }, { "type": "R", "before": "possesses DDB, then it also possesses SDB. In order to show that the converse is not true in general, we present examples of networks which have SDB but not DDB.", "after": ", we will refer to this property as Markov chain detailed balance (MCDB). Finally, we refer to the stochastic analog of RNDB as Whittle stochastic detailed balance (WSDB). It is known that RNDB and WSDB are equivalent, in the sense that they require an identical set of conditions on the rate constants. We prove that WSDB and MCDB are also intimately related but are not equivalent, although they are sometimes confused for each other. While RNDB implies MCDB, the converse is not true.", "start_char_pos": 361, "end_char_pos": 522 }, { "type": "R", "before": "SDB but without DDB", "after": "MCDB but without RNDB", "start_char_pos": 585, "end_char_pos": 604 }, { "type": "D", "before": "to this", "after": null, "start_char_pos": 687, "end_char_pos": 694 }, { "type": "R", "before": "stochastic model", "after": "Markov chain", "start_char_pos": 714, "end_char_pos": 730 }, { "type": "R", "before": "DDB implies SDB, we", "after": "RNDB implies MCDB, we give a new algorithm to find conditions on the rate constants that are required for MCDB and we", "start_char_pos": 781, "end_char_pos": 800 }, { "type": "R", "before": "DDB", "after": "RNDB", "start_char_pos": 877, "end_char_pos": 880 } ]
[ 0, 164, 282, 522, 760 ]
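To make the record above concrete, Markov chain detailed balance and the classical product-form stationary law of a birth and death process (the "notable exception" mentioned) read as follows; these are standard facts stated for orientation, not formulas quoted from the paper:
\[
\pi(x)\, q(x,y) = \pi(y)\, q(y,x) \quad \text{for all states } x, y \qquad \text{(MCDB)},
\]
and for a birth and death chain with birth rates \lambda_n and death rates \mu_n,
\[
\pi_n = \pi_0 \prod_{i=1}^{n} \frac{\lambda_{i-1}}{\mu_i}, \qquad
\pi_0 = \Big( \sum_{n \ge 0} \prod_{i=1}^{n} \frac{\lambda_{i-1}}{\mu_i} \Big)^{-1},
\]
with the empty product equal to 1.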