Dataset fields (one record per document revision):
doc_id: string (length 2 to 10)
revision_depth: string (5 distinct values)
before_revision: string (length 3 to 309k)
after_revision: string (length 5 to 309k)
edit_actions: list
sents_char_pos: list
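Each edit_actions entry records one change between before_revision and after_revision: a type ("R" replace, "A" add, "D" delete), the affected before/after text, and start_char_pos/end_char_pos character offsets into before_revision; sents_char_pos lists sentence start offsets in the same string. As a rough illustration of how the fields fit together, below is a minimal sketch of replaying the recorded edits on the before text. It assumes the offsets index before_revision directly and that no extra whitespace normalisation is needed; the record.json filename is only a placeholder for however individual records are actually stored.

```python
import json

def apply_edit_actions(before_revision: str, edit_actions: list) -> str:
    """Replay recorded edits on top of the 'before' text.

    Offsets refer to the original string, so edits are applied from the end
    of the string backwards; earlier offsets then stay valid after each splice.
    """
    text = before_revision
    for action in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = action["start_char_pos"], action["end_char_pos"]
        replacement = action["after"] or ""  # "D" (delete) actions carry after = null
        text = text[:start] + replacement + text[end:]
    return text

if __name__ == "__main__":
    # "record.json" is a placeholder for wherever a single record is stored.
    with open("record.json") as fh:
        record = json.load(fh)
    reconstructed = apply_edit_actions(record["before_revision"], record["edit_actions"])
    # The result should approximate after_revision up to whitespace around splices.
    print(reconstructed[:200])
```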
doc_id: 1608.01900
revision_depth: 2
Innovation is URLanizations what evolution is URLanisms: it is how they adapt to changes in the environment and improve. Yet despite steady advances in how evolutionworks , what drives innovation remains elusive. We derive a theory of innovationin which products are composed of components and new components are adopted one at a time. We test it on data from language, gastronomy and technology. We show that the rate of innovation depends on the size distribution of products, and that a small number of simple products can dramatically increase the innovation rate. By strategically choosing which components to adopt , we show how to increase the innovation rate to achieve short-term gain or long-term growth.
Innovation is URLanizations what evolution is URLanisms: it is URLanisations adapt to changes in the environment and improve. Yet despite steady advances in our understanding of evolution , what drives innovation remains elusive. There is a tension between a managerial school, which seeks a systematic prescription for innovation, and a visionary school, which ascribes innovation to serendipity and the intuition of great minds. We therefore provide a mathematical foundation for innovation---in which products are made of components and components are acquired one at a time---which serves as a common framework for both. We apply our model to data from language, gastronomy and technology. By strategically choosing which components to adopt as the innovation process unfolds, we can alter the innovation rate to achieve short-term gain or long-term growth.
[ { "type": "R", "before": "how they", "after": "URLanisations", "start_char_pos": 63, "end_char_pos": 71 }, { "type": "R", "before": "how evolutionworks", "after": "our understanding of evolution", "start_char_pos": 152, "end_char_pos": 170 }, { "type": "R", "before": "We derive a theory of innovationin which products are composed", "after": "There is a tension between a managerial school, which seeks a systematic prescription for innovation, and a visionary school, which ascribes innovation to serendipity and the intuition of great minds. We therefore provide a mathematical foundation for innovation---in which products are made", "start_char_pos": 213, "end_char_pos": 275 }, { "type": "R", "before": "new components are adopted", "after": "components are acquired", "start_char_pos": 294, "end_char_pos": 320 }, { "type": "R", "before": "time. We test it on", "after": "time---which serves as a common framework for both. We apply our model to", "start_char_pos": 330, "end_char_pos": 349 }, { "type": "D", "before": "We show that the rate of innovation depends on the size distribution of products, and that a small number of simple products can dramatically increase the innovation rate.", "after": null, "start_char_pos": 397, "end_char_pos": 568 }, { "type": "R", "before": ", we show how to increase", "after": "as the innovation process unfolds, we can alter", "start_char_pos": 621, "end_char_pos": 646 } ]
[ 0, 120, 212, 335, 396, 568 ]
doc_id: 1608.01900
revision_depth: 3
Innovation is URLanizations what evolution is URLanisms: it is URLanisations adapt to changes in the environment and improve. Yet despite steady advances in our understanding of evolution, what drives innovation remains elusive. There is a tension between a managerial school, which seeks a systematic prescription for innovation, and a visionary school, which ascribes innovation to serendipity and the intuition of great minds. We therefore provide a mathematical foundation for innovation---in which products are made of components and components are acquired one at a time---which serves as a common framework for both. We apply our model to data from language, gastronomy and technology. By strategically choosing which components to adopt as the innovation process unfolds, we can alter the innovation rate to achieve short-term gain or long-term growth .
Innovation is URLanizations what evolution is URLanisms: it is URLanisations adapt to changes in the environment and improve. Governments, institutions and firms that innovate are more likely to prosper and stand the test of time; those that fail to do so fall behind their competitors and succumb to market and environmental change. Yet despite steady advances in our understanding of evolution, what drives innovation remains elusive. On the one URLanizations invest heavily in systematic strategies to drive innovation. On the other, historical analysis and individual experience suggest that serendipity plays a significant role in the discovery process. To unify these two perspectives, we analyzed the mathematics of innovation as a search process for viable designs across a universe of building blocks. We then tested our insights using historical data from language, gastronomy and technology. By measuring the number of makeable designs as we acquire more components, we observed that the relative usefulness of different components is not fixed, but cross each other over time. When these crossovers are unanticipated, they appear to be the result of serendipity. But when we can predict crossovers ahead of time, they offer an opportunity to strategically increase the growth of our product space. Thus we find that the serendipitous and strategic visions of innovation can be viewed as different manifestations of the same thing: the changing importance of component building blocks over time .
[ { "type": "A", "before": null, "after": "Governments, institutions and firms that innovate are more likely to prosper and stand the test of time; those that fail to do so fall behind their competitors and succumb to market and environmental change.", "start_char_pos": 126, "end_char_pos": 126 }, { "type": "R", "before": "There is a tension between a managerial school, which seeks a systematic prescription for innovation, and a visionary school, which ascribes innovation to serendipity and the intuition of great minds. We therefore provide a mathematical foundation for innovation---in which products are made of components and components are acquired one at a time---which serves as a common framework for both. We apply our model to", "after": "On the one URLanizations invest heavily in systematic strategies to drive innovation. On the other, historical analysis and individual experience suggest that serendipity plays a significant role in the discovery process. To unify these two perspectives, we analyzed the mathematics of innovation as a search process for viable designs across a universe of building blocks. We then tested our insights using historical", "start_char_pos": 230, "end_char_pos": 646 }, { "type": "R", "before": "strategically choosing which components to adopt as the innovation process unfolds, we can alter the innovation rate to achieve short-term gain or long-term growth", "after": "measuring the number of makeable designs as we acquire more components, we observed that the relative usefulness of different components is not fixed, but cross each other over time. When these crossovers are unanticipated, they appear to be the result of serendipity. But when we can predict crossovers ahead of time, they offer an opportunity to strategically increase the growth of our product space. Thus we find that the serendipitous and strategic visions of innovation can be viewed as different manifestations of the same thing: the changing importance of component building blocks over time", "start_char_pos": 697, "end_char_pos": 860 } ]
[ 0, 125, 229, 430, 624, 693 ]
doc_id: 1608.01912
revision_depth: 1
Strongly correlated electrostatics of DNA systems has drawn the interest of many groups, especially the condensation and overcharging of DNA by multivalent counterions. By adding counterions of different valencies and shapes, one can enhance or reduce DNA overcharging. In this letter , we focus on the effect of multivalent co-ions, specifically divalent coion such as SO_4^{2-} , on the strongly correlated electrostatics of DNA condensation problem . A computational experiment of DNA condensation using Monte-Carlo simulation in grand canonical ensemble is carried out where DNA system is in equilibirium with a bulk solution containing a mixture of salt of different valency of co-ions. Compared to system with purely monovalent co-ions, the influence of divalent co-ions shows up in multiple aspects. Divalent co-ions lead to an increase of monovalent salt in the DNA condensate. Because monovalent salts mostly participate in linear screening of electrostatic interactions in the system, more monovalent salt molecules enter the condensate leads to screening out of short-range DNA-DNA like charge attraction and weaker DNA condensation free energy. Additionally, strong repulsions between DNA and divalent co-ions and among divalent co-ions themselves leads to a {\em depletion} of negative ions near DNA surface as compared to the case without divalent co-ions. This leads to less screened and stronger electrostatic correlations of divalent counterions condensed on the DNA surface. This in turns results in a stronger overcharging of DNA by multivalent counterions .
Strongly correlated electrostatics of DNA systems has drawn the interest of many groups, especially the condensation and overcharging of DNA by multivalent counterions. By adding counterions of different valencies and shapes, one can enhance or reduce DNA overcharging. In this papers , we focus on the effect of multivalent co-ions, specifically divalent co-ions such as SO_4^{2-} . A computational experiment of DNA condensation using Monte-Carlo simulation in grand canonical ensemble is carried out where DNA system is in equilibrium with a bulk solution containing a mixture of salt of different valency of co-ions. Compared to system with purely monovalent co-ions, the influence of divalent co-ions shows up in multiple aspects. Divalent co-ions lead to an increase of monovalent salt in the DNA condensate. Because monovalent salts mostly participate in linear screening of electrostatic interactions in the system, more monovalent salt molecules enter the condensate leads to screening out of short-range DNA-DNA like charge attraction and weaker DNA condensation free energy. The overcharging of DNA by multivalent counterions is also reduced in the presence of divalent co-ions. Strong repulsions between DNA and divalent co-ions and among divalent co-ions themselves leads to a {\em depletion} of negative ions near DNA surface as compared to the case without divalent co-ions. At large distance, the DNA-DNA repulsive interaction is stronger in the presence of divalent co-ions, suggesting that divalent co-ions role is not only that of simple stronger linear screening .
[ { "type": "R", "before": "letter", "after": "papers", "start_char_pos": 278, "end_char_pos": 284 }, { "type": "R", "before": "coion", "after": "co-ions", "start_char_pos": 356, "end_char_pos": 361 }, { "type": "D", "before": ", on the strongly correlated electrostatics of DNA condensation problem", "after": null, "start_char_pos": 380, "end_char_pos": 451 }, { "type": "R", "before": "equilibirium", "after": "equilibrium", "start_char_pos": 596, "end_char_pos": 608 }, { "type": "R", "before": "Additionally, strong", "after": "The overcharging of DNA by multivalent counterions is also reduced in the presence of divalent co-ions. Strong", "start_char_pos": 1157, "end_char_pos": 1177 }, { "type": "R", "before": "This leads to less screened and stronger electrostatic correlations of divalent counterions condensed on the DNA surface. This in turns results in a stronger overcharging of DNA by multivalent counterions", "after": "At large distance, the DNA-DNA repulsive interaction is stronger in the presence of divalent co-ions, suggesting that divalent co-ions role is not only that of simple stronger linear screening", "start_char_pos": 1371, "end_char_pos": 1575 } ]
[ 0, 168, 269, 453, 691, 806, 885, 1156, 1370, 1492 ]
doc_id: 1608.02523
revision_depth: 1
We review production function and the hypothesis of equilibrium in the neoclassical framework. We notify that in a soup of sectors in economy while capital and labor resemble extensive variables, wage and rate of return on capital act as intensive variables. As a result, Baumol and Bowen's statement of equal wages is inevitable from thermodynamics point of view. We then try to see how aggregation can be performed concerning the extensive variables in a soup of firms. Finally, we provide a toy model to aggregate production and the labor income as extensive quantities in a neoclassical framework.
We review the production function and the hypothesis of equilibrium in the neoclassical framework. We notify that in a soup of sectors in economy , while capital and labor resemble extensive variables, wage and rate of return on capital act as intensive variables. As a result, Baumol and Bowen's statement of equal wages is inevitable from the thermodynamics point of view. We try to see how aggregation can be performed concerning the extensive variables in a soup of firms. We provide a toy model to perform aggregation for production and the labor income as extensive quantities in a neoclassical framework.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 10, "end_char_pos": 10 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 143, "end_char_pos": 143 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 337, "end_char_pos": 337 }, { "type": "D", "before": "then", "after": null, "start_char_pos": 371, "end_char_pos": 375 }, { "type": "R", "before": "Finally, we", "after": "We", "start_char_pos": 475, "end_char_pos": 486 }, { "type": "R", "before": "aggregate", "after": "perform aggregation for", "start_char_pos": 510, "end_char_pos": 519 } ]
[ 0, 95, 260, 367, 474 ]
doc_id: 1608.02550
revision_depth: 1
We consider the dividends problem for both de Finetti's and Dual models for spectrally one-sided L\'evy processes subject to a constraint on the time of ruin . We0pt%DIFAUXCMD , now in context of one-sided L\'evy risk models. We consider de Finetti's problem in both scenarios with and without fix transaction costs, e.g. taxes. We also study the constrained analog to the so called Dual model. To characterize the solution to the aforementioned models we } introduce the dual problem and show that the complementary slackness condition in both models are satisfied , thus there is no duality gap. Therefore the optimal value function can be obtained as the point-wise infimum of auxiliary value functions indexed by Lagrange multipliers. We also present a numerical example .
We introduce a longevity feature to the classical optimal dividend problem by adding a constraint on the time of ruin of the firm. We extend the results in \mbox{%DIFAUXCMD HJ150pt%DIFAUXCMD , now in context of one-sided L\'evy risk models. We consider de Finetti's problem in both scenarios with and without fix transaction costs, e.g. taxes. We also study the constrained analog to the so called Dual model. To characterize the solution to the aforementioned models we } introduce the dual problem and show that the complementary slackness conditions are satisfied and therefore there is no duality gap. As a consequence the optimal value function can be obtained as the pointwise infimum of auxiliary value functions indexed by Lagrange multipliers. Finally, we illustrate our findings with a series of numerical examples .
[ { "type": "R", "before": "consider the dividends problem for both de Finetti's and Dual models for spectrally one-sided L\\'evy processes subject to", "after": "introduce a longevity feature to the classical optimal dividend problem by adding", "start_char_pos": 3, "end_char_pos": 124 }, { "type": "R", "before": ". We", "after": "of the firm. We extend the results in \\mbox{%DIFAUXCMD HJ15", "start_char_pos": 158, "end_char_pos": 162 }, { "type": "R", "before": "condition in both models are satisfied , thus", "after": "conditions are satisfied and therefore", "start_char_pos": 527, "end_char_pos": 572 }, { "type": "R", "before": "Therefore", "after": "As a consequence", "start_char_pos": 598, "end_char_pos": 607 }, { "type": "R", "before": "point-wise", "after": "pointwise", "start_char_pos": 658, "end_char_pos": 668 }, { "type": "R", "before": "We also present a numerical example", "after": "Finally, we illustrate our findings with a series of numerical examples", "start_char_pos": 739, "end_char_pos": 774 } ]
[ 0, 159, 225, 328, 394, 597, 738 ]
doc_id: 1608.03145
revision_depth: 1
We treat proteins as amorphous learning matter: A `gene' encodes bonds in an `amino acid ' network making a `protein' . The gene is evolved until the network forms a shear band across the protein, which allows for long-range soft modes required for protein function. The evolution projects the high-dimensional sequence space onto a low-dimensional space of mechanical modes, in accord with the observed dimensional reduction between genotype and phenotype of proteins. Spectral analysis shows correspondence between localization around the shear band of both mechanical modes and sequence ripples .
How DNA is mapped to functional proteins is a basic question of living matter. We introduce and study a physical model of protein evolution which suggests a mechanical basis for this map. Many proteins rely on large-scale motion to function. We therefore treat protein as learning amorphous matter that evolves towards such a mechanical function: Genes are binary sequences that encode the connectivity of the amino acid network that makes a protein . The gene is evolved until the network forms a shear band across the protein, which allows for long-range , soft modes required for protein function. The evolution reduces the high-dimensional sequence space to a low-dimensional space of mechanical modes, in accord with the observed dimensional reduction between genotype and phenotype of proteins. Spectral analysis of the space of 10^6 solutions shows a strong correspondence between localization around the shear band of both mechanical modes and the sequence structure. Specifically, our model shows how mutations of the gene and their correlations occur at amino acids whose interactions determine the functional mode .
[ { "type": "R", "before": "We treat proteins as amorphous learning matter: A `gene' encodes bonds in an `amino acid ' network making a `protein'", "after": "How DNA is mapped to functional proteins is a basic question of living matter. We introduce and study a physical model of protein evolution which suggests a mechanical basis for this map. Many proteins rely on large-scale motion to function. We therefore treat protein as learning amorphous matter that evolves towards such a mechanical function: Genes are binary sequences that encode the connectivity of the amino acid network that makes a protein", "start_char_pos": 0, "end_char_pos": 117 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "R", "before": "projects", "after": "reduces", "start_char_pos": 282, "end_char_pos": 290 }, { "type": "R", "before": "onto", "after": "to", "start_char_pos": 327, "end_char_pos": 331 }, { "type": "R", "before": "shows", "after": "of the space of 10^6 solutions shows a strong", "start_char_pos": 489, "end_char_pos": 494 }, { "type": "R", "before": "sequence ripples", "after": "the sequence structure. Specifically, our model shows how mutations of the gene and their correlations occur at amino acids whose interactions determine the functional mode", "start_char_pos": 582, "end_char_pos": 598 } ]
[ 0, 119, 267, 470 ]
doc_id: 1608.03993
revision_depth: 1
Mathematical modeling has become an established tool for studying biological dynamics . Current applications range from building models that reproduce quantitative data to identifying models with predefined qualitative features, such as switching behavior , bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce an algorithm to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique or multiple equilibria. The algorithm is based on a simple idea, the computation of the Brouwer degree, and creates a multivariate polynomial with parameter depending coefficients. Using algebraic techniques, the signs of the coefficients reveal parameter regions with and without multistationarity. We demonstrate the algorithm on models of gene transcription and cell signaling, and argue that the parameter constraints defining each region have biological meaningful interpretations .
Mathematical modelling has become an established tool for studying the dynamics of biological systems . Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour , bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter depending coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity .
[ { "type": "R", "before": "modeling", "after": "modelling", "start_char_pos": 13, "end_char_pos": 21 }, { "type": "R", "before": "biological dynamics", "after": "the dynamics of biological systems", "start_char_pos": 66, "end_char_pos": 85 }, { "type": "R", "before": "models", "after": "systems", "start_char_pos": 184, "end_char_pos": 190 }, { "type": "R", "before": "behavior", "after": "behaviour", "start_char_pos": 247, "end_char_pos": 255 }, { "type": "R", "before": "an algorithm", "after": "a procedure", "start_char_pos": 421, "end_char_pos": 433 }, { "type": "R", "before": "algorithm", "after": "procedure", "start_char_pos": 603, "end_char_pos": 612 }, { "type": "D", "before": "a simple idea,", "after": null, "start_char_pos": 625, "end_char_pos": 639 }, { "type": "A", "before": null, "after": "it", "start_char_pos": 683, "end_char_pos": 683 }, { "type": "R", "before": "Using algebraic techniques, the", "after": "The", "start_char_pos": 757, "end_char_pos": 788 }, { "type": "R", "before": "reveal", "after": "determine", "start_char_pos": 815, "end_char_pos": 821 }, { "type": "R", "before": "We demonstrate the algorithm on", "after": "A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several", "start_char_pos": 876, "end_char_pos": 907 }, { "type": "R", "before": "signaling, and argue that the parameter constraints defining each region have biological meaningful interpretations", "after": "signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity", "start_char_pos": 946, "end_char_pos": 1061 } ]
[ 0, 286, 407, 598, 756, 875 ]
doc_id: 1608.04683
revision_depth: 1
This paper introduces a more general modeling world than available under the classical no-arbitrage paradigm in finance. New research questions and interesting related econometric studies emerge naturally. To explain in this paper the new approach and illustrate first important consequences, we show how to hedge a zero coupon bond with a smaller amount of initial capital than required by the classical risk neutral paradigm, whose (trivial) hedging strategy does not suggest to invest in the risky assets. Long dated zero coupon bonds we derive, invest first primarily in risky securities and when approaching more and more the maturity date they increase also more and more the fraction invested in fixed income. The conventional wisdom of financial planners suggesting investor to invest in risky securities when they are young and mostly in fixed income when they approach retirement, is here made rigorous . The main reason for the existence of less expensive zero coupon bonds is the strict supermartingale property of benchmarked savings accounts under the real world probability measure, which the calibrated parameters identify under the proposed model. We provide intuition and insight on the strict supermartingale property. The less expensive zero coupon bonds provide only one first example that is indicative for the changes that the new approach offers in the much wider modeling world . The paper provides a strong warning for life insurers, pension fund managers and long term investors to take the possibility of less expensive products seriously to avoid the adverse consequences of the low interest rate regimes that many developed economies face.
In this paper we show how to hedge a zero coupon bond with a smaller amount of initial capital than required by the classical risk neutral paradigm, whose (trivial) hedging strategy does not suggest to invest in the risky assets. Long dated zero coupon bonds we derive, invest first primarily in risky securities and when approaching more and more the maturity date they increase also more and more the fraction invested in fixed income. The conventional wisdom of financial planners suggesting investor to invest in risky securities when they are young and mostly in fixed income when they approach retirement, is here made rigorous . The paper provides a strong warning for life insurers, pension fund managers and long term investors to take the possibility of less expensive products seriously to avoid the adverse consequences of the low interest rate regimes that many developed economies face.
[ { "type": "R", "before": "This paper introduces a more general modeling world than available under the classical no-arbitrage paradigm in finance. New research questions and interesting related econometric studies emerge naturally. To explain in this paper the new approach and illustrate first important consequences,", "after": "In this paper", "start_char_pos": 0, "end_char_pos": 292 }, { "type": "D", "before": ". The main reason for the existence of less expensive zero coupon bonds is the strict supermartingale property of benchmarked savings accounts under the real world probability measure, which the calibrated parameters identify under the proposed model. We provide intuition and insight on the strict supermartingale property. The less expensive zero coupon bonds provide only one first example that is indicative for the changes that the new approach offers in the much wider modeling world", "after": null, "start_char_pos": 913, "end_char_pos": 1402 } ]
[ 0, 120, 205, 508, 716, 914, 1164, 1237, 1404 ]
doc_id: 1608.05498
revision_depth: 1
Conditional forecasts of risk measures play an important role in internal risk management of financial institutions as well as in regulatory capital calculations. In order to assess forecasting performance of a risk measurement procedure, risk measure forecasts are compared to the realized financial losses over a period of time and a statistical test of correctness of the procedure is conducted. This process is known as backtesting. Such traditional backtests are concerned with assessing some optimality property of a set of risk measure estimates. However, they are not suited to compare different risk estimation procedures. We investigate the proposal of comparative backtests, which are better suited for method comparisons on the basis of forecasting accuracy, but necessitate an elicitable risk measure. The discussion focuses on three risk measures, value-at-risk , expected shortfall and expectiles, and is supported by a simulation study and data analysis.
Conditional forecasts of risk measures play an important role in internal risk management of financial institutions as well as in regulatory capital calculations. In order to assess forecasting performance of a risk measurement procedure, risk measure forecasts are compared to the realized financial losses over a period of time and a statistical test of correctness of the procedure is conducted. This process is known as backtesting. Such traditional backtests are concerned with assessing some optimality property of a set of risk measure estimates. However, they are not suited to compare different risk estimation procedures. We investigate the proposal of comparative backtests, which are better suited for method comparisons on the basis of forecasting accuracy, but necessitate an elicitable risk measure. We argue that supplementing traditional backtests with comparative backtests will enhance the existing trading book regulatory framework for banks by providing the correct incentive for accuracy of risk measure forecasts. In addition, the comparative backtesting framework could be used by banks internally as well as by researchers to guide selection of forecasting methods. The discussion focuses on three risk measures, Value-at-Risk , expected shortfall and expectiles, and is supported by a simulation study and data analysis.
[ { "type": "A", "before": null, "after": "We argue that supplementing traditional backtests with comparative backtests will enhance the existing trading book regulatory framework for banks by providing the correct incentive for accuracy of risk measure forecasts. In addition, the comparative backtesting framework could be used by banks internally as well as by researchers to guide selection of forecasting methods.", "start_char_pos": 815, "end_char_pos": 815 }, { "type": "R", "before": "value-at-risk", "after": "Value-at-Risk", "start_char_pos": 863, "end_char_pos": 876 } ]
[ 0, 162, 398, 436, 553, 631, 814 ]
doc_id: 1608.05585
revision_depth: 1
Given a finite set of European call option prices on a single underlying, we want to know when there is a market model which is consistent with these prices. In contrast to previous studies, we allow models where the underlying trades at a bid-ask spread. The main question then is how large (in terms of a deterministic bound) this spread must be to explain the given prices. We fully solve this problem in the case of a single maturity, and give several partial results for multiple maturities. For the latter, our main mathematical tool is a recent generalization of Strassen's theorem [S. Gerhold, I.C. G\"ul\"um , arXiv:1512.06640] , which characterizes the existence of martingales in balls w.r.t. the infinity Wasserstein distance .
Given a finite set of European call option prices on a single underlying, we want to know when there is a market model which is consistent with these prices. In contrast to previous studies, we allow models where the underlying trades at a bid-ask spread. The main question then is how large (in terms of a deterministic bound) this spread must be to explain the given prices. We fully solve this problem in the case of a single maturity, and give several partial results for multiple maturities. For the latter, our main mathematical tool is a recent result on approximation by peacocks [S. Gerhold, I.C. G\"ul\"uum , arXiv:1512.06640] .
[ { "type": "R", "before": "generalization of Strassen's theorem", "after": "result on approximation by peacocks", "start_char_pos": 552, "end_char_pos": 588 }, { "type": "R", "before": "G\\\"ul\\\"um", "after": "G\\\"ul\\\"uum", "start_char_pos": 607, "end_char_pos": 616 }, { "type": "D", "before": ", which characterizes the existence of martingales in balls w.r.t. the infinity Wasserstein distance", "after": null, "start_char_pos": 637, "end_char_pos": 737 } ]
[ 0, 157, 255, 376, 496 ]
doc_id: 1608.05900
revision_depth: 1
We consider a dynamic market model where buyers and sellers submit limit orders . If at a given moment in time, the buyer is unable to complete his entire order due to the shortage of sell orders at the required limit price, the unmatched part of the order is recorded in the order book. Subsequently these buy unmatched orders may be matched with new incoming sell orders. The resulting demand curve constitutes the sole input to our model. The clearing price is then mechanically calculated using the market clearing condition. We model liquidity by considering the impact of a large trader on the market and on the clearing price. We assume a continuous model for the demand curve. We show that generically there exists an equivalent martingale measure for the clearing price , for all possible strategies of the large trader, if the driving noise is a Brownian sheet, while there may not be if the driving noise is multidimensional Brownian motion. Another contribution of this paper is to prove that, if there exists such an equivalent martingalemeasure, then, under mild conditions, there is no arbitrage. We use the Ito-Wentzell formula to obtain both results. We also characterize the dynamics of the demand curve and of the clearing price in the equivalent martingale measure. We find that the volatility of the clearing price is inversely proportional to the sum of buy and sell order flow density (evaluated at the clearing price ), which confirms the intuition that volatility is inversely proportional to volume. We also demonstrate that our approach is implementable. We use real order book data and simulate option prices under a particularly simple parameterization of our model. The no-arbitrage conditions we obtain are applicable to a wide class of models, in the same way that the Heath-Jarrow-Morton conditions apply to a wide class of interest rate models .
We consider a dynamic market model of liquidity where unmatched buy and sell limit orders are stored in order books. The resulting net demand surface constitutes the sole input to the model. We prove that generically there is no arbitrage in the model when the driving noise is a stochastic string. Under the equivalent martingale measure , the clearing price is a martingale, and options can be priced under the no-arbitrage hypothesis. We consider several parameterized versions of the model, and show some advantages of specifying the demand curve as quantity as a function of price (as opposed to price as a function of quantity). We calibrate our model to real order book data , compute option prices by Monte Carlo simulation, and compare the results to observed data .
[ { "type": "R", "before": "where buyers and sellers submit limit orders . If at a given moment in time, the buyer is unable to complete his entire order due to the shortage of sell orders at the required limit price, the unmatched part of the order is recorded in the order book. Subsequently these buy unmatched orders may be matched with new incoming sell orders. The resulting demand curve", "after": "of liquidity where unmatched buy and sell limit orders are stored in order books. The resulting net demand surface", "start_char_pos": 35, "end_char_pos": 400 }, { "type": "R", "before": "our model. The clearing price is then mechanically calculated using the market clearing condition. We model liquidity by considering the impact of a large trader on the market and on the clearing price. We assume a continuous model for the demand curve. We show that generically there exists an", "after": "the model. We prove that generically there is no arbitrage in the model when the driving noise is a stochastic string. Under the", "start_char_pos": 431, "end_char_pos": 725 }, { "type": "R", "before": "for", "after": ",", "start_char_pos": 756, "end_char_pos": 759 }, { "type": "R", "before": ", for all possible strategies of the large trader, if the driving noise is a Brownian sheet, while there may not be if the driving noise is multidimensional Brownian motion. Another contribution of this paper is to prove that, if there exists such an equivalent martingalemeasure, then, under mild conditions, there is no arbitrage. We use the Ito-Wentzell formula to obtain both results. We also characterize the dynamics of", "after": "is a martingale, and options can be priced under", "start_char_pos": 779, "end_char_pos": 1204 }, { "type": "A", "before": null, "after": "no-arbitrage hypothesis. We consider several parameterized versions of the model, and show some advantages of specifying the", "start_char_pos": 1209, "end_char_pos": 1209 }, { "type": "R", "before": "and of the clearing price in the equivalent martingale measure. We find that the volatility of the clearing price is inversely proportional to the sum of buy and sell order flow density (evaluated at the clearing price ), which confirms the intuition that volatility is inversely proportional to volume. We also demonstrate that our approach is implementable. We use", "after": "as quantity as a function of price (as opposed to price as a function of quantity). We calibrate our model to", "start_char_pos": 1223, "end_char_pos": 1589 }, { "type": "R", "before": "and simulate option prices under a particularly simple parameterization of our model. The no-arbitrage conditions we obtain are applicable to a wide class of models, in the same way that the Heath-Jarrow-Morton conditions apply to a wide class of interest rate models", "after": ", compute option prices by Monte Carlo simulation, and compare the results to observed data", "start_char_pos": 1611, "end_char_pos": 1878 } ]
[ 0, 287, 373, 441, 529, 633, 684, 952, 1111, 1167, 1286, 1526, 1582, 1696 ]
doc_id: 1608.06376
revision_depth: 1
The classical derivation of the well-known Vasicek model for interest rates is reformulated in terms of the associated pricing kernel. An advantage of the pricing kernel method is that it allows one to generalize the construction to the L\'evy-Vasicek case, avoiding issues of market incompleteness. In the L\'evy-Vasicek model the short rate is taken in the real-world measure to be a mean-reverting process with a general one-dimensional L\'evy driver admitting exponential moments. Expressions are obtained for the L\'evy-Vasicek bond prices and interest rates, along with a formula for the corresponding long-bond return process defined by L_t = lim _{T %DIFDELCMD < \to %%% \infty} P_{tT} / P_{0T}, where P_{tT} is the price at time t of a T-maturity discount bond. We show that the pricing kernel of a L\'evy-Vasicek model is uniformly integrable if and only if the long rate of interest is strictly positive.
The classical derivation of the well-known Vasicek model for interest rates is reformulated in terms of the associated pricing kernel. An advantage of the pricing kernel method is that it allows one to generalize the construction to the L\'evy-Vasicek case, avoiding issues of market incompleteness. In the L\'evy-Vasicek model the short rate is taken in the real-world measure to be a mean-reverting process with a general one-dimensional L\'evy driver admitting exponential moments. Expressions are obtained for the L\'evy-Vasicek bond prices and interest rates, along with a formula for the return on a unit investment in the long bond, defined by L_t = \lim _{T %DIFDELCMD < \to %%% \rightarrow \infty} P_{tT} / P_{0T}, where P_{tT} is the price at time t of a T-maturity discount bond. We show that the pricing kernel of a L\'evy-Vasicek model is uniformly integrable if and only if the long rate of interest is strictly positive.
[ { "type": "R", "before": "corresponding long-bond return process", "after": "return on a unit investment in the long bond,", "start_char_pos": 594, "end_char_pos": 632 }, { "type": "R", "before": "lim", "after": "\\lim", "start_char_pos": 650, "end_char_pos": 653 }, { "type": "A", "before": null, "after": "\\rightarrow", "start_char_pos": 679, "end_char_pos": 679 } ]
[ 0, 134, 299, 484, 771 ]
doc_id: 1608.06476
revision_depth: 1
Multiple biological processes are driven by oscillatory gene expression at different time scales. Oscillatory dynamics are thought to be widespread due to their superior information encoding capabilities, and single cell live imaging of gene expression has lead to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in oscillatory dynamics. Furthermore, the process of transcription has been shown to be bursty, which blurs the boundaries between aperiodic fluctuations and noisy oscillators. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of Bayesian non-parametric regression . Our method can distinguish oscillatory expression from random fluctuations of non-oscillatory gene expression in single cell data , despite peak-to-peak variability in period and amplitude of single cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in successfully classifying cells as oscillatory or non-oscillatory in data simulated from a simple genetic oscillator model . Analysis of bioluminescent live cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a {\it Hes1 promoter (10/19), which has previously been reported to oscillate, than the constitutive MMLV promoter (0/25). The method can be applied to data from any gene network to both quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. It is publicly available as a MATLAB package.
Multiple biological processes are driven by oscillatory gene expression at different time scales. Pulsatile dynamics are thought to be widespread , and single-cell live imaging of gene expression has lead to a surge of dynamic, possibly oscillatory, data for different gene networks. However, the regulation of gene expression at the level of an individual cell involves reactions between finite numbers of molecules, and this can result in inherent randomness in expression dynamics, which blurs the boundaries between aperiodic fluctuations and noisy oscillators. Thus, there is an acute need for an objective statistical method for classifying whether an experimentally derived noisy time series is periodic. Here we present a new data analysis method that combines mechanistic stochastic modelling with the powerful methods of non-parametric regression with Gaussian processes . Our method can distinguish oscillatory gene expression from random fluctuations of non-oscillatory expression in single-cell time series , despite peak-to-peak variability in period and amplitude of single-cell oscillations. We show that our method outperforms the Lomb-Scargle periodogram in successfully classifying cells as oscillatory or non-oscillatory in data simulated from a simple genetic oscillator model and in experimental data . Analysis of bioluminescent live cell imaging shows a significantly greater number of oscillatory cells when luciferase is driven by a {\it Hes1 promoter (10/19), which has previously been reported to oscillate, than the constitutive MoMuLV 5' LTR (MMLV) promoter (0/25). The method can be applied to data from any gene network to both quantify the proportion of oscillating cells within a population and to measure the period and quality of oscillations. It is publicly available as a MATLAB package.
[ { "type": "R", "before": "Oscillatory", "after": "Pulsatile", "start_char_pos": 98, "end_char_pos": 109 }, { "type": "R", "before": "due to their superior information encoding capabilities, and single cell", "after": ", and single-cell", "start_char_pos": 148, "end_char_pos": 220 }, { "type": "R", "before": "oscillatory dynamics. Furthermore, the process of transcription has been shown to be bursty,", "after": "expression dynamics,", "start_char_pos": 521, "end_char_pos": 613 }, { "type": "D", "before": "Bayesian", "after": null, "start_char_pos": 960, "end_char_pos": 968 }, { "type": "A", "before": null, "after": "with Gaussian processes", "start_char_pos": 995, "end_char_pos": 995 }, { "type": "A", "before": null, "after": "gene", "start_char_pos": 1037, "end_char_pos": 1037 }, { "type": "R", "before": "gene expression in single cell data", "after": "expression in single-cell time series", "start_char_pos": 1093, "end_char_pos": 1128 }, { "type": "R", "before": "single cell", "after": "single-cell", "start_char_pos": 1191, "end_char_pos": 1202 }, { "type": "A", "before": null, "after": "and in experimental data", "start_char_pos": 1407, "end_char_pos": 1407 }, { "type": "R", "before": "MMLV", "after": "MoMuLV 5' LTR (MMLV)", "start_char_pos": 1643, "end_char_pos": 1647 } ]
[ 0, 97, 340, 542, 694, 840, 997, 1216, 1664, 1848 ]
doc_id: 1608.06582
revision_depth: 1
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the Chemical Master Equation. Despite its simple structure, no analytic solutions to the Chemical Master Equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic models for chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight various of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the Chemical Master Equation. Despite its simple structure, no analytic solutions to the Chemical Master Equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
[ { "type": "R", "before": "models for", "after": "methods for modelling", "start_char_pos": 854, "end_char_pos": 864 }, { "type": "R", "before": "various", "after": "some", "start_char_pos": 1360, "end_char_pos": 1367 } ]
[ 0, 81, 165, 292, 404, 541, 676, 791, 946, 1151, 1258, 1417, 1563 ]
doc_id: 1608.07226
revision_depth: 1
In this paper we investigate the hedging problem of a defaultable claim with recovery at default time via the local risk-minimization approach when investors have a restricted information on the market. We assume that the stock price process dynamics depends on an exogenous unobservable stochastic factor and that at any time, investors may observe the risky asset price and know if default has occurred or not . We characterize the optimal strategy in terms of the integrand in the Galtchouk-Kunita-Watanabe decomposition of the defaultable claim with respect to the minimal martingale measure and the available information flow. Finally, we provide an explicit formula by means of predictable projection of the corresponding hedging strategy under full information with respect to the natural filtration of the risky asset price and the minimal martingale measure in a Markovian setting via filtering.
In this paper we investigate the hedging problem of a unit-linked life insurance contract via the local risk-minimization approach , when the insurer has a restricted information on the market. In particular, we consider an endowment insurance contract, that is a combination of a term insurance policy and a pure endowment, whose final value depends on the trend of a stock market where the premia the policyholder pays are invested. We assume that the stock price process dynamics depends on an exogenous unobservable stochastic factor that also influences the mortality rate of the policyholder. To allow for mutual dependence between the financial and the insurance markets, we use the progressive enlargement of filtration approach . We characterize the optimal hedging strategy in terms of the integrand in the Galtchouk-Kunita-Watanabe decomposition of the insurance claim with respect to the minimal martingale measure and the available information flow. We provide an explicit formula by means of predictable projection of the corresponding hedging strategy under full information with respect to the natural filtration of the risky asset price and the minimal martingale measure . Finally, we discuss applications in a Markovian setting via filtering.
[ { "type": "R", "before": "defaultable claim with recovery at default time", "after": "unit-linked life insurance contract", "start_char_pos": 54, "end_char_pos": 101 }, { "type": "R", "before": "when investors have", "after": ", when the insurer has", "start_char_pos": 143, "end_char_pos": 162 }, { "type": "A", "before": null, "after": "In particular, we consider an endowment insurance contract, that is a combination of a term insurance policy and a pure endowment, whose final value depends on the trend of a stock market where the premia the policyholder pays are invested.", "start_char_pos": 203, "end_char_pos": 203 }, { "type": "R", "before": "and that at any time, investors may observe the risky asset price and know if default has occurred or not", "after": "that also influences the mortality rate of the policyholder. To allow for mutual dependence between the financial and the insurance markets, we use the progressive enlargement of filtration approach", "start_char_pos": 307, "end_char_pos": 412 }, { "type": "A", "before": null, "after": "hedging", "start_char_pos": 443, "end_char_pos": 443 }, { "type": "R", "before": "defaultable", "after": "insurance", "start_char_pos": 533, "end_char_pos": 544 }, { "type": "R", "before": "Finally, we", "after": "We", "start_char_pos": 634, "end_char_pos": 645 }, { "type": "A", "before": null, "after": ". Finally, we discuss applications", "start_char_pos": 869, "end_char_pos": 869 } ]
[ 0, 202, 414, 633 ]
doc_id: 1608.07663
revision_depth: 1
The living cell uses a variety of molecular receptors and signaling networks to read and process chemical signals that may vary in space and time. We model the dynamics of such molecular level measurements as Markov processes in steady state, with a unidirectional coupling between the receptor and the signal. We prove exactly that, when the receptor system does not perturb the signal dynamics , the free energy consumed by the measurement process is lower bounded by the product of the mutual informationand the time-scale of signal dynamics. Our results apply to arbitrary network topologies and transition rates , and therefore should hold as a general principle for biomolecular information processing .
The living cell uses a variety of molecular receptors and signaling networks to read and process chemical signals that may vary in space and time. We model the dynamics of such molecular level measurements as Markov processes in steady state, with a coupling between the receptor and the signal. We prove exactly that, when the the signal dynamics is not perturbed by the receptors, lower bounded by a quantity proportional to the mutual information. Our result is completely independent of the receptor architecture and dependent on signal properties alone , and therefore holds as a general principle for molecular information processing. A Maxwell's Demon performing non-perturbing measurements must produce entropy at a rate greater than or equal to our bound, irrespective of how it is designed .
[ { "type": "D", "before": "unidirectional", "after": null, "start_char_pos": 250, "end_char_pos": 264 }, { "type": "D", "before": "receptor system does not perturb", "after": null, "start_char_pos": 343, "end_char_pos": 375 }, { "type": "R", "before": ", the free energy consumed by the measurement process is", "after": "is not perturbed by the receptors,", "start_char_pos": 396, "end_char_pos": 452 }, { "type": "R", "before": "the product of the mutual informationand the time-scale of signal dynamics. Our results apply to arbitrary network topologies and transition rates", "after": "a quantity proportional to the mutual information. Our result is completely independent of the receptor architecture and dependent on signal properties alone", "start_char_pos": 470, "end_char_pos": 616 }, { "type": "R", "before": "should hold", "after": "holds", "start_char_pos": 633, "end_char_pos": 644 }, { "type": "R", "before": "biomolecular information processing", "after": "molecular information processing. A Maxwell's Demon performing non-perturbing measurements must produce entropy at a rate greater than or equal to our bound, irrespective of how it is designed", "start_char_pos": 672, "end_char_pos": 707 } ]
[ 0, 146, 310, 545 ]
doc_id: 1608.07663
revision_depth: 2
The living cell uses a variety of molecular receptors and signaling networks to read and process chemical signals that may vary in space and time. We model the dynamics of such molecular level measurements as Markov processes in steady state, with a coupling between the receptor and the signal. We prove exactly that, when the the signal dynamics is not perturbed by the receptors, lower bounded by a quantity proportional to the mutual information. Our result is completely independent of the receptor architecture and dependent on signal properties alone, and therefore holds as a general principle for molecular information processing . A Maxwell's Demon performing non-perturbing measurements must produce entropy at a rate greater than or equal to our bound, irrespective of how it is designed .
The living cell uses a variety of molecular receptors to read and process chemical signals that vary in space and time. We model the dynamics of such molecular level measurements as Markov processes in steady state, with a coupling between the receptor and the signal. We prove exactly that, when the the signal dynamics is not perturbed by the receptors, the free energy consumed by the measurement process is lower bounded by a quantity proportional to the mutual information. Our result is completely independent of the receptor architecture and dependent on signal properties alone, and therefore holds as a general principle for molecular information processing .
[ { "type": "D", "before": "and signaling networks", "after": null, "start_char_pos": 54, "end_char_pos": 76 }, { "type": "D", "before": "may", "after": null, "start_char_pos": 119, "end_char_pos": 122 }, { "type": "A", "before": null, "after": "the free energy consumed by the measurement process is", "start_char_pos": 383, "end_char_pos": 383 }, { "type": "D", "before": ". A Maxwell's Demon performing non-perturbing measurements must produce entropy at a rate greater than or equal to our bound, irrespective of how it is designed", "after": null, "start_char_pos": 640, "end_char_pos": 800 } ]
[ 0, 146, 295, 451 ]
1608.07752
1
The behavior of stock market returns over a period of 1-60 days has been investigated for S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails indicating long range correlations persist . In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from one day returns, lie in the range 1.4 to 1.65. The estimated inverse mean square deviations \beta show a power law behavior in time with exponent values between -0.91 and -1.1 indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Plank (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from current estimated parameters are compared with the solutions of the FP equations. For negligible drift term, the inverse mean square deviation \beta from the FP model follows a power law with exponent values between -1.25 and -1.48 indicating superdiffusion. When the drift term is non-negligible, the corresponding \beta does not follow a power law and becomes stationary after a certain characteristic time that depends on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit
The behavior of stock market returns over a period of 1-60 days has been investigated for S&P 500 and Nasdaq within the framework of nonextensive Tsallis statistics. Even for such long terms, the distributions of the returns are non-Gaussian. They have fat tails indicating that the stock returns do not follow a random walk model . In this work, a good fit to a Tsallis q-Gaussian distribution is obtained for the distributions of all the returns using the method of Maximum Likelihood Estimate. For all the regions of data considered, the values of the scaling parameter q, estimated from one day returns, lie in the range 1.4 to 1.65. The estimated inverse mean square deviations (beta) show a power law behavior in time with exponent values between -0.91 and -1.1 indicating normal to mildly subdiffusive behavior. Quite often, the dynamics of market return distributions is modelled by a Fokker-Plank (FP) equation either with a linear drift and a nonlinear diffusion term or with just a nonlinear diffusion term. Both of these cases support a q-Gaussian distribution as a solution. The distributions obtained from current estimated parameters are compared with the solutions of the FP equations. For negligible drift term, the inverse mean square deviations (betaFP) from the FP model follow a power law with exponent values between -1.25 and -1.48 indicating superdiffusion. When the drift term is non-negligible, the corresponding betaFP do not follow a power law and become stationary after certain characteristic times that depend on the values of the drift parameter and q. Neither of these behaviors is supported by the results of the empirical fit .
[ { "type": "R", "before": "long range correlations persist", "after": "that the stock returns do not follow a random walk model", "start_char_pos": 274, "end_char_pos": 305 }, { "type": "R", "before": "\\b{eta", "after": "(beta)", "start_char_pos": 658, "end_char_pos": 664 }, { "type": "R", "before": "deviation \\b{eta", "after": "deviations (betaFP)", "start_char_pos": 1228, "end_char_pos": 1244 }, { "type": "R", "before": "follows", "after": "follow", "start_char_pos": 1263, "end_char_pos": 1270 }, { "type": "R", "before": "\\b{eta", "after": "betaFP do", "start_char_pos": 1412, "end_char_pos": 1418 }, { "type": "R", "before": "becomes stationary after a certain characteristic time that depends", "after": "become stationary after certain characteristic times that depend", "start_char_pos": 1446, "end_char_pos": 1513 }, { "type": "A", "before": null, "after": ".", "start_char_pos": 1634, "end_char_pos": 1634 } ]
[ 0, 165, 242, 307, 471, 612, 793, 993, 1062, 1176, 1354 ]
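The record above describes maximum-likelihood fits of a Tsallis q-Gaussian to n-day returns and a power law in the fitted inverse mean square deviation. A minimal Python sketch of such a fit is given below; the synthetic Student-t data stand in for actual return series (a Student-t with 4 degrees of freedom coincides with a q-Gaussian with q = 1.4, so the fit should recover a value near 1.4), and the optimizer settings are illustrative choices.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000) * 0.01   # fat-tailed toy "returns"

def neg_loglik(params, x):
    q, beta = params
    if not (1.0 < q < 3.0) or beta <= 0:
        return np.inf
    # log density of the q-Gaussian sqrt(beta)/C_q * [1 + (q-1) beta x^2]^(-1/(q-1))
    log_cq = (0.5 * np.log(np.pi / (q - 1.0))
              + gammaln((3.0 - q) / (2.0 * (q - 1.0)))
              - gammaln(1.0 / (q - 1.0)))
    log_pdf = (0.5 * np.log(beta) - log_cq
               - np.log1p((q - 1.0) * beta * x**2) / (q - 1.0))
    return -np.sum(log_pdf)

res = minimize(neg_loglik, x0=np.array([1.5, 1.0 / returns.var()]),
               args=(returns,), method="Nelder-Mead")
q_hat, beta_hat = res.x
print(f"fitted q = {q_hat:.3f}, fitted beta = {beta_hat:.1f}")
# Repeating the fit for n-day returns and regressing log(beta_hat) on log(n)
# gives the power-law exponent discussed in the abstract.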
1608.07863
1
In this article, we consider the small-time asymptotics of options on a Leveraged Exchange-Traded Fund (LETF) when the underlying Exchange Traded Fund (ETF) exhibits both local volatility and jumps of either finite or infinite activity. Our main results are closed-form expressions for the leading order terms of off-the-money European call and put LETF option prices, near expiration, with explicit error bounds. We show that the price of an out-of-the-money European call on a LETF with positive (negative) leverage is asymptotically equivalent, in short-time, to the price of an out-of-the-money European call (put) on the underlying ETF, but with modified spot and strike prices. Similar relationships hold for other off-the-money European options. In particular, our results suggest a method to hedge off-the-money LETF options near expiration using options on the underlying ETF. Finally, a second order expansion for the corresponding implied volatilities is also derived and illustrated numerically.
In this article, we consider the small-time asymptotics of options on a Leveraged Exchange-Traded Fund (LETF) when the underlying Exchange Traded Fund (ETF) exhibits both local volatility and jumps of either finite or infinite activity. Our main results are closed-form expressions for the leading order terms of off-the-money European call and put LETF option prices, near expiration, with explicit error bounds. We show that the price of an out-of-the-money European call on a LETF with positive (negative) leverage is asymptotically equivalent, in short-time, to the price of an out-of-the-money European call (put) on the underlying ETF, but with modified spot and strike prices. Similar relationships hold for other off-the-money European options. In particular, our results suggest a method to hedge off-the-money LETF options near expiration using options on the underlying ETF. Finally, a second order expansion for the corresponding implied volatility is also derived and illustrated numerically.
[ { "type": "R", "before": "Leveraged Exchange-Traded Fund", "after": "Leveraged Exchange-Traded Fund", "start_char_pos": 72, "end_char_pos": 102 }, { "type": "R", "before": "volatilities", "after": "volatility", "start_char_pos": 950, "end_char_pos": 962 } ]
[ 0, 236, 413, 683, 752, 885 ]
1608.07967
1
DNA replication is a process which is common to all domains of life yet different replication mechanisms are seen among organisms. The mechanism by which Acanthamoeba polyphaga mimivirus (APMV) undergoes replication is not characterized. Presence of intergenic short terminal repeats reveals that Mimivirus genome assumes a unique Q shape during replication . The mechanism of replication on such a structure is not yet understood. With this bigger picture in mind, we have initiated the characterization of a putative primase that is probably involved in the DNA replication and/or repair in Mimivirus. Sequence alignment of gp0577 the protein encoded by gene L537 with other primases reveals that it contains the motifs common to the superfamily Archaeo-Eukaryotic Primase (AEP) and it aligns with Primpols, a novel type of primase which has RNA and DNA polymerase activities. Our initial analysis revealed that gp0577 probably exists as a dimer in its native state. We also identified the presence of conserved motifs: DxD, sxH, h- and Zn binding motif . Using Dpni mediated site directed mutagenesis we have generated active site mutants for all motifs. Templates of defined sequence were utilized to investigate the mechanism of primer synthesis by gp0577. Previous studies have suggested that the rate-limiting step of primer synthesis occurs during primer initiation during the formation of the dinucleotide. Thus we checked the effect of different concentrations of rNTPs involved in initial dinucleotide synthesis on primer synthesis. Consistent with this idea, increasing the concentration of NTPs required for dinucleotide synthesis increased the rate of primer synthesis, whereas increasing the concentration of NTPs not involved in dinucleotide synthesis inhibited primer synthesis .
DNA replication is a process which is common to all domains of life yet different replication mechanisms are seen among organisms. The mechanism of replication on such a structure is not yet understood. With this bigger picture in mind, we have initiated the characterization of a putative primase involved in DNA replication .
[ { "type": "R", "before": "by which Acanthamoeba polyphaga mimivirus (APMV) undergoes replication is not characterized. Presence of intergenic short terminal repeats reveals that Mimivirus genome assumes a unique Q shape during replication . The mechanism of replication", "after": "of replication", "start_char_pos": 145, "end_char_pos": 388 }, { "type": "R", "before": "of a putative primase that is probably involved in the DNA replicationand/or repair in Mimivirus. Sequence alignment of gp0577 the protein encoded by gene L537 with other primases reveals that it contains the motifs common to the superfamily Archaeo-Eukaryotic Primase (AEP) and it aligns with Primpols, a novel type of primase which has RNA and DNA polymerase activities. Our initial analysis revealed that gp0577 probably exists as a dimer in its native state. We also identified the presence of conserved motifs: DxD, sxH, h- and Zn binding motif . Using Dpni mediated site directed mutagenesis we have generated active site mutants for all motifs. Templates of defined sequence were utilized to investigate the mechanism of primer synthesis by gp0577. Previous studies have suggested that the rate-limiting step of primer synthesis occurs during primer initiation during the formation of the dinucleotide. Thus we checked the effect of different concentrations of rNTPs involved in initial dinucleotide synthesis on primer synthesis. Consistent with this idea, increasing the concentration of NTPs required for dinucleotide synthesis increased the rate of primer synthesis, whereas increasing the concentration of NTPs not involved in dinucleotide synthesis inhibited primer synthesis", "after": "involved in DNA replication", "start_char_pos": 505, "end_char_pos": 1793 } ]
[ 0, 130, 237, 359, 431, 602, 877, 967, 1056, 1156, 1260, 1414, 1542 ]
1608.08007
1
Ultrasensitive response motifs, which are capable of converting graded stimulus in binary responses, are very well-conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can produce an enhancement of the system's ultrasensitivity, how the combination of layers affects the cascade's ultrasensitivity remains an open-ended question for the general case. Here we have developed a methodology that allowed us to quantify the effective contribution of each module to the overall cascade's ultrasensitivity and to determine the impact of sequestration effects in the overall system's ultrasensitivity . The proposed analysis framework provided a natural link between global and local ultrasensitivity descriptors and was particularly well-suited to study the ultrasensitivity in MAP kinase cascades. We used our methodology to revisit O'Shaughnessy et al. tunable synthetic MAPK cascade, in which they claim to have found a new source of ultrasensitivity: ultrasensitivity generated de novo, which arises due to cascade structure itself. In this respect, we showed that the system's ultrasensitivity in its single-step cascade did not come from a cascading effect but from a hidden first-order ultrasensitivity process in one of the cascade's layer. Our analysis also highlighted the impact of the detailed functional form of a module's response curve on the overall system's ultrasensitivity in cascade architectures. Local sensitivity features of the involved transfer functions were found to be of the uttermost importance in this kind of setting and could be at the core of non-trivial phenomenology associated to ultrasensitive motifs .
Ultrasensitive response motifs, which are capable of converting graded stimulus in binary responses, are very well-conserved in signal transduction networks. Although it has been shown that a cascade arrangement of multiple ultrasensitive modules can produce an enhancement of the system's ultrasensitivity, how the combination of layers affects the cascade's ultrasensitivity remains an open question for the general case. Here we introduced a methodology that allowed us to determine the presence of sequestration effects and to quantify the relative contribution of each module to the overall cascade's ultrasensitivity . The proposed analysis framework provides a natural link between global and local ultrasensitivity descriptors and is particularly well-suited to characterize and better understand mathematical models used to study real biological systems. As a case study we considered three mathematical models introduced by O'Shaughnessy et al. to study a tunable synthetic MAPK cascade, and showed how our methodology might help modelers to better understand modeling alternatives .
[ { "type": "R", "before": "open-ended", "after": "open", "start_char_pos": 388, "end_char_pos": 398 }, { "type": "R", "before": "have developed", "after": "introduced", "start_char_pos": 438, "end_char_pos": 452 }, { "type": "R", "before": "quantify the effective", "after": "determine the presence of sequestration effects and to quantify the relative", "start_char_pos": 486, "end_char_pos": 508 }, { "type": "D", "before": "and to determine the impact of sequestration effects in the overall system's ultrasensitivity", "after": null, "start_char_pos": 579, "end_char_pos": 672 }, { "type": "R", "before": "provided", "after": "provides", "start_char_pos": 707, "end_char_pos": 715 }, { "type": "R", "before": "was", "after": "is", "start_char_pos": 789, "end_char_pos": 792 }, { "type": "R", "before": "study the ultrasensitivity in MAP kinase cascades. We used our methodology to revisit", "after": "characterize and better understand mathematical models used to study real biological systems. As a case study we considered three mathematical models introduced by", "start_char_pos": 821, "end_char_pos": 906 }, { "type": "A", "before": null, "after": "to study a", "start_char_pos": 928, "end_char_pos": 928 }, { "type": "R", "before": "in which they claim to have found a new source of ultrasensitivity: ultrasensitivity generated de novo, which arises due to cascade structure itself. In this respect, we showed that the system's ultrasensitivity in its single-step cascade did not come from a cascading effect but from a hidden first-order ultrasensitivity process in one of the cascade's layer. Our analysis also highlighted the impact of the detailed functional form of a module's response curve on the overall system's ultrasensitivity in cascade architectures. Local sensitivity features of the involved transfer functions were found to be of the uttermost importance in this kind of setting and could be at the core of non-trivial phenomenology associated to ultrasensitive motifs", "after": "and showed how our methodology might help modelers to better understand modeling alternatives", "start_char_pos": 961, "end_char_pos": 1712 } ]
[ 0, 157, 429, 674, 871, 1110, 1322, 1491 ]
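The record above is about quantifying how stacking ultrasensitive modules changes the overall ultrasensitivity of a cascade. A small illustration of the global descriptor commonly used for this, the effective Hill coefficient n_eff = ln(81)/ln(EC90/EC10), is sketched below in Python; the Hill exponents and thresholds are invented parameters, not values from the paper.

import numpy as np
from scipy.optimize import brentq

def hill(x, n, K):
    return x**n / (K**n + x**n)

def cascade(x):
    # layer 2 reads the output of layer 1
    return hill(hill(x, n=2.0, K=1.0), n=2.0, K=0.3)

def effective_hill(f, lo=1e-6, hi=1e6):
    fmax = f(hi)
    ec10 = brentq(lambda x: f(x) - 0.1 * fmax, lo, hi)
    ec90 = brentq(lambda x: f(x) - 0.9 * fmax, lo, hi)
    return np.log(81.0) / np.log(ec90 / ec10)

print("single Hill module :", effective_hill(lambda x: hill(x, 2.0, 1.0)))  # ~2.0
print("two-module cascade :", effective_hill(cascade))                      # > 2.0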
1608.08468
1
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality. This renders parsimonious estimation methods essential for conducting reliable statistical inference. In this paper , the issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower dimensional collection of latent time-varying stochastic factors. Furthermore, we apply a Normal-Gamma prior to the elements of the factor loadings matrix. This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero, thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate the effectiveness of the shrinkage prior to obtain sparse loadings matrices and more precise correlation estimates. Moreover, we investigate predictive performance and discuss different choices for the number of latent factors . Additionally to being a stand-alone tool, the algorithm is designed to act as a "plug and play" extension for other MCMC samplers .
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality. Consequently, parsimonious estimation methods are essential for conducting reliable statistical inference. In the paper at hand, this issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower dimensional collection of latent time-varying stochastic factors. Furthermore, we propose using a Normal-Gamma prior for the elements of the factor loadings matrix. This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero, thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate its effectiveness to obtain sparse loadings matrices and more precise correlation estimates. To assess predictive accuracy, our approach is compared to more traditional approaches via log predictive scores and implied minimum variance portfolio performance. Thereby, different choices for the number of latent factors are discussed. In addition to serving as a stand-alone tool, the algorithm is designed to complement other MCMC samplers as a "plug and play" extension . It can easily be used by means of the R package factorstochvol .
[ { "type": "R", "before": "This renders", "after": "Consequently,", "start_char_pos": 101, "end_char_pos": 113 }, { "type": "A", "before": null, "after": "are", "start_char_pos": 146, "end_char_pos": 146 }, { "type": "R", "before": "this paper , the", "after": "the paper at hand, this", "start_char_pos": 207, "end_char_pos": 223 }, { "type": "R", "before": "apply", "after": "propose using", "start_char_pos": 415, "end_char_pos": 420 }, { "type": "R", "before": "to", "after": "for", "start_char_pos": 442, "end_char_pos": 444 }, { "type": "R", "before": "the effectiveness of the shrinkage prior", "after": "its effectiveness", "start_char_pos": 742, "end_char_pos": 782 }, { "type": "R", "before": "Moreover, we investigate predictive performance and discuss", "after": "To assess predictive accuracy, our approach is compared to more traditional approaches via log predictive scores and implied minimum variance portfolio performance. Thereby,", "start_char_pos": 858, "end_char_pos": 917 }, { "type": "R", "before": ". Additionally to being", "after": "are discussed. In addition to serving as", "start_char_pos": 969, "end_char_pos": 992 }, { "type": "R", "before": "act", "after": "complement other MCMC samplers", "start_char_pos": 1042, "end_char_pos": 1045 }, { "type": "R", "before": "for other MCMC samplers", "after": ". It can easily be used by means of the R package factorstochvol", "start_char_pos": 1077, "end_char_pos": 1100 } ]
[ 0, 100, 203, 398, 488, 637, 857, 970 ]
1608.08468
2
Dynamic covariance estimation for multivariate time series suffers from the curse of dimensionality . Consequently, parsimonious estimation methods are essential for conducting reliable statistical inference. In the paper at hand, this issue is addressed by modeling the underlying co-volatility dynamics of a time series vector through a lower dimensional collection of latent time-varying stochastic factors. Furthermore, we propose using a Normal-Gamma prior for the elements of the factor loadings matrix . This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant factors towards zero , thereby increasing parsimony even more. We apply the model to simulated data as well as daily log-returns of 300 S&P 500 stocks and demonstrate its effectiveness to obtain sparse loadings matrices and more precise correlation estimates . To assess predictive accuracy, our approach is compared to more traditional approaches via log predictive scores and implied minimum variance portfolio performance . Thereby, different choices for the number of latent factors are discussed. In addition to serving as a stand-alone tool, the algorithm is designed to complement other MCMC samplers as a "plug and play" extension. It can easily be used by means of the R package factorstochvol .
We address the curse of dimensionality in dynamic covariance estimation by modeling the underlying co-volatility dynamics of a time series vector through latent time-varying stochastic factors. The use of a global-local shrinkage prior for the elements of the factor loadings matrix pulls loadings on superfluous factors towards zero . To demonstrate the merits of the proposed framework, the model is applied to simulated data as well as to daily log-returns of 300 S&P 500 members. Our approach yields precise correlation estimates , strong implied minimum variance portfolio performance and superior forecasting accuracy in terms of log predictive scores when compared to typical benchmarks .
[ { "type": "R", "before": "Dynamic covariance estimation for multivariate time series suffers from", "after": "We address", "start_char_pos": 0, "end_char_pos": 71 }, { "type": "R", "before": ". Consequently, parsimonious estimation methods are essential for conducting reliable statistical inference. In the paper at hand, this issue is addressed", "after": "in dynamic covariance estimation", "start_char_pos": 100, "end_char_pos": 254 }, { "type": "D", "before": "a lower dimensional collection of", "after": null, "start_char_pos": 337, "end_char_pos": 370 }, { "type": "R", "before": "Furthermore, we propose using a Normal-Gamma", "after": "The use of a global-local shrinkage", "start_char_pos": 411, "end_char_pos": 455 }, { "type": "R", "before": ". This hierarchical shrinkage prior effectively pulls the factor loadings of unimportant", "after": "pulls loadings on superfluous", "start_char_pos": 509, "end_char_pos": 597 }, { "type": "R", "before": ", thereby increasing parsimony even more. We apply the model", "after": ". To demonstrate the merits of the proposed framework, the model is applied", "start_char_pos": 619, "end_char_pos": 679 }, { "type": "A", "before": null, "after": "to", "start_char_pos": 709, "end_char_pos": 709 }, { "type": "R", "before": "stocks and demonstrate its effectiveness to obtain sparse loadings matrices and more", "after": "members. Our approach yields", "start_char_pos": 743, "end_char_pos": 827 }, { "type": "R", "before": ". To assess predictive accuracy, our approach is compared to more traditional approaches via log predictive scores and", "after": ", strong", "start_char_pos": 858, "end_char_pos": 976 }, { "type": "R", "before": ". Thereby, different choices for the number of latent factors are discussed. In addition to serving as a stand-alone tool, the algorithm is designed to complement other MCMC samplers as a \"plug and play\" extension. It can easily be used by means of the R package factorstochvol", "after": "and superior forecasting accuracy in terms of log predictive scores when compared to typical benchmarks", "start_char_pos": 1024, "end_char_pos": 1301 } ]
[ 0, 101, 208, 410, 510, 660, 859, 1025, 1100, 1238 ]
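The two records above concern a factor stochastic volatility model with a shrinkage prior on the factor loadings, estimated by MCMC (the abstract points to the R package factorstochvol). The Python sketch below only illustrates the structure of such a model, i.e. how a sparse loadings matrix and AR(1) log-variances generate a time-varying covariance matrix; all dimensions and parameter values are invented and no estimation is performed.

import numpy as np

rng = np.random.default_rng(1)
T, m, r = 1000, 10, 2                     # observations, observed series, latent factors

Lambda = np.zeros((m, r))                 # sparse loadings: each factor drives 5 series
Lambda[:5, 0] = rng.normal(0.8, 0.1, 5)
Lambda[5:, 1] = rng.normal(0.8, 0.1, 5)

def ar1_logvar(T, mu=-1.0, phi=0.95, sigma=0.2):
    h = np.empty(T)
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.normal()
    return h

h_f = np.column_stack([ar1_logvar(T) for _ in range(r)])            # factor log-variances
h_e = np.column_stack([ar1_logvar(T, mu=-2.0) for _ in range(m)])   # idiosyncratic ones
f = rng.normal(size=(T, r)) * np.exp(h_f / 2)
y = f @ Lambda.T + rng.normal(size=(T, m)) * np.exp(h_e / 2)        # observed returns

# model-implied covariance of y_t at the last time point
t = T - 1
cov_t = Lambda @ np.diag(np.exp(h_f[t])) @ Lambda.T + np.diag(np.exp(h_e[t]))
print(np.round(cov_t[:3, :3], 4))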
1608.08490
1
In this article, inspired by Shi, et al. we investigate the optimal portfolio selection with one risk-free asset and one risky asset in a multiple period setting under cumulative prospect theory (CPT). Compared with their study, our novelty is that we consider probability distortions , and portfolio constraints. In doing numerical analysis, we test the sensitivity of the optimal CPT-investment strategies to different model parameters .
In this article, inspired by Shi, et al. we investigate the optimal portfolio selection with one risk-free asset and one risky asset in a multiple period setting under cumulative prospect theory (CPT). Compared with their study, our novelty is that we consider a stochastic benchmark , and portfolio constraints. We test the sensitivity of the optimal CPT-investment strategies to different model parameters by performing a numerical analysis .
[ { "type": "R", "before": "set- ting", "after": "setting", "start_char_pos": 154, "end_char_pos": 163 }, { "type": "R", "before": "probability distortions", "after": "a stochastic benchmark", "start_char_pos": 263, "end_char_pos": 286 }, { "type": "R", "before": "In doing numerical analysis, we", "after": "We", "start_char_pos": 316, "end_char_pos": 347 }, { "type": "A", "before": null, "after": "by performing a numerical analysis", "start_char_pos": 440, "end_char_pos": 440 } ]
[ 0, 203, 315 ]
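The record above concerns portfolio choice under cumulative prospect theory with probability distortions. As a small illustration of the objective being maximized, the Python sketch below evaluates the CPT value of a discrete prospect using the Tversky-Kahneman value and weighting functions; the parameter values are the classic 1992 estimates, not those used in the paper.

import numpy as np

alpha, lam, gamma, delta = 0.88, 2.25, 0.61, 0.69

def w_plus(p):   # probability distortion applied to gains
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def w_minus(p):  # probability distortion applied to losses
    return p**delta / (p**delta + (1 - p)**delta) ** (1 / delta)

def cpt_value(outcomes, probs, reference=0.0):
    x = np.asarray(outcomes, float) - reference
    p = np.asarray(probs, float)
    order = np.argsort(x)
    x, p = x[order], p[order]
    v = np.where(x >= 0, x.clip(min=0)**alpha, -lam * (-x.clip(max=0))**alpha)
    cum_from_top = np.cumsum(p[::-1])[::-1]      # P(outcome >= x_i)
    cum_from_bot = np.cumsum(p)                  # P(outcome <= x_i)
    pi = np.empty_like(p)
    gains = x >= 0
    pi[gains] = w_plus(cum_from_top[gains]) - w_plus(cum_from_top[gains] - p[gains])
    pi[~gains] = w_minus(cum_from_bot[~gains]) - w_minus(cum_from_bot[~gains] - p[~gains])
    return float(np.sum(pi * v))

# a 50/50 gamble of +10 or -5 around a reference point of 0
print(cpt_value([10.0, -5.0], [0.5, 0.5]))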
1609.00680
1
Protein contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem , but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method for contact prediction that predicts contacts by integrating both evolutionary coupling (EC) information and sequence conservation information through an ultra-deep neural network consisting of two deep residual neural networks. The two residual networks conduct a series of convolutional transformation of protein features including sequence profile, EC information and pairwise potential. This neural network allows us to model very complex relationship between sequence and contact map as well as long-range interdependency between contacts and thus, obtain high-quality contact prediction . Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. For example, on the 105 CASP11 test proteins, the L/10 long-range accuracy obtained by our method is 83.3\% while that by CCMpred and MetaPSICOV ( the CASP11 winner ) is 43.4\% and 60.2\%, respectively. On the 398 membrane proteins, the L/10 long-range accuracy obtained by our method is 77.3\% while that by CCMpred and MetaPSICOV is 51.8\% and 61.2\% , respectively. Ab initio folding guided by our predicted contacts can yield correct folds (i.e., TMscore>0.6) for 224 of the 579 test proteins, while that by MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them , respectively. Further, our contact-assisted models also have much better quality (especially for membrane proteins ) than template-based models .
Protein contact prediction from sequence is an important problem. Recently exciting progress has been made , but the predicted contacts for proteins without many sequence homologs is still of low quality and not extremely useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. This deep neural network allows us to model very complex relationship between sequence and contact map as well as long-range interdependency between contacts . Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77 , 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins , respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method still works well on membrane protein prediction .
[ { "type": "D", "before": "on this problem", "after": null, "start_char_pos": 107, "end_char_pos": 122 }, { "type": "R", "before": "very", "after": "extremely", "start_char_pos": 228, "end_char_pos": 232 }, { "type": "D", "before": "for contact prediction", "after": null, "start_char_pos": 321, "end_char_pos": 343 }, { "type": "D", "before": "information", "after": null, "start_char_pos": 414, "end_char_pos": 425 }, { "type": "R", "before": "consisting of", "after": "formed by", "start_char_pos": 501, "end_char_pos": 514 }, { "type": "R", "before": "The two residual networks conduct a series of convolutional transformation of protein features including sequence profile, EC information and pairwise potential. This", "after": "This deep", "start_char_pos": 550, "end_char_pos": 716 }, { "type": "D", "before": "and thus, obtain high-quality contact prediction", "after": null, "start_char_pos": 865, "end_char_pos": 913 }, { "type": "R", "before": "For example, on the 105 CASP11 test", "after": "Tested on three datasets of 579", "start_char_pos": 1049, "end_char_pos": 1084 }, { "type": "R", "before": "L/10", "after": "average top L", "start_char_pos": 1099, "end_char_pos": 1103 }, { "type": "R", "before": "accuracy obtained by our methodis 83.3\\% while that by CCMpred and MetaPSICOV (", "after": "prediction accuracy obtained our method, the representative EC method CCMpred and", "start_char_pos": 1115, "end_char_pos": 1194 }, { "type": "R", "before": ") is 43.4\\% and 60.2\\%, respectively. On the 398 membrane proteins, the", "after": "MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top", "start_char_pos": 1213, "end_char_pos": 1284 }, { "type": "R", "before": "obtained by our methodis 77.3\\% while that by", "after": "of our method,", "start_char_pos": 1310, "end_char_pos": 1355 }, { "type": "R", "before": "51.8\\% and 61.2\\%", "after": "0.77", "start_char_pos": 1382, "end_char_pos": 1399 }, { "type": "A", "before": null, "after": "0.47 and 0.59,", "start_char_pos": 1402, "end_char_pos": 1402 }, { "type": "R", "before": "guided by", "after": "using", "start_char_pos": 1435, "end_char_pos": 1444 }, { "type": "A", "before": null, "after": "as restraints", "start_char_pos": 1468, "end_char_pos": 1468 }, { "type": "R", "before": "224 of the 579", "after": "203", "start_char_pos": 1517, "end_char_pos": 1531 }, { "type": "R", "before": "by", "after": "using", "start_char_pos": 1558, "end_char_pos": 1560 }, { "type": "R", "before": "of them", "after": "proteins", "start_char_pos": 1633, "end_char_pos": 1640 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 1694, "end_char_pos": 1698 }, { "type": "R", "before": "(especially for membrane proteins ) than template-based models", "after": "than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method still works well on membrane protein prediction", "start_char_pos": 1724, "end_char_pos": 1786 } ]
[ 0, 65, 273, 549, 711, 915, 1048, 1250, 1416, 1656 ]
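The record above describes an ultra-deep network of 2D residual blocks acting on pairwise protein features to predict a residue-residue contact map. The PyTorch sketch below shows a generic pre-activation residual block of that kind; the channel count, kernel size and single-block depth are placeholders rather than the authors' architecture, and the random tensor stands in for real pairwise features.

import torch
import torch.nn as nn

class ResidualBlock2D(nn.Module):
    def __init__(self, channels=60, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.norm1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.norm2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.conv1(self.act(self.norm1(x)))
        out = self.conv2(self.act(self.norm2(out)))
        return x + out   # identity shortcut, so many such blocks can be stacked

L = 128
features = torch.randn(1, 60, L, L)          # pairwise features for a length-L protein
block = ResidualBlock2D()
logits = nn.Conv2d(60, 1, kernel_size=1)(block(features))
print(torch.sigmoid(logits).shape)           # torch.Size([1, 1, 128, 128]) contact map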
1609.00680
2
Protein contact prediction from sequence is an important problem. Recently exciting progress has been made , but the predicted contacts for proteins without many sequence homologs is still of low quality and not extremely useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. This deep neural network allows us to model very complex relationship between sequence and contact map as well as long-range interdependency between contacts . Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method still works well on membrane protein prediction .
Recently exciting progress has been made on protein contact prediction , but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual networks. This deep neural network allows us to model very complex sequence-contact relationship as well as long-range inter-contact correlation . Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method works very well on membrane protein prediction . Finally, in recent CAMEO benchmark our method successfully folded a mainly-beta protein of 182 residues with a novel fold .
[ { "type": "D", "before": "Protein contact prediction from sequence is an important problem.", "after": null, "start_char_pos": 0, "end_char_pos": 65 }, { "type": "A", "before": null, "after": "on protein contact prediction", "start_char_pos": 107, "end_char_pos": 107 }, { "type": "R", "before": "extremely", "after": "very", "start_char_pos": 213, "end_char_pos": 222 }, { "type": "D", "before": "neural", "after": null, "start_char_pos": 484, "end_char_pos": 490 }, { "type": "R", "before": "relationship between sequence and contact map", "after": "sequence-contact relationship", "start_char_pos": 558, "end_char_pos": 603 }, { "type": "R", "before": "interdependency between contacts", "after": "inter-contact correlation", "start_char_pos": 626, "end_char_pos": 658 }, { "type": "R", "before": "still works", "after": "works very", "start_char_pos": 1824, "end_char_pos": 1835 }, { "type": "A", "before": null, "after": ". Finally, in recent CAMEO benchmark our method successfully folded a mainly-beta protein of 182 residues with a novel fold", "start_char_pos": 1872, "end_char_pos": 1872 } ]
[ 0, 65, 263, 500, 660, 793, 1013, 1130, 1371, 1461, 1580, 1706 ]
1609.00680
3
Recently exciting progress has been made on protein contact prediction, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual networks. This deep neural network allows us to model very complex sequence-contact relationship as well as long-range inter-contact correlation. Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method works very well on membrane protein prediction. Finally, in recent CAMEO benchmark our method successfully folded a mainly-beta protein of 182 residues with a novel fold.
Recently exciting progress has been made on protein contact prediction, but the predicted contacts for proteins without many sequence homologs is still of low quality and not very useful for de novo structure prediction. This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual networks. This deep neural network allows us to model very complex sequence-contact relationship as well as long-range inter-contact correlation. Our method greatly outperforms existing contact prediction methods and leads to much more accurate contact-assisted protein folding. Tested on three datasets of 579 proteins, the average top L long-range prediction accuracy obtained our method, the representative EC method CCMpred and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints can yield correct folds (i.e., TMscore>0.6) for 203 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 proteins, respectively. Further, our contact-assisted models have much better quality than template-based models. Using our predicted contacts as restraints, we can (ab initio) fold 208 of the 398 membrane proteins with TMscore>0.5. By contrast, when the training proteins of our method are used as templates, homology modeling can only do so for 10 of them. One interesting finding is that even if we do not train our prediction models with any membrane proteins, our method works very well on membrane protein prediction. Finally, in recent blind CAMEO benchmark our method successfully folded 4 test proteins with a novel fold.
[ { "type": "A", "before": null, "after": "blind", "start_char_pos": 1817, "end_char_pos": 1817 }, { "type": "R", "before": "a mainly-beta protein of 182 residues", "after": "4 test proteins", "start_char_pos": 1865, "end_char_pos": 1902 } ]
[ 0, 220, 450, 586, 719, 939, 1056, 1297, 1387, 1506, 1632, 1797 ]
1609.00987
1
We establish an explicit pricing formula for a class of non-Gaussian models (the Levy-stable, or Log-Levy model with finite moments and stability parameter between 1 and 2) allowing a straightforward evaluation of an European option , without numerical simulations and with as much accuracy as one wishes . The formula can be used by any practitioner, even if not familiar with the underlying multidimensional residue theory . We test the efficiency of the formula, and compare it with numerical methods.
We establish an explicit pricing formula for the class of L\'evy-stable models with maximal negative asymmetry (Log-L\'evy model with finite moments and stability parameter 1 <\alpha\leq 2) in the form of rapidly converging series. The series is obtained with help of Mellin transform and the residue theory in \mathbb{C}, allowing a straightforward evaluation of an European option with arbitrary accuracy without the use of numerical techniques . The formula can be used by any practitioner, even if not familiar with the underlying mathematical techniques . We test the efficiency of the formula, and compare it with numerical methods.
[ { "type": "R", "before": "a class of non-Gaussian models (the Levy-stable, or Log-Levy", "after": "the class of L\\'evy-stable models with maximal negative asymmetry (Log-L\\'evy", "start_char_pos": 45, "end_char_pos": 105 }, { "type": "D", "before": "between", "after": null, "start_char_pos": 156, "end_char_pos": 163 }, { "type": "R", "before": "and", "after": "<\\alpha\\leq", "start_char_pos": 166, "end_char_pos": 169 }, { "type": "R", "before": "allowing a", "after": "in the form of rapidly converging series. The series is obtained with help of Mellin transform and the residue theory in \\mathbb{C", "start_char_pos": 173, "end_char_pos": 183 }, { "type": "R", "before": ", without numerical simulations and with as much accuracy as one wishes", "after": "with arbitrary accuracy without the use of numerical techniques", "start_char_pos": 233, "end_char_pos": 304 }, { "type": "R", "before": "multidimensional residue theory", "after": "mathematical techniques", "start_char_pos": 393, "end_char_pos": 424 } ]
[ 0, 306, 426 ]
1609.01100
1
The field of cryo-electron microscopy has made astounding advancements in the past few years, mainly due to improvements in the hardware of the microscopes . Yet, one of the key open challenges of the field remains the processing of heterogeneous data sets, produced from samples containing particles at several different conformational states. For such data sets, one must first classify their images into homogeneous groups , where each group corresponds to the same underlying structure , followed by reconstruction of a three-dimensional model from each of the homogeneous groups. This task has been proven to be extremely difficult . In this paper we present an iterative algorithm for processing heterogeneous data sets that combines the classification and reconstruction steps. We prove accuracy and stability bounds on the algorithm, and demonstrate it on simulated as well as experimental datasets .
The field of cryo-electron microscopy has made astounding advancements in the past few years, mainly due to advancements in electron detectors' technology . Yet, one of the key open challenges of the field remains the processing of heterogeneous data sets, produced from samples containing particles at several different conformational states. For such data sets, the algorithms must include some classification procedure to identify homogeneous groups within the data, so that the images in each group correspond to the same underlying structure . The fundamental importance of the heterogeneity problem in cryo-electron microscopy has drawn many research efforts, and resulted in significant progress in classification algorithms for heterogeneous data sets. While these algorithms are extremely useful and effective in practice, they lack rigorous mathematical analysis and performance guarantees . In this paper , we attempt to make the first steps towards rigorous mathematical analysis of the heterogeneity problem in cryo-electron microscopy. To that end, we present an algorithm for processing heterogeneous data sets , and prove accuracy and stability bounds for it. We also suggest an extension of this algorithm that combines the classification and reconstruction steps. We demonstrate it on simulated data, and compare its performance to the state-of-the-art algorithm in RELION .
[ { "type": "R", "before": "improvements in the hardware of the microscopes", "after": "advancements in electron detectors' technology", "start_char_pos": 108, "end_char_pos": 155 }, { "type": "R", "before": "one must first classify their images into homogeneous groups , where each group corresponds", "after": "the algorithms must include some classification procedure to identify homogeneous groups within the data, so that the images in each group correspond", "start_char_pos": 365, "end_char_pos": 456 }, { "type": "R", "before": ", followed by reconstruction of a three-dimensional model from each of the homogeneous groups. This task has been proven to be extremely difficult", "after": ". The fundamental importance of the heterogeneity problem in cryo-electron microscopy has drawn many research efforts, and resulted in significant progress in classification algorithms for heterogeneous data sets. While these algorithms are extremely useful and effective in practice, they lack rigorous mathematical analysis and performance guarantees", "start_char_pos": 490, "end_char_pos": 636 }, { "type": "R", "before": "we present an iterative", "after": ", we attempt to make the first steps towards rigorous mathematical analysis of the heterogeneity problem in cryo-electron microscopy. To that end, we present an", "start_char_pos": 653, "end_char_pos": 676 }, { "type": "A", "before": null, "after": ", and prove accuracy and stability bounds for it. We also suggest an extension of this algorithm", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "D", "before": "prove accuracy and stability bounds on the algorithm, and", "after": null, "start_char_pos": 789, "end_char_pos": 846 }, { "type": "R", "before": "as well as experimental datasets", "after": "data, and compare its performance to the state-of-the-art algorithm in RELION", "start_char_pos": 875, "end_char_pos": 907 } ]
[ 0, 157, 344, 584, 638, 785 ]
1609.01274
1
We develop models to price long term loans in the securities lending business. These longer horizon deals can be viewed as contracts with optionality embedded in them and can be priced using established methods from derivatives theory, becoming to our limited knowledge, the first application that can lead to greater synergies between the operations of derivative and delta-one trading desks, perhaps even being able to combine certain aspects of the day to day operations of these seemingly disparate entities. We run numerical simulations to demonstrate the practical applicability of these models. These models are part of one of the least explored yet profit laden areas of modern investment management. The methodologies developed here could be potentially useful for inventory management , for dealing with other financial instruments, non-financial commodities and many forms of uncertainty.
We develop models to price long term loans in the securities lending business. These longer horizon deals can be viewed as contracts with optionality embedded in them and can be priced using established methods from derivatives theory, becoming to our limited knowledge, the first application that can lead to greater synergies between the operations of derivative and delta-one trading desks, perhaps even being able to combine certain aspects of the day to day operations of these seemingly disparate entities. We develop a heuristic that can mitigate the loss of information that sets in, when parameters are estimated first and then the valuation is performed, by directly calculating the valuation using the historical time series. We run numerical simulations to demonstrate the practical applicability of these models. These models are part of one of the least explored yet profit laden areas of modern investment management. We illustrate how the methodologies developed here could be useful for inventory management . All these techniques could have applications for dealing with other financial instruments, non-financial commodities and many forms of uncertainty.
[ { "type": "A", "before": null, "after": "develop a heuristic that can mitigate the loss of information that sets in, when parameters are estimated first and then the valuation is performed, by directly calculating the valuation using the historical time series. We", "start_char_pos": 516, "end_char_pos": 516 }, { "type": "R", "before": "The", "after": "We illustrate how the", "start_char_pos": 710, "end_char_pos": 713 }, { "type": "D", "before": "potentially", "after": null, "start_char_pos": 752, "end_char_pos": 763 }, { "type": "R", "before": ",", "after": ". All these techniques could have applications", "start_char_pos": 796, "end_char_pos": 797 } ]
[ 0, 78, 512, 602, 709 ]
1609.01274
2
We develop models to price long term loans in the securities lending business. These longer horizon deals can be viewed as contracts with optionality embedded in them and can be priced using established methods from derivatives theory, becoming to our limited knowledge, the first application that can lead to greater synergies between the operations of derivative and delta-one trading desks, perhaps even being able to combine certain aspects of the day to day operations of these seemingly disparate entities. We develop a heuristic that can mitigate the loss of information that sets in , when parameters are estimated first and then the valuation is performed , by directly calculating the valuation using the historical time series. We run numerical simulations to demonstrate the practical applicability of these models . These models are part of one of the least explored yet profit laden areas of modern investment management. We illustrate how the methodologies developed here could be useful for inventory management. All these techniques could have applications for dealing with other financial instruments, non-financial commodities and many forms of uncertainty . Admittedly, our initial ambitions to produce a normative theory on long term loan valuations are undone by the present state of affairs in social science modeling. Though we consider many elements of a securities lending system at face value, this cannot be termed a positive theory. For now, if it ends up producing a useful theory, our work is done.
We develop models to price long term loans in the securities lending business. These longer horizon deals can be viewed as contracts with optionality embedded in them and can be priced using established methods from derivatives theory, becoming to our limited knowledge, the first application that can lead to greater synergies between the operations of derivative and delta-one trading desks, perhaps even being able to combine certain aspects of the day to day operations of these seemingly disparate entities. We run numerical simulations to demonstrate the practical applicability of these models. These models are part of one of the least explored yet profit laden areas of modern investment management. We develop a heuristic that can mitigate the loss of information that sets in when parameters are estimated first and then the valuation is performed by directly calculating the valuation using the historical time series. This can lead to reduced models errors and greater financial stability. We illustrate how the methodologies developed here could be useful for inventory management. All these techniques could have applications for dealing with other financial instruments, non-financial commodities and many forms of uncertainty . An unintended consequence of our efforts, has become a review of the vast literature on options pricing, which can be useful for anyone that attempts to apply the corresponding techniques to the problems mentioned here . Admittedly, our initial ambitions to produce a normative theory on long term loan valuations are undone by the present state of affairs in social science modeling. Though we consider many elements of a securities lending system at face value, this cannot be termed a positive theory. For now, if it ends up producing a useful theory, our work is done.
[ { "type": "A", "before": null, "after": "run numerical simulations to demonstrate the practical applicability of these models. These models are part of one of the least explored yet profit laden areas of modern investment management. We", "start_char_pos": 516, "end_char_pos": 516 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 592, "end_char_pos": 593 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 666, "end_char_pos": 667 }, { "type": "R", "before": "We run numerical simulations to demonstrate the practical applicability of these models . These models are part of one of the least explored yet profit laden areas of modern investment management.", "after": "This can lead to reduced models errors and greater financial stability.", "start_char_pos": 740, "end_char_pos": 936 }, { "type": "A", "before": null, "after": ". An unintended consequence of our efforts, has become a review of the vast literature on options pricing, which can be useful for anyone that attempts to apply the corresponding techniques to the problems mentioned here", "start_char_pos": 1177, "end_char_pos": 1177 } ]
[ 0, 78, 512, 739, 829, 936, 1029, 1179, 1343, 1463 ]
1609.01621
1
We derive abstract as well as deterministic conditions for the absence and existence of free lunch with vanishing risk, arbitrage , generalized arbitrage, and unbounded profit with bounded risk in a general multidimensional diffusion framework. Moreover, we give conditions for the absence and presence of financial bubbles. In particular, we provide criteria for the (strict local) martingale property of certain stochastic exponentials . As an application, we illustrate the influence of the market dimension, i.e. the number of stocks in the market, on free lunch with vanishing risk and generalized arbitrage . Our proofs are based on explosion criteria for martingale problems, local measure changes, and comparison arguments .
We derive abstract as well as deterministic conditions for the absence and existence of arbitrage and financial bubbles in a general (multi- and infinite-dimensional) semimartingale-diffusion markets, and a Heath-Jarrow-Morton-Musiela framework. We also provide deterministic conditions for the martingale property of stochastic exponentials which are driven by solution to generalized martingale problems, respectively stochastic partial differential equations . As an application, we construct a financial market in which the number of assets determines the absence of arbitrage while the sources of risk have the same dimension .
[ { "type": "R", "before": "free lunch with vanishing risk, arbitrage , generalized arbitrage, and unbounded profit with bounded risk", "after": "arbitrage and financial bubbles", "start_char_pos": 88, "end_char_pos": 193 }, { "type": "R", "before": "multidimensional diffusion framework. Moreover, we give conditions for the absence and presence of financial bubbles. In particular, we provide criteria for the (strict local)", "after": "(multi- and infinite-dimensional) semimartingale-diffusion markets, and a Heath-Jarrow-Morton-Musiela framework. We also provide deterministic conditions for the", "start_char_pos": 207, "end_char_pos": 382 }, { "type": "R", "before": "certain stochastic exponentials", "after": "stochastic exponentials which are driven by solution to generalized martingale problems, respectively stochastic partial differential equations", "start_char_pos": 406, "end_char_pos": 437 }, { "type": "R", "before": "illustrate the influence of the market dimension, i.e.", "after": "construct a financial market in which", "start_char_pos": 462, "end_char_pos": 516 }, { "type": "R", "before": "stocks in the market, on free lunch with vanishing risk and generalized arbitrage . Our proofs are based on explosion criteria for martingale problems, local measure changes, and comparison arguments", "after": "assets determines the absence of arbitrage while the sources of risk have the same dimension", "start_char_pos": 531, "end_char_pos": 730 } ]
[ 0, 244, 324, 439, 614 ]
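As background for the 1609.01621 record above, which turns on the (strict local) martingale property of stochastic exponentials, the classical definition and the standard Novikov sufficient condition are (textbook results, not the deterministic criteria derived in the paper): for a continuous local martingale M,

\[
  \mathcal{E}(M)_t = \exp\!\Big(M_t - \tfrac{1}{2}\langle M\rangle_t\Big), \qquad
  \mathbb{E}\Big[\exp\big(\tfrac{1}{2}\langle M\rangle_T\big)\Big] < \infty
  \;\Longrightarrow\; \mathcal{E}(M)\ \text{is a true martingale on } [0,T].
\]

\mathcal{E}(M) is always a nonnegative local martingale, hence a supermartingale; when the true martingale property fails it is a strict local martingale, which is the situation the abstract associates with financial bubbles.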
1609.01621
2
We derive abstract as well as deterministic conditions for the absence and existence of arbitrage and financial bubbles in a general (multi- and infinite-dimensional) semimartingale-diffusion markets, and a Heath-Jarrow-Morton-Musiela framework. We also provide deterministic conditions for the martingale property of stochastic exponentials which are driven by solution to generalized martingale problems, respectively stochastic partial differential equations . As an application, we construct a financial market in which the number of assets determines the absence of arbitrage while the sources of risk have the same dimension .
In this article, we study the set of equivalent (local) martingale measures for financial markets driven by multi-dimensional diffusions. We give conditions for the existence of equivalent (local) martingale measures in terms of existence and uniqueness properties of martingale problems. Based on these we derive deterministic criteria for the existence and non-existence of equivalent (local) martingale measures . As an application, we construct a financial market in which the number of risky assets determines the absence of arbitrage and equals the number of sources of risk .
[ { "type": "R", "before": "We derive abstract as well as deterministic", "after": "In this article, we study the set of equivalent (local) martingale measures for financial markets driven by multi-dimensional diffusions. We give", "start_char_pos": 0, "end_char_pos": 43 }, { "type": "R", "before": "absence and existence of arbitrage and financial bubbles in a general (multi- and infinite-dimensional) semimartingale-diffusion markets, and a Heath-Jarrow-Morton-Musiela framework. We also provide deterministic conditions for the martingale property of stochastic exponentials which are driven by solution to generalized martingale problems, respectively stochastic partial differential equations", "after": "existence of equivalent (local) martingale measures in terms of existence and uniqueness properties of martingale problems. Based on these we derive deterministic criteria for the existence and non-existence of equivalent (local) martingale measures", "start_char_pos": 63, "end_char_pos": 461 }, { "type": "A", "before": null, "after": "risky", "start_char_pos": 538, "end_char_pos": 538 }, { "type": "R", "before": "while the", "after": "and equals the number of", "start_char_pos": 582, "end_char_pos": 591 }, { "type": "D", "before": "have the same dimension", "after": null, "start_char_pos": 608, "end_char_pos": 631 } ]
[ 0, 245, 463 ]
1609.01621
3
In this article, we study the set of equivalent (local) martingale measures for financial markets driven by multi-dimensional diffusions. We give conditions for the existence of equivalent (local) martingale measures in terms of existence and uniqueness properties of martingale problems. Based on these we derive deterministic criteria for the existence and non-existence of equivalent (local) martingale measures . As an application, we construct a financial market in which the number of risky assets determines the absence of arbitrage and equals the number of sources of risk .
We derive deterministic criteria for the existence and non-existence of equivalent (local) martingale measures for financial markets driven by multi-dimensional time-inhomogeneous diffusions. Our conditions can be used to construct financial markets in which the no unbounded profit with bounded risk condition holds, while the classical no free lunch with vanishing risk condition fails .
[ { "type": "R", "before": "In this article, we study the set of equivalent (local) martingale measures for financial markets driven by multi-dimensional diffusions. We give conditions for the existence of equivalent (local) martingale measures in terms of existence and uniqueness properties of martingale problems. Based on these we", "after": "We", "start_char_pos": 0, "end_char_pos": 306 }, { "type": "R", "before": ". As an application, we construct a financial market", "after": "for financial markets driven by multi-dimensional time-inhomogeneous diffusions. Our conditions can be used to construct financial markets", "start_char_pos": 415, "end_char_pos": 467 }, { "type": "R", "before": "number of risky assets determines the absence of arbitrage and equals the number of sources of risk", "after": "no unbounded profit with bounded risk", "start_char_pos": 481, "end_char_pos": 580 }, { "type": "A", "before": null, "after": "condition holds, while the classical", "start_char_pos": 581, "end_char_pos": 581 }, { "type": "A", "before": null, "after": "no free lunch with vanishing risk", "start_char_pos": 581, "end_char_pos": 581 }, { "type": "A", "before": null, "after": "condition fails", "start_char_pos": 582, "end_char_pos": 582 } ]
[ 0, 137, 288, 416 ]
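For reference alongside the three 1609.01621 revisions above, the classical links between the no-arbitrage notions they mention and martingale measures/deflators are (standard results; the paper's contribution is deterministic criteria for when they hold in diffusion markets):

\[
  \text{NFLVR} \;\Longleftrightarrow\; \exists\, \mathbb{Q}\sim\mathbb{P} \text{ under which the price process is a local martingale (an ELMM)},
\]
by the Delbaen--Schachermayer fundamental theorem for locally bounded semimartingales, and, for continuous price processes,
\[
  \text{NUPBR} \;\Longleftrightarrow\; \exists\, Z>0 \text{ a local martingale with } ZS \text{ a local martingale (a local martingale deflator)}.
\]

A market as in the final revision satisfies NUPBR but fails NFLVR exactly when such a deflator exists but no ELMM does.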
1609.02133
1
Mitochondrial networks have been shown to exhibit a variety of complex behaviors, including cell-wide oscillations of mitochondrial energy states , as well as a phase transition in response to oxidative stress. Since functional status and structural properties are often intertwined, in this work we look at the structural properties of the organelle in normal mouse embryonic fibroblasts , describing its most relevant features . Subsequently we manipulated mitochondrial morphology using two interventions with opposite effects: over-expression of mitofusin 1, a protein that promotes mitochondria fusion , and paraquat treatment, a compound that induces mitochondrial fragmentation due to oxidative stress . Quantitative analysis of the organelle's structural clusters revealed that healthy mitochondrial networks were in a status intermediate between the extremes of highly fragmented and completely fusioned networks. This was confirmed by a comparison of our empirical findings with those of a recently described computational model of network growth based on fusion-fission balance. These results , offer an objective methodology to parametrize the mitochondrial status under a variety of both physiological and pathological cellular conditions, and overall add weight to the fission-fusion model for the mitochondrial reticulum dynamics .
Mitochondrial networks exhibit a variety of complex behaviors, including coordinated cell-wide oscillations of energy states as well as a phase transition (depolarization) in response to oxidative stress. Since functional and structural properties are often interwinded, here we characterize the structure of mitochondrial networks in mouse embryonic fibroblasts using network tools and percolation theory . Subsequently we perturbed the system either by promoting the fusion of mitochondrial segments or by inducing mitochondrial fission . Quantitative analysis of mitochondrial clusters revealed that the structural parameters of healthy mitochondria lay in between the extremes of highly fragmented and completely fusioned networks. We confirmed our results by contrasting our emprirical findings with the predictions of a recently described computational model of mitochondrial network emergence based on fission-fusion kinetics. Altogether these results not only offer an objective methodology to parametrize the complexity of the organelle but add weight to the idea that mitochondrial networks behave as critical systems and undergo structural phase transitions .
[ { "type": "D", "before": "have been shown to", "after": null, "start_char_pos": 23, "end_char_pos": 41 }, { "type": "A", "before": null, "after": "coordinated", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "R", "before": "mitochondrial energy states ,", "after": "energy states", "start_char_pos": 119, "end_char_pos": 148 }, { "type": "A", "before": null, "after": "(depolarization)", "start_char_pos": 179, "end_char_pos": 179 }, { "type": "D", "before": "status", "after": null, "start_char_pos": 230, "end_char_pos": 236 }, { "type": "R", "before": "intertwined, in this work we look at the structural properties of URLanelle in normal", "after": "interwinded, here we characterize the structure of mitochondrial networks in", "start_char_pos": 273, "end_char_pos": 358 }, { "type": "R", "before": ", describing its most relevant features", "after": "using network tools and percolation theory", "start_char_pos": 387, "end_char_pos": 426 }, { "type": "R", "before": "manipulated mitochondrial morphology using two interventions with opposite effects: over-expression of mitofusin 1, a protein that promotes mitochondria fusion , and paraquat treatment, a compound that induces mitochondrial fragmentation due to oxidative stress", "after": "perturbed the system either by promoting the fusion of mitochondrial segments or by inducing mitochondrial fission", "start_char_pos": 445, "end_char_pos": 706 }, { "type": "R", "before": "URLanelle's structural", "after": "mitochondrial", "start_char_pos": 734, "end_char_pos": 756 }, { "type": "R", "before": "healthy mitochondrial networks were in a status intermediate", "after": "the structural parameters of healthy mitochondria lay in", "start_char_pos": 780, "end_char_pos": 840 }, { "type": "R", "before": "This was confirmed by a comparison of our empirical findings with those", "after": "We confirmed our results by contrasting our emprirical findings with the predictions", "start_char_pos": 917, "end_char_pos": 988 }, { "type": "R", "before": "network growth based on fusion-fission balance. These results ,", "after": "mitochondrial network emergence based on fission-fusion kinetics. Altogether these results not only", "start_char_pos": 1036, "end_char_pos": 1099 }, { "type": "R", "before": "mitochondrial status under a variety of both physiological and pathological cellular conditions, and overall", "after": "complexity of URLanelle but", "start_char_pos": 1150, "end_char_pos": 1258 }, { "type": "R", "before": "fission-fusion model for the mitochondrial reticulum dynamics", "after": "idea that mitochondrial networks behave as critical systems and undergo structural phase transitions", "start_char_pos": 1277, "end_char_pos": 1338 } ]
[ 0, 212, 528, 916, 1083 ]
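A hedged sketch of the percolation-style cluster statistics referred to in the 1609.02133 record above: given an undirected graph representing the mitochondrial network (in practice obtained from image segmentation; here a toy random graph stands in), summarize its connected components. The graph construction, threshold and statistics below are illustrative choices, not the paper's pipeline.

# Illustrative cluster (connected-component) summary of a mitochondrial-network graph.
import networkx as nx
import numpy as np

def cluster_statistics(G):
    sizes = np.array(sorted((len(c) for c in nx.connected_components(G)), reverse=True))
    n = G.number_of_nodes()
    return {
        "n_clusters": len(sizes),
        "largest_cluster_fraction": sizes[0] / n if n else 0.0,   # percolation order parameter
        "mean_cluster_size": float(sizes.mean()) if len(sizes) else 0.0,
        "size_distribution": sizes,                               # e.g. for fitting a power law
    }

# toy stand-in for a segmented network: sparse random graph near the percolation threshold
G = nx.gnp_random_graph(500, 0.004, seed=1)
print(cluster_statistics(G)["largest_cluster_fraction"])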
1609.02193
1
Energy transparency is a concept that makes a program's energy consumption visible from hardware up to software, through the different system layers. Such transparency can enable energy optimizations at each layer and between layers, and help both programmers and operating systems make energy-aware decisions. In this paper, we focus on deeply embedded devices, typically used for Internet of Things (IoT) applications, and demonstrate how to enable energy transparency through existing Static Resource Analysis (SRA) techniques and a new target-agnostic profiling technique, without the need of hardware energy measurements. A novel mapping technique enables software energy consumption estimations at a higher level than the Instruction Set Architecture (ISA), namely the LLVM Intermediate Representation (IR) level, and therefore introduces energy transparency directly to the LLVM optimizer. We apply our energy estimation techniques to a comprehensive set of benchmarks, including single-threaded and also multi-threaded embedded programs from two commonly used concurrency patterns, task farms and pipelines. Using SRA, our LLVM IR results demonstrate a high accuracy with a deviation in the range of 1\% from the ISA SRA. Our profiling technique captures the actual energy consumption at the LLVM IR level with an average error of less than 3\%.
Energy transparency is a concept that makes a program's energy consumption visible , from hardware up to software, through the different system layers. Such transparency can enable energy optimizations at each layer and between layers, and help both programmers and operating systems make energy-aware decisions. In this paper, we focus on deeply embedded devices, typically used for Internet of Things (IoT) applications, and demonstrate how to enable energy transparency through existing Static Resource Analysis (SRA) techniques and a new target-agnostic profiling technique, without hardware energy measurements. Our novel mapping technique enables software energy consumption estimations at a higher level than the Instruction Set Architecture (ISA), namely the LLVM Intermediate Representation (IR) level, and therefore introduces energy transparency directly to the LLVM optimizer. We apply our energy estimation techniques to a comprehensive set of benchmarks, including single- and also multi-threaded embedded programs from two commonly used concurrency patterns, task farms and pipelines. Using SRA, our LLVM IR results demonstrate a high accuracy with a deviation in the range of 1\% from the ISA SRA. Our profiling technique captures the actual energy consumption at the LLVM IR level with an average error of 3\%.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 83, "end_char_pos": 83 }, { "type": "D", "before": "the need of", "after": null, "start_char_pos": 586, "end_char_pos": 597 }, { "type": "R", "before": "A", "after": "Our", "start_char_pos": 628, "end_char_pos": 629 }, { "type": "R", "before": "single-threaded", "after": "single-", "start_char_pos": 988, "end_char_pos": 1003 }, { "type": "A", "before": null, "after": "\\% from the ISA SRA. Our profiling technique captures the actual energy consumption at the LLVM IR level with an average error of 3\\%.", "start_char_pos": 1388, "end_char_pos": 1388 } ]
[ 0, 150, 311, 627, 897, 1116, 1254 ]
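A toy illustration of the ISA-to-LLVM-IR energy-mapping idea described in the 1609.02193 record above: attribute ISA-level instruction energy costs to the IR basic blocks they were lowered from, then weight by an execution-count profile. All block names, costs and counts are invented placeholders; this is not the paper's tool or its mapping technique.

# Hedged sketch: energy estimate of LLVM IR basic blocks from assumed ISA costs.
ISA_ENERGY_NJ = {"add": 0.10, "mul": 0.45, "load": 0.90, "store": 0.85, "branch": 0.15}

# assumed mapping: IR basic block -> ISA instructions it was lowered to
IR_TO_ISA = {
    "entry":     ["load", "add", "branch"],
    "loop.body": ["load", "mul", "add", "store", "branch"],
    "exit":      ["add"],
}

# dynamic profile: how many times each block executed
BLOCK_COUNTS = {"entry": 1, "loop.body": 10_000, "exit": 1}

def estimate_energy_nj(ir_to_isa, counts, isa_energy):
    per_block = {b: sum(isa_energy[i] for i in instrs) for b, instrs in ir_to_isa.items()}
    return sum(per_block[b] * counts.get(b, 0) for b in per_block)

print(f"estimated energy: {estimate_energy_nj(IR_TO_ISA, BLOCK_COUNTS, ISA_ENERGY_NJ) / 1e9:.6f} J")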
1609.02288
1
The application of physical layer security in wireless ad hoc networks (WANETs) has attracted considerable academic attention recently. However, the available studies mainly focus on the single-hop and two-hop network scenarios, and the price in terms of degradation of communication quality of service (QoS) caused by improving security is largely uninvestigated. As a step to address these issues, this paper explores the physical layer security-aware routing and performance tradeoffs in a multi-hop WANET . Specifically, for any given end-to-end path in a general multi-hop WANET, we first derive its connection outage probability (COP) and secrecy outage probability (SOP) in closed-form, which serve as the performance metrics of communication QoS and transmission security, respectively. Based on the closed-form expressions, we then study the QoS-security tradeoffs to minimize COP (resp. SOP) conditioned on that SOP (resp. COP) is guaranteed. With the help of analysis of a given path, we further propose the routing algorithms which can achieve the optimal performance tradeoffs for any pair of source and destination nodes in a distributed manner. Finally, simulation and numerical results are presented to validate the efficiency of our theoretical analysis, as well as to illustrate the QoS-security tradeoffs and the routing performance.
The application of physical layer security in ad hoc networks has attracted considerable academic attention recently. However, the available studies mainly focus on the single-hop and two-hop network scenarios, and the price in terms of degradation of communication quality of service (QoS) caused by improving security is largely uninvestigated. As a step to address these issues, this paper explores the physical layer security-aware routing and performance tradeoffs in a multi-hop ad hoc network . Specifically, for any given end-to-end path we first derive its connection outage probability (COP) and secrecy outage probability (SOP) in closed-form, which serve as the performance metrics of communication QoS and transmission security, respectively. Based on the closed-form expressions, we then study the security-QoS tradeoffs to minimize COP (resp. SOP) conditioned on that SOP (resp. COP) is guaranteed. With the help of analysis of a given path, we further propose the routing algorithms which can achieve the optimal performance tradeoffs for any pair of source and destination nodes in a distributed manner. Finally, simulation and numerical results are presented to validate the efficiency of our theoretical analysis, as well as to illustrate the security-QoS tradeoffs and the routing performance.
[ { "type": "D", "before": "wireless", "after": null, "start_char_pos": 46, "end_char_pos": 54 }, { "type": "D", "before": "(WANETs)", "after": null, "start_char_pos": 71, "end_char_pos": 79 }, { "type": "R", "before": "WANET", "after": "ad hoc network", "start_char_pos": 503, "end_char_pos": 508 }, { "type": "D", "before": "in a general multi-hop WANET,", "after": null, "start_char_pos": 555, "end_char_pos": 584 }, { "type": "R", "before": "QoS-security", "after": "security-QoS", "start_char_pos": 851, "end_char_pos": 863 }, { "type": "R", "before": "QoS-security", "after": "security-QoS", "start_char_pos": 1301, "end_char_pos": 1313 } ]
[ 0, 135, 364, 794, 896, 932, 952, 1159 ]
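A hedged sketch of the security-aware route selection described in the 1609.02288 record above: pick the path with minimal end-to-end connection outage probability (COP) subject to an end-to-end secrecy outage probability (SOP) constraint. The per-link probabilities and the independent-links aggregation rule below are illustrative assumptions, not the closed-form expressions derived in the paper; path enumeration is only sensible for small graphs.

import networkx as nx

def path_outage(probs):
    """End-to-end outage if any hop fails independently (assumption for illustration)."""
    p_ok = 1.0
    for p in probs:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def best_secure_path(G, src, dst, sop_max):
    best = None
    for path in nx.all_simple_paths(G, src, dst):
        edges = list(zip(path, path[1:]))
        cop = path_outage([G[u][v]["cop"] for u, v in edges])
        sop = path_outage([G[u][v]["sop"] for u, v in edges])
        if sop <= sop_max and (best is None or cop < best[0]):
            best = (cop, sop, path)
    return best

G = nx.Graph()
G.add_edge("s", "a", cop=0.05, sop=0.10)
G.add_edge("a", "d", cop=0.05, sop=0.10)
G.add_edge("s", "d", cop=0.02, sop=0.30)   # direct link: better QoS, worse secrecy
print(best_secure_path(G, "s", "d", sop_max=0.25))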
1609.02349
1
Using Vovk's outer measure, which corresponds to a minimal superhedging price, the existence of quadratic variation is shown for "typical price paths" in the space of non-negative c\`adl\`ag functions . In particular, this implies the existence of quadratic variation in the sense of F\"ollmer quasi surely under all martingale measures. Based on the robust existence of quadratic variation and a certain topology which is induced by Vovk's outer measure, model-free It\^o integration is developed on the space of continuous paths, of non-negative c\`adl\`ag paths and of c\`adl\`ag paths with mildly restricted jumps .
Using Vovk's outer measure, which corresponds to a minimal superhedging price, the existence of quadratic variation is shown for "typical price paths" in the space of c\`adl\`ag functions possessing a mild restriction on the jumps directed downwards . In particular, this result includes the existence of quadratic variation of "typical price paths" in the space of non-negative c\`adl\`ag paths and implies the existence of quadratic variation in the sense of F\"ollmer quasi surely under all martingale measures. Based on the robust existence of the quadratic variation, a model-free It\^o integration is developed .
[ { "type": "D", "before": "non-negative", "after": null, "start_char_pos": 167, "end_char_pos": 179 }, { "type": "A", "before": null, "after": "possessing a mild restriction on the jumps directed downwards", "start_char_pos": 201, "end_char_pos": 201 }, { "type": "A", "before": null, "after": "result includes the existence of quadratic variation of \"typical price paths\" in the space of non-negative c\\`adl\\`ag paths and", "start_char_pos": 224, "end_char_pos": 224 }, { "type": "R", "before": "quadratic variation and a certain topology which is induced by Vovk's outer measure,", "after": "the quadratic variation, a", "start_char_pos": 373, "end_char_pos": 457 }, { "type": "D", "before": "on the space of continuous paths, of non-negative c\\`adl\\`ag paths and of c\\`adl\\`ag paths with mildly restricted jumps", "after": null, "start_char_pos": 500, "end_char_pos": 619 } ]
[ 0, 203, 339 ]
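For reference, the pathwise quadratic variation "in the sense of F\"ollmer" invoked in the 1609.02349 record above is taken along a fixed refining sequence of partitions (\pi_n) with vanishing mesh (standard definition, stated here informally):

\[
  [X]_t \;=\; \lim_{n\to\infty} \sum_{t_k\in\pi_n} \big(X_{t_{k+1}\wedge t}-X_{t_k\wedge t}\big)^2,
  \qquad
  [X]_t = [X]^c_t + \sum_{s\le t}(\Delta X_s)^2 \quad\text{for c\`adl\`ag paths.}
\]

The point of the record is that, for "typical price paths" in Vovk's outer-measure (superhedging) sense, this limit exists without any probabilistic assumption.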
1609.02867
1
Two probability distributions \mu and \nu in second stochastic order can be coupled by a supermartingale, and in fact by many. Is there a canonical choice? We construct and investigate two couplings which arise as optimizers for constrained Monge-Kantorovich optimal transport problems where only supermartingales are allowed as transports. Much like the Hoeffding-Fr\'echet coupling of classical transport and its symmetric counterpart, the Antitone coupling, these can be characterized by order-theoretic minimality properties, as simultaneous optimal transports for certain classes of reward (or cost) functions, and through no-crossing conditions on their supports . However , our two couplings have asymmetric geometries due to the directed nature of the supermartingale constraint .
Two probability distributions \mu and \nu in second stochastic order can be coupled by a supermartingale, and in fact by many. Is there a canonical choice? We construct and investigate two couplings which arise as optimizers for constrained Monge-Kantorovich optimal transport problems where only supermartingales are allowed as transports. Much like the Hoeffding-Fr\'echet coupling of classical transport and its symmetric counterpart, the antitone coupling, these can be characterized by order-theoretic minimality properties, as simultaneous optimal transports for certain classes of reward (or cost) functions, and through no-crossing conditions on their supports ; however , our two couplings have asymmetric geometries . Remarkably, supermartingale optimal transport decomposes into classical and martingale transport in several ways .
[ { "type": "R", "before": "Antitone", "after": "antitone", "start_char_pos": 442, "end_char_pos": 450 }, { "type": "R", "before": ". However", "after": "; however", "start_char_pos": 669, "end_char_pos": 678 }, { "type": "R", "before": "due to the directed nature of the supermartingale constraint", "after": ". Remarkably, supermartingale optimal transport decomposes into classical and martingale transport in several ways", "start_char_pos": 726, "end_char_pos": 786 } ]
[ 0, 126, 155, 340, 670 ]
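The two classical couplings that the 1609.02867 record compares against can be written in quantile form (standard constructions): with U uniform on (0,1) and F_\mu^{-1}, F_\nu^{-1} the quantile functions of \mu and \nu,

\[
  \big(F_\mu^{-1}(U),\,F_\nu^{-1}(U)\big) \quad \text{(Hoeffding--Fr\'echet / comonotone coupling)},
  \qquad
  \big(F_\mu^{-1}(U),\,F_\nu^{-1}(1-U)\big) \quad \text{(antitone coupling)}.
\]

The supermartingale couplings constructed in the paper play the analogous canonical role when the transport plan is additionally required to be a supermartingale.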
1609.03257
1
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small molecule solvation free energies. There is a wide range methods for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods are based on a set of conceptual assumptions that can affect a method's predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the SAMPL4 blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improved. We show that this process results in a final aggregate estimate that outperforms all individual methods by reducing estimate errors by as much as 91\% to 1.2 kcal/mol accuracy. We also compare our iterative refinement approach to other statistical ensemble approaches and demonstrate that this iterative process reduces estimate errors by as much as 61\%. This work provides a new approach for accurate solvation free energy prediction and lays the foundation for future work on aggregate models that can balance computational cost with predictive accuracy.
This paper applies the Bayesian Model Averaging (BMA) statistical ensemble technique to estimate small molecule solvation free energies. There is a wide range of methods available for predicting solvation free energies, ranging from empirical statistical models to ab initio quantum mechanical approaches. Each of these methods is based on a set of conceptual assumptions that can affect predictive accuracy and transferability. Using an iterative statistical process, we have selected and combined solvation energy estimates using an ensemble of 17 diverse methods from the fourth Statistical Assessment of Modeling of Proteins and Ligands (SAMPL) blind prediction study to form a single, aggregated solvation energy estimate. The ensemble design process evaluates the statistical information in each individual method as well as the performance of the aggregate estimate obtained from the ensemble as a whole. Methods that possess minimal or redundant information are pruned from the ensemble and the evaluation process repeats until aggregate predictive performance can no longer be improved. We show that this process results in a final aggregate estimate that outperforms all individual methods by reducing estimate errors by as much as 91\% to 1.2 kcal/mol accuracy. We also compare our iterative refinement approach to other statistical ensemble approaches and demonstrate that this iterative process reduces estimate errors by as much as 61\%. This work provides a new approach for accurate solvation free energy prediction and lays the foundation for future work on aggregate models that can balance computational cost with prediction accuracy.
[ { "type": "R", "before": "methods", "after": "of methods available", "start_char_pos": 159, "end_char_pos": 166 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 315, "end_char_pos": 318 }, { "type": "D", "before": "a method's", "after": null, "start_char_pos": 376, "end_char_pos": 386 }, { "type": "R", "before": "SAMPL4", "after": "fourth Statistical Assessment of Modeling of Proteins and Ligands (SAMPL)", "start_char_pos": 574, "end_char_pos": 580 }, { "type": "R", "before": "predictive", "after": "prediction", "start_char_pos": 1565, "end_char_pos": 1575 } ]
[ 0, 136, 292, 427, 659, 843, 1027, 1204, 1383 ]
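A schematic of the iterative prune-and-reaggregate loop described in the 1609.03257 record above: weight each method's solvation free-energy estimates, form an aggregate, drop the method whose removal most improves the aggregate error against reference values, and repeat until no removal helps. The inverse-RMSE weights and the greedy pruning rule are illustrative stand-ins, not the paper's exact BMA machinery; the data below are synthetic.

import numpy as np

def aggregate(preds, weights):
    w = weights / weights.sum()
    return preds.T @ w                      # weighted-average estimate per molecule

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))

def iterative_prune(preds, reference):
    """preds: (n_methods, n_molecules) estimates; reference: (n_molecules,) known values."""
    active = list(range(preds.shape[0]))
    while len(active) > 1:
        w = np.array([1.0 / rmse(preds[i], reference) ** 2 for i in active])
        current = rmse(aggregate(preds[active], w), reference)
        scores = []
        for j in range(len(active)):
            keep = [active[k] for k in range(len(active)) if k != j]
            wk = np.array([1.0 / rmse(preds[i], reference) ** 2 for i in keep])
            scores.append(rmse(aggregate(preds[keep], wk), reference))
        best = int(np.argmin(scores))
        if scores[best] >= current:          # no removal improves the aggregate: stop
            break
        del active[best]
    return active

rng = np.random.default_rng(0)
truth = rng.normal(-5.0, 2.0, size=40)                                  # mock reference energies
methods = truth + rng.normal(0, rng.uniform(0.5, 3.0, (17, 1)), (17, 40))
print("methods kept:", iterative_prune(methods, truth))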
1609.04890
1
We construct a price impact model between stocks in a correlated market. For the price change of a given stock induced by the short-run liquidity of this stock itself and of the information about other stocks, we introduce an internal and a cross-impact function of the time lag. We model the average cross-response functions for individual stocks employing the impact functions of the time lag, the impact functions of traded volumes and the trade-sign correlators. To reduce the complexity of the model and the number of fit parameters, we focus on three scenarios and carry out numerical simulations. We also introduce a diffusion function that measures the correlated motion of prices from different stocks to test our simulated results. It turns out that both the sign cross-and self-correlators are connected with the cross-responses. The internal and cross-impact functions are indispensable to compensate amplification effects which are due to the sign correlators integrated over time. We further quantify and interpret the price impacts of time lag in terms of temporary and permanent components. To support our model, we also analyze empirical data, in particular the memory properties of the sign self- and average cross-correlators. The relation between the average cross-responses and the traded volumes which are smaller than their average is of exponential form.
We construct a price impact model between stocks in a correlated market. For the price change of a given stock induced by the short-run liquidity of this stock itself and of the information about other stocks, we introduce a self- and a cross-impact function of the time lag. We model the average cross-response functions for individual stocks employing the impact functions of the time lag, the impact functions of traded volumes and the trade-sign correlators. To quantify the self- and cross-impacts, we propose a construction to fix the parameters in the impact functions. These parameters are further corroborated by a diffusion function that measures the correlated motion of prices from different stocks . This construction is mainly ad hoc and alternative ones are not excluded. It turns out that both the sign cross- and self-correlators are connected with the cross-responses. The self- and cross-impact functions are indispensable to compensate amplification effects which are due to the sign correlators integrated over time. We further quantify and interpret the price impacts of time lag in terms of temporary and permanent components. To support our model, we also analyze empirical data, in particular the memory properties of the sign self- and average cross-correlators. The relation between the average cross-responses and the traded volumes which are smaller than their average is of power-law form.
[ { "type": "R", "before": "an internal", "after": "a self-", "start_char_pos": 223, "end_char_pos": 234 }, { "type": "R", "before": "reduce the complexity of the model and the number of fit parameters, we focus on three scenarios and carry out numerical simulations. We also introduce", "after": "quantify the self- and cross-impacts, we propose a construction to fix the parameters in the impact functions. These parameters are further corroborated by", "start_char_pos": 470, "end_char_pos": 621 }, { "type": "R", "before": "to test our simulated results.", "after": ". This construction is mainly ad hoc and alternative ones are not excluded.", "start_char_pos": 711, "end_char_pos": 741 }, { "type": "R", "before": "cross-and", "after": "cross- and", "start_char_pos": 774, "end_char_pos": 783 }, { "type": "R", "before": "internal", "after": "self-", "start_char_pos": 845, "end_char_pos": 853 }, { "type": "R", "before": "exponential", "after": "power-law", "start_char_pos": 1361, "end_char_pos": 1372 } ]
[ 0, 72, 279, 466, 603, 741, 840, 994, 1106, 1245 ]
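As commonly defined in this strand of the market-microstructure literature (conventions vary slightly between papers, so take this as a hedged reminder rather than the record's exact definitions), the average cross-response function and trade-sign correlator appearing in the 1609.04890 record are

\[
  R_{ij}(\tau)=\big\langle\big(\ln S_i(t+\tau)-\ln S_i(t)\big)\,\varepsilon_j(t)\big\rangle_t,
  \qquad
  \Theta_{ij}(\tau)=\big\langle \varepsilon_i(t+\tau)\,\varepsilon_j(t)\big\rangle_t,
\]

where \varepsilon_j(t)\in\{+1,-1\} is the sign of the trade in stock j at time t and \langle\cdot\rangle_t is a time average; the self- and cross-impact functions of the record enter when R_{ij} is modeled in terms of \Theta_{ij} and the traded volumes.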
1609.05061
1
Predicting the 3D structure of a macromolecule, such as a protein or an RNA molecule, is ranked top among the most difficult and attractive problems in bioinformatics and computational biology. In recent years, computational methods have made huge progress due to advance in computation speed and machine learning methods. These methods only need the sequence information to predict 3D structures by employing various mathematical models and machine learning methods. The success of computational methods is highly dependent on a large database of the proteins and RNA with known structures. However, the performance of computational methods are always expected to be improved. There are several reasons for this. First, we are facing, and will continue to face sparseness of data. The number of known 3D structures increased rapidly in the fast few years, but still falls behind the number of sequences. Structure data is much more expensive when compared with sequence data. Secondly, the 3D structure space is too large for our computational capability. The computing speed is not nearly enough to simulate the atom-level fold process when computing the physical energy among all the atoms. The two obstacles can be removed by knowledge-based methods, which combine knowledge learned from the known structures and biologists knowledge of the folding process of protein or RNA. In the dissertation, I will present my results in building a knowledge-based method by using machine learning methods to tackle this problem. My methods include the knowledge constraints on intermediate states, which can highly reduce the solution space of a protein or RNA, in turn increasing the efficiency of the structure folding method and improving its accuracy.
Predicting the 3D structure of a macromolecule, such as a protein or an RNA molecule, is ranked top among the most difficult and attractive problems in bioinformatics and computational biology. Its importance comes from the relationship between the 3D structure and the function of a given protein or RNA. 3D structures also help to find the ligands of the protein, which are usually small molecules, a key step in drug design. Unfortunately, there is no shortcut to accurately obtain the 3D structure of a macromolecule. Many physical measurements of macromolecular 3D structures cannot scale up, due to their large labor costs and the requirements for lab conditions. In recent years, computational methods have made huge progress due to advance in computation speed and machine learning methods. These methods only need the sequence information to predict 3D structures by employing various mathematical models and machine learning methods. The success of computational methods is highly dependent on a large database of the proteins and RNA with known structures. However, the performance of computational methods are always expected to be improved. There are several reasons for this. First, we are facing, and will continue to face sparseness of data. Secondly, the 3D structure space is too large for our computational capability. The two obstacles can be removed by knowledge-based methods, which combine knowledge learned from the known structures and biologists ' knowledge of the folding process of protein or RNA. In the dissertation, I will present my results in building a knowledge-based method by using machine learning methods to tackle this problem. My methods include the knowledge constraints on intermediate states, which can highly reduce the solution space of a protein or RNA, in turn increasing the efficiency of the structure folding method and improving its accuracy.
[ { "type": "A", "before": null, "after": "Its importance comes from the relationship between the 3D structure and the function of a given protein or RNA. 3D structures also help to find the ligands of the protein, which are usually small molecules, a key step in drug design. Unfortunately, there is no shortcut to accurately obtain the 3D structure of a macromolecule. Many physical measurements of macromolecular 3D structures cannot scale up, due to their large labor costs and the requirements for lab conditions.", "start_char_pos": 194, "end_char_pos": 194 }, { "type": "D", "before": "The number of known 3D structures increased rapidly in the fast few years, but still falls behind the number of sequences. Structure data is much more expensive when compared with sequence data.", "after": null, "start_char_pos": 783, "end_char_pos": 977 }, { "type": "D", "before": "computing speed is not nearly enough to simulate the atom-level fold process when computing the physical energy among all the atoms. The", "after": null, "start_char_pos": 1062, "end_char_pos": 1198 }, { "type": "A", "before": null, "after": "'", "start_char_pos": 1329, "end_char_pos": 1329 } ]
[ 0, 193, 323, 468, 592, 678, 714, 782, 905, 977, 1057, 1194, 1523 ]
1609.05513
1
Genetically identical microbial cells typically display significant variability in every measurable property. In particular, highly abundant proteins - which can determine cellular behavior - exhibit large variability in copy number among individuals. Their distribution has a universal shape common to different proteins and URLanisms; the same distribution shape is measured both in populations and in single-cell temporal traces. Moreover, different highly expressed proteins are statistically correlated with cell size and with cell-cycle time. These results indicate coupling between measurable properties in the cell and buffering of their statistics from the microscopic scale. We propose a modeling framework in which the complex intracellular processes produce a phenotype composed of many effectively interacting components . These interactions, as well as the imperfect nature of cell division events, provide a simple model that reconstructs many properties of phenotypic variability on several timescales. These include fluctuating accumulation rates along consecutive cell-cycles, with correlations among the rates of phenotype components; universal and non-universal properties of distributions; correlations between cell-cycle time and different phenotype components; and temporally structured autocorrelation functions with long (\sim 10 generation) timescales .
Cellular phenotype is characterized by different components such as cell size, protein content and cell cycle time. These are global variables that are the outcome of multiple internal microscopic processes. Accordingly, they display some universal statistical properties and scaling relations, such as distribution collapse and relation between moments. Cell size statistics and its relation to growth and division has been mostly studied separately from proteins and other cellular variables. Here we present experimental and theoretical analyses of these phenotype components in a unified framework that reveals their correlations and interactions inside the cell. We measure these components simultaneously in single cells over dozens of generations, quantify their correlations, and compare population to temporal statistics. We find that cell size and highly expressed proteins have very similar dynamics over growth and division cycles, which result in parallel statistical properties, both universal and individual. In particular, while distribution shapes of fluctuations along time are common to all cells and components, other properties are variable and remain distinct in individual cells for a surprisingly large number of generations. These include temporal averages of cell size and protein content, and the structure of their auto-correlation functions. We explore possible roles of the different components in controlling cell growth and division. We find that in order to stabilize exponential accumulation and division of all components across generations, coupled dynamics among them is required. Finally, we incorporate effective coupling within the cell cycle with a phenomenological mapping across consecutive cycles, and show that this model reproduces the entire array of experimental observations .
[ { "type": "R", "before": "Genetically identical microbial cells typically display significant variability in every measurable property. In particular, highly abundant proteins - which can determine cellular behavior - exhibit large variability in copy number among individuals. Their distribution has a universal shape common to different proteins and URLanisms; the same distribution shape is measured both in populations and in single-cell temporal traces. Moreover, different", "after": "Cellular phenotype is characterized by different components such as cell size, protein content and cell cycle time. These are global variables that are the outcome of multiple internal microscopic processes. Accordingly, they display some universal statistical properties and scaling relations, such as distribution collapse and relation between moments. Cell size statistics and its relation to growth and division has been mostly studied separately from proteins and other cellular variables. Here we present experimental and theoretical analyses of these phenotype components in a unified framework that reveals their correlations and interactions inside the cell. We measure these components simultaneously in single cells over dozens of generations, quantify their correlations, and compare population to temporal statistics. We find that cell size and", "start_char_pos": 0, "end_char_pos": 452 }, { "type": "R", "before": "are statistically correlated with cell size and with cell-cycle time. These results indicate coupling between measurable properties in the cell and buffering of their statistics from the microscopic scale. We propose a modeling framework in which the complex intracellular processes produce a phenotype composed of many effectively interacting components . These interactions, as well as the imperfect nature of cell division events, provide a simple model that reconstructs many properties of phenotypic variability on several timescales. These include fluctuating accumulation rates along consecutive cell-cycles, with correlations among the rates of phenotype components; universal and non-universal properties of distributions; correlations between cell-cycle time and different phenotype components; and temporally structured autocorrelation functions with long (\\sim 10 generation) timescales", "after": "have very similar dynamics over growth and division cycles, which result in parallel statistical properties, both universal and individual. In particular, while distribution shapes of fluctuations along time are common to all cells and components, other properties are variable and remain distinct in individual cells for a surprisingly large number of generations. These include temporal averages of cell size and protein content, and the structure of their auto-correlation functions. We explore possible roles of the different components in controlling cell growth and division. We find that in order to stabilize exponential accumulation and division of all components across generations, coupled dynamics among them is required. Finally, we incorporate effective coupling within the cell cycle with a phenomenological mapping across consecutive cycles, and show that this model reproduces the entire array of experimental observations", "start_char_pos": 479, "end_char_pos": 1377 } ]
[ 0, 109, 251, 336, 432, 548, 684, 835, 1018, 1153, 1210, 1283 ]
1609.05554
1
The copying of a polymer sequence into a new polymer is central to biology. Although the growth of a copy attached to its template has received much attention, effective copies must persist after template separation. We show that this separation has three fundamental thermodynamic effects. Firstly, attractive polymer-template interactions favor polymerization but inhibit separation, playing a double-edged role. Secondly, given separation, more work is always necessary to create a specific copy than a non-specific polymer of the same length. Finally, the mixing of copies from distinct templates makes correlations between template and copy sequences unexploitable, combining with copying inaccuracy to reduce the free energy stored in a polymer ensemble. This lower stored free energy in turn reduces the minimal entropy generation during non-specific depolymerization .
Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that , as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics .
[ { "type": "R", "before": "The copying of a polymer sequence into a new polymer is central to biology. Although the growth of a copy attached to its template has received much attention, effective copies must persist after template separation.", "after": "Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why?", "start_char_pos": 0, "end_char_pos": 216 }, { "type": "R", "before": "this separation has three fundamental thermodynamic effects. Firstly, attractive polymer-template interactions favor polymerization but inhibit separation, playing a double-edged role. Secondly, given separation, more work is always necessary to create a specific copy than", "after": ", as in computation,", "start_char_pos": 230, "end_char_pos": 503 }, { "type": "R", "before": "non-specific polymer of the same length. Finally, the mixing of copies from distinct templates makes correlations between template and copy sequences unexploitable, combining with copying inaccuracy to reduce the free energy stored in a polymer ensemble. This lower stored free energy in turn reduces the minimal entropy generation during non-specific depolymerization", "after": "canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics", "start_char_pos": 506, "end_char_pos": 874 } ]
[ 0, 75, 216, 290, 414, 546, 760 ]
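The "lower bound on dissipation" from unexploited correlations mentioned in the 1609.05554 record is, generically, of the standard measurement-thermodynamics (Sagawa--Ueda type) form; the paper's bound for biochemical readout networks may differ in its details, so this is only the generic statement:

\[
  \langle W_{\mathrm{diss}}\rangle \;\ge\; k_{\mathrm B}T\, I(X;Y),
\]

i.e. creating, and then not exploiting, a mutual information I(X;Y) between receptor state X and readout Y costs at least k_B T per nat of correlation.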
1609.05865
1
We consider a jump-type Cox-Ingersoll-Ross (CIR) process driven by a subordinator, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate. We distinguish three cases: subcritical, critical and supercritical. In the subcritical case we prove weak consistency and asymptotic normality, and, under an additional moment assumption, strong consistency as well. In the supercritical case, we prove strong consistency and mixed normal (but non-normal) asymptotic behavior, while in the critical case, weak consistency and non-standard asymptotic behavior are described. We specialize our results to so-called basic affine jump-diffusions as well. Concerning the asymptotic behavior of the MLE in the supercritical case, we derive a stochastic representation of the limiting mixed normal distribution containing a jump-type supercritical CIR process , which is a new phenomena, compared to the critical case, where a diffusion-type critical CIR process comes into play .
We consider a jump-type Cox-Ingersoll-Ross (CIR) process driven by a subordinator, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate. We distinguish three cases: subcritical, critical and supercritical. In the subcritical case we prove weak consistency and asymptotic normality, and, under an additional moment assumption, strong consistency as well. In the supercritical case, we prove strong consistency and mixed normal (but non-normal) asymptotic behavior, while in the critical case, weak consistency and non-standard asymptotic behavior are described. We specialize our results to so-called basic affine jump-diffusions as well. Concerning the asymptotic behavior of the MLE in the supercritical case, we derive a stochastic representation of the limiting mixed normal distribution , where the almost sure limit of an appropriately scaled jump-type supercritical CIR process comes into play. This is a new phenomena, compared to the critical case, where a diffusion-type critical CIR process plays a role .
[ { "type": "R", "before": "containing a", "after": ", where the almost sure limit of an appropriately scaled", "start_char_pos": 835, "end_char_pos": 847 }, { "type": "R", "before": ", which", "after": "comes into play. This", "start_char_pos": 884, "end_char_pos": 891 }, { "type": "R", "before": "comes into play", "after": "plays a role", "start_char_pos": 987, "end_char_pos": 1002 } ]
[ 0, 180, 249, 397, 604, 681 ]
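A hedged sketch related to the 1609.05865 record above: simulate a jump-type CIR process dX_t = (a + b X_t) dt + sigma sqrt(X_t) dW_t + dJ_t with a compound-Poisson subordinator J (nonnegative exponential jumps), and read off a naive growth-rate estimate. The drift regression below is a crude discretized proxy that is biased by the jump compensator; it is not the maximum likelihood estimator analyzed in the paper, and all parameter values are illustrative.

import numpy as np

def simulate_jcir(a, b, sigma, jump_rate, jump_mean, x0, T, n, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(jump_rate * dt)
        dj = rng.exponential(jump_mean, n_jumps).sum() if n_jumps else 0.0
        x[k + 1] = max(x[k] + (a + b * x[k]) * dt + sigma * np.sqrt(max(x[k], 0.0)) * dw + dj, 0.0)
    return x

x = simulate_jcir(a=0.5, b=-0.8, sigma=0.3, jump_rate=2.0, jump_mean=0.1, x0=1.0, T=50.0, n=50_000)
# crude drift regression: E[dX | X] ~ (a + b X) dt  ->  least squares on columns (dt, X dt)
dt = 50.0 / 50_000
dx = np.diff(x)
A = np.column_stack([np.ones(len(dx)), x[:-1]]) * dt
coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
print("naive (a, b) estimate:", coef)   # biased by the jumps; for illustration only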
1609.05865
2
We consider a jump-type Cox-Ingersoll-Ross (CIR) process driven by a subordinator, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate. We distinguish three cases: subcritical, critical and supercritical. In the subcritical case we prove weak consistency and asymptotic normality, and, under an additional moment assumption, strong consistency as well. In the supercritical case, we prove strong consistency and mixed normal (but non-normal) asymptotic behavior, while in the critical case, weak consistency and non-standard asymptotic behavior are described. We specialize our results to so-called basic affine jump-diffusions as well. Concerning the asymptotic behavior of the MLE in the supercritical case, we derive a stochastic representation of the limiting mixed normal distribution, where the almost sure limit of an appropriately scaled jump-type supercritical CIR process comes into play. This is a new phenomena, compared to the critical case, where a diffusion-type critical CIR process plays a role.
We consider a jump-type Cox--Ingersoll--Ross (CIR) process driven by a standard Wiener process and a subordinator, and we study asymptotic properties of the maximum likelihood estimator (MLE) for its growth rate. We distinguish three cases: subcritical, critical and supercritical. In the subcritical case we prove weak consistency and asymptotic normality, and, under an additional moment assumption, strong consistency as well. In the supercritical case, we prove strong consistency and mixed normal (but non-normal) asymptotic behavior, while in the critical case, weak consistency and non-standard asymptotic behavior are described. We specialize our results to so-called basic affine jump-diffusions as well. Concerning the asymptotic behavior of the MLE in the supercritical case, we derive a stochastic representation of the limiting mixed normal distribution, where the almost sure limit of an appropriately scaled jump-type supercritical CIR process comes into play. This is a new phenomena, compared to the critical case, where a diffusion-type critical CIR process plays a role.
[ { "type": "R", "before": "Cox-Ingersoll-Ross", "after": "Cox--Ingersoll--Ross", "start_char_pos": 24, "end_char_pos": 42 }, { "type": "A", "before": null, "after": "a standard Wiener process and", "start_char_pos": 67, "end_char_pos": 67 } ]
[ 0, 181, 250, 398, 605, 682, 944 ]
1609.05939
1
We analyze the linear response of a market network to shocks based on the bipartite market model we introduced in an earlier paper, which we claimed to be able to identify the time-line of the 2009-2011 Eurozone crisis correctly. We show that this model has three distinct phases that can broadly be categorized as "stable" and "unstable". While the stable phase describes periods where investors and traders have confidence in the market , the unstable phase can describe "boom-bust" periods . We analytically derive these phases and where the phase transition happens using a mean field approximation of the model. We show that the condition for stability is \alpha \beta <1 with \alpha being the inverse of the "price elasticity" and \beta the "income elasticity of demand", which measures how rash the investors make decisions. We also show that in the mean-field limit this model reduces to the Langevin model by Bouchaud et al. for price returns.
We analyze the linear response of a market network to shocks based on the bipartite market model we introduced in an earlier paper, which we claimed to be able to identify the time-line of the 2009-2011 Eurozone crisis correctly. We show that this model has three distinct phases that can broadly be categorized as "stable" and "unstable". Based on the interpretation of our behavioral parameters, the stable phase describes periods where investors and traders have confidence in the market (e.g. predict that the market rebounds from a loss). We show that the unstable phase happens when there is a lack of confidence and seems to describe "boom-bust" periods in which changes in prices are exponential . We analytically derive these phases and where the phase transition happens using a mean field approximation of the model. We show that the condition for stability is \alpha \beta <1 with \alpha being the inverse of the "price elasticity" and \beta the "income elasticity of demand", which measures how rash the investors make decisions. We also show that in the mean-field limit this model reduces to the Langevin model by Bouchaud et al. for price returns.
[ { "type": "R", "before": "While the", "after": "Based on the interpretation of our behavioral parameters, the", "start_char_pos": 340, "end_char_pos": 349 }, { "type": "R", "before": ", the unstable phase can", "after": "(e.g. predict that the market rebounds from a loss). We show that the unstable phase happens when there is a lack of confidence and seems to", "start_char_pos": 439, "end_char_pos": 463 }, { "type": "A", "before": null, "after": "in which changes in prices are exponential", "start_char_pos": 493, "end_char_pos": 493 } ]
[ 0, 229, 339, 495, 617, 832 ]
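One schematic way to read the stability condition \alpha\beta<1 quoted in the 1609.05939 record (a generic linearization argument under assumed feedback structure, not the paper's actual mean-field derivation): if a small perturbation of the market state is amplified by a factor \alpha (inverse price elasticity) and fed back with gain \beta (income elasticity of demand) in each round, then

\[
  \delta x_{t+1}\;\approx\;(\alpha\beta)\,\delta x_t \quad\Longrightarrow\quad \delta x_t\sim(\alpha\beta)^t\,\delta x_0,
\]

which decays for \alpha\beta<1 (the "stable" phase) and grows exponentially for \alpha\beta>1 (the "boom-bust" behavior of the unstable phase).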
1609.06109
1
Video resolutions used in variety of media are constantly rising. While manufacturers struggle to perfect their screens it also important to ensure high quality of displayed image. Overall quality can be measured using Mean Opinion Score (MOS). Video quality can be affected by miscellaneous artifacts, appearing at every stage of video creation and transmission. In this paper we present a solution to calculate four distinct video quality metrics that can be applied to a real time video quality assessment system. Our assessment module is capable of processing 8K resolution in real time manner set at the level of 30 frames per second. Throughput of 2.19 GB/s surpasses performance of puresoftware solutions. To concentrate on architectural optimization module was created using high level language.
Video resolutions used in variety of media are constantly rising. While manufacturers struggle to perfect their screens it is also important to ensure high quality of displayed image. Overall quality can be measured using Mean Opinion Score (MOS). Video quality can be affected by miscellaneous artifacts, appearing at every stage of video creation and transmission. In this paper , we present a solution to calculate four distinct video quality metrics that can be applied to a real time video quality assessment system. Our assessment module is capable of processing 8K resolution in real time set at the level of 30 frames per second. Throughput of 2.19 GB/s surpasses performance of pure software solutions. To concentrate on architectural optimization , the module was created using high level language.
[ { "type": "A", "before": null, "after": "is", "start_char_pos": 123, "end_char_pos": 123 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 379, "end_char_pos": 379 }, { "type": "D", "before": "manner", "after": null, "start_char_pos": 593, "end_char_pos": 599 }, { "type": "R", "before": "puresoftware", "after": "pure software", "start_char_pos": 691, "end_char_pos": 703 }, { "type": "A", "before": null, "after": ", the", "start_char_pos": 760, "end_char_pos": 760 } ]
[ 0, 65, 181, 245, 364, 518, 641, 714 ]
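A back-of-envelope check of what "8K in real time at 30 frames per second" implies for raw pixel throughput, under a few assumed pixel formats; the 2.19 GB/s figure in the 1609.06109 record corresponds to whatever format and overhead the actual implementation uses, which is not reproduced here.

# Hedged arithmetic: raw throughput of 8K UHD (7680 x 4320) at 30 fps for assumed formats.
W, H, FPS = 7680, 4320, 30
for name, bytes_per_px in [("YUV 4:2:0 (1.5 B/px)", 1.5), ("YUV 4:2:2 (2 B/px)", 2.0), ("RGB24 (3 B/px)", 3.0)]:
    gbps = W * H * bytes_per_px * FPS / 1e9
    print(f"{name}: {gbps:.2f} GB/s")   # prints roughly 1.49, 1.99 and 2.99 GB/s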
1610.00332
1
We study the empirical properties of realized volatility of the E-mini S&P 500 futures contract at various time scales, ranging from a few minutes to one day. Our main finding is that intraday volatility is remarkably rough and persistent. What is more, by further studying daily realized volatility measures of more than five thousand individual US equities, we find that both roughness and persistence appear to be universal properties of volatility. Inspired by the empirical findings, we introduce a new class of continuous-time stochastic volatility models, capable of decoupling roughness ( fine properties ) from long memory and persistence (long-term behavior) in a simple and parsimonious way, which allows us to successfully model volatility at all intraday time scales. Our prime model is based on the so-called Brownian semistationary process and we derive a number of theoretical properties of this process, relevant to volatility modeling. Finally, in a forecasting study, we find that our new models outperform a wide array of benchmarks considerably, indicating that it pays off to exploit both roughness and persistence in volatility forecasting.
We study the empirical properties of realized volatility of the E-mini S&P 500 futures contract at various time scales, ranging from a few minutes to one day. Our main finding is that intraday volatility is remarkably rough and persistent. What is more, by further studying daily realized volatility measures of close to two thousand individual US equities, we find that both roughness and persistence appear to be universal properties of volatility. Inspired by the empirical findings, we introduce a new class of continuous-time stochastic volatility models, capable of decoupling roughness ( short-term behavior ) from long memory and persistence (long-term behavior) in a simple and parsimonious way, which allows us to successfully model volatility at all intraday time scales. Our prime model is based on the so-called Brownian semistationary process and we derive a number of theoretical properties of this process, relevant to volatility modeling. As an illustration of the usefulness our new models, we conduct an extensive forecasting study; we find that the models proposed in this paper outperform a wide array of benchmarks considerably, indicating that it pays off to exploit both roughness and persistence in volatility forecasting.
[ { "type": "R", "before": "more than five", "after": "close to two", "start_char_pos": 312, "end_char_pos": 326 }, { "type": "R", "before": "fine properties", "after": "short-term behavior", "start_char_pos": 597, "end_char_pos": 612 }, { "type": "R", "before": "Finally, in a forecasting study,", "after": "As an illustration of the usefulness our new models, we conduct an extensive forecasting study;", "start_char_pos": 954, "end_char_pos": 986 }, { "type": "R", "before": "our new models", "after": "the models proposed in this paper", "start_char_pos": 1000, "end_char_pos": 1014 } ]
[ 0, 158, 239, 452, 780, 953 ]
1610.00332
2
We study the empirical properties of realized volatility of the E-mini S P 500 futures contract at various time scales, ranging from a few minutes to one day. Our main finding is that intraday volatility is remarkably rough and persistent. What is more, by further studying daily realized volatility measures of close to two thousand individual US equities, we find that both roughness and persistence appear to be universal properties of volatility. Inspired by the empirical findings, we introduce a new class of continuous-time stochastic volatility models, capable of decoupling roughness (short-term behavior) from long memoryand persistence (long-term behavior) in a simple and parsimonious way, which allows us to successfully model volatility at all intraday time scales . Our prime model is based on the so-called Brownian semistationary process and we derive a number of theoretical properties of this process, relevant to volatility modeling. As an illustration of the usefulness our new models , we conduct an extensive forecasting study; we find that the models proposed in this paper outperform a wide array of benchmarks considerably, indicating that it pays off to exploit both roughness and persistence in volatility forecasting.
We introduce a new class of continuous-time models of the stochastic volatility of asset prices. The models can simultaneously incorporate roughness and slowly decaying autocorrelations, including proper long memory, which are two stylized facts often found in volatility data . Our prime model is based on the so-called Brownian semistationary process and we derive a number of theoretical properties of this process, relevant to volatility modeling. Applying the models to realized volatility measures covering a vast panel of assets, we find evidence consistent with the hypothesis that time series of realized measures of volatility are both rough and very persistent. Lastly, we illustrate the utility of the models in an extensive forecasting study; we find that the models proposed in this paper outperform a wide array of benchmarks considerably, indicating that it pays off to exploit both roughness and persistence in volatility forecasting.
[ { "type": "D", "before": "study the empirical properties of realized volatility of the E-mini S", "after": null, "start_char_pos": 3, "end_char_pos": 72 }, { "type": "D", "before": "P 500 futures contract at various time scales, ranging from a few minutes to one day. Our main finding is that intraday volatility is remarkably rough and persistent. What is more, by further studying daily realized volatility measures of close to two thousand individual US equities, we find that both roughness and persistence appear to be universal properties of volatility. Inspired by the empirical findings, we", "after": null, "start_char_pos": 73, "end_char_pos": 489 }, { "type": "R", "before": "stochastic volatility models, capable of decoupling roughness (short-term behavior) from long memoryand persistence (long-term behavior) in a simple and parsimonious way, which allows us to successfully model volatility at all intraday time scales", "after": "models of the stochastic volatility of asset prices. The models can simultaneously incorporate roughness and slowly decaying autocorrelations, including proper long memory, which are two stylized facts often found in volatility data", "start_char_pos": 531, "end_char_pos": 778 }, { "type": "R", "before": "As an illustration of the usefulness our new models , we conduct", "after": "Applying the models to realized volatility measures covering a vast panel of assets, we find evidence consistent with the hypothesis that time series of realized measures of volatility are both rough and very persistent. Lastly, we illustrate the utility of the models in", "start_char_pos": 954, "end_char_pos": 1018 } ]
[ 0, 158, 239, 450, 780, 953, 1050 ]
1610.00404
1
Determining the three-dimensional structure of proteins and protein complexes at atomic resolution is a fundamental task in structural biology. Over the last decade, remarkable progress has been made using "single particle" cryo-electron microscopy (cryo-EM) for this purpose. In cryo-EM, hundreds of thousands of two-dimensional images are obtained of individual copies of the same particle, each held in a thin sheet of ice at some unknown orientation. Each image corresponds to the noisy projection of the particle's electron-scattering density. The reconstruction of a high-resolution image from this data is typically formulated as a nonlinear, non-convex optimization problem for unknowns which encode the angular pose and lateral offset of each particle. Since there are hundreds of thousands of such parameters, this leads to a very CPU-intensive task---limiting both the number of particle images which can be processed and the number of independent reconstructions which can be carried out for the purpose of statistical validation. Here, we propose a deterministic method for high-resolution reconstruction given a very low resolution initial guess, that requires a predictable and relatively modest amount of computational effort .
Determining the three-dimensional structure of proteins and protein complexes at atomic resolution is a fundamental task in structural biology. Over the last decade, remarkable progress has been made using "single particle" cryo-electron microscopy (cryo-EM) for this purpose. In cryo-EM, hundreds of thousands of two-dimensional images are obtained of individual copies of the same particle, each held in a thin sheet of ice at some unknown orientation. Each image corresponds to the noisy projection of the particle's electron-scattering density. The reconstruction of a high-resolution image from this data is typically formulated as a nonlinear, non-convex optimization problem for unknowns which encode the angular pose and lateral offset of each particle. Since there are hundreds of thousands of such parameters, this leads to a very CPU-intensive task---limiting both the number of particle images which can be processed and the number of independent reconstructions which can be carried out for the purpose of statistical validation. Here, we propose a deterministic method for high-resolution reconstruction that operates in an ab initio manner---that is, without the need for an initial guess. It requires a predictable and relatively modest amount of computational effort , by marching out radially in the Fourier domain from low to high frequency, increasing the resolution by a fixed increment at each step .
[ { "type": "R", "before": "given a very low resolution initial guess, that", "after": "that operates in an ab initio manner---that is, without the need for an initial guess. It", "start_char_pos": 1118, "end_char_pos": 1165 }, { "type": "A", "before": null, "after": ", by marching out radially in the Fourier domain from low to high frequency, increasing the resolution by a fixed increment at each step", "start_char_pos": 1242, "end_char_pos": 1242 } ]
[ 0, 143, 276, 454, 548, 761, 1042 ]
1610.00560
1
We consider a network of processor-sharing queues with state-dependent service rates. These are allocated according to balanced fairness within a polymatroid capacity set. Balanced fairness is known to be both insensitive and Pareto-efficient in such networks , which ensures that the performance metrics, when computable, will provide robust insights into the real performance of the system considered. We first show that these performance metrics can be evaluated with a complexity that is polynomial in the system size when we allow for some controlled asymmetry, in the sense that the network contains a fixed number of parts wherein all queues are `exchangeable ' . This in turn allows us to derive stochastic bounds for a larger class of networks which satisfy less restrictive symmetry assumptions. These results are applied to practical examples of tree data networks and computer clusters.
We consider a system of processor-sharing queues with state-dependent service rates. These are allocated according to balanced fairness within a polymatroid capacity set. Balanced fairness is known to be both insensitive and Pareto-efficient in such systems , which ensures that the performance metrics, when computable, will provide robust insights into the real performance of the system considered. We first show that these performance metrics can be evaluated with a complexity that is polynomial in the system size if the system is partitioned into a finite number of parts , so that queues are exchangeable within each part and asymmetric across different parts . This in turn allows us to derive stochastic bounds for a larger class of systems which satisfy less restrictive symmetry assumptions. These results are applied to practical examples of tree data networks , such as backhaul networks of Internet service providers, and computer clusters.
[ { "type": "R", "before": "network", "after": "system", "start_char_pos": 14, "end_char_pos": 21 }, { "type": "R", "before": "networks", "after": "systems", "start_char_pos": 251, "end_char_pos": 259 }, { "type": "R", "before": "when we allow for some controlled asymmetry, in the sense that the network contains a fixed", "after": "if the system is partitioned into a finite", "start_char_pos": 522, "end_char_pos": 613 }, { "type": "R", "before": "wherein all queues are `exchangeable '", "after": ", so that queues are exchangeable within each part and asymmetric across different parts", "start_char_pos": 630, "end_char_pos": 668 }, { "type": "R", "before": "networks", "after": "systems", "start_char_pos": 744, "end_char_pos": 752 }, { "type": "A", "before": null, "after": ", such as backhaul networks of Internet service providers,", "start_char_pos": 876, "end_char_pos": 876 } ]
[ 0, 85, 171, 403, 805 ]
1610.00795
1
We propose a credit risk approach in which financial institutions , modelled as a portfolio of risky assets characterized by a probability of default and a correlation matrix, are the nodes of a network whose links are credit exposures that would be partially lost in case of neighbours' default . The systemic risk of the network is described in terms of the loss distribution over time obtained with a multi-period Montecarlo simulation process, during which the nodes can default, triggering a change in the probability of default in their neighbourhood as a contagion mechanism. In particular, we have considered the expected loss and introduced new measures of network stress called PDImpact and PDRank . They are expressed in monetary terms as the already known DebtRank and can be used to assess the importance of a node in the network. The model exhibits two regimes of 'weak' and 'strong' contagion, the latter characterized by the depletion of the loss distribution at intermediate losses in favour of fatter tails. Also, in systems with strong contagion , low average correlation between nodes corresponds to larger losses. This seems at odds with the diversification benefit obtained in standard credit risk models . Results suggest that the credit exposure network of the European global systemically important banks is in a weak contagion regime, but strong contagion could be approached in periods characterized by extreme volatility or in cases where the financial institutions are not adequately capitalized .
The interconnectedness of financial institutions affects instability and credit crises. To quantify systemic risk we introduce here the PD model, a dynamic model that combines credit risk techniques with a contagion mechanism on the network of exposures among banks. A potential loss distribution is obtained through a multi-period Monte Carlo simulation that considers the probability of default (PD) of the banks and their tendency of defaulting in the same time interval. A contagion process increases the PD of banks exposed toward distressed counterparties . The systemic risk is measured by statistics of the loss distribution , while the contribution of each node is quantified by the new measures PDRank and PDImpact. We illustrate how the model works on the network of the European Global Systemically Important Banks. For a certain range of the banks' capital and of their assets volatility, our results reveal the emergence of a strong contagion regime where lower default correlation between banks corresponds to higher losses. This is the opposite of the diversification benefits postulated by standard credit risk models used by banks and regulators who could therefore underestimate the capital needed to overcome a period of crisis, thereby contributing to the financial system instability .
[ { "type": "R", "before": "We propose a credit risk approach in which financial institutions , modelled as a portfolio of risky assets characterized by a", "after": "The interconnectedness of financial institutions affects instability and credit crises. To quantify systemic risk we introduce here the PD model, a dynamic model that combines credit risk techniques with a contagion mechanism on the network of exposures among banks. A potential loss distribution is obtained through a multi-period Monte Carlo simulation that considers the", "start_char_pos": 0, "end_char_pos": 126 }, { "type": "R", "before": "and a correlation matrix, are the nodes of a network whose links are credit exposures that would be partially lost in case of neighbours' default", "after": "(PD) of the banks and their tendency of defaulting in the same time interval. A contagion process increases the PD of banks exposed toward distressed counterparties", "start_char_pos": 150, "end_char_pos": 295 }, { "type": "R", "before": "of the network is described in terms", "after": "is measured by statistics", "start_char_pos": 316, "end_char_pos": 352 }, { "type": "R", "before": "over time obtained with a multi-period Montecarlo simulation process, during which the nodes can default, triggering a change in the probability of default in their neighbourhood as a contagion mechanism. In particular, we have considered the expected loss and introduced new measures of network stress called PDImpact and PDRank . They are expressed in monetary terms as the already known DebtRank and can be used to assess the importance of a node in the network. The model exhibits two regimes of 'weak' and 'strong' contagion,", "after": ", while the contribution of each node is quantified by the new measures PDRank and PDImpact. We illustrate how the model works on the network of", "start_char_pos": 378, "end_char_pos": 908 }, { "type": "R", "before": "latter characterized by the depletion of the loss distribution at intermediate losses in favour of fatter tails. Also, in systems with strong contagion , low average correlation between nodes corresponds to larger", "after": "European Global Systemically Important Banks. For a certain range of the banks' capital and of their assets volatility, our results reveal the emergence of a strong contagion regime where lower default correlation between banks corresponds to higher", "start_char_pos": 913, "end_char_pos": 1126 }, { "type": "R", "before": "seems at odds with the diversification benefit obtained in", "after": "is the opposite of the diversification benefits postulated by", "start_char_pos": 1140, "end_char_pos": 1198 }, { "type": "R", "before": ". Results suggest that the credit exposure network of the European global systemically important banks is in a weak contagion regime, but strong contagion could be approached in periods characterized by extreme volatility or in cases where the financial institutions are not adequately capitalized", "after": "used by banks and regulators who could therefore underestimate the capital needed to overcome a period of crisis, thereby contributing to the financial system instability", "start_char_pos": 1227, "end_char_pos": 1524 } ]
[ 0, 297, 582, 709, 843, 1025, 1134 ]
1610.00999
1
We consider the robust exponential utility maximization problem in discrete time: An investor maximizes the worst case expected exponential utility with respect to a family of non-dominated probabilistic models of her endowment by dynamically investing in a financial market . We show that, for any measurable random endowment (regardless of whether the problem is finite or not) an optimal strategy exists, a dual representation in terms of martingale measures holds true, and that the problem satisfies the dynamic programming principle .
We consider the robust exponential utility maximization problem in discrete time: An investor maximizes the worst case expected exponential utility with respect to a family of nondominated probabilistic models of her endowment by dynamically investing in a financial market , and statically in available options . We show that, for any measurable random endowment (regardless of whether the problem is finite or not) an optimal strategy exists, a dual representation in terms of (calibrated) martingale measures holds true, and that the problem satisfies the dynamic programming principle (in case of no options). Further it is shown that the value of the utility maximization problem converges to the robust superhedging price as the risk aversion parameter gets large, and examples of nondominated probabilistic models are discussed .
[ { "type": "R", "before": "non-dominated", "after": "nondominated", "start_char_pos": 176, "end_char_pos": 189 }, { "type": "A", "before": null, "after": ", and statically in available options", "start_char_pos": 275, "end_char_pos": 275 }, { "type": "A", "before": null, "after": "(calibrated)", "start_char_pos": 443, "end_char_pos": 443 }, { "type": "A", "before": null, "after": "(in case of no options). Further it is shown that the value of the utility maximization problem converges to the robust superhedging price as the risk aversion parameter gets large, and examples of nondominated probabilistic models are discussed", "start_char_pos": 541, "end_char_pos": 541 } ]
[ 0, 277 ]
1610.02320
1
Nucleation processes are at the heart of a large number of phenomena, from cloud formation to protein crystallization. A recently emerging area where nucleation is highly relevant is the initiation of filamentous protein self-assembly, a process that has broad implications from medicine to nanotechnology. As such, spontaneous nucleation of protein fibrils has received much attention in recent years with many theoretical and experimental studies focusing on the underlying physical principles. In this paper we make a step forward in this direction and explore the early time behaviour of filamentous protein growth in the context of nucleation theory. We first provide an overview of the thermodynamics and kinetics of spontaneous nucleation in protein filaments in the presence of one relevant degree of freedom, namely the cluster size. In this case, we review how key kinetic observables, such as the reaction order of spontaneous nucleation, are directly related to the physical size of the critical nucleus. We then focus on the increasingly prominent case of filament nucleation that includes a conformational conversion of the nucleating building-block as an additional slow step in the nucleation process. Using computer simulations, we study the concentration dependence of the nucleation rate. We find that, under these circumstances, the reaction order of spontaneous nucleation with respect to the free monomer does no longer relate to the overall physical size of the nucleating aggregate but rather to the subset of proteins within the aggregate that actively participate in the conformational conversion. Our results thus provide a novel interpretation of the kinetic descriptors of protein filament formation, including the reaction order of the nucleation step or the scaling exponent of lag times, and put into perspective current theoretical descriptions of protein aggregation.
Nucleation processes are at the heart of a large number of phenomena, from cloud formation to protein crystallization. A recently emerging area where nucleation is highly relevant is the initiation of filamentous protein self-assembly, a process that has broad implications from medicine to nanotechnology. As such, spontaneous nucleation of protein fibrils has received much attention in recent years with many theoretical and experimental studies focussing on the underlying physical principles. In this paper we make a step forward in this direction and explore the early time behaviour of filamentous protein growth in the context of nucleation theory. We first provide an overview of the thermodynamics and kinetics of spontaneous nucleation of protein filaments in the presence of one relevant degree of freedom, namely the cluster size. In this case, we review how key kinetic observables, such as the reaction order of spontaneous nucleation, are directly related to the physical size of the critical nucleus. We then focus on the increasingly prominent case of filament nucleation that includes a conformational conversion of the nucleating building-block as an additional slow step in the nucleation process. Using computer simulations, we study the concentration dependence of the nucleation rate. We find that, under these circumstances, the reaction order of spontaneous nucleation with respect to the free monomer does no longer relate to the overall physical size of the nucleating aggregate but rather to the portion of the aggregate that actively participates in the conformational conversion. Our results thus provide a novel interpretation of the common kinetic descriptors of protein filament formation, including the reaction order of the nucleation step or the scaling exponent of lag times, and put into perspective current theoretical descriptions of protein aggregation.
[ { "type": "R", "before": "focusing", "after": "focussing", "start_char_pos": 449, "end_char_pos": 457 }, { "type": "R", "before": "in", "after": "of", "start_char_pos": 746, "end_char_pos": 748 }, { "type": "R", "before": "subset of proteins within", "after": "portion of", "start_char_pos": 1524, "end_char_pos": 1549 }, { "type": "R", "before": "participate", "after": "participates", "start_char_pos": 1578, "end_char_pos": 1589 }, { "type": "A", "before": null, "after": "common", "start_char_pos": 1679, "end_char_pos": 1679 } ]
[ 0, 118, 306, 496, 655, 842, 1016, 1217, 1307, 1623 ]
1610.02940
1
The classical duality theory of Kantorovich and Kellerer for the classical optimal transport is generalized to an abstract framework and a characterization of the dual elements is provided. This abstract generalization is set in a Banach lattice %DIFDELCMD < \cal %%% X with a unit order . The primal problem is given as the supremum over a convex subset of the positive unit sphere of the topological dual of %DIFDELCMD < \cal %%% X and the dual problem is defined on the bidual of%DIFDELCMD < \cal %%% X . These results are then applied to several extensions of the classical optimal transport . In particular, an alternate proof of Kellerer's result is given without using the Choquet Theorem .
The classical duality theory of Kantorovich and Kellerer for the classical optimal transport is generalized to an abstract framework and a characterization of the dual elements is provided. This abstract generalization is set in a Banach lattice X with an order unit. The primal problem is given as the supremum over a convex subset of the positive unit sphere of the topological dual of X and the dual problem is defined on the bi-dual of X. These results are then applied to several extensions of the classical optimal transport.
[ { "type": "R", "before": "X with a unit order", "after": "with a order unit", "start_char_pos": 268, "end_char_pos": 287 }, { "type": "D", "before": "X", "after": null, "start_char_pos": 432, "end_char_pos": 433 }, { "type": "D", "before": "bidual of", "after": null, "start_char_pos": 473, "end_char_pos": 482 }, { "type": "R", "before": "X", "after": "bi-dual of", "start_char_pos": 504, "end_char_pos": 505 }, { "type": "D", "before": ". In particular, an alternate proof of Kellerer's result is given without using the Choquet Theorem", "after": null, "start_char_pos": 596, "end_char_pos": 695 } ]
[ 0, 189, 289, 597 ]
1610.03050
1
We introduce a class of flexible and tractable static factor models for the joint term structure of default probabilities, the factor copula models. These high dimensional models remain parsimonious with pair copula constructions, and nest numerous standard models as special cases. With finitely supported random losses, the loss distributions of credit portfolios and derivatives can be exactly and efficiently computed . Numerical examples on collateral debt obligation (CDO), CDO squared, and credit index swaption illustrate the versatility of our framework . An empirical exercise shows that a simple model specification can fit credit index tranche prices.
We present a class of flexible and tractable static factor models for the term structure of joint default probabilities, the factor copula models. These high dimensional models remain parsimonious with pair copula constructions, and nest many standard models as special cases. The loss distribution of a portfolio of contingent claims can be exactly and efficiently computed when individual losses are discretely supported on a finite grid . Numerical examples study the key features affecting the loss distribution and multi-name credit derivatives prices . An empirical exercise illustrates the flexibility of our approach by fitting credit index tranche prices.
[ { "type": "R", "before": "introduce", "after": "present", "start_char_pos": 3, "end_char_pos": 12 }, { "type": "D", "before": "joint", "after": null, "start_char_pos": 76, "end_char_pos": 81 }, { "type": "A", "before": null, "after": "joint", "start_char_pos": 100, "end_char_pos": 100 }, { "type": "R", "before": "numerous", "after": "many", "start_char_pos": 241, "end_char_pos": 249 }, { "type": "R", "before": "With finitely supported random losses, the loss distributions of credit portfolios and derivatives", "after": "The loss distribution of a portfolio of contingent claims", "start_char_pos": 284, "end_char_pos": 382 }, { "type": "A", "before": null, "after": "when individual losses are discretely supported on a finite grid", "start_char_pos": 423, "end_char_pos": 423 }, { "type": "R", "before": "on collateral debt obligation (CDO), CDO squared, and credit index swaption illustrate the versatility of our framework", "after": "study the key features affecting the loss distribution and multi-name credit derivatives prices", "start_char_pos": 445, "end_char_pos": 564 }, { "type": "R", "before": "shows that a simple model specification can fit", "after": "illustrates the flexibility of our approach by fitting", "start_char_pos": 589, "end_char_pos": 636 } ]
[ 0, 149, 283, 425, 566 ]
1610.03086
1
Here we develop an option pricing method based on Legendre series expansion of the density function. The key insight, relying on the close relation of the characteristic function with the series coefficients, allows to recover the density function rapidly and accurately. Approximations formulas for pricing European type option are derivedand a robust, stable algorithm for its implementation is proposed. An error analysis on the option pricing provides an estimate for the rate of convergence, which depends essentially on the smoothness of the density function and not on the payoff function . The numerical experiments show exponential convergence.
Here we develop an option pricing method based on Legendre series expansion of the density function. The key insight, relying on the close relation of the characteristic function with the series coefficients, allows us to recover the density function rapidly and accurately. Based on this representation of the density function, approximation formulas for pricing European-type options are derived. To obtain highly accurate results for the European call option, the implementation involves integrating high-degree Legendre polynomials against an exponential function. Some numerical instabilities arise because of serious subtractive cancellations in its formulation (96) in proposition 7.1. To overcome this difficulty, we rewrite this quantity as the solution of a second-order linear difference equation and solve it using a robust and stable algorithm from Olver. The derivation of the pricing method has been accompanied by an error analysis. Error bounds have been derived, and the study relies on smoothness properties which are provided not by the payoff functions but rather by the density function of the underlying stochastic models. This is particularly relevant for option pricing, where the payoffs of the contracts are generally not smooth functions. The numerical experiments on a class of models widely used in quantitative finance show exponential convergence.
[ { "type": "R", "before": "Approximations", "after": "Based on this representation for the density function, approximations", "start_char_pos": 272, "end_char_pos": 286 }, { "type": "R", "before": "option are derivedand a robust, stable algorithm for its implementation is proposed. An error analysis on the option pricing provides an estimate for the rate of convergence, which depends essentially on the smoothness", "after": "options are derived. To obtain highly accurate result for European call option, the implementation involves integrating high degree Legendre polynomials against exponential function. Some numerical instabilities arise because of serious subtractive cancellations in its formulation (96) in proposition 7.1. To overcome this difficulty, we rewrite this quantity as solution of a second-order linear difference equation and solve it using a robust and stable algorithm from Olver. Derivation of the pricing method has been accompanied by an error analysis. Errors bounds have been derived and the study relies more on smoothness properties which are not provided by the payoff? functions, but rather by the density function", "start_char_pos": 322, "end_char_pos": 540 }, { "type": "R", "before": "density function and not on the payoff function", "after": "underlying stochastic models. This is particularly relevant for options pricing where the payoff of the contract are generally not smooth functions", "start_char_pos": 548, "end_char_pos": 595 }, { "type": "A", "before": null, "after": "on a class of models widely used in quantitative finance", "start_char_pos": 624, "end_char_pos": 624 } ]
[ 0, 100, 271, 406, 597 ]
1610.03230
1
The purpose of this work is to investigate the pricing of financial options under the 2-hypergeometric stochastic volatility model. This is an analytically tractable model which has recently been introduced as an attempt to tackle one of the most serious shortcomings of the famous Black and Scholes option pricing model: the fact that it does not reproduce the volatility smile and skew effects which are commonly seen in observed price datafrom option markets. After a review of the basic theory of option pricing under stochastic volatility, we employ the regular perturbation method from asymptotic analysis of partial differential equations to derive an explicit and easily computable approximate formula for the pricing of barrier options under the 2-hypergeometric stochastic volatility model. The asymptotic convergence of the method is proved under appropriate regularity conditions, and a multi-stage method for improving the quality of the approximation is discussed. Numerical examples are also provided.
We investigate the pricing of financial options under the 2-hypergeometric stochastic volatility model. This is an analytically tractable model that reproduces the volatility smile and skew effects observed in empirical market data. Using a regular perturbation method from asymptotic analysis of partial differential equations , we derive an explicit and easily computable approximate formula for the pricing of barrier options under the 2-hypergeometric stochastic volatility model. The asymptotic convergence of the method is proved under appropriate regularity conditions, and a multi-stage method for improving the quality of the approximation is discussed. Numerical examples are also provided.
[ { "type": "R", "before": "The purpose of this work is to", "after": "We", "start_char_pos": 0, "end_char_pos": 30 }, { "type": "R", "before": "which has recently been introduced as an attempt to tackle one of the most serious shortcomings of the famous Black and Scholes option pricing model: the fact that it does not reproduce", "after": "that reproduces", "start_char_pos": 172, "end_char_pos": 357 }, { "type": "R", "before": "which are commonly seen in observed price datafrom option markets. After a review of the basic theory of option pricing under stochastic volatility, we employ the", "after": "observed in empirical market data. Using a", "start_char_pos": 396, "end_char_pos": 558 }, { "type": "R", "before": "to", "after": ", we", "start_char_pos": 646, "end_char_pos": 648 } ]
[ 0, 131, 462, 800, 978 ]
1610.03596
1
A Markovian lattice model for photoreceptor cells is introduced to describe the growth of mosaic patterns on fish retina. The radial stripe pattern observed in wild-type zebrafish is shown to be selected naturally during the retina growth, against the geometrically equivalent, circular stripe pattern. The mechanism of such dynamical pattern selection is clarified on the basis of both numerical simulations and theoretical analyses, finding that successive emergence of local defects plays a critical role to realize the wild-type pattern . Physical and biological implications are also discussed .
A Markovian lattice model for photoreceptor cells is introduced to describe the growth of mosaic patterns on fish retina. The radial stripe pattern observed in wild-type zebrafish is shown to be selected naturally during the retina growth, against the geometrically equivalent, circular stripe pattern. The mechanism of such dynamical pattern selection is clarified on the basis of both numerical simulations and theoretical analyses, which find that the successive emergence of local defects plays a critical role in the realization of the wild-type pattern .
[ { "type": "R", "before": "finding that", "after": "which find that the", "start_char_pos": 435, "end_char_pos": 447 }, { "type": "R", "before": "to realize the", "after": "in the realization of the", "start_char_pos": 508, "end_char_pos": 522 }, { "type": "D", "before": ". Physical and biological implications are also discussed", "after": null, "start_char_pos": 541, "end_char_pos": 598 } ]
[ 0, 121, 302 ]
1610.04085
1
We provide a characterization in terms of Fatou closedness for weakly closed monotone sets in the space of \Pcal-quasisure bounded random variables, where \Pcal is a (possibly non-dominated) class of probability measures. Our results can be applied to obtain a topological deduction of the First Fundamental Theorem of Asset Pricing for discrete time processes and the robust dual representation of (quasi)convex increasing functionals .
We provide a characterization in terms of Fatou closedness for weakly closed monotone convex sets in the space of \Pcal-quasisure bounded random variables, where \Pcal is a (possibly non-dominated) class of probability measures. We illustrate the relevance of our results by applications in the field of Mathematical Finance .
[ { "type": "A", "before": null, "after": "convex", "start_char_pos": 86, "end_char_pos": 86 }, { "type": "R", "before": "Our results can be applied to obtain a topological deduction of the First Fundamental Theorem of Asset Pricing for discrete time processes and the robust dual representation of (quasi)convex increasing functionals", "after": "We illustrate the relevance of our results by applications in the field of Mathematical Finance", "start_char_pos": 223, "end_char_pos": 436 } ]
[ 0, 222 ]
1610.04085
2
We provide a characterization in terms of Fatou closedness for weakly closed monotone convex sets in the space of %DIFDELCMD < \Pcal%%% -quasisure bounded random variables, where %DIFDELCMD < \Pcal %%% is a (possibly non-dominated) class of probability measures. We illustrate the relevance of our results by applications in the field of Mathematical Finance.
We provide a characterization in terms of Fatou closedness for weakly closed monotone convex sets in the space of %DIFDELCMD < \Pcal%%% \mathcal{P -quasisure bounded random variables, where %DIFDELCMD < \Pcal %%% \mathcal{P is a (possibly non-dominated) class of probability measures. We illustrate the relevance of our results by applications in the field of Mathematical Finance.
[ { "type": "A", "before": null, "after": "\\mathcal{P", "start_char_pos": 136, "end_char_pos": 136 }, { "type": "A", "before": null, "after": "\\mathcal{P", "start_char_pos": 203, "end_char_pos": 203 } ]
[ 0, 264 ]
1610.04085
3
We provide a characterization in terms of Fatou closedness for weakly closed monotone convex sets in the space of P-quasisure bounded random variables, where P is a (possibly non-dominated) class of probability measures. We illustrate the relevance of our results by applications in the field of Mathematical Finance .
We provide a characterization in terms of Fatou closedness for weakly closed monotone convex sets in the space of P-quasisure bounded random variables, where P is a (possibly non-dominated) class of probability measures. Applications of our results lie within robust versions of the Fundamental Theorem of Asset Pricing or the dual representation of convex risk measures.
[ { "type": "R", "before": "We illustrate the relevance", "after": "Applications", "start_char_pos": 221, "end_char_pos": 248 }, { "type": "R", "before": "by applications in the field of Mathematical Finance", "after": "lie within robust versions the Fundamental Theorem of Asset Pricing or dual representation of convex risk measures", "start_char_pos": 264, "end_char_pos": 316 } ]
[ 0, 220 ]
1610.04982
1
Device-to-Device (D2D) communication, which enables direct communication between nearby mobile devices, is an attractive add-on component to improve spectrum efficiency and user experience by reusing licensed cellular spectrum . Nowadays, LTE-unlicensed (LTE-U) emerges to extend the cellular network to the unlicensed spectrum to alleviate the spectrum scarcity issue. In this paper, we propose to enable D2D communication in unlicensed spectrum (D2D-U) as an underlay of the uplink cellular network for further booming the network capacity. A sensing-based protocol is designed to support the unlicensed channel access for both LTE users and D2D pairs, based on which we investigate the subchannel allocation problem to maximize the sum rate of LTE users and D2D pairs while taking into account their interference to the existing Wi-Fi systems. Specifically, we formulate the subchannel allocation as a many-to-many matching problem with externalities, and develop an iterative usersubchannel swap algorithm. Analytical and simulation results show that the proposed D2D-U scheme can significantly improve the network capacity .
Device-to-Device (D2D) communication, which enables direct communication between nearby mobile devices, is an attractive add-on component to improve spectrum efficiency and user experience by reusing licensed cellular spectrum in 5G systems. In this paper, we propose to enable D2D communication in unlicensed spectrum (D2D-U) as an underlay of the uplink LTE network to further boost the network capacity. A sensing-based protocol is designed to support unlicensed channel access for both LTE and D2D users. We further investigate the subchannel allocation problem to maximize the sum rate of LTE and D2D users while taking into account their interference to existing Wi-Fi systems. Specifically, we formulate the subchannel allocation as a many-to-many matching problem with externalities, and develop an iterative user-subchannel swap algorithm. Analytical and simulation results show that the proposed D2D-U scheme can significantly improve the system sum-rate.
[ { "type": "R", "before": ". Nowadays, LTE-unlicensed (LTE-U) emerges to extend the cellular network to the unlicensed spectrum to alleviate the spectrum scarcity issue.", "after": "in 5G system.", "start_char_pos": 227, "end_char_pos": 369 }, { "type": "R", "before": "cellular", "after": "LTE", "start_char_pos": 484, "end_char_pos": 492 }, { "type": "D", "before": "users", "after": null, "start_char_pos": 634, "end_char_pos": 639 }, { "type": "R", "before": "pairs, based on which we", "after": "users. We further", "start_char_pos": 648, "end_char_pos": 672 }, { "type": "D", "before": "users", "after": null, "start_char_pos": 751, "end_char_pos": 756 }, { "type": "R", "before": "pairs", "after": "users", "start_char_pos": 765, "end_char_pos": 770 }, { "type": "R", "before": "usersubchannel", "after": "user-subchannel", "start_char_pos": 980, "end_char_pos": 994 }, { "type": "R", "before": "network capacity", "after": "system sum-rate", "start_char_pos": 1111, "end_char_pos": 1127 } ]
[ 0, 228, 369, 542, 846, 1010 ]
1610.05018
1
The optimal investment problem is one of the most important problems in mathematical finance. The main contribution of the present paper is an explicit formula for the optimal portfolio process. Our optimal investment problem is that of maximizing the expected value of a standard general utility function of terminal wealth in a standard complete Wiener driven financial market . In order to derive the formula for the optimal portfolio we use the recently developed functional It\^o calculus and more specifically an explicit martingale representation theorem. A main component in the formula for the optimal portfolio is a vertical derivative with respect to the driving Wiener process. The vertical derivative is an important component of functional It\^o calculus .
We consider a standard optimal investment problem in a complete financial market driven by a Wiener process and derive an explicit formula for the optimal portfolio process in terms of the vertical derivative from functional It\^o calculus. An advantage of this approach compared to the Malliavin calculus approach is that it relies only on an integrability condition.
[ { "type": "R", "before": "The", "after": "We consider a standard", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "is one of the most important problems in mathematical finance. The main contribution of the present paper is an explicit formula for the optimal portfolio process. Our optimal investment problem is that of maximizing the expected value of a standard general utility function of terminal wealth in a standard complete Wiener driven financial market . In order to derive the formula for the optimal portfolio we use the recently developed functional It\\^o calculus and more specifically an explicit martingale representation theorem. A main component in the", "after": "in a complete financial market driven by a Wiener process and derive an explicit", "start_char_pos": 31, "end_char_pos": 586 }, { "type": "R", "before": "is a vertical derivative with respect to the driving Wiener process. The vertical derivative is an important component of functional It\\^o calculus", "after": "process in terms of the vertical derivative from functional It^o calculus. An advantage with this approach compared to the Malliavin calculus approach is that it relies only on an integrability condition", "start_char_pos": 621, "end_char_pos": 768 } ]
[ 0, 93, 194, 380, 562, 689 ]
1610.05494
1
Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link density . We then introduce our core technique for reconstructing in detail both the topology and the link weights of the unknown network . When tested on real economic and financial data , our method achieves a remarkable accuracy and is very robust with respect to the nodes sampled , thus representing a reliable practical tool whenever the available topological information is restricted to a small subset of nodes.
Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link densities . We then introduce our core technique for reconstructing both the topology and the link weights of the unknown network in detail . When tested on real economic and financial data sets , our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets , thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.
[ { "type": "A", "before": null, "after": "(global)", "start_char_pos": 871, "end_char_pos": 871 }, { "type": "R", "before": "density", "after": "densities", "start_char_pos": 1115, "end_char_pos": 1122 }, { "type": "D", "before": "in detail", "after": null, "start_char_pos": 1181, "end_char_pos": 1190 }, { "type": "A", "before": null, "after": "in detail", "start_char_pos": 1253, "end_char_pos": 1253 }, { "type": "A", "before": null, "after": "sets", "start_char_pos": 1304, "end_char_pos": 1304 }, { "type": "R", "before": "nodes sampled", "after": "sampled subsets", "start_char_pos": 1388, "end_char_pos": 1401 }, { "type": "R", "before": "a small subset", "after": "small portions", "start_char_pos": 1512, "end_char_pos": 1526 } ]
[ 0, 151, 345, 620, 746, 959, 1124, 1255 ]
1610.06773
1
Markov state models (MSMs) and Master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach of conformation dynamics (VAC) and its special case the time-lagged independent component analysis (TICA), which allow us to approximate molecular kinetics and reaction coordinates by linear combinations of smooth basis functions or order parameters. While MSMs can be learned from trajectories whose starting points are not sampled from an equilibrium ensemble, TICA and VAC have as yet not enjoyed this property, and thus previous TICA /VAC estimates have been strongly biased when used with ensembles of short trajectories . Here, we employ Koopman operator theory and ideas from dynamic mode decomposition (DMD) to show how TICA /VAC can be used to estimate the unbiased equilibrium distribution from short-trajectory data and further this result in order to construct unbiased estimators for expectations, covariance matrices, TICA/VAC eigenvectors, relaxation timescales , and reaction coordinates .
Markov state models (MSMs) and Master equation models are popular approaches to approximate molecular kinetics, equilibria, metastable states, and reaction coordinates in terms of a state space discretization usually obtained by clustering. Recently, a powerful generalization of MSMs has been introduced, the variational approach (VA) of molecular kinetics and its special case, the time-lagged independent component analysis (TICA), which allow us to approximate slow collective variables and molecular kinetics by linear combinations of smooth basis functions or order parameters. While it is known how to estimate MSMs from trajectories whose starting points are not sampled from an equilibrium ensemble, this has not yet been the case for TICA and the VA. Previous estimates from short trajectories have been strongly biased and thus not variationally optimal. Here, we employ Koopman operator theory and ideas from dynamic mode decomposition (DMD) to extend the VA and TICA to non-equilibrium data. The main insight is that the VA and TICA provide a coefficient matrix that we call the Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model can be used to compute a stationary vector to reweight the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even with short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provides estimates of relaxation timescales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA and the linear VA, and allow molecular kinetics to be described without a cluster discretization.
[ { "type": "R", "before": "of conformation dynamics (VAC)", "after": "(VA) of molecular kinetics", "start_char_pos": 331, "end_char_pos": 361 }, { "type": "R", "before": "molecular kinetics and reaction coordinates", "after": "slow collective variables and molecular kinetics", "start_char_pos": 468, "end_char_pos": 511 }, { "type": "R", "before": "MSMs can be learned", "after": "it is known how to estimate MSMs", "start_char_pos": 588, "end_char_pos": 607 }, { "type": "R", "before": "TICA and VAC have as yet not enjoyed this property, and thus previous TICA /VAC estimates", "after": "this has not yet been the case for TICA and the VA. Previous estimates from short trajectories,", "start_char_pos": 694, "end_char_pos": 783 }, { "type": "R", "before": "when used with ensembles of short trajectories", "after": "and thus not variationally optimal", "start_char_pos": 810, "end_char_pos": 856 }, { "type": "R", "before": "show how TICA /VAC", "after": "extend the VA and TICA to non-equilibrium data. The main insight is that the VA and TICA provide a coefficient matrix that we call Koopman model, as it approximates the underlying dynamical (Koopman) operator in conjunction with the basis set used. This Koopman model", "start_char_pos": 950, "end_char_pos": 968 }, { "type": "R", "before": "estimate the unbiased equilibrium distribution from short-trajectory data and further this result in order to construct unbiased estimators for expectations, covariance matrices, TICA/VAC eigenvectors, relaxation timescales , and reaction coordinates", "after": "compute a stationary vector to reweight the data to equilibrium. From such a Koopman-reweighted sample, equilibrium expectation values and variationally optimal reversible Koopman models can be constructed even with short simulations. The Koopman model can be used to propagate densities, and its eigenvalue decomposition provide estimates of relaxation timescales and slow collective variables for dimension reduction. Koopman models are generalizations of Markov state models, TICA and the linear VA and allow molecular kinetics to be described without a cluster discretization", "start_char_pos": 984, "end_char_pos": 1234 } ]
[ 0, 240, 581, 858 ]
1610.06805
1
This paper studies a robust continuous-time Markowitz portfolio selection problem where the model uncertainty carries on the variance-covariance matrix of the risky assets. This problem is formulated into a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide explicit solutions for the optimal robust portfolio strategies in the case of uncertain volatilities and ambiguous correlation between two risky assets , and then derive the robust efficient frontier in closed-form . We obtain a lower bound for the Sharpe ratio of any robust efficient portfolio strategy , and compare the performance of Sharpe ratios for a robust investor and for an investor with a misspecified model. MSC Classification: 91G10, 91G80, 60H30
This paper studies a robust continuous-time Markowitz portfolio selection pro\-blem where the model uncertainty carries on the covariance matrix of multiple risky assets. This problem is formulated into a min-max mean-variance problem over a set of non-dominated probability measures that is solved by a McKean-Vlasov dynamic programming approach, which allows us to characterize the solution in terms of a Bellman-Isaacs equation in the Wasserstein space of probability measures. We provide explicit solutions for the optimal robust portfolio strategies and illustrate our results in the case of uncertain volatilities and ambiguous correlation between two risky assets . We then derive the robust efficient frontier in closed-form , and obtain a lower bound for the Sharpe ratio of any robust efficient portfolio strategy . Finally, we compare the performance of Sharpe ratios for a robust investor and for an investor with a misspecified model. MSC Classification: 91G10, 91G80, 60H30
[ { "type": "R", "before": "problem", "after": "pro\\-blem", "start_char_pos": 74, "end_char_pos": 81 }, { "type": "R", "before": "variance-covariance matrix of the", "after": "covariance matrix of multiple", "start_char_pos": 125, "end_char_pos": 158 }, { "type": "A", "before": null, "after": "and illustrate our results", "start_char_pos": 557, "end_char_pos": 557 }, { "type": "R", "before": ", and", "after": ". We", "start_char_pos": 647, "end_char_pos": 652 }, { "type": "R", "before": ". We", "after": ", and", "start_char_pos": 710, "end_char_pos": 714 }, { "type": "R", "before": ", and", "after": ". Finally, we", "start_char_pos": 800, "end_char_pos": 805 } ]
[ 0, 172, 482, 711, 915 ]
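As an illustrative sketch of the ambiguous-correlation case mentioned in the abstract above, the snippet below solves a static two-asset min-max mean-variance problem in which an adversary picks the worst correlation in an interval. All parameter values are invented and a zero-rate cash account is assumed to absorb the residual wealth; the paper's continuous-time McKean-Vlasov analysis is far more general.

import numpy as np
from scipy.optimize import minimize_scalar

mu  = np.array([0.06, 0.10])          # hypothetical expected returns
sig = np.array([0.15, 0.25])          # hypothetical volatilities
rho_lo, rho_hi = -0.3, 0.6            # ambiguity interval for the correlation
target = 0.08                         # target expected portfolio return

def worst_case_var(w1):
    w2 = (target - w1 * mu[0]) / mu[1]        # meet the return target
    cross = 2 * w1 * w2 * sig[0] * sig[1]
    rho = rho_hi if cross > 0 else rho_lo     # adversary maximizes the variance
    return (w1 * sig[0]) ** 2 + (w2 * sig[1]) ** 2 + rho * cross

res = minimize_scalar(worst_case_var, bounds=(-2.0, 2.0), method="bounded")
w1 = res.x
w2 = (target - w1 * mu[0]) / mu[1]
print("robust weights:", w1, w2, "worst-case variance:", res.fun)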
1610.07277
1
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as preceding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood type method to parameterize the side chain model using input data from high resolution protein crystal structures. The potential applications of our method are illustrated by simulations of small proteins using replica exchange techniques .
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom, the protein backbone dynamics are simulated as preceding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape, resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood method to parameterize the side chain model using input data from high resolution protein crystal structures. We demonstrate state-of-the-art accuracy for predicting \chi_1 rotamer states while consuming only milliseconds of CPU time. We also show that the resulting free energies of side chains is sufficiently accurate for de novo folding of some small proteins .
[ { "type": "D", "before": "type", "after": null, "start_char_pos": 970, "end_char_pos": 974 }, { "type": "R", "before": "The potential applications of our method are illustrated by simulations of small proteins using replica exchange techniques", "after": "We demonstrate state-of-the-art accuracy for predicting \\chi_1 rotamer states while consuming only milliseconds of CPU time. We also show that the resulting free energies of side chains is sufficiently accurate for de novo folding of some small proteins", "start_char_pos": 1085, "end_char_pos": 1208 } ]
[ 0, 279, 623, 674, 847, 1084 ]
1610.07277
2
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a new methodology that computes a self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom , the protein backbone dynamics are simulated as preceding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly ; hence, the protein backbone dynamics traverse a greatly smoothed energetic landscape , resulting in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood method to parameterize the side chain model using input data from high resolution protein crystal structures. We demonstrate state-of-the-art accuracy for predicting \chi_1 rotamer states while consuming only milliseconds of CPU time. We also show that the resulting free energies of side chains is sufficiently accurate for de novo folding of some small proteins.
To address the large gap between time scales that can be easily reached by molecular simulations and those required to understand protein dynamics, we propose a rapid self-consistent approximation of the side chain free energy at every integration step. In analogy with the adiabatic Born-Oppenheimer approximation for electronic structure , the protein backbone dynamics are simulated as preceding according to the dictates of the free energy of an instantaneously-equilibrated side chain potential. The side chain free energy is computed on the fly , allowing the protein backbone dynamics to traverse a greatly smoothed energetic landscape . This results in extremely rapid equilibration and sampling of the Boltzmann distribution. Because our method employs a reduced model involving single-bead side chains, we also provide a novel, maximum-likelihood method to parameterize the side chain model using input data from high resolution protein crystal structures. We demonstrate state-of-the-art accuracy for predicting \chi_1 rotamer states while consuming only milliseconds of CPU time. We also show that the resulting free energies of side chains is sufficiently accurate for de novo folding of some proteins.
[ { "type": "R", "before": "new methodology that computes a", "after": "rapid", "start_char_pos": 161, "end_char_pos": 192 }, { "type": "R", "before": "in which the nuclear dynamics are governed by the energy of the instantaneously-equilibrated electronic degrees of freedom", "after": "for electronic structure", "start_char_pos": 341, "end_char_pos": 463 }, { "type": "R", "before": "; hence,", "after": ", allowing", "start_char_pos": 675, "end_char_pos": 683 }, { "type": "A", "before": null, "after": "to", "start_char_pos": 714, "end_char_pos": 714 }, { "type": "R", "before": ", resulting", "after": ". This results", "start_char_pos": 763, "end_char_pos": 774 }, { "type": "D", "before": "small", "after": null, "start_char_pos": 1323, "end_char_pos": 1328 } ]
[ 0, 279, 624, 676, 851, 1083, 1208 ]
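The "instantaneously-equilibrated side chain potential" in the abstract above amounts to Boltzmann-averaging over discrete rotamer states at every step. The sketch below shows only that averaging for one side chain with hypothetical rotamer energies; the actual method couples this free energy to the backbone dynamics of a parameterized single-bead model.

import numpy as np

def sidechain_free_energy(energies_kcal, T=300.0):
    # Free energy F = -kT * log(sum_i exp(-E_i / kT)) over rotamer states,
    # returned together with the Boltzmann probabilities of the states.
    kT = 0.0019872041 * T                          # Boltzmann constant in kcal/(mol K)
    z = np.exp(-(energies_kcal - energies_kcal.min()) / kT)
    F = energies_kcal.min() - kT * np.log(z.sum())
    return F, z / z.sum()

F, p = sidechain_free_energy(np.array([0.0, 0.8, 2.5]))   # three hypothetical chi_1 wells
print(F, p)                                               # argmax(p) = predicted rotamer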
1610.07694
1
We develop an efficient method for solving dynamic portfolio selection problems in the presence of transaction cost, liquidity cost and market impact. Our method , based on least-squares Monte Carlo simulation, has no restriction on return dynamics, portfolio constraints, intermediate consumption and investor's objective. We model return dynamics as exogenous state variables and model portfolio weights, price dynamics and portfolio value as endogenous state variables . This separation allows for incorporation of any formation of transaction cost, liquidity cost and market impact. We first perform a forward simulation for both exogenous and endogenous state variables, then use a least-squares regression to approximate the backward recursive dynamic programs on a discrete grid of controls. Finally, we use a local interpolation and an adaptive refinement grid to enhance the optimal allocation estimates. The computational runtime of this framework grows polynomially with dimension. Its viability is illustrated on a realistic portfolio allocation example with twelve risky assets .
We present a simulation-and-regression method for solving dynamic portfolio allocation problems in the presence of general transaction costs, liquidity costs and market impacts. This method extends the classical least squares Monte Carlo algorithm to incorporate switching costs, corresponding to transaction costs and transient liquidity costs, as well as multiple endogenous state variables , namely the portfolio value and the asset prices subject to permanent market impacts. To do so, we improve the accuracy of the control randomization approach in the case of discrete controls, and propose a global iteration procedure to further improve the allocation estimates. We validate our numerical method by solving a realistic cash-and-stock portfolio with a power-law liquidity model. We quantify the certainty equivalent losses associated with ignoring liquidity effects, and illustrate how our dynamic allocation protects the investor's capital under illiquid market conditions. Lastly, we analyze, under different liquidity conditions, the sensitivities of certainty equivalent returns and optimal allocations with respect to trading volume, stock price volatility, initial investment amount, risk-aversion level and investment horizon .
[ { "type": "R", "before": "develop an efficient", "after": "present a simulation-and-regression", "start_char_pos": 3, "end_char_pos": 23 }, { "type": "R", "before": "selection", "after": "allocation", "start_char_pos": 61, "end_char_pos": 70 }, { "type": "R", "before": "transaction cost, liquidity cost and market impact. Our method , based on least-squares Monte Carlo simulation, has no restriction on return dynamics, portfolio constraints, intermediate consumption and investor's objective. We model return dynamics as exogenous state variables and model portfolio weights, price dynamics and portfolio value as", "after": "general transaction costs, liquidity costs and market impacts. This method extends the classical least squares Monte Carlo algorithm to incorporate switching costs, corresponding to transaction costs and transient liquidity costs, as well as multiple", "start_char_pos": 99, "end_char_pos": 444 }, { "type": "R", "before": ". This separation allows for incorporation of any formation of transaction cost, liquidity cost and market impact. We first perform a forward simulation for both exogenous and endogenous state variables, then use a least-squares regression to approximate the backward recursive dynamic programs on a discrete grid of controls. Finally, we use a local interpolation and an adaptive refinement grid to enhance the optimal", "after": ", namely the portfolio value and the asset prices subject to permanent market impacts. To do so, we improve the accuracy of the control randomization approach in the case of discrete controls, and propose a global iteration procedure to further improve the", "start_char_pos": 472, "end_char_pos": 891 }, { "type": "R", "before": "The computational runtime of this framework grows polynomially with dimension. Its viability is illustrated on a realistic portfolio allocation example with twelve risky assets", "after": "We validate our numerical method by solving a realistic cash-and-stock portfolio with a power-law liquidity model. We quantify the certainty equivalent losses associated with ignoring liquidity effects, and illustrate how our dynamic allocation protects the investor's capital under illiquid market conditions. Lastly, we analyze, under different liquidity conditions, the sensitivities of certainty equivalent returns and optimal allocations with respect to trading volume, stock price volatility, initial investment amount, risk-aversion level and investment horizon", "start_char_pos": 914, "end_char_pos": 1090 } ]
[ 0, 150, 323, 473, 586, 798, 913, 992 ]
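The core ingredient of a simulation-and-regression (least squares Monte Carlo) scheme is a backward regression of next-period values on the simulated state, followed by a choice among discrete controls that pays a switching cost. The sketch below shows only that ingredient with a plain polynomial basis; the control randomization and global iteration described in the abstract are not reproduced.

import numpy as np

def regress_continuation(state, value_next, degree=2):
    # Approximate E[V_{t+1} | state_t] by least squares on a polynomial basis.
    A = np.vander(state, degree + 1)
    coef, *_ = np.linalg.lstsq(A, value_next, rcond=None)
    return lambda s: float(np.vander(np.atleast_1d(s), degree + 1) @ coef)

def best_allocation(state, prev_alloc, n_allocs, cont_funcs, switch_cost):
    # Pick the discrete allocation maximizing continuation value net of the
    # cost paid whenever the allocation is switched.
    vals = [cont_funcs[a](state) - switch_cost * (a != prev_alloc)
            for a in range(n_allocs)]
    return int(np.argmax(vals))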
1610.08416
1
Based on a recently proposed q-dependent detrended cross-correlation coefficient \rho_q (J.~Kwapie\'n, P.~O\'swi\k{ecimka, S.~Dro\.zd\.z, Phys. Rev.~E 92} , 052815 (2015) ), we introduce a family of q-dependent minimum spanning trees (qMST) that are selective to cross-correlations between different fluctuation amplitudes and different time scales of multivariate data . They inherit this ability directly from the coefficients \rho_q that are processed here to construct a distance matrix being the input to the MST-constructing Kruskal's algorithm. In order to illustrate their performance, we apply the qMSTs to sample empirical data from the American stock market and discuss the results. We show that the qMST graphs can complement \rho_q in detection of "hidden" correlations that cannot be observed by the MST graphs based on \rm DCCA and, therefore, they can be useful in many areas where the multivariate cross-correlations are of interest (e. g. , in portfolio analysis)\ne .
Based on a recently proposed q-dependent detrended cross-correlation coefficient \rho_q cimka, S.~Dro\.zd\.z, Phys. Rev.~E 92} , we generalize the concept of minimum spanning tree (MST) by introducing a family of q-dependent minimum spanning trees (qMST) that are selective to cross-correlations between different fluctuation amplitudes and different time scales . They inherit this ability directly from the coefficients \rho_q that are processed here to construct a distance matrix . Conventional MST with detrending corresponds in this context to q=2. We apply the qMSTs to sample empirical data from the stock market and discuss the results. We show that the qMST graphs can complement \rho_q in disentangling correlations that cannot be observed by the MST graphs based on \rm DCCA and, therefore, they can be useful in many areas where the multivariate cross-correlations are of interest . We apply our method to data from the stock market and obtain more information about correlation structure of the data than by using q=2 only. We show that two sets of signals that differ from each other statistically can give comparable trees for q=2, while only by using the trees for q\ne 2 we become able to distinguish between these sets. We also show that a family of qMSTs for a range of q express the diversity of correlations in a manner resembling the multifractal analysis, where one computes a spectrum of the generalized fractal dimensions, the generalized Hurst exponents, or the multifractal singularity spectra: the more diverse the correlations are, the more variable the tree topology is for different qs. Our analysis exhibits that the stocks belonging to the same or similar industrial sectors are correlated via the fluctuations of moderate amplitudes, while the largest fluctuations often happen to synchronize in those stocks that do not necessarily belong to the same industry .
[ { "type": "D", "before": "(J.~Kwapie\\'n, P.~O\\'swi\\k{e", "after": null, "start_char_pos": 88, "end_char_pos": 116 }, { "type": "R", "before": "052815 (2015) ), we introduce", "after": "we generalize the concept of minimum spanning tree (MST) by introducing", "start_char_pos": 157, "end_char_pos": 186 }, { "type": "D", "before": "of multivariate data", "after": null, "start_char_pos": 349, "end_char_pos": 369 }, { "type": "R", "before": "being the input to the MST-constructing Kruskal's algorithm. In order to illustrate their performance, we", "after": ". Conventional MST with detrending corresponds in this context to q=2. We", "start_char_pos": 491, "end_char_pos": 596 }, { "type": "D", "before": "American", "after": null, "start_char_pos": 647, "end_char_pos": 655 }, { "type": "R", "before": "detection of \"hidden\"", "after": "disentangling", "start_char_pos": 748, "end_char_pos": 769 }, { "type": "R", "before": "(e. g. , in portfolio analysis)", "after": ". We apply our method to data from the stock market and obtain more information about correlation structure of the data than by using q=2 only. We show that two sets of signals that differ from each other statistically can give comparable trees for q=2, while only by using the trees for q", "start_char_pos": 950, "end_char_pos": 981 }, { "type": "A", "before": null, "after": "2 we become able to distinguish between these sets. We also show that a family of qMSTs for a range of q express the diversity of correlations in a manner resembling the multifractal analysis, where one computes a spectrum of the generalized fractal dimensions, the generalized Hurst exponents, or the multifractal singularity spectra: the more diverse the correlations are, the more variable the tree topology is for different qs. Our analysis exhibits that the stocks belonging to the same or similar industrial sectors are correlated via the fluctuations of moderate amplitudes, while the largest fluctuations often happen to synchronize in those stocks that do not necessarily belong to the same industry", "start_char_pos": 985, "end_char_pos": 985 } ]
[ 0, 143, 371, 551, 693 ]
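Given the matrix of q-dependent detrended cross-correlation coefficients rho_q, building a qMST reduces to turning correlations into distances and running a standard minimum-spanning-tree routine, as in the sketch below. The Mantegna-type metric d = sqrt(2(1 - rho)) is one common choice and may differ from the distance actually used in the paper.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def qmst(rho_q):
    # rho_q: (N, N) symmetric matrix of q-dependent correlation coefficients.
    d = np.sqrt(2.0 * (1.0 - np.clip(rho_q, -1.0, 1.0)))
    np.fill_diagonal(d, 0.0)
    return minimum_spanning_tree(d)        # sparse adjacency of the N-1 tree edges

# tree = qmst(rho_q_matrix); tree.nonzero() lists the retained stock pairs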
1610.08631
1
Motivation: The investigation of topological modifications of the gene interaction networks in cancer cells is essential for understanding the desease. We study gene interaction networks in various human cancer cells with the random matrix theory. This study is based on the Cancer Network Galaxy (TCNG) database which is the repository of huge gene interactions inferred by Bayesian network algorithms from 256 microarray experimental data downloaded from NCBI GEO . The original GEO data are provided by the high-throughput microarray expression experiments on various human cancer cells. We apply the random matrix theory to the computationally inferred gene interaction networks in TCNG in order to detect the universality in the topology of the gene interaction networks in cancer cells . Results: We found the universal behavior in almost one half of the 256 gene interaction networks in TCNG. The distribution of nearest neighbor level spacing of the gene interaction matrix becomes the Wigner distribution when the network is large (condensed) , and it behaves as Poisson distributionwhen the network is smaller. We also observe the transition between the Poisson and the Wigner distributions as the threshold of confidence factor of the gene interactions is shifted. We expect that the random matrix theory provides an effective analytical method for investigating the huge interaction networks of the various transcripts in cancer cells .
Investigations of topological uniqueness of gene interaction networks in cancer cells are essential for understanding the disease. Based on the random matrix theory, we study the distribution of the nearest neighbor level spacings P(s) of interaction matrices for gene networks in human cancer cells . The interaction matrices are formed using the Cancer Network Galaxy (TCNG) database , which is a repository of gene interactions inferred by a Bayesian network model. In TCNG database, 256 NCBI GEO entries regarding gene expressions in human cancer cells were selected for the Bayesian network calculations. We observe the Wigner distribution of P(s) when the gene networks are dense networks that have large numbers of edges. In the opposite case, when the networks have small numbers of edges, P(s) becomes the Poisson distribution . We investigate relevance of P(s) both to the size of the networks and to edge frequencies that manifest reliance of the inferred gene interactions .
[ { "type": "R", "before": "Motivation: The investigation of topological modifications of the", "after": "Investigations of topological uniqueness of", "start_char_pos": 0, "end_char_pos": 65 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 108, "end_char_pos": 110 }, { "type": "R", "before": "desease. We study gene interaction networks in various", "after": "disease. Based on the random matrix theory, we study the distribution of the nearest neighbor level spacings P(s) of interaction matrices for gene networks in", "start_char_pos": 143, "end_char_pos": 197 }, { "type": "R", "before": "with the random matrix theory. This study is based on", "after": ". The interaction matrices are formed using", "start_char_pos": 217, "end_char_pos": 270 }, { "type": "R", "before": "which is the repository of huge", "after": ", which is a repository of", "start_char_pos": 313, "end_char_pos": 344 }, { "type": "R", "before": "Bayesian network algorithms from", "after": "a Bayesian network model. In TCNG database,", "start_char_pos": 375, "end_char_pos": 407 }, { "type": "R", "before": "microarray experimental data downloaded from NCBI GEO . The original GEO data are provided by the high-throughput microarray expression experiments on various human cancer cells. We apply the random matrix theory to the computationally inferred gene interaction networks in TCNG in order to detect the universality in the topology of the gene interaction networks in cancer cells", "after": "NCBI GEO entries regarding gene expressions in human cancer cells were selected for the Bayesian network calculations. We observe the Wigner distribution of P(s) when the gene networks are dense networks that have large numbers of edges. In the opposite case, when the networks have small numbers of edges, P(s) becomes the Poisson distribution", "start_char_pos": 412, "end_char_pos": 791 }, { "type": "R", "before": "Results: We found the universal behavior in almost one half of", "after": "We investigate relevance of P(s) both to the size of the networks and to edge frequencies that manifest reliance of", "start_char_pos": 794, "end_char_pos": 856 }, { "type": "R", "before": "256 gene interaction networks in TCNG. The distribution of nearest neighbor level spacing of the gene interaction matrix becomes the Wigner distribution when the network is large (condensed) , and it behaves as Poisson distributionwhen the network is smaller. We also observe the transition between the Poisson and the Wigner distributions as the threshold of confidence factor of the gene interactions is shifted. We expect that the random matrix theory provides an effective analytical method for investigating the huge interaction networks of the various transcripts in cancer cells", "after": "inferred gene interactions", "start_char_pos": 861, "end_char_pos": 1446 } ]
[ 0, 151, 247, 467, 590, 802, 1120, 1275 ]
1610.08631
2
Investigations of topological uniqueness of gene interaction networks in cancer cells are essential for understanding the disease. Based on the random matrix theory, we study the distribution of the nearest neighbor level spacings P(s) of interaction matrices for gene networks in human cancer cells. The interaction matrices are formed using the Cancer Network Galaxy (TCNG) database, which is a repository of gene interactions inferred by a Bayesian network model. In TCNG database, 256 NCBI GEO entries regarding gene expressions in human cancer cells were selected for the Bayesian network calculations . We observe the Wigner distribution of P(s) when the gene networks are dense networks that have large numbers of edges. In the opposite case, when the networks have small numbers of edges, P(s) becomes the Poisson distribution. We investigate relevance of P(s) both to the size of the networks and to edge frequencies that manifest reliance of the inferred gene interactions.
Investigations of topological uniqueness of gene interaction networks in cancer cells are essential for understanding this disease. Based on the random matrix theory, we study the distribution of the nearest neighbor level spacings P(s) of interaction matrices for gene networks in human cancer cells. The interaction matrices are computed using the Cancer Network Galaxy (TCNG) database, which is a repository of gene interactions inferred by a Bayesian network model. 256 NCBI GEO entries regarding gene expressions in human cancer cells have been selected for the Bayesian network calculations in TCNG . We observe the Wigner distribution of P(s) when the gene networks are dense networks that have more than \sim 38,000 edges. In the opposite case, when the networks have smaller numbers of edges, the distribution P(s) becomes the Poisson distribution. We investigate relevance of P(s) both to the size of the networks and to edge frequencies that manifest reliance of the inferred gene interactions.
[ { "type": "R", "before": "the", "after": "this", "start_char_pos": 118, "end_char_pos": 121 }, { "type": "R", "before": "formed", "after": "computed", "start_char_pos": 330, "end_char_pos": 336 }, { "type": "D", "before": "In TCNG database,", "after": null, "start_char_pos": 467, "end_char_pos": 484 }, { "type": "R", "before": "were", "after": "have been", "start_char_pos": 555, "end_char_pos": 559 }, { "type": "A", "before": null, "after": "in TCNG", "start_char_pos": 607, "end_char_pos": 607 }, { "type": "R", "before": "large numbers of", "after": "more than \\sim 38,000", "start_char_pos": 705, "end_char_pos": 721 }, { "type": "R", "before": "small", "after": "smaller", "start_char_pos": 774, "end_char_pos": 779 }, { "type": "A", "before": null, "after": "the distribution", "start_char_pos": 798, "end_char_pos": 798 } ]
[ 0, 130, 300, 466, 609, 728, 837 ]
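The nearest-neighbour level spacing distribution P(s) studied in this record can be computed from any symmetric interaction matrix as sketched below. Rescaling by the mean spacing is only the crudest form of spectral unfolding; real analyses unfold against the local level density.

import numpy as np

def level_spacings(A):
    # Nearest-neighbour spacings of the eigenvalues of a symmetric matrix,
    # rescaled so that the mean spacing equals 1 (crude "unfolding").
    ev = np.sort(np.linalg.eigvalsh(A))
    s = np.diff(ev)
    return s / s.mean()

def wigner(s):                       # GOE Wigner surmise
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s ** 2)

def poisson(s):                      # uncorrelated (sparse-network) limit
    return np.exp(-s)

# compare a histogram of level_spacings(A) against wigner(s) and poisson(s)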
1610.08806
2
The objective of this paper is to present a comprehensive study of dual representations of risk measures and convex functionals defined on an Orlicz space L^\Phi or an Orlicz heart H^\Phi. The first part of our study is devoted to the Orlicz pair (L^\Phi,L^\Psi). In this setting, we present a thorough analysis of the relationship between order closedness of a convex set \mathcal{C ] in L^\Phi and the closedness of \mathcal{C topology \sigma(L^\Phi,L^\Psi) , culminating in the following surprising result:If (and only if) an Orlicz function \Phi and its conjugate \Psi both fail the \Delta_2-condition, then there exists a coherent risk measure on L^\Phi that has the Fatou property but does not admit a dual representation via L^{\Psi . This result answers an open problem in the representation theory of risk measures. In the second part of our study,we explore the representation problem for the pair (H^\Phi,H^\Psi) . This part complements the study for the pair (H ^\Phi,L^\Psi) investigated in 8%DIFDELCMD < ] %%% and the study for the pair (L^{\Phi,H^{\Psi})investigated in }%DIFDELCMD < [%%% 19 . This paper contains new results and developments on the interplay between topology and order in Orlicz spaces that are of independent interest\emph{ .
Let (\Phi,\Psi) be a conjugate pair of Orlicz functions. A set in the Orlicz space L^\Phi is said to be order closed if it is closed with respect to dominated convergence of sequences of functions. A well known problem arising from the theory of risk measures in financial mathematics asks whether order closedness of a convex set in L^\Phi characterizes closedness with respect to the topology \sigma (L^\Phi,L^\Psi). (See 26, p.3585].) In this paper, we show that for a norm bounded convex set in L^\Phi, order closedness and \sigma(L^\Phi,L^\Psi)-closedness are indeed equivalent. In general, however, coincidence of order closedness and \sigma(L^\Phi,L^\Psi)-closedness of convex sets in L^\Phi is equivalent to the validity of the Krein-Smulian Theorem for the topology \sigma(L^\Phi,L^\Psi) ; that is, a convex set is \sigma(L^\Phi,L^\Psi)-closed if and only if it is closed with respect to the bounded-\sigma(L^\Phi,L^\Psi) topology. As a result, we show that order closedness and \sigma(L ^\Phi,L^\Psi) %DIFDELCMD < ] %%% ,H^{\Psi})investigated in }%DIFDELCMD < [%%% -closedness of convex sets in L^\Phi are equivalent if and only if either \Phi or \Psi satisfies the \Delta_2-condition. Using this, we prove the surprising result that:\emph{If (and only if) \Phi and \Psi both fail the \Delta_2-condition, then there exists a coherent risk measure on L^\Phi that has the Fatou property but fails the Fenchel-Moreau dual representation with respect to the dual pair (L^\Phi, L^\Psi) . A similar analysis is carried out for the dual pair of Orlicz hearts (H^\Phi,H^\Psi) .
[ { "type": "R", "before": "The objective of this paper is to present a comprehensive study of dual representations of risk measures and convex functionals defined on an Orlicz space L^\\Phi or an Orlicz heart H^\\Phi. The first part of our study is devoted to the Orlicz pair", "after": "Let (\\Phi,\\Psi) be a conjugate pair of Orlicz functions. A set in the Orlicz space L^\\Phi is said to be order closed if it is closed with respect to dominated convergence of sequences of functions. A well known problem arising from the theory of risk measures in financial mathematics asks whether order closedness of a convex set in L^\\Phi characterizes closedness with respect to the topology \\sigma", "start_char_pos": 0, "end_char_pos": 246 }, { "type": "R", "before": "In this setting, we present a thorough analysis of the relationship between order closedness of a convex set \\mathcal{C", "after": "(See", "start_char_pos": 264, "end_char_pos": 383 }, { "type": "A", "before": null, "after": "26, p.3585", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "A", "before": null, "after": ".) In this paper, we show that for a norm bounded convex set in L^\\Phi, order closedness and \\sigma(L^\\Phi,L^\\Psi)-closedness are indeed equivalent. In general, however, coincidence of order closedness and \\sigma(L^\\Phi,L^\\Psi)-closedness of convex sets", "start_char_pos": 385, "end_char_pos": 385 }, { "type": "R", "before": "and the closedness of \\mathcal{C", "after": "is equivalent to the validity of the Krein-Smulian Theorem for the", "start_char_pos": 396, "end_char_pos": 428 }, { "type": "D", "before": ", culminating in the following surprising result:", "after": null, "start_char_pos": 460, "end_char_pos": 509 }, { "type": "D", "before": "If (and only if) an Orlicz function \\Phi and its conjugate \\Psi both fail the \\Delta_2-condition, then there exists a coherent risk measure on L^\\Phi that has the Fatou property but does not admit a dual representation via L^{\\Psi", "after": null, "start_char_pos": 509, "end_char_pos": 739 }, { "type": "R", "before": ". This result answers an open problem in the representation theory of risk measures. In the second part of our study,we explore the representation problem for the pair (H^\\Phi,H^\\Psi) . This part complements the study for the pair (H", "after": "; that is, a convex set is \\sigma(L^\\Phi,L^\\Psi)-closed if and only if it is closed with respect to the bounded-\\sigma(L^\\Phi,L^\\Psi) topology. As a result, we show that order closedness and \\sigma(L", "start_char_pos": 740, "end_char_pos": 973 }, { "type": "D", "before": "investigated in", "after": null, "start_char_pos": 988, "end_char_pos": 1003 }, { "type": "D", "before": "8", "after": null, "start_char_pos": 1004, "end_char_pos": 1005 }, { "type": "D", "before": "and the study for the pair (L^{\\Phi", "after": null, "start_char_pos": 1024, "end_char_pos": 1059 }, { "type": "D", "before": "19", "after": null, "start_char_pos": 1104, "end_char_pos": 1106 }, { "type": "R", "before": ". This paper contains new results and developments on the interplay between topology and order in Orlicz spaces that are of independent interest", "after": "-closedness of convex sets in L^\\Phi are equivalent if and only if either \\Phi or \\Psi satisfies the \\Delta_2-condition. 
Using this, we prove the surprising result that:", "start_char_pos": 1107, "end_char_pos": 1251 }, { "type": "A", "before": null, "after": "If (and only if) \\Phi and \\Psi both fail the \\Delta_2-condition, then there exists a coherent risk measure on L^\\Phi that has the Fatou property but fails the Fenchel-Moreau dual representation with respect to the dual pair (L^\\Phi, L^\\Psi)", "start_char_pos": 1257, "end_char_pos": 1257 }, { "type": "A", "before": null, "after": ". A similar analysis is carried out for the dual pair of Orlicz hearts (H^\\Phi,H^\\Psi)", "start_char_pos": 1258, "end_char_pos": 1258 } ]
[ 0, 188, 263, 741, 824, 925 ]
1610.09234
1
We study super-replication of contingent claims in markets with fixed transaction costs. The first result in this paper reveals that in reasonable continuous time financial market the super--replication price is prohibitively costly and leads to trivial buy--and--hold strategies. Our second result is derives non trivial scaling limits of super--replication prices in the binomial models with small fixed costs.
We study super--replication of contingent claims in markets with fixed transaction costs. This can be viewed as a stochastic impulse control problem with a terminal state constraint. The first result in this paper reveals that in reasonable continuous time financial market models the super--replication price is prohibitively costly and leads to trivial buy--and--hold strategies. Our second result derives nontrivial scaling limits of super--replication prices for binomial models with small fixed costs.
[ { "type": "R", "before": "super-replication", "after": "super--replication", "start_char_pos": 9, "end_char_pos": 26 }, { "type": "A", "before": null, "after": "This can be viewed as a stochastic impulse control problem with a terminal state constraint.", "start_char_pos": 89, "end_char_pos": 89 }, { "type": "A", "before": null, "after": "models", "start_char_pos": 181, "end_char_pos": 181 }, { "type": "R", "before": "is derives non trivial", "after": "derives nontrivial", "start_char_pos": 301, "end_char_pos": 323 }, { "type": "R", "before": "in the", "after": "for", "start_char_pos": 368, "end_char_pos": 374 } ]
[ 0, 88, 282 ]
1610.09292
1
In this paper we derive the optimal linear shrinkage estimator for the large-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension p and the sample size n tend to infinity such that n ^{-1p^{1-\gamma} } \to c\in(0,+\infty) and \gamma\in 0, 1) . Under weak conditions imposed on the the underlying data generating process, we find the asymptotic equivalents to the optimal shrinkage intensities , prove their asymptotic normality, and estimate them consistently. The obtained non-parametric estimator for the high-dimensional mean vector has a simple structure and is proven to minimize asymptotically with probability 1 the quadratic loss in the case of c\in(0,1). For c\in(1,+\infty) we modify the suggested estimator by using a feasible estimator for the precision covariance matrix. At the end, an exhaustive simulation study and an application to real data are provided where the proposed estimator is compared with known benchmarks from the literature .
In this paper we derive the optimal linear shrinkage estimator for the large-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension p and the sample size n tend to infinity such that p/ n p^{1-\gamma} } \to c\in(0,+\infty) . Under weak conditions imposed on the underlying data generating process, we find the asymptotic equivalents to the optimal shrinkage intensities and estimate them consistently. The obtained non-parametric estimator for the high-dimensional mean vector has a simple structure and is proven to minimize asymptotically with probability 1 the quadratic loss in the case of c\in(0,1). For c\in(1,+\infty) we modify the suggested estimator by using a feasible estimator for the precision covariance matrix. To this end, an exhaustive simulation study and an application to real data are provided where the proposed estimator is compared with known benchmarks from the literature . It turns out that the existent estimators of the mean vector including the suggested one converge to the sample mean vector when the true mean vector possesses unbounded Euclidean norm .
[ { "type": "A", "before": null, "after": "p/", "start_char_pos": 250, "end_char_pos": 250 }, { "type": "D", "before": "^{-1", "after": null, "start_char_pos": 253, "end_char_pos": 257 }, { "type": "D", "before": "and \\gamma\\in", "after": null, "start_char_pos": 292, "end_char_pos": 305 }, { "type": "D", "before": "0, 1)", "after": null, "start_char_pos": 306, "end_char_pos": 311 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 351, "end_char_pos": 354 }, { "type": "D", "before": ", prove their asymptotic normality,", "after": null, "start_char_pos": 463, "end_char_pos": 498 }, { "type": "R", "before": "At the", "after": "To this", "start_char_pos": 855, "end_char_pos": 861 }, { "type": "A", "before": null, "after": ". It turns out that the existent estimators of the mean vector including the suggested one converge to the sample mean vector when the true mean vector possesses unbounded Euclidean norm", "start_char_pos": 1026, "end_char_pos": 1026 } ]
[ 0, 128, 530, 733, 854 ]
1610.09292
2
In this paper we derive the optimal linear shrinkage estimator for the large-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension p and the sample size n tend to infinity such that p/n \to c\in(0, + \infty). Under weak conditions imposed on the underlying data generating process , we find the asymptotic equivalents to the optimal shrinkage intensities and estimate them consistently. The obtained non-parametric estimator for the high-dimensional mean vector has a simple structure and is proven to minimize asymptotically with probability 1 the quadratic loss in the case of c\in(0,1). For c\in(1, + \infty) we modify the suggested estimator by using a feasible estimator for the precision covariance matrix. To this end, an exhaustive simulation study and an application to real data are provided where the proposed estimator is compared with known benchmarks from the literature. It turns out that the existent estimators of the mean vector including the suggested one converge to the sample mean vector when the true mean vector possesses unbounded Euclidean norm.
In this paper we derive the optimal linear shrinkage estimator for the high-dimensional mean vector using random matrix theory. The results are obtained under the assumption that both the dimension p and the sample size n tend to infinity in such a way that p/n \to c\in(0, \infty). Under weak conditions imposed on the underlying data generating mechanism , we find the asymptotic equivalents to the optimal shrinkage intensities and estimate them consistently. The proposed nonparametric estimator for the high-dimensional mean vector has a simple structure and is proven to minimize asymptotically , with probability 1 , the quadratic loss when c\in(0,1). When c\in(1, \infty) we modify the estimator by using a feasible estimator for the precision covariance matrix. To this end, an exhaustive simulation study and an application to real data are provided where the proposed estimator is compared with known benchmarks from the literature. It turns out that the existing estimators of the mean vector , including the new proposal, converge to the sample mean vector when the true mean vector has an unbounded Euclidean norm.
[ { "type": "R", "before": "large-dimensional", "after": "high-dimensional", "start_char_pos": 71, "end_char_pos": 88 }, { "type": "R", "before": "such", "after": "in such a way", "start_char_pos": 240, "end_char_pos": 244 }, { "type": "D", "before": "+", "after": null, "start_char_pos": 266, "end_char_pos": 267 }, { "type": "R", "before": "process", "after": "mechanism", "start_char_pos": 341, "end_char_pos": 348 }, { "type": "R", "before": "obtained non-parametric", "after": "proposed nonparametric", "start_char_pos": 459, "end_char_pos": 482 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 594, "end_char_pos": 594 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 614, "end_char_pos": 614 }, { "type": "R", "before": "in the case of", "after": "when", "start_char_pos": 634, "end_char_pos": 648 }, { "type": "R", "before": "For", "after": "When", "start_char_pos": 660, "end_char_pos": 663 }, { "type": "D", "before": "+", "after": null, "start_char_pos": 672, "end_char_pos": 673 }, { "type": "D", "before": "suggested", "after": null, "start_char_pos": 696, "end_char_pos": 705 }, { "type": "R", "before": "existent", "after": "existing", "start_char_pos": 978, "end_char_pos": 986 }, { "type": "R", "before": "including the suggested one", "after": ", including the new proposal,", "start_char_pos": 1017, "end_char_pos": 1044 }, { "type": "R", "before": "possesses", "after": "has an", "start_char_pos": 1106, "end_char_pos": 1115 } ]
[ 0, 128, 276, 454, 659, 782, 955 ]
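The abstract above concerns an asymptotically optimal linear shrinkage of the sample mean. The snippet below implements only a generic plug-in shrinkage toward a target vector, based on the textbook oracle intensity ||mu - b||^2 / (||mu - b||^2 + tr(Sigma)/n); the paper's estimator is derived under p/n -> c and need not coincide with this simple version.

import numpy as np

def shrink_mean(X, target=None):
    # X: (n, p) data matrix; target: shrinkage target b (default: zero vector).
    n, p = X.shape
    b = np.zeros(p) if target is None else target
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    noise = np.trace(S) / n                          # estimate of E||xbar - mu||^2
    signal = max(np.sum((xbar - b) ** 2) - noise, 0.0)
    alpha = signal / (signal + noise) if signal + noise > 0 else 0.0
    return alpha * xbar + (1 - alpha) * b, alpha     # shrunk mean and intensity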
1610.09542
1
To quantify and manage systemic risk in the interbank market , we propose a weighted, directed random network model. The vertices in the network are financial institutions and the weighted edges represent monetary exposures between them. Our model resembles the strong degree of heterogeneity observed in empirical data and the parameters of the model can easily be fitted to market data . We derive asymptotic results that, based on these parameters, allow to determine the impact of local shocks to the entire system and the wider economy. Furthermore , we characterize resilient and non-resilient cases. For networks with degree sequences without second moment, a small number of initially defaulted banks can trigger a substantial default cascade even under the absence of so called contagious links. Paralleling regulatory discussions we determine minimal capital requirements for financial institutions sufficient to make the network resilient to small shocks .
The aim of this paper is to quantify and manage systemic risk in the interbank market . We model the market as a random directed network, where the vertices represent financial institutions and the weighted edges monetary exposures between them. Our model captures the strong degree of heterogeneity observed in empirical data and the parameters can easily be fitted to real data sets. One of our main results allows us to determine the impact of local shocks , where initially some banks default, to the entire system and the wider economy. Here the impact is measured by some index of total systemic importance of all eventually defaulted institutions. As a central application , we characterize resilient and non-resilient cases. In particular, for the prominent case where the network has a degree sequence without second moment, we show that a small number of initially defaulted banks can trigger a substantial default cascade . Our results complement and extend significantly earlier findings derived in the configuration model where the existence of a second moment of the degree distribution is assumed. Moreover, paralleling regulatory discussions, we determine minimal capital requirements for financial institutions sufficient to make the network resilient to small shocks . An appealing feature of these capital requirements is that they can be determined locally by each institution without knowing the complete network structure as they basically only depend on the institution's exposures to its counterparties .
[ { "type": "R", "before": "To", "after": "The aim of this paper is to", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "R", "before": ", we propose a weighted, directed random network model. The vertices in the network are", "after": ". We model the market as a random directed network, where the vertices represent", "start_char_pos": 61, "end_char_pos": 148 }, { "type": "D", "before": "represent", "after": null, "start_char_pos": 195, "end_char_pos": 204 }, { "type": "R", "before": "resembles", "after": "captures", "start_char_pos": 248, "end_char_pos": 257 }, { "type": "D", "before": "of the model", "after": null, "start_char_pos": 339, "end_char_pos": 351 }, { "type": "R", "before": "market data . We derive asymptotic results that, based on these parameters, allow", "after": "real data sets. One of our main results allows us", "start_char_pos": 376, "end_char_pos": 457 }, { "type": "A", "before": null, "after": ", where initially some banks default,", "start_char_pos": 498, "end_char_pos": 498 }, { "type": "R", "before": "Furthermore", "after": "Here the impact is measured by some index of total systemic importance of all eventually defaulted institutions. As a central application", "start_char_pos": 543, "end_char_pos": 554 }, { "type": "R", "before": "For networks with degree sequences", "after": "In particular, for the prominent case where the network has a degree sequence", "start_char_pos": 608, "end_char_pos": 642 }, { "type": "A", "before": null, "after": "we show that", "start_char_pos": 666, "end_char_pos": 666 }, { "type": "R", "before": "even under the absence of so called contagious links. Paralleling regulatory discussions", "after": ". Our results complement and extend significantly earlier findings derived in the configuration model where the existence of a second moment of the degree distribution is assumed. Moreover, paralleling regulatory discussions,", "start_char_pos": 753, "end_char_pos": 841 }, { "type": "A", "before": null, "after": ". An appealing feature of these capital requirements is that they can be determined locally by each institution without knowing the complete network structure as they basically only depend on the institution's exposures to its counterparties", "start_char_pos": 968, "end_char_pos": 968 } ]
[ 0, 116, 237, 389, 542, 607, 806 ]
1610.09542
2
The aim of this paper is to quantify and manage systemic risk in the interbank market. We model the market as a random directed network, where the vertices represent financial institutions and the weighted edges monetary exposures between them. Our model captures the strong degree of heterogeneity observed in empirical data and the parameters can easily be fitted to real data sets. One of our main results allows us to determine the impact of local shocks, where initially some banks default, to the entire system and the wider economy. Here the impact is measured by some index of total systemic importance of all eventually defaulted institutions. As a central application, we characterize resilient and non-resilient cases. In particular, for the prominent case where the network has a degree sequence without second moment, we show that a small number of initially defaulted banks can trigger a substantial default cascade. Our results complement and extend significantly earlier findings derived in the configuration model where the existence of a second moment of the degree distribution is assumed. Moreover , paralleling regulatory discussions, we determine minimal capital requirements for financial institutions sufficient to make the network resilient to small shocks. An appealing feature of these capital requirements is that they can be determined locally by each institution without knowing the complete network structure as they basically only depend on the institution's exposures to its counterparties.
The aim of this paper is to quantify and manage systemic risk caused by default contagion in the interbank market. We model the market as a random directed network, where the vertices represent financial institutions and the weighted edges monetary exposures between them. Our model captures the strong degree of heterogeneity observed in empirical data and the parameters can easily be fitted to real data sets. One of our main results allows us to determine the impact of local shocks, where initially some banks default, to the entire system and the wider economy. Here the impact is measured by some index of total systemic importance of all eventually defaulted institutions. As a central application, we characterize resilient and non-resilient cases. In particular, for the prominent case where the network has a degree sequence without second moment, we show that a small number of initially defaulted banks can trigger a substantial default cascade. Our results complement and extend significantly earlier findings derived in the configuration model where the existence of a second moment of the degree distribution is assumed. As a second main contribution , paralleling regulatory discussions, we determine minimal capital requirements for financial institutions sufficient to make the network resilient to small shocks. An appealing feature of these capital requirements is that they can be determined locally by each institution without knowing the complete network structure as they basically only depend on the institution's exposures to its counterparties.
[ { "type": "A", "before": null, "after": "caused by default contagion", "start_char_pos": 62, "end_char_pos": 62 }, { "type": "R", "before": "Moreover", "after": "As a second main contribution", "start_char_pos": 1110, "end_char_pos": 1118 } ]
[ 0, 87, 245, 385, 540, 653, 730, 931, 1109, 1283 ]
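The default-contagion mechanism behind this record can be illustrated by a threshold cascade on a weighted, directed exposure matrix, as sketched below. A loss-given-default of one is assumed, and the systemic-importance index and random-network asymptotics analysed in the paper are not reproduced.

import numpy as np

def default_cascade(E, capital, initially_defaulted):
    # E[i, j]: exposure of bank i to bank j (i loses E[i, j] if j defaults).
    N = len(capital)
    defaulted = np.zeros(N, dtype=bool)
    defaulted[list(initially_defaulted)] = True
    while True:
        losses = E[:, defaulted].sum(axis=1)         # losses from defaulted counterparties
        newly = (~defaulted) & (losses >= capital)
        if not newly.any():
            return defaulted                         # eventually defaulted institutions
        defaulted |= newly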
1610.09714
1
This paper considers the pricing of discretely-sampled variance swaps under the class of equity-interest rate hybridization. Our modeling framework consists of the equity which follows the dynamics of the Heston stochastic volatility model and the stochastic interest rate driven by the Cox-Ingersoll-Ross (CIR) process with full correlation structure among the state variables. Since one limitation of hybrid models is the unavailability of analytical pricing formula of variance swaps due to the non-affinity property , we obtain an efficient semi-closed form pricing formula of variance swaps for an approximation of the hybrid model via the derivation of characteristic functions. We implement numerical experiments to evaluate the accuracy of our formula and confirm that the impact of the correlation between the underlying and interest rate is significant .
This paper considers the case of pricing discretely-sampled variance swaps under the class of equity-interest rate hybridization. Our modeling framework consists of the equity which follows the dynamics of the Heston stochastic volatility model , and the stochastic interest rate is driven by the Cox-Ingersoll-Ross (CIR) process with full correlation structure imposed among the state variables. This full correlation structure possess the limitation to have fully analytical pricing formula for hybrid models of variance swaps , due to the non-affinity property embedded in the model itself. We address this issue by obtaining an efficient semi-closed form pricing formula of variance swaps for an approximation of the hybrid model via the derivation of characteristic functions. Subsequently, we implement numerical experiments to evaluate the accuracy of our pricing formula. Our findings confirmed that the impact of the correlation between the underlying and the interest rate is significant for pricing discretely-sampled variance swaps .
[ { "type": "R", "before": "pricing of", "after": "case of pricing", "start_char_pos": 25, "end_char_pos": 35 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 240, "end_char_pos": 240 }, { "type": "A", "before": null, "after": "is", "start_char_pos": 274, "end_char_pos": 274 }, { "type": "A", "before": null, "after": "imposed", "start_char_pos": 354, "end_char_pos": 354 }, { "type": "R", "before": "Since one limitation of hybrid models is the unavailability of", "after": "This full correlation structure possess the limitation to have fully", "start_char_pos": 382, "end_char_pos": 444 }, { "type": "A", "before": null, "after": "for hybrid models", "start_char_pos": 472, "end_char_pos": 472 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 491, "end_char_pos": 491 }, { "type": "R", "before": ", we obtain", "after": "embedded in the model itself. We address this issue by obtaining", "start_char_pos": 525, "end_char_pos": 536 }, { "type": "R", "before": "We", "after": "Subsequently, we", "start_char_pos": 690, "end_char_pos": 692 }, { "type": "R", "before": "formula and confirm", "after": "pricing formula. Our findings confirmed", "start_char_pos": 757, "end_char_pos": 776 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 839, "end_char_pos": 839 }, { "type": "A", "before": null, "after": "for pricing discretely-sampled variance swaps", "start_char_pos": 869, "end_char_pos": 869 } ]
[ 0, 124, 381, 689 ]
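The fair strike of a discretely-sampled variance swap is the risk-neutral expectation of the realized variance, which can be checked by Monte Carlo even when no closed form is available. The sketch below does this under a plain Heston model with a constant short rate and invented parameters; the paper's semi-closed formula additionally handles a CIR short rate correlated with the equity.

import numpy as np

def heston_varswap_strike(v0=0.04, kappa=2.0, theta=0.04, xi=0.3, rho=-0.7,
                          r=0.02, T=1.0, N=252, n_paths=50_000, seed=0):
    # Realized variance = (1/T) * sum of squared log-returns over N sampling dates.
    rng = np.random.default_rng(seed)
    dt = T / N
    v = np.full(n_paths, v0)
    rv = np.zeros(n_paths)
    for _ in range(N):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                                  # full-truncation Euler
        dlogS = (r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1
        rv += dlogS ** 2
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
    return rv.mean() / T

# sanity check: for continuous sampling the strike is approximately
# theta + (v0 - theta) * (1 - np.exp(-kappa * T)) / (kappa * T)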
1610.09734
1
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1, ... \dots ,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as distributions of partial minima and maxima of \mathbf X are known . In order to include this information in the computation of Value-at-Risk bounds, we establish a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class. Second, we assume that the copula of \mathbf X is known only on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the Fr\'echet-Hoeffding bounds on copulas so as to include the additional information . Then, we relate the improved Fr\'echet-Hoeffding bounds to Value-at-Risk using the improved standard bounds of Embrechts et al. In numerical examples we illustrate that the additional information may lead to a considerable improvement of the bounds compared to the marginals-only case.
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1, \dots ,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of \mathbf X , is available . In order to include this information in the computation of Value-at-Risk bounds, we establish a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class. Second, we assume that the copula of \mathbf X is known on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the Fr\'echet-Hoeffding bounds on copulas so as to include this additional information on the dependence structure . Then, we relate the improved Fr\'echet-Hoeffding bounds to Value-at-Risk using the improved standard bounds of Embrechts et al. In numerical examples we illustrate that the additional information typically leads to a considerable improvement of the bounds compared to the marginals-only case.
[ { "type": "D", "before": "...", "after": null, "start_char_pos": 169, "end_char_pos": 172 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 445, "end_char_pos": 445 }, { "type": "R", "before": "are known", "after": ", is available", "start_char_pos": 502, "end_char_pos": 511 }, { "type": "D", "before": "only", "after": null, "start_char_pos": 772, "end_char_pos": 776 }, { "type": "R", "before": "the additional information", "after": "this additional information on the dependence structure", "start_char_pos": 1091, "end_char_pos": 1117 }, { "type": "R", "before": "may lead", "after": "typically leads", "start_char_pos": 1316, "end_char_pos": 1324 } ]
[ 0, 279, 513, 715, 948, 1119, 1247 ]
1610.09734
2
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1,\dots,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of \mathbf X, is available. In order to include this information in the computation of Value-at-Risk bounds, we establish a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class . Second, we assume that the copula of \mathbf X is known on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the Fr\'echet-Hoeffding \FH bounds on copulas so as to include this additional information on the dependence structure. Then, we relate the improved Fr\'echet-Hoeffding bounds to\FH Value-at-Risk using the improved standard bounds of Embrechts et al . In numerical examples we illustrate that the additional information typically leads to a considerable improvement of the bounds compared to the marginals-only case.
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1,\dots,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of \mathbf X, is available. In order to include this information in the computation of Value-at-Risk bounds, we establish a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class , which can then be solved by means of the well-known standard bounds or the rearrangement algorithm . Second, we assume that the copula of \mathbf X is known on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the \FH bounds on copulas so as to include this additional information on the dependence structure. Then, we translate the improved\FH bounds to bounds on the Value-at-Risk using the so-called improved standard bounds . In numerical examples we illustrate that the additional information typically leads to a considerable improvement of the bounds compared to the marginals-only case.
[ { "type": "A", "before": null, "after": ", which can then be solved by means of the well-known standard bounds or the rearrangement algorithm", "start_char_pos": 715, "end_char_pos": 715 }, { "type": "D", "before": "Fr\\'echet-Hoeffding", "after": null, "start_char_pos": 1033, "end_char_pos": 1052 }, { "type": "R", "before": "relate the improved Fr\\'echet-Hoeffding bounds to", "after": "translate the improved", "start_char_pos": 1158, "end_char_pos": 1207 }, { "type": "A", "before": null, "after": "bounds to bounds on the", "start_char_pos": 1211, "end_char_pos": 1211 }, { "type": "A", "before": null, "after": "so-called", "start_char_pos": 1236, "end_char_pos": 1236 }, { "type": "D", "before": "of Embrechts et al", "after": null, "start_char_pos": 1262, "end_char_pos": 1280 } ]
[ 0, 273, 513, 945, 1148, 1282 ]
1610.09734
3
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1,\dots,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of \mathbf X, is available. In order to include this information in the computation of Value-at-Risk bounds, we establish a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class, which can then be solved by means of the well-known standard bounds or the rearrangement algorithm . Second, we assume that the copula of \mathbf X is known on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the %DIFDELCMD < \FH %%% bounds on copulas so as to include this additional information on the dependence structure. Then, we translate the improved %DIFDELCMD < \FH %%% bounds to bounds on the Value-at-Risk using the so-called improved standard bounds. In numerical examples we illustrate that the additional information typically leads to a considerable improvement of the bounds compared to the marginals-only case.
We derive bounds on the distribution function, therefore also on the Value-at-Risk, of \varphi(\mathbf X) where \varphi is an aggregation function and \mathbf X = (X_1,\dots,X_d) is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of \mathbf X, is available. In order to include this information in the computation of Value-at-Risk bounds, we utilize a reduction principle that relates this problem to an optimization problem over a standard Fr\'echet class, which can then be solved by means of the rearrangement algorithm or using analytical results . Second, we assume that the copula of \mathbf X is known on a subset of its domain, and finally we consider the case where the copula of \mathbf X lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the %DIFDELCMD < \FH %%% Fr\'echet--Hoeffding bounds on copulas so as to include this additional information on the dependence structure. Then, we translate the improved %DIFDELCMD < \FH %%% Fr\'echet--Hoeffding bounds to bounds on the Value-at-Risk using the so-called improved standard bounds. In numerical examples we illustrate that the additional information typically leads to a significant improvement of the bounds compared to the marginals-only case.
[ { "type": "R", "before": "establish", "after": "utilize", "start_char_pos": 598, "end_char_pos": 607 }, { "type": "R", "before": "well-known standard bounds or the rearrangement algorithm", "after": "rearrangement algorithm or using analytical results", "start_char_pos": 757, "end_char_pos": 814 }, { "type": "A", "before": null, "after": "Fr\\'echet--Hoeffding", "start_char_pos": 1153, "end_char_pos": 1153 }, { "type": "A", "before": null, "after": "Fr\\'echet--Hoeffding", "start_char_pos": 1299, "end_char_pos": 1299 }, { "type": "R", "before": "considerable", "after": "significant", "start_char_pos": 1473, "end_char_pos": 1485 } ]
[ 0, 273, 513, 816, 1044, 1245, 1383 ]
1611.00666
1
Several studies pointed out the relevance of extrinsic noise in molecular networks in shaping cell decision making and differentiation . Interestingly, bimodal distributions of gene expression levels, that may be a feature of phenotypic differentiation, are a common phenomenon in gene expression data. The modes of the distribution often correspond to different physiological states of the system. In this work we address the role of extrinsic noise in shaping bimodal gene distributions in the context of microRNA (miRNA)-mediated regulation , both with stochastic modelling and simulations . MiRNAs are small noncoding RNA molecules that downregulate the expression of their target mRNAs. The titrative nature of the interaction is sufficient to induce bimodal distributions of the targets. We study the fluctuating miRNA transcription case to probe the effects of extrinsic noise on the system . We show that (i) bimodal target distributions can be obtained exploiting a noisy environment even in case of small miRNA-target interaction strength , (ii) an increase in the extrinsic noise shifts the range of target transcription rates that allow bimodality towards higher values, (iii) the protein half-life may buffer bimodal mRNA preventing its distribution from becoming bimodal and that (iv) in a noisy environment different targets may cross-regulate each other's bimodal distribution when competing for a shared pool of miRNAs even if the miRNA regulation is small .
Several studies highlighted the relevance of extrinsic noise in shaping cell decision making and differentiation in molecular networks. Experimental evidences of phenotypic differentiation are given by the presence of bimodal distributions of gene expression levels, where the modes of the distribution often correspond to different physiological states of the system. We theoretically address the presence of bimodal phenotypes in the context of microRNA (miRNA)-mediated regulation . MiRNAs are small noncoding RNA molecules that downregulate the expression of their target mRNAs. The nature of this interaction is titrative and induces a threshold effect: below a given target transcription rate no mRNAs are free and available for translation. We investigate the effect of extrinsic noise on the system by introducing a fluctuating miRNA-transcription rate. We find that the presence of extrinsic noise favours the presence of bimodal target distributions which can be observed for a wider range of parameters compared to the case with intrinsic noise only and for lower miRNA-target interaction strength . Our results suggest that combining threshold-inducing interactions with extrinsic noise provides a simple and robust mechanism for obtaining bimodal populations not requiring fine tuning. We furthermore characterise the protein distributions dependence on protein half-life .
[ { "type": "R", "before": "pointed out", "after": "highlighted", "start_char_pos": 16, "end_char_pos": 27 }, { "type": "D", "before": "molecular networks in", "after": null, "start_char_pos": 64, "end_char_pos": 85 }, { "type": "R", "before": ". Interestingly,", "after": "in molecular networks. Experimental evidences of phenotypic differentiation are given by the presence of", "start_char_pos": 135, "end_char_pos": 151 }, { "type": "R", "before": "that may be a feature of phenotypic differentiation, are a common phenomenon in gene expression data. The", "after": "where the", "start_char_pos": 201, "end_char_pos": 306 }, { "type": "R", "before": "In this work we address the role of extrinsic noise in shaping bimodal gene distributions", "after": "We theoretically address the presence of bimodal phenotypes", "start_char_pos": 399, "end_char_pos": 488 }, { "type": "D", "before": ", both with stochastic modelling and simulations", "after": null, "start_char_pos": 544, "end_char_pos": 592 }, { "type": "R", "before": "titrative nature of the interaction is sufficient to induce bimodal distributions of the targets. We study the fluctuating miRNA transcription case to probe the effects", "after": "nature", "start_char_pos": 696, "end_char_pos": 864 }, { "type": "A", "before": null, "after": "this interaction is titrative and induces a threshold effect: below a given target transcription rate no mRNAs are free and available for translation. We investigate the effect of", "start_char_pos": 868, "end_char_pos": 868 }, { "type": "R", "before": ". We show that (i)", "after": "by introducing a fluctuating miRNA-transcription rate. We find that the presence of extrinsic noise favours the presence of", "start_char_pos": 899, "end_char_pos": 917 }, { "type": "R", "before": "can be obtained exploiting a noisy environment even in case of small", "after": "which can be observed for a wider range of parameters compared to the case with intrinsic noise only and for lower", "start_char_pos": 947, "end_char_pos": 1015 }, { "type": "R", "before": ", (ii) an increase in the extrinsic noise shifts the range of target transcription rates that allow bimodality towards higher values, (iii) the protein", "after": ". Our results suggest that combining threshold-inducing interactions with extrinsic noise provides a simple and robust mechanism for obtaining bimodal populations not requiring fine tuning. We furthermore characterise the protein distributions dependence on protein", "start_char_pos": 1050, "end_char_pos": 1201 }, { "type": "D", "before": "may buffer bimodal mRNA preventing its distribution from becoming bimodal and that (iv) in a noisy environment different targets may cross-regulate each other's bimodal distribution when competing for a shared pool of miRNAs even if the miRNA regulation is small", "after": null, "start_char_pos": 1212, "end_char_pos": 1474 } ]
[ 0, 136, 302, 398, 594, 691, 793, 900, 917, 1056 ]
1611.00723
1
Socio-economic inequality is quantitatively measured from data using various indices. The Gini (g) index, giving the overall inequality is the most common , while the recently introduced Kolkata (k) index gives a measure of 1-k fraction of population who possess top k fraction of wealth in the society. This article reviews the character of such inequalities, as seen from a variety of data sources, the apparent relationship between the two indices, and what toy models tell us. These socio-economic inequalities are also investigated in the context of man-made social conflicts or wars, as well as in natural disasters. Finally, we forward a proposal for an international institution with sufficient fund for visitors, where natural and social scientists from various institutions of the world can come to discuss, debate and formulate further developments.
Socio-economic inequality is measured using various indices. The Gini (g) index, giving the overall inequality is the most commonly used , while the recently introduced Kolkata (k) index gives a measure of 1-k fraction of population who possess top k fraction of wealth in the society. This article reviews the character of such inequalities, as seen from a variety of data sources, the apparent relationship between the two indices, and what toy models tell us. These socio-economic inequalities are also investigated in the context of man-made social conflicts or wars, as well as in natural disasters. Finally, we forward a proposal for an international institution with sufficient fund for visitors, where natural and social scientists from various institutions of the world can come to discuss, debate and formulate further developments.
[ { "type": "R", "before": "quantitatively measured from data", "after": "measured", "start_char_pos": 29, "end_char_pos": 62 }, { "type": "R", "before": "common", "after": "commonly used", "start_char_pos": 148, "end_char_pos": 154 } ]
[ 0, 85, 303, 480, 622 ]
1611.00885
1
We investigate qualitative and quantitative behavior of a solution to the problem of pricing American style of perpetual put options. We assume the option priceWe investigate qualitative and quantitative behavior of a solution to the problem of pricing American style of perpetual put options. We assume the option price is a solution to a stationary generalized Black-Scholes equation in which the volatility may depend on the second derivative of the option price itself. We prove existence and uniqueness of a solution to the free boundary problem. We derive a single implicit equation for the free boundary position and the closed form formula for the option price. It is a generalization of the well-known explicit closed form solution derived by Merton for the case of a constant volatility. We also present results of numerical computations of the free boundary position, option price and their dependence on model parameters. is a solution to a stationary generalized Black-Scholes equation in which the volatility may depend on the second derivative of the option price itself. We prove existence and uniqueness of a solution to the free boundary problem. We derive a single implicit equation for the free boundary position and the closed form formula for the option price. It is a generalization of the well-known explicit closed form solution derived by Merton for the case of a constant volatility. We also present results of numerical computations of the free boundary position, option price and their dependence on model parameters.
We investigate qualitative and quantitative behavior of a solution of the mathematical model for pricing American style of perpetual put options. We assume the option price is a solution to the stationary generalized Black-Scholes equation in which the volatility function may depend on the second derivative of the option price itself. We prove existence and uniqueness of a solution to the free boundary problem. We derive a single implicit equation for the free boundary position and the closed form formula for the option price. It is a generalization of the well-known explicit closed form solution derived by Merton for the case of a constant volatility. We also present results of numerical computations of the free boundary position, option price and their dependence on model parameters.
[ { "type": "R", "before": "to the problem of pricing American style of perpetual put options. We assume the option priceWe investigate qualitative and quantitative behavior of a solution to the problem of", "after": "of the mathematical model for", "start_char_pos": 67, "end_char_pos": 244 }, { "type": "R", "before": "a stationary generalized Black-Scholes equation in which the volatility may depend on the second derivative of the option price itself. We prove existence and uniqueness of a solution to the free boundary problem. We derive a single implicit equation for the free boundary position and the closed form formula for the option price. It is a generalization of the well-known explicit closed form solution derived by Merton for the case of a constant volatility. We also present results of numerical computations of the free boundary position, option price and their dependence on model parameters. is a solution to a", "after": "the", "start_char_pos": 338, "end_char_pos": 952 }, { "type": "A", "before": null, "after": "function", "start_char_pos": 1023, "end_char_pos": 1023 } ]
[ 0, 133, 293, 473, 551, 669, 797, 1087, 1165, 1283, 1411 ]
1611.00997
1
We present a generic solver for dynamical portfolio allocation problem when the market exhibits return predictability and price impact as well as partial observability. We assume that the prices modeling can be encoded into a linear state-space and show how the problem then falls into the LQG framework. We derive the optimal control policy and introduce tools to analyze it that preserve the intelligibility of the solution. Furthermore, we link theoretical assumptions for existence and uniqueness of the optimal controller to a dynamical non-arbitrage criterion. Finally, we illustrate our method on a synthetic portfolio allocation problem and provide intuition about the behavior of the controlled system .
We introduce a generic solver for dynamic portfolio allocation problems when the market exhibits return predictability , price impact and partial observability. We assume that the price modeling can be encoded into a linear state-space and we demonstrate how the problem then falls into the LQG framework. We derive the optimal control policy and introduce analytical tools that preserve the intelligibility of the solution. Furthermore, we link the existence and uniqueness of the optimal controller to a dynamical non-arbitrage criterion. Finally, we illustrate our method using a synthetic portfolio allocation problem .
[ { "type": "R", "before": "present", "after": "introduce", "start_char_pos": 3, "end_char_pos": 10 }, { "type": "R", "before": "dynamical portfolio allocation problem", "after": "dynamic portfolio allocation problems", "start_char_pos": 32, "end_char_pos": 70 }, { "type": "R", "before": "and price impact as well as", "after": ", price impact and", "start_char_pos": 118, "end_char_pos": 145 }, { "type": "R", "before": "prices", "after": "price", "start_char_pos": 188, "end_char_pos": 194 }, { "type": "R", "before": "show", "after": "we demonstrate", "start_char_pos": 249, "end_char_pos": 253 }, { "type": "R", "before": "tools to analyze it", "after": "analytical tools", "start_char_pos": 356, "end_char_pos": 375 }, { "type": "R", "before": "theoretical assumptions for", "after": "the", "start_char_pos": 448, "end_char_pos": 475 }, { "type": "R", "before": "on", "after": "using", "start_char_pos": 601, "end_char_pos": 603 }, { "type": "D", "before": "and provide intuition about the behavior of the controlled system", "after": null, "start_char_pos": 645, "end_char_pos": 710 } ]
[ 0, 168, 304, 426, 566 ]
1611.01114
1
Natural genetic variation between individuals in a population leads to variations in gene expression that are informative for the inference of gene regulatory networks. Particularly, genome-wide genotype and transcriptome data from the same samples allow for causal inference between gene expression traits using the DNA variations in cis-regulatory regions as causal anchors . However, existing causal inference programs are not efficient enough for contemporary datasets, and unrealistically assume the absence of hidden confounders affecting the coexpression of causally related gene pairs. Here we propose alternative statistical tests to infer causal effects in the presence of confounding and weak regulations , and implemented both the novel and the traditional causal inference tests in the software package Findr (Fast Inference of Networks from Directed Regulations), achieving thousands to millions of times of speedup due to analytical false positive rate estimation and implementational optimizations. A systematic evaluation using simulated data from the DREAM5 Systems Genetics challenge demonstrated that the novel tests outperformed existing causal inference methods as well as all challenge submissions. We confirmed these results using siRNA silencing, ChIP-sequencing and microRNA target data to validate causal gene-gene and microRNA-gene interactions inferred from genotype, microRNA and mRNA sequencing data of nearly 400 human individuals from the Geuvadis study. Findr provides the community with the first efficient and accurate causal inference tool for modern datasets of tens of thousands of RNA expression traits and genotypes from hundreds or more human individuals . Findr is publicly available at URL
Mapping gene expression as a quantitative trait using whole genome-sequencing and transcriptome analysis allows to discover the functional consequences of genetic variation. We developed a novel method and ultra-fast software Findr for higly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors , which improves current methods by taking into account hidden confounders and weak regulations . Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster . Findr is publicly available at URL
[ { "type": "R", "before": "Natural genetic variation between individuals in a population leads to variations in gene expression that are informative for the inference of gene regulatory networks. Particularly, genome-wide genotype and transcriptome data from the same samples allow for", "after": "Mapping gene expression as a quantitative trait using whole genome-sequencing and transcriptome analysis allows to discover the functional consequences of genetic variation. We developed a novel method and ultra-fast software Findr for higly accurate", "start_char_pos": 0, "end_char_pos": 258 }, { "type": "R", "before": "the DNA variations in cis-regulatory regions", "after": "cis-regulatory DNA variations", "start_char_pos": 313, "end_char_pos": 357 }, { "type": "R", "before": ". However, existing causal inference programs are not efficient enough for contemporary datasets, and unrealistically assume the absence of hidden confounders affecting the coexpression of causally related gene pairs. Here we propose alternative statistical tests to infer causal effects in the presence of confounding", "after": ", which improves current methods by taking into account hidden confounders", "start_char_pos": 376, "end_char_pos": 694 }, { "type": "R", "before": ", and implemented both the novel and the traditional causal inference tests in the software package Findr (Fast Inference of Networks from Directed Regulations), achieving thousands to millions of times of speedup due to analytical false positive rate estimation and implementational optimizations. A systematic evaluation using simulated data from", "after": ". Findr outperformed existing methods on", "start_char_pos": 716, "end_char_pos": 1064 }, { "type": "D", "before": "demonstrated that the novel tests outperformed existing causal inference methods as well as all challenge submissions. We confirmed these results using siRNA silencing, ChIP-sequencing and microRNA target data to validate causal gene-gene", "after": null, "start_char_pos": 1103, "end_char_pos": 1341 }, { "type": "R", "before": "microRNA-gene interactions inferred from genotype, microRNA and mRNA sequencing data of nearly 400 human individuals from the Geuvadis study. Findr provides the community with the first efficient and accurate causal inference tool for modern datasets of tens of thousands of RNA expression traits and genotypes from hundreds or more human individuals", "after": "on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster", "start_char_pos": 1346, "end_char_pos": 1696 } ]
[ 0, 168, 377, 593, 1014, 1221, 1487 ]
1611.01958
1
In this paper we estimate the mean-variance (MV) portfolio in the high-dimensional case using the recent results from the theory of random matrices. We construct a linear shrinkage estimator which is distribution-free and is optimal in the sense of maximizing with probability 1 the asymptotic out-of-sample expected utility, i.e., mean-variance objective function. Its asymptotic properties are investigated when the number of assets p together with the sample size n tend to infinity such that p/n \rightarrow c\in (0,+\infty). The results are obtained under weak assumptions imposed on the distribution of the asset returns, namely the existence of the fourth moments . Thereafter we perform numerical and empirical studies where the small- and large-sample behavior of the derived estimator are investigated. The resulting estimator shows significant improvements over the naive diversification and it is robust to the deviations from normality.
In this paper we estimate the mean-variance (MV) portfolio in the high-dimensional case using the recent results from the theory of random matrices. We construct a linear shrinkage estimator which is distribution-free and is optimal in the sense of maximizing with probability 1 the asymptotic out-of-sample expected utility, i.e., mean-variance objective function. Its asymptotic properties are investigated when the number of assets p together with the sample size n tend to infinity such that p/n \rightarrow c\in (0,+\infty). The results are obtained under weak assumptions imposed on the distribution of the asset returns, namely the existence of the fourth moments is only required . Thereafter we perform numerical and empirical studies where the small- and large-sample behavior of the derived estimator is investigated. The suggested estimator shows significant improvements over the naive diversification and it is robust to the deviations from normality.
[ { "type": "A", "before": null, "after": "is only required", "start_char_pos": 671, "end_char_pos": 671 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 796, "end_char_pos": 799 }, { "type": "R", "before": "resulting", "after": "suggested", "start_char_pos": 818, "end_char_pos": 827 } ]
[ 0, 148, 365, 529, 673, 813 ]
1611.02547
1
This paper studies the optimal extraction and taxation of nonrenewable natural resources. It is well known the market values of the main strategic resources such as oil, natural gas, uranium, copper,..., etc, fluctuate randomly following global and seasonal macro-economic parameters, these values are modeled using Markov switching L\'evy processes. We formulate this problem as a differential game where the two players are the mining company whose aim is to maximize the revenues generated from its extracting activities and the government agency in charge of regulating and taxing natural resources. We prove the existence of a Nash equilibrium and characterize the value functions of this differential game as the unique viscosity solutions of the corresponding Hamilton Jacobi Isaacs equations. Furthermore, optimal extraction and taxation policies that should be applied when the equilibrium is reached are derived . In addition, we construct and prove the convergence of a numerical scheme for approximating the value functions and optimal policies. A numerical example is presented to illustrate our findings.
This paper studies the optimal extraction and taxation of nonrenewable natural resources. It is well known that the market values of the main strategic resources such as oil, natural gas, uranium, copper,..., etc, fluctuate randomly following global and seasonal macroeconomic parameters, these values are modeled using Markov switching L\'evy processes. We formulate this problem as a differential game . The two players of this differential game are the mining company whose aim is to maximize the revenues generated from its extracting activities and the government agency in charge of regulating and taxing natural resources. We prove the existence of a Nash equilibrium . The corresponding Hamilton Jacobi Isaacs equations are completely solved and the value functions as well as the optimal extraction and taxation rates are derived in closed-form. A Numerical example is presented to illustrate our findings.
[ { "type": "A", "before": null, "after": "that", "start_char_pos": 107, "end_char_pos": 107 }, { "type": "R", "before": "macro-economic", "after": "macroeconomic", "start_char_pos": 259, "end_char_pos": 273 }, { "type": "R", "before": "where the two players", "after": ". The two players of this differential game", "start_char_pos": 401, "end_char_pos": 422 }, { "type": "R", "before": "and characterize", "after": ". The corresponding Hamilton Jacobi Isaacs equations are completely solved and", "start_char_pos": 650, "end_char_pos": 666 }, { "type": "R", "before": "of this differential game as the unique viscosity solutions of the corresponding Hamilton Jacobi Isaacs equations. Furthermore,", "after": "as well as the", "start_char_pos": 687, "end_char_pos": 814 }, { "type": "R", "before": "policies that should be applied when the equilibrium is reached are derived . In addition, we construct and prove the convergence of a numerical scheme for approximating the value functions and optimal policies. A numerical", "after": "rates are derived in closed-form. A Numerical", "start_char_pos": 847, "end_char_pos": 1070 } ]
[ 0, 89, 351, 604, 801, 924, 1058 ]
1611.03144
1
In a previous article, an algorithm for discovering therapeutic targets in Boolean networks modeling disease mechanisms was introduced. In the present article, the updates made on this algorithm, named kali, are described. These updates are : i) the possibility to work on asynchronous Boolean networks, ii) a smarter search for therapeutic targets , and iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system correspond to the phenotypes of the modeled biological system. Given a logical model of a pathophysiology, either Boolean or multivalued, kali searches for which biological components should be therapeutically disturbed in order to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing the likelinessof pathological phenotypes . kali is illustrated on a simple example network and shows that it can find therapeutic targets able to reduce the likeliness of pathological phenotypes . However, like any computational tool, kali can predict but can not replace human expertise: it is an aid for coping with the complexity of biological systems .
In a previous article, an algorithm for identifying therapeutic targets in Boolean networks modeling pathological mechanisms was introduced. In the present article, the improvements made on this algorithm, named kali, are described. These improvements are i) the possibility to work on asynchronous Boolean networks, ii) a finer assessment of therapeutic targets and iii) the possibility to use multivalued logic. kali assumes that the attractors of a dynamical system , such as a Boolean network, are associated with the phenotypes of the modeled biological system. Given a logic-based model of pathological mechanisms, kali searches for therapeutic targets able to reduce the reachability of the attractors associated with pathological phenotypes, thus reducing their likeliness . kali is illustrated on an example network and used on a biological case study. This case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results . However, like any computational tool, kali can predict but can not replace human expertise: it is a supporting tool for coping with the complexity of biological systems in the field of drug discovery .
[ { "type": "R", "before": "discovering", "after": "identifying", "start_char_pos": 40, "end_char_pos": 51 }, { "type": "R", "before": "disease", "after": "pathological", "start_char_pos": 101, "end_char_pos": 108 }, { "type": "R", "before": "updates", "after": "improvements", "start_char_pos": 164, "end_char_pos": 171 }, { "type": "R", "before": "updates are :", "after": "improvements are", "start_char_pos": 229, "end_char_pos": 242 }, { "type": "R", "before": "smarter search for therapeutic targets ,", "after": "finer assessment of therapeutic targets", "start_char_pos": 310, "end_char_pos": 350 }, { "type": "R", "before": "correspond to", "after": ", such as a Boolean network, are associated with", "start_char_pos": 457, "end_char_pos": 470 }, { "type": "R", "before": "logical model of a pathophysiology, either Boolean or multivalued,", "after": "logic-based model of pathological mechanisms,", "start_char_pos": 528, "end_char_pos": 594 }, { "type": "R", "before": "which biological components should be therapeutically disturbed in order", "after": "therapeutic targets able", "start_char_pos": 613, "end_char_pos": 685 }, { "type": "R", "before": "the likelinessof pathological phenotypes", "after": "their likeliness", "start_char_pos": 786, "end_char_pos": 826 }, { "type": "R", "before": "a simple", "after": "an", "start_char_pos": 852, "end_char_pos": 860 }, { "type": "R", "before": "shows that it can find therapeutic targets able to reduce the likeliness of pathological phenotypes", "after": "used on a biological case study. This case study is a published logic-based model of bladder tumorigenesis from which kali returns consistent results", "start_char_pos": 881, "end_char_pos": 980 }, { "type": "R", "before": "an aid", "after": "a supporting tool", "start_char_pos": 1081, "end_char_pos": 1087 }, { "type": "A", "before": null, "after": "in the field of drug discovery", "start_char_pos": 1141, "end_char_pos": 1141 } ]
[ 0, 135, 222, 519, 982 ]
1611.04941
1
Cash management models determine policies based either on the statistical properties of daily cash flow or on forecasts . Usual assumptions on the statistical properties of daily cash flow include normality, independence and stationarity. Surprisingly, little empirical evidence confirming these assumptions has been provided. In this work, we provide a comprehensive study on 54 real-world daily cash flow data sets, which we also make publicly available. Apart from the previous assumptions, we also consider linearity, meaning that cash flow is proportional to a particular explanatory variable, and we propose a new cross-validated test for time series non-linearity . We further analyze the implications of all aforementioned assumptions for forecasting, showing that: (i) the usual assumption of normality, independence and stationarity hardly appear; (ii) non-linearity is often relevant for forecasting; and (iii) common data transformations such as outlier treatment and Box-Cox have little impact on linearity and normality. Our results highlight the utility of non-linear models as a justifiable alternative for time series forecasting .
Cash managers make daily decisions based on predicted monetary inflows from debtors and outflows to creditors . Usual assumptions on the statistical properties of daily net cash flow include normality, absence of correlation and stationarity. We provide a comprehensive study based on a real-world cash flow data set from small and medium companies, which is the most common type of companies in Europe. We also propose a new cross-validated test for time-series non-linearity showing that: (i) the usual assumption of normality, absence of correlation and stationarity hardly appear; (ii) non-linearity is often relevant for forecasting; and (iii) typical data transformations have little impact on linearity and normality. Our results provide a forecasting strategy for cash flow management which performs better than classical methods. This evidence may lead to consider a more data-driven approach such as time-series forecasting in an attempt to provide cash managers with expert systems in cash management .
[ { "type": "R", "before": "management models determine policies based either on the statistical properties of daily cash flow or on forecasts", "after": "managers make daily decisions based on predicted monetary inflows from debtors and outflows to creditors", "start_char_pos": 5, "end_char_pos": 119 }, { "type": "A", "before": null, "after": "net", "start_char_pos": 179, "end_char_pos": 179 }, { "type": "R", "before": "independence", "after": "absence of correlation", "start_char_pos": 209, "end_char_pos": 221 }, { "type": "R", "before": "Surprisingly, little empirical evidence confirming these assumptions has been provided. In this work, we", "after": "We", "start_char_pos": 240, "end_char_pos": 344 }, { "type": "R", "before": "on 54", "after": "based on a", "start_char_pos": 375, "end_char_pos": 380 }, { "type": "D", "before": "daily", "after": null, "start_char_pos": 392, "end_char_pos": 397 }, { "type": "R", "before": "sets, which we also make publicly available. Apart from the previous assumptions, we also consider linearity, meaning that cash flow is proportional to a particular explanatory variable, and we", "after": "set from small and medium companies, which is the most common type of companies in Europe. We also", "start_char_pos": 413, "end_char_pos": 606 }, { "type": "R", "before": "time series", "after": "time-series", "start_char_pos": 646, "end_char_pos": 657 }, { "type": "D", "before": ". We further analyze the implications of all aforementioned assumptions for forecasting,", "after": null, "start_char_pos": 672, "end_char_pos": 760 }, { "type": "R", "before": "independence", "after": "absence of correlation", "start_char_pos": 814, "end_char_pos": 826 }, { "type": "R", "before": "common data transformations such as outlier treatment and Box-Cox", "after": "typical data transformations", "start_char_pos": 923, "end_char_pos": 988 }, { "type": "R", "before": "highlight the utility of non-linear models as a justifiable alternative for time series forecasting", "after": "provide a forecasting strategy for cash flow management which performs better than classical methods. This evidence may lead to consider a more data-driven approach such as time-series forecasting in an attempt to provide cash managers with expert systems in cash management", "start_char_pos": 1048, "end_char_pos": 1147 } ]
[ 0, 239, 327, 457, 673, 858, 912, 1035 ]
1611.05149
1
It is well known that population sizes increase exponentially during balanced growth . Concomitantly, at the single-cell level, the sizes of individual cells themselves increase exponentially; the single-cell exponential growth-rate also determines the statistics of cell size and cell division time distributions. Seeking an integrated perspective of microbial growth dynamics under balanced conditions, we formulate a theoretical framework that takes into account observables at both single-cell and population scales. Our exact analytical solutions for both symmetric and asymmetric cell division reveal surprising effects of the stochastic single-cell dynamics on features of population growth. In particular, we find how the population growth rate is sensitive to the shape of the distribution of cell division times. We validate the model by quantitatively predicting the observed cell-age distributions without fitting parameters. Our model also provides a prescription for deducing the time for transitioning from the swarmer (reproductively quiescent ) to stalked (reproduction able) stage of the {\em C. crescentus} lifecycle ; our predictions match with previous indirect estimates. We discuss the scalings of all timescales with external parameters that control the balanced growth state . For {\em C. crescentus} cells, we show that the rate of exponential growth of single (stalked) cells governs the dynamics of the entire lifecycle, including the swarmer-to-stalked cell transition .
How are granular details of stochastic growth and division of individual cells reflected in smooth deterministic growth of population numbers? We provide an integrated, multiscale perspective of microbial growth dynamics by formulating a data-validated theoretical framework that accounts for observables at both single-cell and population scales. We derive exact analytical complete time-dependent solutions to cell-age distributions and population growth rates as functionals of the underlying interdivision time distributions, for symmetric and asymmetric cell division . These results provide insights into the surprising implications of stochastic single-cell dynamics for population growth. Using our results for asymmetric division, we deduce the time to transition from the reproductively quiescent (swarmer) to replication-competent (stalked ) stage of the {\em Caulobacter crescentus} lifecycle . Remarkably, population numbers can spontaneously oscillate with time. We elucidate the physics leading to these population oscillations . For {\em C. crescentus} cells, we show that a simple measurement of the population growth rate, for a given growth condition, is sufficient to characterize the condition-specific cellular unit of time, and thus yields the mean (single-cell) growth and division timescales, fluctuations in cell division times, the cell age distribution, and the quiescence timescale .
[ { "type": "R", "before": "It is well known that population sizes increase exponentially during balanced growth . Concomitantly, at the single-cell level, the sizes", "after": "How are granular details of stochastic growth and division", "start_char_pos": 0, "end_char_pos": 137 }, { "type": "R", "before": "themselves increase exponentially; the single-cell exponential growth-rate also determines the statistics of cell size and cell division time distributions. Seeking an integrated", "after": "reflected in smooth deterministic growth of population numbers? We provide an integrated, multiscale", "start_char_pos": 158, "end_char_pos": 336 }, { "type": "R", "before": "under balanced conditions, we formulate a", "after": "by formulating a data-validated", "start_char_pos": 378, "end_char_pos": 419 }, { "type": "R", "before": "takes into account", "after": "accounts for", "start_char_pos": 447, "end_char_pos": 465 }, { "type": "R", "before": "Our exact analytical solutions for both", "after": "We derive exact analytical complete time-dependent solutions to cell-age distributions and population growth rates as functionals of the underlying interdivision time distributions, for", "start_char_pos": 521, "end_char_pos": 560 }, { "type": "R", "before": "reveal surprising effects of the", "after": ". These results provide insights into the surprising implications of", "start_char_pos": 600, "end_char_pos": 632 }, { "type": "R", "before": "on features of", "after": "for", "start_char_pos": 665, "end_char_pos": 679 }, { "type": "R", "before": "In particular, we find how the population growth rate is sensitive to the shape of the distribution of cell division times. We validate the model by quantitatively predicting the observed cell-age distributions without fitting parameters. Our model also provides a prescription for deducing the time for transitioning from the swarmer (reproductively quiescent", "after": "Using our results for asymmetric division, we deduce the time to transition from the reproductively quiescent (swarmer) to replication-competent (stalked", "start_char_pos": 699, "end_char_pos": 1059 }, { "type": "D", "before": "to stalked (reproduction able)", "after": null, "start_char_pos": 1062, "end_char_pos": 1092 }, { "type": "R", "before": "C.", "after": "Caulobacter", "start_char_pos": 1111, "end_char_pos": 1113 }, { "type": "R", "before": "; our predictions match with previous indirect estimates. We discuss the scalings of all timescales with external parameters that control the balanced growth state", "after": ". Remarkably, population numbers can spontaneously oscillate with time. We elucidate the physics leading to these population oscillations", "start_char_pos": 1136, "end_char_pos": 1299 }, { "type": "R", "before": "the rate of exponential growth of single (stalked) cells governs the dynamics of the entire lifecycle, including the swarmer-to-stalked cell transition", "after": "a simple measurement of the population growth rate, for a given growth condition, is sufficient to characterize the condition-specific cellular unit of time, and thus yields the mean (single-cell) growth and division timescales, fluctuations in cell division times, the cell age distribution, and the quiescence timescale", "start_char_pos": 1346, "end_char_pos": 1497 } ]
[ 0, 192, 314, 520, 698, 822, 937, 1137, 1193, 1301 ]
1611.05707
1
A continuum model for epithelial tissue mechanics is formulated from cell level mechanical ingredients and morphogenetic cell dynamics, including cell shape changes and cell rearrangements. The model is capable of dealing with finite deformation, and uses stress and deformation tensors that can be compared with experimental data. Using the model, we uncover the dynamical behaviour that underlies passive relaxationand active contraction-elongation of a tissue . The present work provides an integrated scheme for understanding the mechanisms by which morphogenetic processes of each individual cell collectively lead to the development of a large tissue with its correct shape and size .
A continuum model of epithelial tissue mechanics was formulated using cellular-level mechanical ingredients and cell morphogenetic processes, including cellular shape changes and cellular rearrangements. This model can include finite deformation, and incorporates stress and deformation tensors , which can be compared with experimental data. Using this model, we elucidated dynamical behavior underlying passive relaxation, active contraction-elongation , and tissue shear flow. This study provides an integrated scheme for the understanding of the mechanisms that are involved in orchestrating the morphogenetic processes in individual cells, in order to achieve epithelial tissue morphogenesis .
[ { "type": "R", "before": "for", "after": "of", "start_char_pos": 18, "end_char_pos": 21 }, { "type": "R", "before": "is formulated from cell level", "after": "was formulated using cellular-level", "start_char_pos": 50, "end_char_pos": 79 }, { "type": "D", "before": "morphogenetic cell dynamics, including", "after": null, "start_char_pos": 107, "end_char_pos": 145 }, { "type": "A", "before": null, "after": "morphogenetic processes, including cellular", "start_char_pos": 151, "end_char_pos": 151 }, { "type": "R", "before": "cell rearrangements. The model is capable of dealing with", "after": "cellular rearrangements. This model can include", "start_char_pos": 170, "end_char_pos": 227 }, { "type": "R", "before": "uses", "after": "incorporates", "start_char_pos": 252, "end_char_pos": 256 }, { "type": "R", "before": "that", "after": ", which", "start_char_pos": 288, "end_char_pos": 292 }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 339, "end_char_pos": 342 }, { "type": "R", "before": "uncover the dynamical behaviour that underlies passive relaxationand", "after": "elucidated dynamical behavior underlying passive relaxation,", "start_char_pos": 353, "end_char_pos": 421 }, { "type": "R", "before": "of a tissue . The present work", "after": ", and tissue shear flow. This study", "start_char_pos": 452, "end_char_pos": 482 }, { "type": "R", "before": "understanding the mechanisms by which morphogenetic processes of each individual cell collectively lead to the development of a large tissue with its correct shape and size", "after": "the understanding of the mechanisms that are involved in orchestrating the morphogenetic processes in individual cells, in order to achieve epithelial tissue morphogenesis", "start_char_pos": 517, "end_char_pos": 689 } ]
[ 0, 190, 332, 465 ]
1611.06053
1
Sensors are the first element of the pathways that control the response of cells to their environment. After chemical, the next most important cue is mechanical, and protein complexes that produce or enable a chemical signal in response to a mechanical stimulus are called mechanosensors. There is a sharp distinction between sensing an external force or pressure/tension applied to the cell, and sensing the mechanical stiffness of the environment. We call the first mechanosensitivity of the 1st kind, and the latter mechanosensitivity of the 2nd kind. There are two variants of protein complexes that act as mechanosensors of the 2nd kind: producing the one-off or a reversible action. The latent complex of TGF-beta is an example of the one-off action: on the release of active TGF-beta signal, the complex is discarded and needs to be replaced. In contrast, the focal adhesion kinase (FAK) in a complex with integrin is a reversible mechanosensor, which initiates the chemical signal in its active phosphorylated conformation, but can spontaneously return to its closed folded conformation. Here we study the physical mechanism of the reversible mechanosensor of the 2nd kind, using FAK as a practical example , and find how the rates of conformation changes depend on the substrate stiffness and the pulling force applied from the cell cytoskeleton. The results compare well with the phenotype observations of cells on different substrates.
Sensors are the first element of the pathways that control the response of cells to their environment. After chemical, the next most important cue is mechanical, and protein complexes that produce or enable a chemical signal in response to a mechanical stimulus are called mechanosensors. There is a sharp distinction between sensing an external force or pressure/tension applied to the cell, and sensing the mechanical stiffness of the environment. We call the first mechanosensitivity of the 1st kind, and the latter mechanosensitivity of the 2nd kind. There are two variants of protein complexes that act as mechanosensors of the 2nd kind: producing either a one-off or a reversible action. The latent complex of TGF-\beta is an example of the one-off action: on the release of active TGF-\beta signal, the complex is discarded and needs to be replaced. In contrast, focal adhesion kinase (FAK) in a complex with integrin is a reversible mechanosensor, which initiates the chemical signal in its active phosphorylated conformation, but can spontaneously return to its closed folded conformation. Here we study the physical mechanism of the reversible mechanosensor of the 2nd kind, using FAK as a practical example . We find how the rates of conformation changes depend on the substrate stiffness and the pulling force applied from the cell cytoskeleton. The results compare well with the phenotype observations of cells on different substrates.
[ { "type": "R", "before": "the", "after": "either a", "start_char_pos": 653, "end_char_pos": 656 }, { "type": "R", "before": "TGF-beta", "after": "TGF-\\beta", "start_char_pos": 711, "end_char_pos": 719 }, { "type": "R", "before": "TGF-beta", "after": "TGF-\\beta", "start_char_pos": 782, "end_char_pos": 790 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 863, "end_char_pos": 866 }, { "type": "R", "before": ", and", "after": ". We", "start_char_pos": 1215, "end_char_pos": 1220 } ]
[ 0, 102, 288, 449, 554, 688, 849, 1095, 1355 ]
1611.06218
1
In the dual L ^{\Phi^* of a \Delta_2-Orlicz space L ^\Phi, we show that a proper (resp. finite) convex function is lower semicontinuous (resp. continuous) for the Mackey topology \tau(L ^{\Phi^*,L^\Phi} ,L_\Phi} ) if and only if on each order interval [-\zeta,\zeta]=\{\xi: -\zeta\leq \xi\leq\zeta\} (\zeta\in L ^{\Phi^* ), it is lower semicontinuous (resp. continuous) for the topology of convergence in probability. For this purpose, we provide the following Koml\'os type result: every norm bounded sequence (\xi_n)_n in L ^{\Phi^* admits a sequence of forward convex combinations %DIFDELCMD < \xi%%% \bar _n\inconv(\xi_n,n+1,...) such that \sup_n| %DIFDELCMD < \xi%%% \bar _n|\in L ^{\Phi^* and }%DIFDELCMD < \bar{\xi}%%% and }\bar _n converges a.s.
In the dual L _{\Phi^* of a \Delta_2-Orlicz space L _\Phi, that we call a dual Orlicz space, we show that a proper (resp. finite) convex function is lower semicontinuous (resp. continuous) for the Mackey topology \tau(L ,L^\Phi} _{\Phi^*,L_\Phi} ) if and only if on each order interval [-\zeta,\zeta]=\{\xi: -\zeta\leq \xi\leq\zeta\} (\zeta\in L _{\Phi^* ), it is lower semicontinuous (resp. continuous) for the topology of convergence in probability. For this purpose, we provide the following Koml\'os type result: every norm bounded sequence (\xi_n)_n in L _{\Phi^* admits a sequence of forward convex combinations %DIFDELCMD < \xi%%% \bar\xi _n\inconv(\xi_n,n+1,...) such that \sup_n| %DIFDELCMD < \xi%%% \bar\xi _n|\in L and }%DIFDELCMD < \bar{\xi}%%% _{\Phi^* and }\bar\xi _n converges a.s.
[ { "type": "R", "before": "^{\\Phi^*", "after": "_{\\Phi^*", "start_char_pos": 14, "end_char_pos": 22 }, { "type": "R", "before": "^\\Phi, we", "after": "_\\Phi, that we call a dual Orlicz space, we", "start_char_pos": 52, "end_char_pos": 61 }, { "type": "D", "before": "^{\\Phi^*", "after": null, "start_char_pos": 186, "end_char_pos": 194 }, { "type": "A", "before": null, "after": "_{\\Phi^*", "start_char_pos": 203, "end_char_pos": 203 }, { "type": "R", "before": "^{\\Phi^*", "after": "_{\\Phi^*", "start_char_pos": 312, "end_char_pos": 320 }, { "type": "R", "before": "^{\\Phi^*", "after": "_{\\Phi^*", "start_char_pos": 526, "end_char_pos": 534 }, { "type": "A", "before": null, "after": "\\xi", "start_char_pos": 608, "end_char_pos": 608 }, { "type": "A", "before": null, "after": "\\xi", "start_char_pos": 676, "end_char_pos": 676 }, { "type": "D", "before": "^{\\Phi^*", "after": null, "start_char_pos": 686, "end_char_pos": 694 }, { "type": "A", "before": null, "after": "_{\\Phi^*", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "A", "before": null, "after": "\\xi", "start_char_pos": 736, "end_char_pos": 736 } ]
[ 0, 417 ]
1611.06672
1
We propose a simple model of inter-bank lending and borrowing incorporating a game feature where the evolution of monetary reserve is described by a system of coupled Feller diffusions. The optimization subject to the quadratic cost reflects the desire of each bank to borrow from or lend to a central bank through manipulating its lending preference and the intention of each bank to deposit in the central bank in order to control the volatility for cost minimization. We observe that the adding liquidity creates a flocking effect leading to stability or systemic risk depending on the level of the growth rate. The deposit rate diminishes the growth of the total monetary reserve causing a large number of bank defaults. The central bank acts as a central deposit corporation. In addition, the corresponding Mean Field Game in the case of the number of banks N large and the infinite time horizon stochastic game with the discount factor are also discussed.
We propose a simple model of the banking system incorporating a game feature where the evolution of monetary reserve is modeled as a system of coupled Feller diffusions. The Markov Nash equilibrium generated through minimizing the linear quadratic cost subject to Cox-Ingersoll-Ross type processes creates liquidity and deposit rate. The adding liquidity leads to a flocking effect but the deposit rate diminishes the growth rate of the total monetary reserve causing a large number of bank defaults. In addition, the corresponding Mean Field Game and the infinite time horizon stochastic game with the discount factor are also discussed.
[ { "type": "R", "before": "inter-bank lending and borrowing", "after": "the banking system", "start_char_pos": 29, "end_char_pos": 61 }, { "type": "R", "before": "described by", "after": "modeled as", "start_char_pos": 134, "end_char_pos": 146 }, { "type": "R", "before": "optimization subject to the quadratic cost reflects the desire of each bank to borrow from or lend to a central bank through manipulating its lending preference and the intention of each bank to deposit in the central bank in order to control the volatility for cost minimization. We observe that the adding liquidity creates", "after": "Markov Nash equilibrium generated through minimizing the linear quadratic cost subject to Cox-Ingersoll-Ross type processes creates liquidity and deposit rate. The adding liquidity leads to", "start_char_pos": 190, "end_char_pos": 515 }, { "type": "R", "before": "leading to stability or systemic risk depending on the level of the growth rate. The", "after": "but the", "start_char_pos": 534, "end_char_pos": 618 }, { "type": "A", "before": null, "after": "rate", "start_char_pos": 654, "end_char_pos": 654 }, { "type": "D", "before": "The central bank acts as a central deposit corporation.", "after": null, "start_char_pos": 726, "end_char_pos": 781 }, { "type": "D", "before": "in the case of the number of banks N large", "after": null, "start_char_pos": 829, "end_char_pos": 871 } ]
[ 0, 185, 470, 614, 725, 781 ]
1611.07432
1
In this paper we study the possible "chaotic" nature of some energy and commodity futures time series (like heating oil and natural gas, among the others). In particular the sensitive dependence on initial conditions (the so called "butterfly effect", which represents one of the characterizing properties of a chaotic system) is investigated estimating the Kolmogorov entropy, in addition to the maximum Lyapunov exponent. The results obtained with these two methods are consistent and should indicate the presence of butterfly effect. Nevertheless, this phenomenon - which is usually showed by deterministic systems - is not here completely deterministic. In fact, using a test introduced by Kaplan and Glass, we prove that , for all the series analyzed here, the stochastic component and the deterministic one turn up to be approximately in the same proportions. The presence of butterfly effect in energy futures markets is a controversial matter, and the evaluations obtained here confirm the findings of some authors cited in this paper . Thus, we can say with reasonable certainty that in energy futures markets we cannot talk about deterministicbutterfly effect .
We test whether the futures prices of some commodity and energy markets are determined by stochastic rules or exhibit nonlinear deterministic endogenous fluctuations. As for the methodologies, we use the maximal Lyapunov exponents (MLE) and a determinism test, both based on the reconstruction of the phase space. In particular, employing a recent methodology, we estimate a coefficient \kappa that describes the determinism rate of the analyzed time series. We find that the underlying system for futures prices shows a reliability level \kappa near to 1 while the MLE is positive for all commodity futures series. Thus, the empirical evidence suggests that commodity and energy futures prices are the measured footprint of a nonlinear deterministic, rather than a stochastic, system.
[ { "type": "R", "before": "In this paper we study the possible \"chaotic\" nature of some energy and commodity futures time series (like heating oil and natural gas, among the others). In particular the sensitive dependence on initial conditions (the so called \"butterfly effect\", which represents one of the characterizing properties of a chaotic system) is investigated estimating the Kolmogorov entropy, in addition to the maximum Lyapunov exponent. The results obtained with these two methods are consistent and should indicate the presence of butterfly effect. Nevertheless, this phenomenon - which is usually showed by deterministic systems - is not here completely deterministic. In fact, using a test introduced by Kaplan and Glass, we prove that , for all the series analyzed here, the stochastic component and the deterministic one turn up to be approximately in the same proportions. The presence of butterfly effect in energy futures markets is a controversial matter, and the evaluations obtained here confirm the findings of some authors cited in this paper", "after": "We test whether the futures prices of some commodity and energy markets are determined by stochastic rules or exhibit nonlinear deterministic endogenous fluctuations. As for the methodologies, we use the maximal Lyapunov exponents (MLE) and a determinism test, both based on the reconstruction of the phase space. In particular, employing a recent methodology, we estimate a coefficient \\kappa that describes the determinism rate of the analyzed time series. We find that the underlying system for futures prices shows a reliability level \\kappa near to 1 while the MLE is positive for all commodity futures series", "start_char_pos": 0, "end_char_pos": 1042 }, { "type": "R", "before": "we can say with reasonable certainty that in energy futures markets we cannot talk about deterministicbutterfly effect", "after": "the empirical evidence suggests that commodity and energy futures prices are the measured footprint of a nonlinear deterministic, rather than a stochastic, system", "start_char_pos": 1051, "end_char_pos": 1169 } ]
[ 0, 155, 423, 536, 657, 865, 1044 ]
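The record above (1611.07432) rests on estimating a maximal Lyapunov exponent and a determinism rate from delay-embedded futures series. Below is a compact, purely illustrative Rosenstein-style MLE estimator; the embedding dimension, delay, exclusion window, fit range and the logistic-map test signal are assumptions for this sketch, not the settings or data used in the paper.

# Illustrative only: Rosenstein-style estimate of the maximal Lyapunov exponent
# from a scalar series via time-delay embedding and nearest-neighbour divergence.
import numpy as np

def max_lyapunov(x, dim=3, tau=1, exclude=10, k_max=8):
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    usable = n - k_max
    dists = np.linalg.norm(emb[:usable, None, :] - emb[None, :usable, :], axis=2)
    idx = np.arange(usable)
    # exclude temporally close points (including the point itself) from the search
    dists[np.abs(idx[:, None] - idx[None, :]) <= exclude] = np.inf
    nn = dists.argmin(axis=1)
    log_div = np.zeros(k_max)
    for k in range(k_max):
        d = np.linalg.norm(emb[idx + k] - emb[nn + k], axis=1)
        log_div[k] = np.mean(np.log(d[d > 0]))
    # slope of the mean log-divergence curve approximates the MLE per time step
    slope, _ = np.polyfit(np.arange(k_max), log_div, 1)
    return slope

# Sanity check on the fully chaotic logistic map: the estimate should be
# positive and roughly of the order of ln 2 ~ 0.69 per iteration.
x = np.empty(1200); x[0] = 0.3
for t in range(1199):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
print("estimated MLE:", max_lyapunov(x))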
1611.07843
1
This paper presents several models addressing optimal portfolio choice and optimal portfolio transition issues, in which the expected returns of risky assets are unknown. Our approach is based on a coupling between Bayesian learning and dynamic programming techniques. It permits to recover the well-known results of Karatzas and Zhao in the case of conjugate (Gaussian) priors for the drift distribution, but also to go beyond the no-friction case, when martingale methods are no longer available. In particular, we address optimal portfolio choice in a framework \`a la Almgren-Chriss and we build therefore a model in which the agent takes into account in his/her allocation decision process both the liquidity of assets and the uncertainty with respect to their expected returns. We also address optimal portfolio liquidation and optimal portfolio transition problems.
This paper presents several models addressing optimal portfolio choice, optimal portfolio liquidation, and optimal portfolio transition issues, in which the expected returns of risky assets are unknown. Our approach is based on a coupling between Bayesian learning and dynamic programming techniques that leads to partial differential equations. It enables to recover the well-known results of Karatzas and Zhao in a framework \`a la Merton, but also to deal with cases where martingale methods are no longer available. In particular, we address optimal portfolio choice, portfolio liquidation, and portfolio transition problems in a framework \`a la Almgren-Chriss, and we build therefore a model in which the agent takes into account in his decision process both the liquidity of assets and the uncertainty with respect to their expected return.
[ { "type": "A", "before": null, "after": ", optimal portfolio liquidation,", "start_char_pos": 71, "end_char_pos": 71 }, { "type": "R", "before": ". It permits", "after": "that leads to partial differential equations. It enables", "start_char_pos": 269, "end_char_pos": 281 }, { "type": "R", "before": "the case of conjugate (Gaussian) priors for the drift distribution", "after": "a framework", "start_char_pos": 340, "end_char_pos": 406 }, { "type": "A", "before": null, "after": "\\`a la", "start_char_pos": 406, "end_char_pos": 406 }, { "type": "A", "before": null, "after": "Merton", "start_char_pos": 407, "end_char_pos": 407 }, { "type": "R", "before": "go beyond the no-friction case, when", "after": "deal with cases where", "start_char_pos": 422, "end_char_pos": 458 }, { "type": "A", "before": null, "after": ", portfolio liquidation, and portfolio transition problems", "start_char_pos": 554, "end_char_pos": 554 }, { "type": "R", "before": "\\`a la", "after": "\\`a la", "start_char_pos": 570, "end_char_pos": 576 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 592, "end_char_pos": 592 }, { "type": "D", "before": "/her allocation", "after": null, "start_char_pos": 669, "end_char_pos": 684 }, { "type": "R", "before": "returns. We also address optimal portfolio liquidation and optimal portfolio transition problems", "after": "return", "start_char_pos": 782, "end_char_pos": 878 } ]
[ 0, 171, 270, 502, 790 ]
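The record above (1611.07843) couples Bayesian learning of unknown expected returns with dynamic programming. As a small illustration of the learning step only, the sketch below performs a conjugate Gaussian drift update and plugs the posterior mean into a myopic Merton-type weight; the prior, risk aversion and simulated data are assumptions for this demo, while the paper's actual models additionally handle liquidity and execution costs through PDE techniques.

# Illustrative only: conjugate Gaussian ("Bayesian learning") update of an unknown
# drift from observed returns, followed by a myopic Merton-style allocation weight
# computed from the posterior mean.  All parameters and data are assumed.
import numpy as np

def posterior_drift(returns, dt, sigma, prior_mean, prior_var):
    # Observation model: r_k ~ Normal(mu * dt, sigma^2 * dt), Gaussian prior on mu.
    n = len(returns)
    obs_precision = n * dt / sigma**2
    post_var = 1.0 / (1.0 / prior_var + obs_precision)
    post_mean = post_var * (prior_mean / prior_var + returns.sum() / sigma**2)
    return post_mean, post_var

rng = np.random.default_rng(1)
dt, sigma, true_mu = 1 / 252, 0.2, 0.08
returns = true_mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(504)  # two years of daily data

mu_hat, var_hat = posterior_drift(returns, dt, sigma, prior_mean=0.0, prior_var=0.05**2)
gamma = 3.0                                   # CRRA risk aversion (assumed)
merton_weight = mu_hat / (gamma * sigma**2)   # myopic allocation using the learned drift
print(f"posterior drift {mu_hat:.4f} +/- {np.sqrt(var_hat):.4f}, weight {merton_weight:.2f}")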