Dataset fields (each record below gives these six values in this order):
  doc_id           string, lengths 2 to 10
  revision_depth   string, 5 classes
  before_revision  string, lengths 3 to 309k
  after_revision   string, lengths 5 to 309k
  edit_actions     list
  sents_char_pos   list
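The edit_actions entries are JSON objects with a type ("R" = replace, "A" = add, "D" = delete), optional before/after strings, and character offsets that appear to index into before_revision. A minimal sketch of how the revised text could be rebuilt from those spans follows; the function name, the reverse-order splicing, and the parsed-record usage are illustrative assumptions, not part of the dataset.

def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Splice each edit span into `before` to rebuild the revised text.

    The character offsets are assumed to refer to `before`; applying the
    edits from the end of the string backwards keeps earlier offsets valid.
    """
    text = before
    for act in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # "D" (delete) actions carry no replacement text
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

# Hypothetical usage, assuming `record` is one parsed row with the fields listed above:
#   rebuilt = apply_edit_actions(record["before_revision"], record["edit_actions"])
#   assert rebuilt == record["after_revision"]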
1611.07843
2
This paper presents several models addressing optimal portfolio choice, optimal portfolio liquidation, and optimal portfolio transition issues, in which the expected returns of risky assets are unknown. Our approach is based on a coupling between Bayesian learning and dynamic programming techniques that leads to partial differential equations. It enables to recover the well-known results of Karatzas and Zhao in a framework \`a la Merton, but also to deal with cases where martingale methods are no longer available. In particular, we address optimal portfolio choice, portfolio liquidation, and portfolio transition problems in a framework \`a la Almgren-Chriss, and we build therefore a model in which the agent takes into account in his decision process both the liquidity of assets and the uncertainty with respect to their expected return.
This paper presents several models addressing optimal portfolio choice, optimal portfolio liquidation, and optimal portfolio transition issues, in which the expected returns of risky assets are unknown. Our approach is based on a coupling between Bayesian learning and dynamic programming techniques that leads to partial differential equations. It enables to recover the well-known results of Karatzas and Zhao in a framework \`a la Merton, but also to deal with cases where martingale methods are no longer available. In particular, we address optimal portfolio choice, portfolio liquidation, and portfolio transition problems in a framework \`a la Almgren-Chriss, and we build therefore a model in which the agent takes into account in his decision process both the liquidity of assets and the uncertainty with respect to their expected return.
[ { "type": "R", "before": "\\`a la", "after": "\\`a la", "start_char_pos": 427, "end_char_pos": 433 }, { "type": "R", "before": "\\`a la", "after": "\\`a la", "start_char_pos": 644, "end_char_pos": 650 } ]
[ 0, 202, 345, 519 ]
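The sents_char_pos values (e.g. [ 0, 202, 345, 519 ] for the record above) look like character offsets at which successive sentences of before_revision begin. A small helper, sketched under that assumption (treating the offsets as sentence starts, and the name split_sentences, are illustrative):

def split_sentences(text: str, sents_char_pos: list) -> list:
    """Cut `text` at the recorded offsets, treating them as sentence starts."""
    bounds = list(sents_char_pos) + [len(text)]
    return [text[a:b].strip() for a, b in zip(bounds, bounds[1:])]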
1612.00828
1
We develop a new method for hedging derivatives based on the premise that a hedger should not always rely on a universal set of trading instruments that are used to form a linear portfolio of the stock, riskless bond and standard derivatives, but rather should design a set of specific, most suited financial instruments for the hedging problem. We introduce a sequence of new financial instruments best suited for hedging in jump-diffusion and stochastic volatility market models and those with long-range dependence. Our methods lead to a new set of partial and partial and partial-integro differential equations for pricing derivatives .
In complete markets, there are risky assets and a riskless asset. It is assumed that the riskless asset and the risky asset are traded continuously in time and that the market is frictionless. In this paper, we propose a new method for hedging derivatives assuming that a hedger should not always rely on trading existing assets that are used to form a linear portfolio comprised of the risky asset, the riskless asset, and standard derivatives, but rather should design a set of specific, most-suited financial instruments for the hedging problem. We introduce a sequence of new financial instruments best suited for hedging jump-diffusion and stochastic volatility market models . The new instruments we introduce are perpetual derivatives. More specifically, they are options with perpetual maturities. In a financial market where perpetual derivatives are introduced, there is a new set of partial and partial-integro differential equations for pricing derivatives . Our analysis demonstrates that the set of new financial instruments together with a risk measure called the tail-loss ratio measure defined by the new instrument's return series can be potentially used as an early warning system for a market crash .
[ { "type": "R", "before": "We develop a", "after": "In complete markets, there are risky assets and a riskless asset. It is assumed that the riskless asset and the risky asset are traded continuously in time and that the market is frictionless. In this paper, we propose a", "start_char_pos": 0, "end_char_pos": 12 }, { "type": "R", "before": "based on the premise", "after": "assuming", "start_char_pos": 48, "end_char_pos": 68 }, { "type": "R", "before": "a universal set of trading instruments", "after": "trading existing assets", "start_char_pos": 109, "end_char_pos": 147 }, { "type": "R", "before": "of the stock, riskless bond", "after": "comprised of the risky asset, the riskless asset,", "start_char_pos": 189, "end_char_pos": 216 }, { "type": "R", "before": "most suited", "after": "most-suited", "start_char_pos": 287, "end_char_pos": 298 }, { "type": "D", "before": "in", "after": null, "start_char_pos": 423, "end_char_pos": 425 }, { "type": "R", "before": "and those with long-range dependence. Our methods lead to a", "after": ". The new instruments we introduce are perpetual derivatives. More specifically, they are options with perpetual maturities. In a financial market where perpetual derivatives are introduced, there is a", "start_char_pos": 481, "end_char_pos": 540 }, { "type": "D", "before": "partial and", "after": null, "start_char_pos": 564, "end_char_pos": 575 }, { "type": "A", "before": null, "after": ". Our analysis demonstrates that the set of new financial instruments together with a risk measure called the tail-loss ratio measure defined by the new instrument's return series can be potentially used as an early warning system for a market crash", "start_char_pos": 639, "end_char_pos": 639 } ]
[ 0, 345, 518 ]
1612.00981
1
We consider a simple model for the evolution of a limit order book in which limit orders of unit size arrive according to independent Poisson processes. The frequency of buy limit orders below a given price level, respectively sell limit orders above a given level are described by fixed demand and supply functions. Buy (resp. sell) limit orders that arrive above (resp. below) the current ask (resp. bid) price are converted into market orders. There is no cancellation of limit orders. This model has independently been reinvented by several authors, including Stigler in 1964 and Luckock in 2003, who was able to calculate the equilibrium distribution of the bid and ask prices. We extend the model by introducing market makers that simultaneously place both a buy and sell limit order at the current bid and ask price. We show how the introduction of market makers reduces the spread, which in the original model is unrealistically large. In particular, we are able to calculate the exact rate of market makers needed to close the spread completely .
We consider a simple model for the evolution of a limit order book in which limit orders of unit size arrive according to independent Poisson processes. The frequencies of buy limit orders below a given price level, respectively sell limit orders above a given level are described by fixed demand and supply functions. Buy (resp. sell) limit orders that arrive above (resp. below) the current ask (resp. bid) price are converted into market orders. There is no cancellation of limit orders. This model has independently been reinvented by several authors, including Stigler in 1964 and Luckock in 2003, who was able to calculate the equilibrium distribution of the bid and ask prices. We extend the model by introducing market makers that simultaneously place both a buy and sell limit order at the current bid and ask price. We show how the introduction of market makers reduces the spread, which in the original model is unrealistically large. In particular, we are able to calculate the exact rate at which market makers need to place orders in order to close the spread completely . If this rate is exceeded, we show that the price settles at a random level that in general does not correspond the Walrasian equilibrium price .
[ { "type": "R", "before": "frequency", "after": "frequencies", "start_char_pos": 157, "end_char_pos": 166 }, { "type": "R", "before": "of market makers needed to", "after": "at which market makers need to place orders in order to", "start_char_pos": 999, "end_char_pos": 1025 }, { "type": "A", "before": null, "after": ". If this rate is exceeded, we show that the price settles at a random level that in general does not correspond the Walrasian equilibrium price", "start_char_pos": 1054, "end_char_pos": 1054 } ]
[ 0, 152, 316, 446, 488, 682, 823, 943 ]
1612.01104
1
Gene expression is a noisy process that leads to regime shift between alternative steady states among individual living cells, inducing phenotypic variability. The effects of white noise on the regime shift in bistable systems have been well characterized, however little is known about such effects of colored noise (noise with non-zero correlation time). Here, we show that noise correlation time, by considering a genetic circuit of autoactivation, can have significant effect on the regime shift in gene expression. We demonstrate this theoretically, using stochastic potential, stationary probability density function and first-passage time based on the Fokker-Planck description, where the Ornstein-Uhlenbeck process is used to model colored noise. We find that increase in noise correlation time in degradation rate can induce a regime shift from low to high protein concentration state and enhance the bistable regime, while noise in basal rate makes system steady states more stable and amplify the protein production . We then show how cross-correlated colored noises in basal and degradation rates can induce regime shifts from low to high protein concentration state, but reduce the bistable regime. In addition, we show that early warning indicators can also be used to predict shifts between distinct phenotypic states in gene expression. Predictions that a cell is about to shift to a harmful phenotype could improve early therapeutic intervention in complex human diseases.
Gene expression is a noisy process that leads to regime shift between alternative steady states among individual living cells, inducing phenotypic variability. The effects of white noise on the regime shift in bistable systems have been well characterized, however little is known about such effects of colored noise (noise with non-zero correlation time). Here, we show that noise correlation time, by considering a genetic circuit of autoactivation, can have significant effect on the regime shift in gene expression. We demonstrate this theoretically, using stochastic potential, stationary probability density function and first-passage time based on the Fokker-Planck description, where the Ornstein-Uhlenbeck process is used to model colored noise. We find that increase in noise correlation time in degradation rate can induce a regime shift from low to high protein concentration state and enhance the bistable regime, while increase in noise correlation time in basal rate retain the bimodal distribution . We then show how cross-correlated colored noises in basal and degradation rates can induce regime shifts from low to high protein concentration state, but reduce the bistable regime. In addition, we show that early warning indicators can also be used to predict shifts between distinct phenotypic states in gene expression. Predictions that a cell is about to shift to a harmful phenotype could improve early therapeutic intervention in complex human diseases.
[ { "type": "R", "before": "noise", "after": "increase in noise correlation time", "start_char_pos": 933, "end_char_pos": 938 }, { "type": "R", "before": "makes system steady states more stable and amplify the protein production", "after": "retain the bimodal distribution", "start_char_pos": 953, "end_char_pos": 1026 } ]
[ 0, 159, 356, 519, 754, 1028, 1211, 1352 ]
1612.02090
1
This article proposes different tests for treatment effect heterogeneity when the outcome of interest, typically a duration variable, may be right-censored. The proposed tests study whether a policy 1) has zero distributional (average) effect for all subpopulations defined by covariate values, and 2) has homogeneous average effect across different subpopulations. The proposed tests are based on two-step Kaplan-Meier integrals , and do not rely on parametric distributional assumptions, shape restrictions, nor on restricting the potential treatment effect heterogeneity across different subpopulations. Our framework is suitable not only to exogenous treatment allocation , but can also account for treatment noncompliance , an important feature in many applications. The proposed tests are consistent against fixed alternatives, and can detect nonparametric alternatives converging to the null at the parametric n^{-1/2}-rate, n being the sample size. Critical values are computed with the assistance of a multiplier bootstrap. The finite sample properties of the proposed tests are examined by means of a Monte Carlo study , and an application about the effect of labor market programs on unemployment duration. Open-source software is available for implementing all proposed tests.
This article proposes different tests for treatment effect heterogeneity when the outcome of interest, typically a duration variable, may be right-censored. The proposed tests study whether a policy 1) has zero distributional (average) effect for all subpopulations defined by covariate values, and 2) has homogeneous average effect across different subpopulations. The proposed tests are based on two-step Kaplan-Meier integrals and do not rely on parametric distributional assumptions, shape restrictions, or on restricting the potential treatment effect heterogeneity across different subpopulations. Our framework is suitable not only to exogenous treatment allocation but can also account for treatment noncompliance - an important feature in many applications. The proposed tests are consistent against fixed alternatives, and can detect nonparametric alternatives converging to the null at the parametric n^{-1/2}-rate, n being the sample size. Critical values are computed with the assistance of a multiplier bootstrap. The finite sample properties of the proposed tests are examined by means of a Monte Carlo study and an application about the effect of labor market programs on unemployment duration. Open-source software is available for implementing all proposed tests.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 430, "end_char_pos": 431 }, { "type": "R", "before": "nor", "after": "or", "start_char_pos": 510, "end_char_pos": 513 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 676, "end_char_pos": 677 }, { "type": "R", "before": ",", "after": "-", "start_char_pos": 727, "end_char_pos": 728 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1129, "end_char_pos": 1130 } ]
[ 0, 156, 365, 606, 771, 956, 1032, 1217 ]
1612.02444
1
Avanzi et al. (2016) recently studied the optimal dividend problem where dividends can be paid both periodically and continuously at different transaction costs. In the Brownian model with Poissonian periodic dividend payment opportunities, they showed that the optimal strategy is of pure-continuous, pure-periodic or hybrid-barrier-type . In this paper, we generalize their results to the dual (spectrally positive L\'evy) model. The optimal strategy is again of hybrid-barrier-type and can be concisely expressed using the scale function. The results are confirmed through a sequence of numerical experiments.
Avanzi et al. (2016) recently studied an optimal dividend problem where dividends are paid both periodically and continuously with different transaction costs. In the Brownian model with Poissonian periodic dividend payment opportunities, they showed that the optimal strategy is either of the pure-continuous, pure-periodic , or hybrid-barrier type . In this paper, we generalize the results of their previous study to the dual (spectrally positive L\'evy) model. The optimal strategy is again of the hybrid-barrier type and can be concisely expressed using the scale function. These results are confirmed through a sequence of numerical experiments.
[ { "type": "R", "before": "the", "after": "an", "start_char_pos": 38, "end_char_pos": 41 }, { "type": "R", "before": "can be", "after": "are", "start_char_pos": 83, "end_char_pos": 89 }, { "type": "R", "before": "at", "after": "with", "start_char_pos": 130, "end_char_pos": 132 }, { "type": "R", "before": "of", "after": "either of the", "start_char_pos": 282, "end_char_pos": 284 }, { "type": "R", "before": "or hybrid-barrier-type", "after": ", or hybrid-barrier type", "start_char_pos": 316, "end_char_pos": 338 }, { "type": "R", "before": "their results", "after": "the results of their previous study", "start_char_pos": 370, "end_char_pos": 383 }, { "type": "R", "before": "hybrid-barrier-type", "after": "the hybrid-barrier type", "start_char_pos": 465, "end_char_pos": 484 }, { "type": "R", "before": "The", "after": "These", "start_char_pos": 542, "end_char_pos": 545 } ]
[ 0, 161, 340, 431, 541 ]
1612.02770
1
The dominant paradigm in origin of life research is that of an RNA world. However, despite experimental progress towards the spontaneous formation of RNA, the RNA world hypothesis still has its problems. Here, we introduce a novel computational model of chemical reaction networks based on RNA secondary structure , and analyze the emergence of autocatalytic sub-networks in random instances of this model, by combining two well-established computational tools. Our main results are that (i) autocatalytic sets are highly likely to emerge , even for very small reaction networks and short RNA sequences, and (ii) molecular diversity seems to be a more important factor in the formation of autocatalytic sets than molecular complexity . These findings could shed new light on the probability of the spontaneous emergence of an RNA world as a network of mutually collaborative ribozymes.
The dominant paradigm in origin of life research is that of an RNA world. However, despite experimental progress towards the spontaneous formation of RNA, the RNA world hypothesis still has its problems. Here, we introduce a novel computational model of chemical reaction networks based on RNA secondary structure and analyze the existence of autocatalytic sub-networks in random instances of this model, by combining two well-established computational tools. Our main results are that (i) autocatalytic sets are highly likely to exist , even for very small reaction networks and short RNA sequences, and (ii) sequence diversity seems to be a more important factor in the formation of autocatalytic sets than sequence length . These findings could shed new light on the probability of the spontaneous emergence of an RNA world as a network of mutually collaborative ribozymes.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 314, "end_char_pos": 315 }, { "type": "R", "before": "emergence", "after": "existence", "start_char_pos": 332, "end_char_pos": 341 }, { "type": "R", "before": "emerge", "after": "exist", "start_char_pos": 532, "end_char_pos": 538 }, { "type": "R", "before": "molecular", "after": "sequence", "start_char_pos": 613, "end_char_pos": 622 }, { "type": "R", "before": "molecular complexity", "after": "sequence length", "start_char_pos": 713, "end_char_pos": 733 } ]
[ 0, 73, 203, 461, 735 ]
1612.03066
1
In this article we consider the parameter risk in the context of internal modelling of the reserve risk under Solvency II. We discuss two opposed perspectives on parameter uncertainty and point out that standard methods of classical reserving focusing on the estimation error of claims reserves are in general not appropriate to model the impact of parameter uncertainty upon the actual risk of economic losses from the undertakings's perspective. Referring to the requirements of Solvency II we assess methods to model parameter uncertainty for the reserve risk by comparing the probability of solvency actually attained when modelling the solvency risk capital requirement based on the respective method to the required confidence level. Using the simple example of a normal model we show that the bootstrapping approach is not appropriate to model parameter uncertainty according to this criterion. We then present an adaptation of the approach proposed in Modelling parameter uncertainty for risk capital calculation, Andreas Fr\"ohlich, Annegret Weng, European Actuarial Journal, Vol. 5, Issue No. 1 (2015), pp. 79-112 . Experimental results demonstrate that this new method yields a risk capital model for the reserve risk achieving the required confidence level in good approximation.
In this article we consider the parameter risk in the context of internal modelling of the reserve risk under Solvency II. We discuss two opposed perspectives on parameter uncertainty and point out that standard methods of classical reserving focusing on the estimation error of claims reserves are in general not appropriate to model the impact of parameter uncertainty upon the actual risk of economic losses from the undertakings's perspective. Referring to the requirements of Solvency II we assess methods to model parameter uncertainty for the reserve risk by comparing the probability of solvency actually attained when modelling the solvency risk capital requirement based on the respective method to the required confidence level. Using the simple example of a normal model we show that the bootstrapping approach is not appropriate to model parameter uncertainty according to this criterion. We then present an adaptation of the approach proposed in \mbox{%DIFAUXCMD \cite froehlich2014 . Experimental results demonstrate that this new method yields a risk capital model for the reserve risk achieving the required confidence level in good approximation.
[ { "type": "R", "before": "Modelling parameter uncertainty for risk capital calculation, Andreas Fr\\\"ohlich, Annegret Weng, European Actuarial Journal, Vol. 5, Issue No. 1 (2015), pp. 79-112", "after": "\\mbox{%DIFAUXCMD \\cite", "start_char_pos": 960, "end_char_pos": 1123 }, { "type": "A", "before": null, "after": "froehlich2014", "start_char_pos": 1124, "end_char_pos": 1124 } ]
[ 0, 122, 447, 739, 901 ]
1612.03347
1
In the economics of risk, the primal moments of mean and variance play a central role to define the local index of absolute risk aversion. In this note , we show that in canonical non-EU models dual moments have to be used instead of, or on par with, their primal counterparts to obtain an equivalent index of absolute risk aversion.
In decision under risk, the primal moments of mean and variance play a central role to define the local index of absolute risk aversion. In this paper , we show that in canonical non-EU models dual moments have to be used instead of, or on par with, their primal counterparts to obtain an equivalent index of absolute risk aversion.
[ { "type": "R", "before": "the economics of", "after": "decision under", "start_char_pos": 3, "end_char_pos": 19 }, { "type": "R", "before": "note", "after": "paper", "start_char_pos": 147, "end_char_pos": 151 } ]
[ 0, 138 ]
1612.03698
1
A fractal approach to the long-short portfolio optimization is proposed. The algorithmic system based on the composition of market-neutral spreads into a single entity has been considered. The core of the optimization scheme is a fractal walk model of returns, modifying a risk aversion according to the investment horizon. The covariance matrix of spread returns has been used for the optimization and modified according to the Hurst stability analysis. Out-of-sample performance data has been represented for the space of exchange traded funds in five period time period of observation. The considered portfolio system has turned out to be statistically more stable than a passive investment into benchmark with higher risk adjusted cumulated return .
A fractal approach to the long-short portfolio optimization is proposed. The algorithmic system based on the composition of market-neutral spreads into a single entity was considered. The core of the optimization scheme is a fractal walk model of returns, optimizing a risk aversion according to the investment horizon. The covariance matrix of spread returns has been used for the optimization and modified according to the Hurst stability analysis. Out-of-sample performance data has been represented for the space of exchange traded funds in five period time period of observation. The considered portfolio system has turned out to be statistically more stable than a passive investment into benchmark with higher risk adjusted cumulative return over the observed period .
[ { "type": "R", "before": "has been", "after": "was", "start_char_pos": 168, "end_char_pos": 176 }, { "type": "R", "before": "modifying", "after": "optimizing", "start_char_pos": 261, "end_char_pos": 270 }, { "type": "R", "before": "cumulated return", "after": "cumulative return over the observed period", "start_char_pos": 735, "end_char_pos": 751 } ]
[ 0, 72, 188, 323, 454, 588 ]
1612.05681
1
In this paper, we study the properties of nonlinear BSDEs driven by a Brownian motion and a martingale measure associated with a default jump with intensity process (\lambda_t). We give a priori estimates for these equations and prove comparison and strict comparison theorems. These results are generalized to drivers involving a singular process. The special case of a \lambda-linear driver is studied, leading to a representation of the solution of the associated BSDE in terms of a conditional expectation of an adjoint exponential semi-martingale. We then apply these results to nonlinear pricing of European contingent claims in an imperfect financial market with a defaultable risky asset. The case of claims paying dividends is also included via the singular process.
We study the properties of nonlinear Backward Stochastic Differential Equations (BSDEs) driven by a Brownian motion and a martingale measure associated with a default jump with intensity process (\lambda_t). We give a priori estimates for these equations and prove comparison and strict comparison theorems. These results are generalized to drivers involving a singular process. The special case of a \lambda-linear driver is studied, leading to a representation of the solution of the associated BSDE in terms of a conditional expectation and an adjoint exponential semi-martingale. We then apply these results to nonlinear pricing of European contingent claims in an imperfect financial market with a totally defaultable risky asset. The case of claims paying dividends is also studied via a singular process.
[ { "type": "R", "before": "In this paper, we", "after": "We", "start_char_pos": 0, "end_char_pos": 17 }, { "type": "R", "before": "BSDEs", "after": "Backward Stochastic Differential Equations (BSDEs)", "start_char_pos": 52, "end_char_pos": 57 }, { "type": "R", "before": "of", "after": "and", "start_char_pos": 510, "end_char_pos": 512 }, { "type": "A", "before": null, "after": "totally", "start_char_pos": 672, "end_char_pos": 672 }, { "type": "R", "before": "included via the", "after": "studied via a", "start_char_pos": 742, "end_char_pos": 758 } ]
[ 0, 177, 277, 348, 552, 697 ]
1612.05952
1
We show that there exists an empirical linkage between nominal financial networks and the underlying economic fundamentals across countries. We construct the nominal return correlation networks from daily data to encapsulate sector-level dynamics and calculate the relative importance of the sectors in the nominal network through centrality measure and clustering algorithms. The centrality measure robustly identifies the backbone of the minimum spanning trees defined on the return networks . We show that the sectors that are relatively large constitute the core of the return networks, whereas the periphery is mostly populated by relatively smaller sectors. Therefore, sector-level nominal return dynamics is anchored to the real size effect, which ultimately shapes the optimal portfolios for risk management. The results are reasonably robust across 27 countries of varying degree of prosperity and across periods of market turbulence (2008-09) , as well as relative calmness (2015-16).
We demonstrate the existence of an empirical linkage between the nominal financial networks and the underlying economic fundamentals across countries. We construct the nominal return correlation networks from daily data to encapsulate sector-level dynamics and figure the relative importance of the sectors in the nominal network through a measure of centrality and clustering algorithms. The eigenvector centrality robustly identifies the backbone of the minimum spanning tree defined on the return networks as well as the primary cluster in the multidimensional scaling map . We show that the sectors that are relatively large in size, defined with the metrics market capitalization, revenue and number of employees, constitute the core of the return networks, whereas the periphery is mostly populated by relatively smaller sectors. Therefore, sector-level nominal return dynamics is anchored to the real size effect, which ultimately shapes the optimal portfolios for risk management. Our results are reasonably robust across 27 countries of varying degrees of prosperity and across periods of market turbulence (2008-09) as well as relative calmness (2015-16).
[ { "type": "R", "before": "show that there exists", "after": "demonstrate the existence of", "start_char_pos": 3, "end_char_pos": 25 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 55, "end_char_pos": 55 }, { "type": "R", "before": "calculate", "after": "figure", "start_char_pos": 252, "end_char_pos": 261 }, { "type": "R", "before": "centrality measure", "after": "a measure of centrality", "start_char_pos": 332, "end_char_pos": 350 }, { "type": "R", "before": "centrality measure", "after": "eigenvector centrality", "start_char_pos": 382, "end_char_pos": 400 }, { "type": "R", "before": "trees", "after": "tree", "start_char_pos": 458, "end_char_pos": 463 }, { "type": "A", "before": null, "after": "as well as the primary cluster in the multidimensional scaling map", "start_char_pos": 495, "end_char_pos": 495 }, { "type": "A", "before": null, "after": "in size, defined with the metrics market capitalization, revenue and number of employees,", "start_char_pos": 549, "end_char_pos": 549 }, { "type": "R", "before": "The", "after": "Our", "start_char_pos": 820, "end_char_pos": 823 }, { "type": "R", "before": "degree", "after": "degrees", "start_char_pos": 885, "end_char_pos": 891 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 956, "end_char_pos": 957 } ]
[ 0, 141, 377, 497, 666, 819 ]
1612.06200
1
Many analysts, who had anticipated a great market anxiety resulting in market-wide stock price losses over the event of a Trump presidential victory, remain puzzling through why the market rebounded since the next election day. Whatever the reason, investors appear to be digesting Trump's win speedier than expected. The present paper examines , at sectoral level, the behavior of a variety of US stock price indices (Dow Jones Industrial Average, S\&P 500 and Nasdaq Composite) surrounding the announcement of the Republican candidate's win on 08 November 2016. Although all companies face ongoing uncertainty , the 2016 US election outcome is likely to divide the stock market into losing (technology and utilities) and winning sectors (health care , oil and gas, real estate, defense, financials and consumer goods and services) . Judging by the campaign promises, the best-performing companies are generally those that will gain directly from Trump's proposals revolving around rising infrastructurespending , renegotiating trade agreements, loosening financial regulation, easing restrictions on energy production, and repealing Obamacare.
There is bountiful evidence that political uncertainty stemming from presidential elections or doubt about the direction of future policy make financial markets significantly volatile, especially in proximity to close elections or elections that may prompt radical policy changes. Although several studies have examined the association between presidential elections and stock returns, very little attention has been given to the impacts of elections and election induced uncertainty on stock markets. This paper explores , at sectoral level, the uncertain information hypothesis (UIH) as a means of explaining the reaction of markets to the arrival of unanticipated information. This hypothesis postulates that political uncertainty is greater prior to the elections (relative to pre-election period) but is resolved once the outcome of the elections is determined (relative to post-election period). To this end, we adopt an event-study methodology that examines abnormal return behavior around the election date. We show that collapsing stock returns around the election result is reversed by positive abnormal return on the next day, except some cases where we note negative responses following the vote count. Although Trump's win plunges US into uncertain future, positive reactions of abnormal return are found. Therefore, our results do not support the UIH hypothesis. Besides, the effect of political uncertainty is sector-specific. While some sectors emerged winners (healthcare , oil and gas, real estate, defense, financials and consumer goods and services) , others took the opposite route (technology and utilities). The winning industries are generally those that will benefit from the new administration's focus on rebuilding infrastructure , renegotiating trade agreements, reforming tax policy and labour laws, increasing defense funding, easing restrictions on energy production, and rolling back Obamacare.
[ { "type": "R", "before": "Many analysts, who had anticipated a great market anxiety resulting in market-wide stock price losses over the event of a Trump presidential victory, remain puzzling through why the market rebounded since the next election day. Whatever the reason, investors appear to be digesting Trump's win speedier than expected. The present paper examines", "after": "There is bountiful evidence that political uncertainty stemming from presidential elections or doubt about the direction of future policy make financial markets significantly volatile, especially in proximity to close elections or elections that may prompt radical policy changes. Although several studies have examined the association between presidential elections and stock returns, very little attention has been given to the impacts of elections and election induced uncertainty on stock markets. This paper explores", "start_char_pos": 0, "end_char_pos": 344 }, { "type": "R", "before": "behavior of a variety of US stock price indices (Dow Jones Industrial Average, S\\&P 500 and Nasdaq Composite) surrounding the announcement of the Republican candidate's win on 08 November 2016. Although all companies face ongoing uncertainty , the 2016 US election outcome is likely to divide the stock market into losing (technology and utilities) and winning sectors (health care", "after": "uncertain information hypothesis (UIH) as a means of explaining the reaction of markets to the arrival of unanticipated information. This hypothesis postulates that political uncertainty is greater prior to the elections (relative to pre-election period) but is resolved once the outcome of the elections is determined (relative to post-election period). To this end, we adopt an event-study methodology that examines abnormal return behavior around the election date. We show that collapsing stock returns around the election result is reversed by positive abnormal return on the next day, except some cases where we note negative responses following the vote count. Although Trump's win plunges US into uncertain future, positive reactions of abnormal return are found. Therefore, our results do not support the UIH hypothesis. Besides, the effect of political uncertainty is sector-specific. While some sectors emerged winners (healthcare", "start_char_pos": 370, "end_char_pos": 751 }, { "type": "R", "before": ". Judging by the campaign promises, the best-performing companies", "after": ", others took the opposite route (technology and utilities). The winning industries", "start_char_pos": 833, "end_char_pos": 898 }, { "type": "R", "before": "gain directly from Trump's proposals revolving around rising infrastructurespending", "after": "benefit from the new administration's focus on rebuilding infrastructure", "start_char_pos": 929, "end_char_pos": 1012 }, { "type": "R", "before": "loosening financial regulation,", "after": "reforming tax policy and labour laws, increasing defense funding,", "start_char_pos": 1047, "end_char_pos": 1078 }, { "type": "R", "before": "repealing", "after": "rolling back", "start_char_pos": 1125, "end_char_pos": 1134 } ]
[ 0, 227, 317, 563, 718 ]
1612.06709
1
Membrane-protein systems constitute an important avenue for a variety of targeted therapies. The ability to alter these systems remotely via physical fields is highly desirable for the advance of noninvasive therapies. Biophysical action of acoustic fields in particular holds immense potential for applications in drug delivery and neuro-modulation. Here we investigate the optical response of solvato-chromic fluorescent probe Laurdan, embedded in multilamellar lipid vesicles, subjected to broadband pressure impulses of the order of 1Mpa peak amplitude and pulse width of less than 10 \mu%DIFDELCMD < }%%% s. The response is quantified in terms of the shift in fluorescence spectra using a ratiometric technique . Based on the fluctuation dissipation theorem applied to a coupled membrane-fluorophore system, it is shown that the perturbation of the fluorescence spectra of embedded molecules to the pressure impulses is determined by the thermodynamic state of the interface , or in other words, the thermodynamic susceptibilities such as the compressibility or heat capacity of the system. However, given that the thermodynamic susceptibilities of such systems, especially in native biological environment, are not easy to obtain experimentally, a necessary corollary to the thermodynamic approach is derived. This establishes a direct relation between the width of the emission spectra and the thermodynamic susceptibilities of the system. The result has practical importance as it gives access to the thermodynamic properties of the membrane from steady state fluorescence measurement without the need to perturb the system. Simply stated, the experiments show that the magnitude of the perturbation response of a membrane system to an acoustic insult is proportional to the width of the emission spectrum of the embedded membrane probe .
%DIFDELCMD < }%%% Physical and chemical changes in biological interfaces can be quantified by corresponding changes in the thermodynamic state of the interface . Based on Einsteins approach to thermodynamics, we show that fluorescence spectra of a dye embedded in a lipid membrane is a function of the state of the interface . We derive the coupling between the energy fluctuations of a fluorophore and its environment allowing the thermodynamic susceptibility, in particular, the specific heat of the interface to be estimated from the spectra of the embedded fluorophore. The estimate of the thermodynamic susceptibility is shown to hold not only for quasi-static near-equilibrium measurements, but also for dynamic state changes in lipid vesicles induced by broadband pressure impulses of the order of 1 MPa peak amplitude and 10 microsecond pulse duration. These experiments also provide crucial insights into how dynamic state changes may affect biological function, for example, during an action potential or during therapeutic use of acoustic impulses in the form of ultrasound or shock waves .
[ { "type": "D", "before": "Membrane-protein systems constitute an important avenue for a variety of targeted therapies. The ability to alter these systems remotely via physical fields is highly desirable for the advance of noninvasive therapies. Biophysical action of acoustic fields in particular holds immense potential for applications in drug delivery and neuro-modulation. Here we investigate the optical response of solvato-chromic fluorescent probe Laurdan, embedded in multilamellar lipid vesicles, subjected to broadband pressure impulses of the order of 1Mpa peak amplitude and pulse width of less than 10", "after": null, "start_char_pos": 0, "end_char_pos": 588 }, { "type": "D", "before": "\\mu", "after": null, "start_char_pos": 589, "end_char_pos": 592 }, { "type": "R", "before": "s. The response is quantified in terms of the shift in fluorescence spectra using a ratiometric technique", "after": "Physical and chemical changes in biological interfaces can be quantified by corresponding changes in the thermodynamic state of the interface", "start_char_pos": 610, "end_char_pos": 715 }, { "type": "R", "before": "the fluctuation dissipation theorem applied to a coupled membrane-fluorophore system, it is shown that the perturbation of the", "after": "Einsteins approach to thermodynamics, we show that", "start_char_pos": 727, "end_char_pos": 853 }, { "type": "R", "before": "embedded molecules to the pressure impulses is determined by the thermodynamic", "after": "a dye embedded in a lipid membrane is a function of the", "start_char_pos": 878, "end_char_pos": 956 }, { "type": "R", "before": ", or in other words, the thermodynamic susceptibilities such as the compressibility or heat capacity of", "after": ". We derive the coupling between the energy fluctuations of a fluorophore and its environment allowing the thermodynamic susceptibility, in particular,", "start_char_pos": 980, "end_char_pos": 1083 }, { "type": "R", "before": "system. However, given that the thermodynamic susceptibilities of such systems, especially in native biological environment, are not easy to obtain experimentally, a necessary corollary to the thermodynamic approach is derived. This establishes a direct relation between the width of the emission spectra and the thermodynamic susceptibilities of the system. The result has practical importance as it gives access to the thermodynamic properties of the membrane from steady state fluorescence measurement without the need to perturb the system. Simply stated, the experiments show that the magnitude of the perturbation response of a membrane system to an acoustic insult is proportional to the width of the emission spectrum of the embedded membrane probe", "after": "specific heat of the interface to be estimated from the spectra of the embedded fluorophore. The estimate of the thermodynamic susceptibility is shown to hold not only for quasi-static near-equilibrium measurements, but also for dynamic state changes in lipid vesicles induced by broadband pressure impulses of the order of 1 MPa peak amplitude and 10 microsecond pulse duration. These experiments also provide crucial insights into how dynamic state changes may affect biological function, for example, during an action potential or during therapeutic use of acoustic impulses in the form of ultrasound or shock waves", "start_char_pos": 1088, "end_char_pos": 1844 } ]
[ 0, 92, 218, 350, 437, 479, 812, 1095, 1315, 1446, 1632 ]
1612.06709
2
Physical and chemical changes in biological interfaces can be quantified by corresponding changes in the thermodynamic state of the interface. Based on Einsteins approach to thermodynamics, we show that fluorescence spectra of a dye embedded in a lipid membrane is a function of the state of the interface. We derive the coupling between the energy fluctuations of a fluorophore and its environment allowing the thermodynamic susceptibility, in particular, the specific heat of the interface to be estimated from the spectra of the embedded fluorophore. The estimate of the thermodynamic susceptibility is shown to hold not only for quasi-static near-equilibrium measurements, but also for dynamic state changes in lipid vesicles induced by broadband pressure impulses of the order of 1 MPa peak amplitude and 10 microsecond pulse duration. These experiments also provide crucial insights into how dynamic state changes may affect biological function, for example, during an action potential or during therapeutic use of acoustic impulses in the form of ultrasound or shock waves .
Solvation shell dynamics is a critical determinant of protein and enzyme functions. Here we investigate how the dynamics of the solvation shell changes for molecules embedded in lipid membranes during an acoustic impulse. Solvation sensitive fluorescence probes, Laurdan, embedded in multi-lamellar lipid vesicles in water, were exposed to broadband pressure impulses of the order of 1MPa peak amplitude and 10 \mu s pulse duration. Corresponding changes in emission spectrum of the dye were observed simultaneously across two different wavelengths at sub-microsecond resolution. The experiments show that changes in the emission spectrum and hence the fluctuations of the solvation shell are given by the thermodynamic state change during the process. Therefore, the study suggests that acoustic fields can potentially modulate the kinetics of channels and proteins embedded in lipid membranes by controlling the state dependent fluctuations .
[ { "type": "R", "before": "Physical and chemical changes in biological interfaces can be quantified by corresponding changes in the thermodynamic state of the interface. Based on Einsteins approach to thermodynamics, we show that fluorescence spectra of a dye embedded in a lipid membrane is a function of the state of the interface. We derive the coupling between the energy fluctuations of a fluorophore and its environment allowing the thermodynamic susceptibility, in particular, the specific heat of the interface to be estimated from the spectra of the embedded fluorophore. The estimate of the thermodynamic susceptibility is shown to hold not only for quasi-static near-equilibrium measurements, but also for dynamic state changes in", "after": "Solvation shell dynamics is a critical determinant of protein and enzyme functions. Here we investigate how the dynamics of the solvation shell changes for molecules embedded in", "start_char_pos": 0, "end_char_pos": 714 }, { "type": "R", "before": "vesicles induced by", "after": "membranes during an acoustic impulse. Solvation sensitive fluorescence probes, Laurdan, embedded in multi-lamellar lipid vesicles in water, were exposed to", "start_char_pos": 721, "end_char_pos": 740 }, { "type": "R", "before": "1 MPa", "after": "1MPa", "start_char_pos": 785, "end_char_pos": 790 }, { "type": "R", "before": "microsecond", "after": "\\mu", "start_char_pos": 813, "end_char_pos": 824 }, { "type": "A", "before": null, "after": "s", "start_char_pos": 825, "end_char_pos": 825 }, { "type": "R", "before": "These experiments also provide crucial insights into how dynamic state changes may affect biological function, for example, during an action potential or during therapeutic use of acoustic impulses in the form of ultrasound or shock waves", "after": "Corresponding changes in emission spectrum of the dye were observed simultaneously across two different wavelengths at sub-microsecond resolution. The experiments show that changes in the emission spectrum and hence the fluctuations of the solvation shell are given by the thermodynamic state change during the process. Therefore, the study suggests that acoustic fields can potentially modulate the kinetics of channels and proteins embedded in lipid membranes by controlling the state dependent fluctuations", "start_char_pos": 842, "end_char_pos": 1080 } ]
[ 0, 142, 306, 553, 841 ]
1612.06709
3
Solvation shell dynamics is a critical determinant of protein and enzyme functions. Here we investigate how the dynamics of the solvation shell changes for molecules embedded in lipid membranes during an acoustic impulse . Solvation sensitive fluorescence probes, Laurdan, embedded in multi-lamellar lipid vesicles in water, were exposed to broadband pressure impulses of the order of 1MPa peak amplitude and 10{\mu}s pulse duration. Corresponding changes in emission spectrum of the dye were observed simultaneously across two different wavelengths at sub-microsecond resolution. The experiments show that changes in the emission spectrum and hence the fluctuations of the solvation shell are given by the thermodynamic state change during the process. Therefore, the study suggests that acoustic fields can potentially modulate the kinetics of channels and proteins embedded in lipid membranes by controlling the state dependent fluctuations .
Ultrasound is increasingly being used to modulate the properties of biological membranes for applications in drug delivery and neuromodulation. While various studies have investigated the mechanical aspect of the interaction such as acoustic absorption and membrane deformation, it is not clear how these effects transduce into biological functions, for example, changes in the permeability or the enzymatic activity of the membrane. A critical aspect of the activity of an enzyme is the thermal fluctuations of its solvation or hydration shell. Thermal fluctuations are also known to be directly related to membrane permeability. Here solvation shell changes of lipid membranes subject to an acoustic impulse were investigated using a fluorescence probe, Laurdan. Laurdan was embedded in multi-lamellar lipid vesicles in water, which were exposed to broadband pressure impulses of the order of 1MPa peak amplitude and 10{\mu}s pulse duration. An instrument was developed to monitor changes in the emission spectrum of the dye at two wavelengths with sub-microsecond temporal resolution. The experiments show that changes in the emission spectrum , and hence the fluctuations of the solvation shell , are related to the changes in the thermodynamic state of the membrane and correlated with the compression and rarefaction of the incident sound wave. The results suggest that acoustic fields affect the state of a lipid membrane and therefore can potentially modulate the kinetics of channels and proteins embedded in the membrane .
[ { "type": "R", "before": "Solvation shell dynamics is a critical determinant of protein and enzyme functions. Here we investigate how the dynamics of the solvation shell changes for molecules embedded in lipid membranes during", "after": "Ultrasound is increasingly being used to modulate the properties of biological membranes for applications in drug delivery and neuromodulation. While various studies have investigated the mechanical aspect of the interaction such as acoustic absorption and membrane deformation, it is not clear how these effects transduce into biological functions, for example, changes in the permeability or the enzymatic activity of the membrane. A critical aspect of the activity of an enzyme is the thermal fluctuations of its solvation or hydration shell. Thermal fluctuations are also known to be directly related to membrane permeability. Here solvation shell changes of lipid membranes subject to", "start_char_pos": 0, "end_char_pos": 200 }, { "type": "R", "before": ". Solvation sensitive fluorescence probes, Laurdan,", "after": "were investigated using a fluorescence probe, Laurdan. Laurdan was", "start_char_pos": 221, "end_char_pos": 272 }, { "type": "A", "before": null, "after": "which", "start_char_pos": 325, "end_char_pos": 325 }, { "type": "R", "before": "Corresponding changes in", "after": "An instrument was developed to monitor changes in the", "start_char_pos": 435, "end_char_pos": 459 }, { "type": "R", "before": "were observed simultaneously across two different wavelengths at", "after": "at two wavelengths with", "start_char_pos": 489, "end_char_pos": 553 }, { "type": "A", "before": null, "after": "temporal", "start_char_pos": 570, "end_char_pos": 570 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 642, "end_char_pos": 642 }, { "type": "R", "before": "are given by the thermodynamic state change during the process. Therefore, the study suggests", "after": ", are related to the changes in the thermodynamic state of the membrane and correlated with the compression and rarefaction of the incident sound wave. The results suggest", "start_char_pos": 693, "end_char_pos": 786 }, { "type": "A", "before": null, "after": "affect the state of a lipid membrane and therefore", "start_char_pos": 808, "end_char_pos": 808 }, { "type": "R", "before": "lipid membranes by controlling the state dependent fluctuations", "after": "the membrane", "start_char_pos": 884, "end_char_pos": 947 } ]
[ 0, 83, 434, 582, 756 ]
1612.06850
1
This chapter provides an overview of extremal quantile regression. It is forthcoming in the Handbook of Quantile Regression .
Extremal quantile regression, i.e. quantile regression applied to the tails of the conditional distribution, counts with an increasing number of economic and financial applications such as value-at-risk, production frontiers, determinants of low infant birth weights, and auction models. This chapter provides an overview of recent developments in the theory and empirics of extremal quantile regression. The advances in the theory have relied on the use of extreme value approximations to the law of the Koenker and Bassett (1978) quantile regression estimator. Extreme value laws not only have been shown to provide more accurate approximations than Gaussian laws at the tails, but also have served as the basis to develop bias corrected estimators and inference methods using simulation and suitable variations of bootstrap and subsampling. The applicability of these methods is illustrated with two empirical examples on conditional value-at-risk and financial contagion .
[ { "type": "A", "before": null, "after": "Extremal quantile regression, i.e. quantile regression applied to the tails of the conditional distribution, counts with an increasing number of economic and financial applications such as value-at-risk, production frontiers, determinants of low infant birth weights, and auction models.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "recent developments in the theory and empirics of", "start_char_pos": 38, "end_char_pos": 38 }, { "type": "R", "before": "It is forthcoming in the Handbook of Quantile Regression", "after": "The advances in the theory have relied on the use of extreme value approximations to the law of the Koenker and Bassett (1978) quantile regression estimator. Extreme value laws not only have been shown to provide more accurate approximations than Gaussian laws at the tails, but also have served as the basis to develop bias corrected estimators and inference methods using simulation and suitable variations of bootstrap and subsampling. The applicability of these methods is illustrated with two empirical examples on conditional value-at-risk and financial contagion", "start_char_pos": 69, "end_char_pos": 125 } ]
[ 0, 68 ]
1612.07067
1
A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short selling constraint acts as an asymmetric \ell_1 regularizer, setting some portfolio weights to zero and keeping the estimation error bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e. the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value 2 of the ratio N/T, where N is the number of different assets in the portfolio and T the length of available time series. This means that a ban on short positions does not prevent the phase transition in the optimization problem, it merely shifts the critical point from its non-regularized value of N/T =1 to 2. We show that this critical value is universal, independent of the distribution of the returns. Beyond this critical value, the variance of the portfolio vanishesfor any portfolio weight vector constructed as a linear combination of the eigenvectors from the null space of the covariance matrix, but these linear combinations are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters, in particular they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation . The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for N/T<2, but some numerical solvers keep yielding a stable solution even in the region N/T>2. This is because there are regularizers built into these solvers that stabilize the otherwise freely fluctuating, meaningless solutions .
A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short selling constraint acts as an asymmetric \ell_1 regularizer, setting some of the portfolio weights to zero and keeping the out of sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e. the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value r=2. This means that a ban on short positions does not prevent the phase transition in the optimization problem, it merely shifts the critical point from its non-regularized value of r =1 to 2. At r=2 the out of sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T<2. Numerical experiments on finite size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing N, becoming one in the large N limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters, in particular they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation .
[ { "type": "A", "before": null, "after": "of the", "start_char_pos": 204, "end_char_pos": 204 }, { "type": "R", "before": "estimation error", "after": "out of sample estimator for the variance", "start_char_pos": 247, "end_char_pos": 263 }, { "type": "R", "before": "2 of the ratio N/T, where N is the number of different assets in the portfolio and T the length of available time series.", "after": "r=2.", "start_char_pos": 473, "end_char_pos": 594 }, { "type": "R", "before": "N/T", "after": "r", "start_char_pos": 773, "end_char_pos": 776 }, { "type": "R", "before": "We show that this critical value is universal, independent of the distribution of the returns. Beyond this critical value, the variance of the portfolio vanishesfor any portfolio weight vector constructed as a linear combination of the eigenvectors from the null space of the covariance matrix, but these linear combinations", "after": "At r=2 the out of sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for N/T<2. Numerical experiments on finite size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing N, becoming one in the large N limit. However, these", "start_char_pos": 786, "end_char_pos": 1110 }, { "type": "D", "before": ". The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for N/T<2, but some numerical solvers keep yielding a stable solution even in the region N/T>2. This is because there are regularizers built into these solvers that stabilize the otherwise freely fluctuating, meaningless solutions", "after": null, "start_char_pos": 1514, "end_char_pos": 1872 } ]
[ 0, 116, 333, 594, 785, 880, 1304, 1515, 1581, 1737 ]
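An illustrative aside: a minimal numerical sketch of the optimization problem in the record above, in-sample minimum-variance portfolio weights under a ban on short positions, computed from T observations of N independent returns. The dimensions, the i.i.d. normal returns, and the SLSQP solver are assumptions for illustration; this does not reproduce the paper's analytic results, it only shows the no-short constraint pushing part of the weight vector to zero.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, T = 100, 250                    # ratio N/T = 0.4, below the critical value 2 discussed above
returns = rng.normal(size=(T, N))  # i.i.d. returns with true covariance equal to the identity
cov = np.cov(returns, rowvar=False)

def in_sample_variance(w):
    return w @ cov @ w

constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]  # budget constraint
bounds = [(0.0, None)] * N                                        # no short positions
w0 = np.full(N, 1.0 / N)

res = minimize(in_sample_variance, w0, method="SLSQP", bounds=bounds, constraints=constraints)
w = res.x
print("in-sample portfolio variance:", in_sample_variance(w))
print("fraction of weights pushed to zero:", np.mean(w < 1e-6))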
1612.07468
1
We obtain a closed form of generating functions of RNA substructure using hermitian matrix model with the Chebyshev polynomial of the second kind, which turns out to be the hypergeometric function. To match the experimental findings of the statistical behavior, we regard the substructure as a grand canonical ensemble and find its fugacity value . We also suggest a hierarchical picture based on the planar structure so that the non-planar structure such as pseudoknot are included.
We obtain a closed form of generating functions of RNA substructure using hermitian matrix model with the Chebyshev polynomial of the second kind, which has the form of the hypergeometric function. To match the experimental findings of the statistical behavior, we regard the substructure as a grand canonical ensemble and find its fugacity value corresponding to the number of stems . We also suggest a hierarchical picture based on the planar structure so that the non-planar structure such as pseudoknot are included.
[ { "type": "R", "before": "turns out to be the", "after": "has the form of the", "start_char_pos": 153, "end_char_pos": 172 }, { "type": "A", "before": null, "after": "corresponding to the number of stems", "start_char_pos": 347, "end_char_pos": 347 } ]
[ 0, 197, 349 ]
1612.07468
2
We obtain a closed form of generating functions of RNA substructure using hermitian matrix model with the Chebyshev polynomial of the second kind , which has the form of the hypergeometric function . To match the experimental findings of the statistical behavior, we regard the substructure as a grand canonical ensemble and find its fugacity value corresponding to the number of stems . We also suggest a hierarchical picture based on the planar structure so that the non-planar structure such as pseudoknot are included .
Combinatorial analysis of a certain abstract of RNA structures has been studied to investigate their statistics. Our approach regards the backbone of secondary structures as an alternate sequence of paired and unpaired sets of nucleotides, which can be described by random matrix model. We obtain the generating function of the structures using Hermitian matrix model with Chebyshev polynomial of the second kind and analyze the statistics with respect to the number of stems . To match the experimental findings of the statistical behavior, we consider the structures in a grand canonical ensemble and find a fugacity value corresponding to an appropriate number of stems .
[ { "type": "R", "before": "We obtain a closed form of generating functions of RNA substructure using hermitian", "after": "Combinatorial analysis of a certain abstract of RNA structures has been studied to investigate their statistics. Our approach regards the backbone of secondary structures as an alternate sequence of paired and unpaired sets of nucleotides, which can be described by random matrix model. We obtain the generating function of the structures using Hermitian", "start_char_pos": 0, "end_char_pos": 83 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 102, "end_char_pos": 105 }, { "type": "R", "before": ", which has the form of the hypergeometric function", "after": "and analyze the statistics with respect to the number of stems", "start_char_pos": 146, "end_char_pos": 197 }, { "type": "R", "before": "regard the substructure as", "after": "consider the structures in", "start_char_pos": 267, "end_char_pos": 293 }, { "type": "R", "before": "its", "after": "a", "start_char_pos": 330, "end_char_pos": 333 }, { "type": "R", "before": "the", "after": "an appropriate", "start_char_pos": 366, "end_char_pos": 369 }, { "type": "D", "before": ". We also suggest a hierarchical picture based on the planar structure so that the non-planar structure such as pseudoknot are included", "after": null, "start_char_pos": 386, "end_char_pos": 521 } ]
[ 0, 199, 387 ]
1612.07802
1
A symmetry-guided time redefinition may enhance and simplify analyses of historical series displaying recurrent patterns . Enforcing a simple-scaling symmetry with Hurst exponent 1/2 and the requirement of increments' stationarity , we identify a time-definition protocol in the financial case. The novel time scale, constructed through a systematic application of the Kolmogorov-Smirnov criterion to extensive data of the S P500 index, lays a bridge between the regime of minutes and that of several days in physical time. It allows us to quantify the duration of periods in which the market is inactive , like amid nights, and to optimally exploit the statistical information contained in the series.The overall strategy leads to a significant reduction of multiscaling features, once the moments of the return probability density function are analyzed versus the novel time .
A symmetry-guided definition of time may enhance and simplify the analysis of historical series with recurrent patterns and seasonalities. By enforcing simple-scaling and stationarity of the distributions of returns , we identify a successful protocol of time definition in Finance. The essential structure of the stochastic process underlying the series can thus be analyzed within a most parsimonious symmetry scheme in which multiscaling is reduced in the quest of a time scale additive and independent of moment-order in the distribution of returns. At the same time, duration of periods in which markets remain inactive are properly quantified by the novel clock, and the corresponding (e.g., overnight) returns are consistently taken into account for financial applications .
[ { "type": "R", "before": "time redefinition", "after": "definition of time", "start_char_pos": 18, "end_char_pos": 35 }, { "type": "R", "before": "analyses", "after": "the analysis", "start_char_pos": 61, "end_char_pos": 69 }, { "type": "R", "before": "displaying recurrent patterns . Enforcing a", "after": "with recurrent patterns and seasonalities. By enforcing", "start_char_pos": 91, "end_char_pos": 134 }, { "type": "R", "before": "symmetry with Hurst exponent 1/2 and the requirement of increments' stationarity", "after": "and stationarity of the distributions of returns", "start_char_pos": 150, "end_char_pos": 230 }, { "type": "D", "before": "time-definition protocol in the financial case. The novel time scale, constructed through a systematic application of the Kolmogorov-Smirnov criterion to extensive data of the S", "after": null, "start_char_pos": 247, "end_char_pos": 424 }, { "type": "R", "before": "P500 index, lays a bridge between the regime of minutes and that of several days in physical time. It allows us to quantify the", "after": "successful protocol of time definition in Finance. The essential structure of the stochastic process underlying the series can thus be analyzed within a most parsimonious symmetry scheme in which multiscaling is reduced in the quest of a time scale additive and independent of moment-order in the distribution of returns. At the same time,", "start_char_pos": 425, "end_char_pos": 552 }, { "type": "R", "before": "the market is inactive , like amid nights, and to optimally exploit the statistical information contained in the series.The overall strategy leads to a significant reduction of multiscaling features, once the moments of the return probability density function are analyzed versus the novel time", "after": "markets remain inactive are properly quantified by the novel clock, and the corresponding (e.g., overnight) returns are consistently taken into account for financial applications", "start_char_pos": 582, "end_char_pos": 876 } ]
[ 0, 294, 523, 702 ]
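An illustrative aside: the record above concerns redefining the time coordinate of a price series so that increments become closer to stationary with simple scaling. The sketch below implements a much simpler relative of that idea, often called volatility time (resampling on a clock that advances with cumulative squared returns); it is a stand-in, not the Kolmogorov-Smirnov-based protocol of the paper, and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
n = 10000
t = np.linspace(0.0, 1.0, n)
# Deterministic U-shaped volatility (high near the start and end of the "day").
vol = 0.5 + 1.5 * (2.0 * np.abs(t - 0.5)) ** 2
log_price = np.cumsum(vol * rng.normal(scale=n ** -0.5, size=n))

returns = np.diff(log_price)
activity = np.cumsum(returns ** 2)   # the new clock: cumulative squared returns
activity = activity / activity[-1]   # normalized to [0, 1]

m = 500                              # number of intervals of the new clock
grid = np.linspace(0.0, 1.0, m + 1)
idx = np.searchsorted(activity, grid[1:-1])
idx = np.concatenate(([0], idx, [len(returns) - 1]))
new_returns = np.diff(log_price[1:][idx])

q, k = n // 4, m // 4
print("raw return std, first vs second quarter:      ",
      round(returns[:q].std(), 4), round(returns[q:2 * q].std(), 4))
print("new-clock return std, first vs second quarter:",
      round(new_returns[:k].std(), 4), round(new_returns[k:2 * k].std(), 4))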
1612.08116
1
We introduce a tensor-based algebraic clustering method to extract sparse, low-dimensional structure from multidimensional arrays of experimental data. Our methodology is applicable to high dimensional data structures that arise across the sciences. Specifically we introduce a new way to cluster data subject to multi-indexed structural constraints via integer programming. The method can work as a stand-alone clustering tool or in combination with established methods. We implement this approach on a dataset consisting of genetically diverse breast cancer cell lines exposed to a range of signaling molecules, where each experiment is labelled by its combination of cell line and ligand. The data consist of time-course measurements of the immediate-early signaling of mitogen activated protein kinase (MAPK ), and phosphoinositide 3-kinase (PI3K)/Protein kinase B (AKT ). By respecting the multi-indexed structure of the experimental data, the analysis can be optimized for biological interpretation and therapeutic understanding. We quantify the heterogeneity of breast cancer cell subtypes and systematically explore mechanistic models of MAP Kinase and PI3K (phosphoinositide 3-kinase)/AKT crosstalk based on the results of our method .
We introduce a tensor-based clustering method to extract sparse, low-dimensional structure from high-dimensional, multi-indexed datasets. Specifically, this framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. We illustrate our method on a collection of experiments measuring the response of genetically diverse breast cancer cell lines to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains time-course measurements of the early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and respecting the multi-indexed structure of the data, our clustering analysis can be optimized for biological interpretation and therapeutic understanding. We then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to quantify the heterogeneity of breast cancer cell subtypes , and leads to hypotheses about the mechanisms by which cell lines respond to ligands. Our clustering method is general and can be tailored to a variety of applications in science and industry .
[ { "type": "D", "before": "algebraic", "after": null, "start_char_pos": 28, "end_char_pos": 37 }, { "type": "R", "before": "multidimensional arrays of experimental data. Our methodology is applicable to high dimensional data structures that arise across the sciences. Specifically we introduce a new way to cluster data subject to multi-indexed structural constraints via integer programming. The method can work as a stand-alone clustering tool or in combination with established methods. We implement this approach on a dataset consisting of", "after": "high-dimensional, multi-indexed datasets. Specifically, this framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. We illustrate our method on a collection of experiments measuring the response of", "start_char_pos": 106, "end_char_pos": 525 }, { "type": "R", "before": "exposed to a range of signaling molecules, where each experiment is labelled by its combination of cell line and ligand. The data consist of", "after": "to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains", "start_char_pos": 571, "end_char_pos": 711 }, { "type": "R", "before": "immediate-early signaling of mitogen activated protein kinase (MAPK ), and phosphoinositide 3-kinase (PI3K)/Protein kinase B (AKT ). By", "after": "early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and", "start_char_pos": 744, "end_char_pos": 879 }, { "type": "R", "before": "experimental data, the", "after": "data, our clustering", "start_char_pos": 926, "end_char_pos": 948 }, { "type": "A", "before": null, "after": "then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to", "start_char_pos": 1039, "end_char_pos": 1039 }, { "type": "R", "before": "and systematically explore mechanistic models of MAP Kinase and PI3K (phosphoinositide 3-kinase)/AKT crosstalk based on the results of our method", "after": ", and leads to hypotheses about the mechanisms by which cell lines respond to ligands. Our clustering method is general and can be tailored to a variety of applications in science and industry", "start_char_pos": 1098, "end_char_pos": 1243 } ]
[ 0, 151, 249, 374, 471, 691, 876, 1035 ]
1612.08116
2
We introduce a tensor-based clustering method to extract sparse, low-dimensional structure from high-dimensional, multi-indexed datasets. Specifically, this framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. We illustrate our method on a collection of experiments measuring the response of genetically diverse breast cancer cell lines to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains time-course measurements of the early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and respecting the multi-indexed structure of the data, our clustering analysis can be optimized for biological interpretation and therapeutic understanding. We then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to quantify the heterogeneity of breast cancer cell subtypes, and leads to hypotheses about the mechanisms by which cell lines respond to ligands. Our clustering method is general and can be tailored to a variety of applications in science and industry .
We introduce a tensor-based clustering method to extract sparse, low-dimensional structure from high-dimensional, multi-indexed datasets. This framework is designed to enable detection of clusters of data in the presence of structural requirements which we encode as algebraic constraints in a linear program. Our clustering method is general and can be tailored to a variety of applications in science and industry. We illustrate our method on a collection of experiments measuring the response of genetically diverse breast cancer cell lines to an array of ligands. Each experiment consists of a cell line-ligand combination, and contains time-course measurements of the early-signalling kinases MAPK and AKT at two different ligand dose levels. By imposing appropriate structural constraints and respecting the multi-indexed structure of the data, the analysis of clusters can be optimized for biological interpretation and therapeutic understanding. We then perform a systematic, large-scale exploration of mechanistic models of MAPK-AKT crosstalk for each cluster. This analysis allows us to quantify the heterogeneity of breast cancer cell subtypes, and leads to hypotheses about the signalling mechanisms that mediate the response of the cell lines to ligands .
[ { "type": "R", "before": "Specifically, this", "after": "This", "start_char_pos": 138, "end_char_pos": 156 }, { "type": "A", "before": null, "after": "Our clustering method is general and can be tailored to a variety of applications in science and industry.", "start_char_pos": 324, "end_char_pos": 324 }, { "type": "R", "before": "our clustering analysis", "after": "the analysis of clusters", "start_char_pos": 759, "end_char_pos": 782 }, { "type": "R", "before": "mechanisms by which cell lines respond to ligands. Our clustering method is general and can be tailored to a variety of applications in science and industry", "after": "signalling mechanisms that mediate the response of the cell lines to ligands", "start_char_pos": 1097, "end_char_pos": 1253 } ]
[ 0, 137, 323, 475, 655, 860, 976, 1147 ]
1612.08763
1
We introduce a general framework for biological systems, called MESSI systems, that describe Modifications of type Enzyme-Substrate or Swap with Intermediates, and we prove general results based on the network structure. Many post-translational modification networks are MESSI systems. For example: the motifs in Feliu-Wiuf'12 , sequential distributive multisite phosphorylation networks, sequential processive multisite phosphorylation networks, most of the examples in Angeli et al. '07 , phosphorylation cascades, two component systems as in Kothamachu et al. '15 , the bacterial EnvZ/OmpR network in Shinar-Feinberg'10 and all linear networks. We show that, under mass-action kinetics, MESSI systems are conservative. We simplify the study of steady states of these systems by explicit elimination of intermediate complexes (inspired by Feliu-Wiuf'12 , 13 and Thomson-Gunawardena'09)and we define an important subclass of MESSI systems with toric steady states . We give for MESSI systems with toric steady states an easy algorithm to determine the capacity for multistationarity. In this case, the algorithm provides rate constants for which multistationarity takes place, based on the theory of oriented matroids.
We introduce a general framework for biological systems, called MESSI systems, that describe Modifications of type Enzyme-Substrate or Swap with Intermediates, and we prove general results based on the network structure. Many post-translational modification networks are MESSI systems. For example: the motifs in Feliu and Wiuf (2012a) , sequential distributive and processive multisite phosphorylation networks, most of the examples in Angeli et al. (2007) , phosphorylation cascades, two component systems as in Kothamachu et al. (2015) , the bacterial EnvZ/OmpR network in Shinar and Feinberg (2010) , and all linear networks. We show that, under mass-action kinetics, MESSI systems are conservative. We simplify the study of steady states of these systems by explicit elimination of intermediate complexes and we give conditions to ensure an explicit rational parametrization of the variety of steady states (inspired by Feliu and Wiuf (2013a , 2013b), Thomson and Gunawardena (2009) ). We define an important subclass of MESSI systems with toric steady states P\'erez Mill\'an et al. (2012) and we give for MESSI systems with toric steady states an easy algorithm to determine the capacity for multistationarity. In this case, the algorithm provides rate constants for which multistationarity takes place, based on the theory of oriented matroids.
[ { "type": "R", "before": "Feliu-Wiuf'12", "after": "Feliu and Wiuf (2012a)", "start_char_pos": 313, "end_char_pos": 326 }, { "type": "R", "before": "multisite phosphorylation networks, sequential", "after": "and", "start_char_pos": 353, "end_char_pos": 399 }, { "type": "R", "before": "'07", "after": "(2007)", "start_char_pos": 485, "end_char_pos": 488 }, { "type": "R", "before": "'15", "after": "(2015)", "start_char_pos": 563, "end_char_pos": 566 }, { "type": "R", "before": "Shinar-Feinberg'10 and", "after": "Shinar and Feinberg (2010)", "start_char_pos": 604, "end_char_pos": 626 }, { "type": "A", "before": null, "after": ", and", "start_char_pos": 627, "end_char_pos": 627 }, { "type": "A", "before": null, "after": "and we give conditions to ensure an explicit rational parametrization of the variety of steady states", "start_char_pos": 829, "end_char_pos": 829 }, { "type": "R", "before": "Feliu-Wiuf'12", "after": "Feliu and Wiuf (2013a", "start_char_pos": 843, "end_char_pos": 856 }, { "type": "R", "before": "13 and Thomson-Gunawardena'09)and we", "after": "2013b), Thomson and Gunawardena (2009)", "start_char_pos": 859, "end_char_pos": 895 }, { "type": "A", "before": null, "after": "). We", "start_char_pos": 896, "end_char_pos": 896 }, { "type": "R", "before": ". We", "after": "P\\'erez Mill\\'an et al. (2012)", "start_char_pos": 968, "end_char_pos": 972 }, { "type": "A", "before": null, "after": "and we", "start_char_pos": 973, "end_char_pos": 973 } ]
[ 0, 220, 285, 648, 722, 969, 1088 ]
1612.09103
1
We fully characterize discrete-time dynamic convex expectations (\mathcal{E } upper semianalytic functions - in particular we work without a reference measure and do not assume essential suprema to exist. It is shown that \mathcal{E } (\cdot|U)}\colon }\to(U) be a sublinear increasing functional which leaves L(U) invariant. We prove that there exists a set-valued mapping P_V from U to the set of probabilities on V with compact convex values and analytic graph such that E(X|U)(u)= P\in\mathcal{P_V(u)} \int_V X(u,v)\,P(dv) if and only if \mathcal{E}(\cdot |U) } is pointwise continuous from below and continuous from above on the continuous functions if and only if a dual representation of \mathcal{E_t in terms of conditional expectations minus the convex conjugate of \mathcal{E}_t holds true, where the conjugate is lower semianalytic with pointwise weakly compact level sets. Moreover, we provide a dual characterization of the dynamic property , i.e. we show that \mathcal{E}_t} (\cdot)}\colon } \to which leaves the constants invariant, the tower property \mathcal{E}(\cdot)} =\mathcal{E} _t\circ\mathcal{E_{t+1} if and only if the convex conjugate of \mathcal{E}_t has an additive form. We also consider dynamic convex expectations defined on the set of discrete-time stochastic processes} (\cdot|U)) is characterized via a pasting property of the representing sets of probabilities. As applications, we characterize under which conditions the product of a set of probabilities and a set of kernels is compact, and under which conditions a nonlinear version of Fubini's theorem holds true} .
Given two Polish spaces U and V, denote by \mathcal{L \times} V) and \mathcal{L upper semianalytic functions from U \times} V and U to the real line, respectively. Let \mathcal{E(\cdot|U)}\colon\mathcal{L \times}V)\to\mathcal{L(U) be a sublinear increasing functional which leaves L(U) invariant. We prove that there exists a set-valued mapping P_V from U to the set of probabilities on V with compact convex values and analytic graph such that E(X|U)(u)= P\in\mathcal{P_V(u)} \int_V X(u,v)\,P(dv) if and only if \mathcal{E}(\cdot |U) } is pointwise continuous from below and continuous from above on the continuous functions _t in terms of conditional expectations minus the convex conjugate of \mathcal{E}_t holds true, where the conjugate is lower semianalytic with pointwise weakly compact level sets. Moreover, we provide a dual characterization of the dynamic property , i.e. we show that \mathcal{E}_t} . Further, given another sublinear increasing functional \mathcal{E(\cdot)}\colon\mathcal{L \times} V)\to\mathbb{R which leaves the constants invariant, the tower property \mathcal{E}(\cdot)} =\mathcal{E} _{t+1} if and only if the convex conjugate of \mathcal{E}_t has an additive form. We also consider dynamic convex expectations defined on the set of discrete-time stochastic processes} (\mathcal{E(\cdot|U)) is characterized via a pasting property of the representing sets of probabilities. As applications, we characterize under which conditions the product of a set of probabilities and a set of kernels is compact, and under which conditions a nonlinear version of Fubini's theorem holds true} .
[ { "type": "R", "before": "We fully characterize discrete-time dynamic convex expectations (\\mathcal{E", "after": "Given two Polish spaces U and V, denote by \\mathcal{L", "start_char_pos": 0, "end_char_pos": 75 }, { "type": "A", "before": null, "after": "\\times", "start_char_pos": 76, "end_char_pos": 76 }, { "type": "A", "before": null, "after": "V) and \\mathcal{L", "start_char_pos": 78, "end_char_pos": 78 }, { "type": "R", "before": "- in particular we work without a reference measure and do not assume essential suprema to exist. It is shown that \\mathcal{E", "after": "from U", "start_char_pos": 108, "end_char_pos": 233 }, { "type": "A", "before": null, "after": "\\times", "start_char_pos": 234, "end_char_pos": 234 }, { "type": "A", "before": null, "after": "V and U to the real line, respectively. Let \\mathcal{E", "start_char_pos": 236, "end_char_pos": 236 }, { "type": "A", "before": null, "after": "\\mathcal{L", "start_char_pos": 252, "end_char_pos": 252 }, { "type": "A", "before": null, "after": "\\times", "start_char_pos": 253, "end_char_pos": 253 }, { "type": "A", "before": null, "after": "V)", "start_char_pos": 254, "end_char_pos": 254 }, { "type": "A", "before": null, "after": "\\mathcal{L", "start_char_pos": 257, "end_char_pos": 257 }, { "type": "D", "before": "if and only if a dual representation of \\mathcal{E", "after": null, "start_char_pos": 656, "end_char_pos": 706 }, { "type": "A", "before": null, "after": ". Further, given another sublinear increasing functional \\mathcal{E", "start_char_pos": 990, "end_char_pos": 990 }, { "type": "A", "before": null, "after": "\\mathcal{L", "start_char_pos": 1004, "end_char_pos": 1004 }, { "type": "A", "before": null, "after": "\\times", "start_char_pos": 1005, "end_char_pos": 1005 }, { "type": "A", "before": null, "after": "V)", "start_char_pos": 1007, "end_char_pos": 1007 }, { "type": "A", "before": null, "after": "\\mathbb{R", "start_char_pos": 1010, "end_char_pos": 1010 }, { "type": "D", "before": "_t\\circ\\mathcal{E", "after": null, "start_char_pos": 1101, "end_char_pos": 1118 }, { "type": "A", "before": null, "after": "(\\mathcal{E", "start_char_pos": 1303, "end_char_pos": 1303 } ]
[ 0, 205, 326, 527, 885, 1199, 1396 ]
1612.09244
1
Over the last 23 years, the U.S. Securities and Exchange Commission has required over 34,000 companies to file over 165,000 annual reports. These reports, the so-called "Form 10-Ks," contain a characterization of a company's financial performance and its risks, including the regulatory environment in which a company operates. In this paper, we analyze over 4.5 million references to U.S. Federal Acts and Agencies contained within these reports to build a mean-field measurement of temperature and diversity in this regulatory ecosystem . While individuals across the political, economic, and academic world frequently refer to trends in this regulatory ecosystem, there has been far less attention paid to supporting such claims with large-scale, longitudinal data. In this paper, we document an increase in the regulatory energy per filing, i.e., a warming "temperature." We also find that the diversity of the regulatory ecosystem has been increasing over the past two decades, as measured by the dimensionality of the regulatory space and distance between the "regulatory bitstrings" of companies. This measurement framework and its ongoing application contribute an important step towards improving academic and policy discussions around legal complexity and the regulationof large-scale human techno-social systems .
Over the last 23 years, the U.S. Securities and Exchange Commission has required over 34,000 companies to file over 165,000 annual reports. These reports, the so-called "Form 10-Ks," contain a characterization of a company's financial performance and its risks, including the regulatory environment in which a company operates. In this paper, we analyze over 4.5 million references to U.S. Federal Acts and Agencies contained within these reports to build a mean-field measurement of temperature and diversity in this regulatory ecosystem , where companies URLanisms inhabiting the regulatory environment . While individuals across the political, economic, and academic world frequently refer to trends in this regulatory ecosystem, far less attention has been paid to supporting such claims with large-scale, longitudinal data. In this paper, we document an increase in the regulatory energy per filing, i.e., a warming "temperature." We also find that the diversity of the regulatory ecosystem has been increasing over the past two decades, as measured by the dimensionality of the regulatory space and distance between the "regulatory bitstrings" of companies. These findings support the claim that regulatory activity and complexity are increasing, and this measurement framework contributes an important step towards improving academic and policy discussions around legal complexity and regulation .
[ { "type": "A", "before": null, "after": ", where companies URLanisms inhabiting the regulatory environment", "start_char_pos": 539, "end_char_pos": 539 }, { "type": "D", "before": "there has been", "after": null, "start_char_pos": 668, "end_char_pos": 682 }, { "type": "A", "before": null, "after": "has been", "start_char_pos": 702, "end_char_pos": 702 }, { "type": "R", "before": "This measurement framework and its ongoing application contribute", "after": "These findings support the claim that regulatory activity and complexity are increasing, and this measurement framework contributes", "start_char_pos": 1106, "end_char_pos": 1171 }, { "type": "R", "before": "the regulationof large-scale human techno-social systems", "after": "regulation", "start_char_pos": 1268, "end_char_pos": 1324 } ]
[ 0, 139, 327, 541, 770, 877, 1105 ]
1612.09379
2
Assuming that mutation and fixation processes are reversible Markov processes, we prove that the equilibrium ensemble of sequences obeys a Boltzmann distribution with \exp(4N_e m(1 - 1/(2N))), where m is a Malthusian fitness and N_e and N are the effective and actual population sizes. On the other hand, the probability distribution of sequences with maximum entropy that satisfies a given amino acid composition at each site and a given pairwise amino acid frequency at each site pair is a Boltzmann distribution with \exp(-\psi_N), where \psi_N is represented as the sum of one body and pairwise potentials. A protein folding theory indicates that homologous sequences obey a canonical ensemble characterized by \exp(-\Delta G_{ND}/k_B T_s) or by \exp(- G_{N}/k_B T_s) if an amino acid composition is kept constant, where \Delta G_{ND} \equiv G_N - G_D, G_N and G_D are the native and denatured free energies, and T_s is selective temperature. Thus, 4N_e m (1 - 1/(2N)), -\Delta , and -\Delta } G_{ND}/k_B T_s , and -\Delta \psi_{ND must be equivalent to each other. Based on the analysis of the changes (\Delta \psi_N) of \psi_N due to single nucleotide nonsynonymous substitutions, T_s, and then glass transition temperature T_g, and \Delta G_{ND} are estimated with reasonable values for 14 protein domains. In addition, approximating the probability density function (PDF) of \Delta \psi_N by a log-normal distribution, PDFs of \Delta \psi_N and K_a/K_s, which is the ratio of nonsynonymous to synonymous substitution rate per site, in all and in fixed mutants are estimated. It is confirmed that T_s negatively correlates with the average of K_a/K_s. Stabilizing mutations are significantly fixed by positive selection, and balance with destabilizing mutations fixed by random drift. Contrary to the neutral theory, the proportion of neutral selection is not large .
Assuming that mutation and fixation processes are reversible Markov processes, we prove that the equilibrium ensemble of sequences obeys a Boltzmann distribution with \exp(4N_e m(1 - 1/(2N))), where m is Malthusian fitness and N_e and N are effective and actual population sizes. On the other hand, the probability distribution of sequences with maximum entropy that satisfies a given amino acid composition at each site and a given pairwise amino acid frequency at each site pair is a Boltzmann distribution with \exp(-\psi_N), where \psi_N is represented as the sum of one body and pairwise potentials. A protein folding theory indicates that homologous sequences obey a canonical ensemble characterized by \exp(-\Delta G_{ND}/k_B T_s) or by \exp(- G_{N}/k_B T_s) if an amino acid composition is kept constant, where \Delta G_{ND} \equiv G_N - G_D, G_N and G_D are the native and denatured free energies, and T_s is selective temperature. Thus, 4N_e m (1 - 1/(2N)), -\Delta \psi_{ND, and -\Delta } G_{ND}/k_B T_s must be equivalent to each other. Based on the analysis of the changes (\Delta \psi_N) of \psi_N due to single nucleotide nonsynonymous substitutions, T_s, and then glass transition temperature T_g, and \Delta G_{ND} are estimated with reasonable values for 14 protein domains. In addition, approximating the probability density function (PDF) of \Delta \psi_N by a log-normal distribution, PDFs of \Delta \psi_N and K_a/K_s, which is the ratio of nonsynonymous to synonymous substitution rate per site, in all and in fixed mutants are estimated. It is confirmed that T_s negatively correlates with the mean of K_a/K_s. Stabilizing mutations are significantly fixed by positive selection, and balance with destabilizing mutations fixed by random drift. Supporting the nearly neutral theory, neutral selection is not significant .
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 204, "end_char_pos": 205 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 243, "end_char_pos": 246 }, { "type": "A", "before": null, "after": "\\psi_{ND", "start_char_pos": 982, "end_char_pos": 982 }, { "type": "D", "before": ", and -\\Delta \\psi_{ND", "after": null, "start_char_pos": 1013, "end_char_pos": 1035 }, { "type": "R", "before": "average", "after": "mean", "start_char_pos": 1639, "end_char_pos": 1646 }, { "type": "R", "before": "Contrary to the", "after": "Supporting the nearly", "start_char_pos": 1792, "end_char_pos": 1807 }, { "type": "D", "before": "the proportion of", "after": null, "start_char_pos": 1824, "end_char_pos": 1841 }, { "type": "R", "before": "large", "after": "significant", "start_char_pos": 1867, "end_char_pos": 1872 } ]
[ 0, 285, 610, 946, 1069, 1313, 1582, 1791 ]
1612.09553
1
How do financial crises and stock-market fluctuations affect investor behavior and the dynamicsof financial markets in the long run ? Recent evidence suggests that individuals overweight personal experiences of macroeconomic shocks when forming beliefs and making investmentdecisions. We propose a theoretical foundation for such experience-based learning and derive its dynamic implications in a simple OLG model. Risk averse agents invest in a risky and a risk-free asset. They form beliefsabout the payoff of the risky asset based on the two key components of experience effects: (1) they overweight data observed during their lifetimes so far, and (2) they exhibit recency bias. In equilibrium, prices depend on past dividends, but only on those observed by the generations that are alive, and they are more sensitive to more recent dividends. Younger generations react more strongly to recent experiences than older generations, and hence have higher demand for the risky asset in good times, but lower demand in bad times. As a result, a crisis increases the average age of stock market participants, while booms have the opposite effect. The stronger the disagreement across generations (e.g., after a recent shock), the higher is the trade volume. We also show that, vice versa, the demographic composition of markets significantly influences the response to aggregate shocks. We generate empirical results on stock-market participation, stock-market investment, and trade volume from theSurvey of Consumer Finances , merged with CRSP and historical data on stock-market performance, that are consistent with the model predictions .
How do macro-financial shocks affect investor behavior and market dynamics ? Recent evidence on experience effects suggests a long-lasting influence of personally experienced outcomes on investor beliefs and investment, but also significant differences across older and younger generations. We formalize experience-based learning in an OLG model, where different cross-cohort experiences generate persistent heterogeneity in beliefs, portfolio choices, and trade. The model allows us to characterize a novel link between investor demographics and the dependence of prices on past dividends, while also generating known features of asset prices, such as excess volatility and return predictability. The model produces new implications for the cross-section of asset holdings, trade volume, and investors' heterogenous responses to recent financial crises, which we show to be in line with the data .
[ { "type": "R", "before": "financial crises and stock-market fluctuations", "after": "macro-financial shocks", "start_char_pos": 7, "end_char_pos": 53 }, { "type": "R", "before": "the dynamicsof financial markets in the long run", "after": "market dynamics", "start_char_pos": 83, "end_char_pos": 131 }, { "type": "R", "before": "suggests that individuals overweight personal experiences of macroeconomic shocks when forming beliefs and making investmentdecisions. We propose a theoretical foundation for such", "after": "on experience effects suggests a long-lasting influence of personally experienced outcomes on investor beliefs and investment, but also significant differences across older and younger generations. We formalize", "start_char_pos": 150, "end_char_pos": 329 }, { "type": "R", "before": "and derive its dynamic implications in a simple OLG model. Risk averse agents invest in a risky and a risk-free asset. They form beliefsabout the payoff of the risky asset based on the two key components of experience effects: (1) they overweight data observed during their lifetimes so far, and (2) they exhibit recency bias. In equilibrium, prices depend", "after": "in an OLG model, where different cross-cohort experiences generate persistent heterogeneity in beliefs, portfolio choices, and trade. The model allows us to characterize a novel link between investor demographics and the dependence of prices", "start_char_pos": 356, "end_char_pos": 712 }, { "type": "D", "before": "but only on those observed by the generations that are alive, and they are more sensitive to more recent dividends. Younger generations react more strongly to recent experiences than older generations, and hence have higher demand for the risky asset in good times, but lower demand in bad times. As a result, a crisis increases the average age of stock market participants, while booms have the opposite effect. The stronger the disagreement across generations (e.g., after a recent shock), the higher is the trade volume. We also show that, vice versa, the demographic composition of markets significantly influences the response to aggregate shocks. We generate empirical results on stock-market participation, stock-market investment, and trade volume from the", "after": null, "start_char_pos": 732, "end_char_pos": 1496 }, { "type": "D", "before": "Survey of Consumer Finances", "after": null, "start_char_pos": 1496, "end_char_pos": 1523 }, { "type": "R", "before": ", merged with CRSP and historical data on stock-market performance, that are consistent with the model predictions", "after": "while also generating known features of asset prices, such as excess volatility and return predictability. The model produces new implications for the cross-section of asset holdings, trade volume, and investors' heterogenous responses to recent financial crises, which we show to be in line with the data", "start_char_pos": 1524, "end_char_pos": 1638 } ]
[ 0, 284, 414, 474, 682, 847, 1028, 1144, 1255, 1384 ]
1701.01185
1
This paper shows how to carry out efficient asymptotic variance reduction when estimating volatility in the presence of stochastic volatility and microstructure noise with the realized kernels (RK) from [Barndorff-Nielsen et al., 2008] and the quasi-maximum likelihood estimator (QMLE) studied in [Xiu, 2010]. To obtain such a reduction, we chop the data into B blocks, compute the RK (or QMLE) on each block, and aggregate the block estimates. The ratio of asymptotic variance over the bound of asymptotic efficiency converges as B increases to the ratio in the parametric version of the problem, i.e. 1.0025 in the case of the fastest RK Tukey-Hanning 16 and 1 for the QMLE. The estimators are shown to be robust to jumps in price process and stochastic sampling times . The finite sample performance of both estimators is investigated in simulations, while empirical work illustrates the gain in practice.
This paper shows how to carry out efficient asymptotic variance reduction when estimating volatility in the presence of stochastic volatility and microstructure noise with the realized kernels (RK) from [Barndorff-Nielsen et al., 2008] and the quasi-maximum likelihood estimator (QMLE) studied in [Xiu, 2010]. To obtain such a reduction, we chop the data into B blocks, compute the RK (or QMLE) on each block, and aggregate the block estimates. The ratio of asymptotic variance over the bound of asymptotic efficiency converges as B increases to the ratio in the parametric version of the problem, i.e. 1.0025 in the case of the fastest RK Tukey-Hanning 16 and 1 for the QMLE. The impact of stochastic sampling times and jump in the price process is examined carefully . The finite sample performance of both estimators is investigated in simulations, while empirical work illustrates the gain in practice.
[ { "type": "R", "before": "estimators are shown to be robust to jumps in price process and", "after": "impact of", "start_char_pos": 681, "end_char_pos": 744 }, { "type": "A", "before": null, "after": "and jump in the price process is examined carefully", "start_char_pos": 771, "end_char_pos": 771 } ]
[ 0, 309, 444, 676, 773 ]
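An illustrative aside: a toy version of the blocking-and-aggregation idea in the record above, applied to simulated noisy high-frequency prices. The per-block estimator here is a simple autocovariance (moment-matching) correction exploiting the MA(1) structure that i.i.d. microstructure noise induces in observed returns; it stands in for the realized kernel and QMLE of the paper, and every parameter value is an arbitrary assumption.

import numpy as np

rng = np.random.default_rng(3)
n = 23400                      # one observation per second over a 6.5-hour day
sigma = 0.2                    # volatility of the efficient log price
dt = 1.0 / (252 * n)
noise_sd = 5e-5                # i.i.d. microstructure noise

efficient = np.cumsum(sigma * np.sqrt(dt) * rng.normal(size=n))
observed = efficient + noise_sd * rng.normal(size=n)
true_iv = sigma ** 2 * n * dt  # integrated variance of the day

def block_estimate(prices):
    # Noise-corrected variance for one block: observed returns follow an MA(1),
    # so adding twice the first-order autocovariance removes the noise term.
    r = np.diff(prices)
    gamma0 = np.mean(r ** 2)
    gamma1 = np.mean(r[1:] * r[:-1])   # approx. minus the noise variance
    return len(r) * (gamma0 + 2.0 * gamma1)

B = 10
iv_hat = sum(block_estimate(block) for block in np.array_split(observed, B))

print("true integrated variance:", true_iv)
print("blocked, noise-corrected:", iv_hat)
print("naive realized variance: ", np.sum(np.diff(observed) ** 2))  # noise-dominated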
1701.01511
1
The cell has the ability to convert an extracellular biochemical change into the expression of genetic information through a chain of intracellular cycle reactionswith information conversion . Here, we show that an individual reaction cycle can be regarded as a kind of Szilard engine. Accordingly, the work done at the individual cycle level can be calculated by measuring the amount of information transmitted. As a result, we can obtain a method for quantifying the information transduction of biochemical reaction cascades .
A cell has the ability to convert an environmental change into the expression of genetic information through a chain of intracellular signal transduction reactions . Here, we aimed to develop a method for quantifying this signal transduction. We showed that the channel capacities of individual steps in a given general model cascade were equivalent in an independent manner, and were given by the entropy production rate. Signal transduction was transmitted by fluctuation of the entropy production rate and quantified transduction was estimated by the work done in individual steps. If the individual step representing the modification to demodification of the signal molecules is considered to be a Szilard engine, the maximal work done is equivalent to the chemical potential change of the messenger that is consumed during the modification reaction. Our method was applicable to calculate the channel capacity of the MAPK cascade. In conclusion, our method is suitable for quantitative analyses of signal transduction .
[ { "type": "R", "before": "The", "after": "A", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "R", "before": "extracellular biochemical", "after": "environmental", "start_char_pos": 39, "end_char_pos": 64 }, { "type": "R", "before": "cycle reactionswith information conversion", "after": "signal transduction reactions", "start_char_pos": 148, "end_char_pos": 190 }, { "type": "R", "before": "show that an individual reaction cycle can be regarded as a kind of Szilard engine. Accordingly, the work done at the individual cycle level can be calculated by measuring the amount of information transmitted. As a result, we can obtain a method for quantifying the information transduction of biochemical reaction cascades", "after": "aimed to develop a method for quantifying this signal transduction. We showed that the channel capacities of individual steps in a given general model cascade were equivalent in an independent manner, and were given by the entropy production rate. Signal transduction was transmitted by fluctuation of the entropy production rate and quantified transduction was estimated by the work done in individual steps. If the individual step representing the modification to demodification of the signal molecules is considered to be a Szilard engine, the maximal work done is equivalent to the chemical potential change of the messenger that is consumed during the modification reaction. Our method was applicable to calculate the channel capacity of the MAPK cascade. In conclusion, our method is suitable for quantitative analyses of signal transduction", "start_char_pos": 202, "end_char_pos": 526 } ]
[ 0, 192, 285, 412 ]
1701.01726
1
In this paper we present the new PlanetServer,a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python API capable of analyzing a wide variety of hyperspectral data from different planetary bodies. The research case studies are focusing on 1) the characterization of different hydrosilicates such as chlorites, prehnites and kaolinites in the Nili Fossae area on Mars , and 2) the characterization of ice (CO 2 and H 2 O ice) in two different areas of Mars where ice was reported in a nearly pure state. Results show positive outcome in hyperspectral analysis and visualization compared to previous literature, therefore we suggest using PlanetServer for such investigations.
The lack of open-source tools for hyperspectral data visualization and analysiscreates a demand for new tools. In this paper we present the new PlanetServer,a set of tools comprising a web Geographic Information System (GIS) and arecently developed Python Application Programming Interface (API) capableof visualizing and analyzing a wide variety of hyperspectral data from differentplanetary bodies. Current WebGIS open-source tools are evaluated in orderto give an overview and contextualize how PlanetServer can help in this mat-ters. The web client is thoroughly described as well as the datasets availablein PlanetServer. Also, the Python API is described and exposed the reason ofits development. Two different examples of mineral characterization of differenthydrosilicates such as chlorites, prehnites and kaolinites in the Nili Fossae areaon Mars are presented. As the obtained results show positive outcome in hyper-spectral analysis and visualization compared to previous literature, we suggestusing the PlanetServer approach for such investigations.
[ { "type": "A", "before": null, "after": "The lack of open-source tools for hyperspectral data visualization and analysiscreates a demand for new tools.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "a recently developed Python API capable of", "after": "arecently developed Python Application Programming Interface (API) capableof visualizing and", "start_char_pos": 119, "end_char_pos": 161 }, { "type": "R", "before": "different planetary bodies. The research case studies are focusing on 1) the characterization of different hydrosilicates", "after": "differentplanetary bodies. Current WebGIS open-source tools are evaluated in orderto give an overview and contextualize how PlanetServer can help in this mat-ters. The web client is thoroughly described as well as the datasets availablein PlanetServer. Also, the Python API is described and exposed the reason ofits development. Two different examples of mineral characterization of differenthydrosilicates", "start_char_pos": 214, "end_char_pos": 335 }, { "type": "R", "before": "area on Mars , and 2) the characterization of ice (CO 2 and H 2 O ice) in two different areas of Mars where ice was reported in a nearly pure state. Results", "after": "areaon Mars are presented. As the obtained results", "start_char_pos": 399, "end_char_pos": 555 }, { "type": "R", "before": "hyperspectral", "after": "hyper-spectral", "start_char_pos": 581, "end_char_pos": 594 }, { "type": "R", "before": "therefore we suggest using PlanetServer", "after": "we suggestusing the PlanetServer approach", "start_char_pos": 655, "end_char_pos": 694 } ]
[ 0, 241, 547 ]
1701.02028
1
Asset correlations play an important role in credit portfolio modelling. One possible data source for their estimation are default time series. This study investigates the systematic error that is made if the exposure pool underlying a default time series is assumed to be homogeneous when in reality it is not. We find that the asset correlation will always be underestimated if homogeneity with respect to the probability of default (PD) is wrongly assumed, and the error is the larger the more spread out the PD is within the exposure pool. If the exposure pool is inhomogeneous with respect to the asset correlation itself then the error may be going in both directions, but for most PD- and asset correlation ranges relevant in practice the asset correlation is systematically underestimated. Both effects stack up and the error tends to become even larger if in addition we assume a negative correlation between asset correlation and PD within the exposure pool, an assumption that is plausible in many circumstances and consistent with the Basel RWA formula. It is argued that the generic inhomogeneity effect described in this paper is one of the reasons why asset correlations measured from default data tend to be lower than asset correlations derived from asset value data.
A possible data source for the estimation of asset correlations is default time series. This study investigates the systematic error that is made if the exposure pool underlying a default time series is assumed to be homogeneous when in reality it is not. We find that the asset correlation will always be underestimated if homogeneity with respect to the probability of default (PD) is wrongly assumed, and the error is the larger the more spread out the PD is within the exposure pool. If the exposure pool is inhomogeneous with respect to the asset correlation itself then the error may be going in both directions, but for most PD- and asset correlation ranges relevant in practice the asset correlation is systematically underestimated. Both effects stack up and the error tends to become even larger if in addition a negative correlation between asset correlation and PD is assumed, which is plausible in many circumstances and consistent with the Basel RWA formula. It is argued that the generic inhomogeneity effect described is one of the reasons why asset correlations measured from default data tend to be lower than asset correlations derived from asset value data.
[ { "type": "R", "before": "Asset correlations play an important role in credit portfolio modelling. One", "after": "A", "start_char_pos": 0, "end_char_pos": 76 }, { "type": "R", "before": "their estimation are", "after": "the estimation of asset correlations is", "start_char_pos": 102, "end_char_pos": 122 }, { "type": "D", "before": "we assume", "after": null, "start_char_pos": 877, "end_char_pos": 886 }, { "type": "R", "before": "within the exposure pool, an assumption that", "after": "is assumed, which", "start_char_pos": 943, "end_char_pos": 987 }, { "type": "D", "before": "in this paper", "after": null, "start_char_pos": 1127, "end_char_pos": 1140 } ]
[ 0, 72, 143, 311, 543, 797, 1065 ]
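An illustrative aside: a small simulation in the spirit of the record above. Defaults are generated from a one-factor Gaussian (Vasicek-type) model with a heterogeneous PD pool, and an asset correlation is then backed out by moment matching under the wrong assumption of a homogeneous pool. The parameter values, the large-portfolio approximation in the matching step, and the helper names are assumptions made for illustration.

import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

rng = np.random.default_rng(4)
true_rho = 0.20
years, obligors = 1000, 2000
pds = 10 ** rng.uniform(-3, -1, size=obligors)   # heterogeneous PDs between 0.1% and 10%
thresholds = norm.ppf(pds)

z = rng.normal(size=years)                       # systematic factor, one draw per year
eps = rng.normal(size=(years, obligors))         # idiosyncratic factors
assets = np.sqrt(true_rho) * z[:, None] + np.sqrt(1.0 - true_rho) * eps
default_rates = (assets < thresholds).mean(axis=1)

# Moment matching under the (wrong) assumption of a homogeneous pool, using the
# large-portfolio approximation E[DR^2] = Phi_2(Phi^-1(PD), Phi^-1(PD); rho).
pd_hat = default_rates.mean()
m2_hat = np.mean(default_rates ** 2)
c = norm.ppf(pd_hat)

def joint_default_prob(rho):
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([c, c])

rho_hat = brentq(lambda rho: joint_default_prob(rho) - m2_hat, 1e-6, 0.999)
print("true asset correlation:            ", true_rho)
print("estimate under assumed homogeneity:", round(rho_hat, 3))  # tends to fall below true_rho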
1701.02167
1
We prove continuity of a controlled SDE solution in Skorokhod's J_1 and M _1 topologies and also uniformly, in probability, as a non-linear functional of the control strategy. The functional comes from a finance problem to model price impact of a large investor in an illiquid market. We dl\`{a}g trading strategies are determined as the continuous extensions for those from continuous strategies. We } demonstrate by examples how continuity properties are useful to solve different stochastic control problems on optimal liquidation , and to identify asymptotically realizable proceeds and wealth processes from (self-financing) c\`{adl\`{a}g trading strategies} .
We prove continuity of a controlled SDE solution in Skorokhod's M_1 and J _1 topologies and also uniformly, in probability, as a non-linear functional of the control strategy. The functional comes from a finance problem to model price impact of a large investor in an illiquid market. We show that M_1-continuity is the key to ensure that proceeds and wealth processes from (self-financing) c\`{adl\`{a}g trading strategies are determined as the continuous extensions for those from continuous strategies. We } demonstrate by examples how continuity properties are useful to solve different stochastic control problems on optimal liquidation and to identify asymptotically realizable proceeds dl\`{a}g trading strategies} .
[ { "type": "R", "before": "J_1 and M", "after": "M_1 and J", "start_char_pos": 64, "end_char_pos": 73 }, { "type": "A", "before": null, "after": "show that M_1-continuity is the key to ensure that proceeds and wealth processes from (self-financing) c\\`{a", "start_char_pos": 288, "end_char_pos": 288 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 534, "end_char_pos": 535 }, { "type": "D", "before": "and wealth processes from (self-financing) c\\`{a", "after": null, "start_char_pos": 587, "end_char_pos": 635 } ]
[ 0, 175, 284, 397 ]
1701.02287
1
Sedimentation velocity analytical ultracentrifugation with fluorescence detection has emerged as a powerful method for the study of interacting systems of macromolecules. It combines picomolar sensitivity with high hydrodynamic resolution, and can be carried out with photoswitchable fluorophores for multi-component discrimination, to determine the stoichiometry, affinity, and shape of macromolecular complexes with dissociation equilibrium constants from picomolar to micromolar. A popular approach for data interpretation is the determination of the binding affinity by isotherms of weight-average sedimentation coefficients sw. A prevailing dogma in sedimentation analysis is that the weight-average sedimentation coefficient from the transport method corresponds to the signal- and population-weighted average of all species. We show that this does not always hold true for systems that exhibit significant signal changes with complex formation -- properties that may be readily encountered in practice, e.g., from a change in fluorescence quantum yield. Coupled transport in the reaction boundary of rapidly reversible systems can make significant contributions to the observed migration in a way that cannot be accounted for in the standard population-based average. Effective particle theory provides a simple physical picture for the reaction-coupled migration process. On this basis we develop a more general binding model that converges to the well-known form of sw in the absence of quenching , but can account simultaneously for hydrodynamic co-transport in the presence of signal quenching . We believe this will be useful when studying interacting systems exhibiting fluorescence quenching or fluorescent energy transfer with transport methods.
Sedimentation velocity analytical ultracentrifugation with fluorescence detection has emerged as a powerful method for the study of interacting systems of macromolecules. It combines picomolar sensitivity with high hydrodynamic resolution, and can be carried out with photoswitchable fluorophores for multi-component discrimination, to determine the stoichiometry, affinity, and shape of macromolecular complexes with dissociation equilibrium constants from picomolar to micromolar. A popular approach for data interpretation is the determination of the binding affinity by isotherms of weight-average sedimentation coefficients , sw. A prevailing dogma in sedimentation analysis is that the weight-average sedimentation coefficient from the transport method corresponds to the signal- and population-weighted average of all species. We show that this does not always hold true for systems that exhibit significant signal changes with complex formation - properties that may be readily encountered in practice, e.g., from a change in fluorescence quantum yield. Coupled transport in the reaction boundary of rapidly reversible systems can make significant contributions to the observed migration in a way that cannot be accounted for in the standard population-based average. Effective particle theory provides a simple physical picture for the reaction-coupled migration process. On this basis we develop a more general binding model that converges to the well-known form of sw with constant signals , but can account simultaneously for hydrodynamic co-transport in the presence of changes in fluorescence quantum yield . We believe this will be useful when studying interacting systems exhibiting fluorescence quenching , enhancement or Forster resonance energy transfer with transport methods.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 629, "end_char_pos": 629 }, { "type": "R", "before": "--", "after": "-", "start_char_pos": 952, "end_char_pos": 954 }, { "type": "R", "before": "in the absence of quenching", "after": "with constant signals", "start_char_pos": 1479, "end_char_pos": 1506 }, { "type": "R", "before": "signal quenching", "after": "changes in fluorescence quantum yield", "start_char_pos": 1589, "end_char_pos": 1605 }, { "type": "R", "before": "or fluorescent", "after": ", enhancement or Forster resonance", "start_char_pos": 1707, "end_char_pos": 1721 } ]
[ 0, 170, 482, 633, 832, 1061, 1275, 1380, 1607 ]
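For context on the quantity discussed in the record above: the "signal- and population-weighted average of all species" behind the conventional sw analysis is usually written as below. This is the textbook definition only, added as a reading aid; the symbols (s_i, c_i, \varepsilon_i for the sedimentation coefficient, concentration, and signal increment of species i) are not taken from the record itself.

```latex
% Conventional signal- and population-weighted average sedimentation coefficient
s_w \;=\; \frac{\sum_i \varepsilon_i \, c_i \, s_i}{\sum_i \varepsilon_i \, c_i}
```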
1701.03770
1
Recent work has shown that a country's productive structure constrains its level of economic growth and income inequality. In this paper , we compare the productive structure of countries in Latin American and the Caribbean (LAC) with that of China and other High-Performing Asian Economies (HPAE) to expose the increasing gap in their productive capabilities. Moreover, we use the product space and the Product Gini Index to reveal the structural constraints on income inequality. Our network maps reveal that HPAE have managed to diversify into products typically produced by countries with low levels of income inequality, while LAC economies have remained dependent on products related with high levels of income inequality. We also introduce the Xgini, a coefficient that captures the constraints on income inequality imposed by the mix of products a country makes. Finally, we argue that LAC countries need to emphasize a smart combination of social and economic policies to overcome the structural constraints for inclusive growth.
Recent work has shown that a country's productive structure constrains its level of economic growth and income inequality. Here , we compare the productive structure of countries in Latin America and the Caribbean (LAC) with that of China and other High-Performing Asian Economies (HPAE) to expose the increasing gap in their productive capabilities. Moreover, we use the product space and the Product Gini Index to reveal the structural constraints on income inequality. Our network maps reveal that HPAE have managed to diversify into products typically produced by countries with low levels of income inequality, while LAC economies have remained dependent on products related to high levels of income inequality. We also introduce the Xgini, a coefficient that captures the constraints on income inequality imposed by the mix of products a country makes. Finally, we argue that LAC countries need to emphasize a smart combination of social and economic policies to overcome the structural constraints for inclusive growth.
[ { "type": "R", "before": "In this paper", "after": "Here", "start_char_pos": 123, "end_char_pos": 136 }, { "type": "R", "before": "American", "after": "America", "start_char_pos": 197, "end_char_pos": 205 }, { "type": "R", "before": "with", "after": "to", "start_char_pos": 690, "end_char_pos": 694 } ]
[ 0, 122, 360, 481, 728, 870 ]
1701.03897
1
The space of call price functions has a natural noncommutative semigroup structure with an involution. A basic example is the Black--Scholes call price surface, from which an interesting inequality for Black--Scholes implied volatility is derived. The binary operation is compatible with the convex order, and therefore a one-parameter sub-semigroup can be identified with a peacock . It is shown that each such one-parameter semigroup corresponds to a unique log-concave probability density, providing a family of tractable call price surface parametrisations in the spirit of the Gatheral--Jacquier SVI surface . The key observation is an isomorphism linking an initial call price curve to the lift zonoid of the terminal price of the underlying asset.
The space of call price functions has a natural noncommutative semigroup structure with an involution. A basic example is the Black--Scholes call price surface, from which an interesting inequality for Black--Scholes implied volatility is derived. The binary operation is compatible with the convex order, and therefore a one-parameter sub-semigroup gives rise to an arbitrage-free market model . It is shown that each such one-parameter semigroup corresponds to a unique log-concave probability density, providing a family of tractable call price surface parametrisations in the spirit of the Gatheral--Jacquier SVI surface . An explicit example is given to illustrate the idea . The key observation is an isomorphism linking an initial call price curve to the lift zonoid of the terminal price of the underlying asset.
[ { "type": "R", "before": "can be identified with a peacock", "after": "gives rise to an arbitrage-free market model", "start_char_pos": 350, "end_char_pos": 382 }, { "type": "A", "before": null, "after": ". An explicit example is given to illustrate the idea", "start_char_pos": 613, "end_char_pos": 613 } ]
[ 0, 102, 247, 384, 615 ]
1701.04565
1
This paper develops a risk management framework for companies , based on the leverage process (a ratio of company asset value over its debt) by analyzing the characteristics of general linear diffusions with killing . We approach this issue by time reversal, last passage time, and h-transform of linear diffusions. For such processes, we derive the probability density of the last passage time to a certain alarming level and the distribution of the time left until killing after the last passage time . We apply these results to the leverage process of the company. Finally, we suggest how a company should specify that abovementioned alarming levelfor the leverage process by solving an optimization problem .
This article develops a new risk management framework for companies based on the leverage process (a ratio of company asset value over its debt) . We approach this task by time reversal, last passage time, and h-transform of linear diffusions. For general diffusions with killing, we obtain the probability density of the last passage time to a certain alarming level , and analyze the distribution of the time left until killing after the last passage time to that level. We then apply these results to the leverage process of the company. Finally, we suggest how a company should determine the aforementioned alarming level. More specifically, we construct a relevant optimization problem and derive an optimal alarming level as its solution .
[ { "type": "R", "before": "paper develops a", "after": "article develops a new", "start_char_pos": 5, "end_char_pos": 21 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 62, "end_char_pos": 63 }, { "type": "D", "before": "by analyzing the characteristics of general linear diffusions with killing", "after": null, "start_char_pos": 141, "end_char_pos": 215 }, { "type": "R", "before": "issue", "after": "task", "start_char_pos": 235, "end_char_pos": 240 }, { "type": "R", "before": "such processes, we derive", "after": "general diffusions with killing, we obtain", "start_char_pos": 320, "end_char_pos": 345 }, { "type": "R", "before": "and", "after": ", and analyze", "start_char_pos": 423, "end_char_pos": 426 }, { "type": "R", "before": ". We", "after": "to that level. We then", "start_char_pos": 503, "end_char_pos": 507 }, { "type": "R", "before": "specify that abovementioned alarming levelfor the leverage process by solving an optimization problem", "after": "determine the aforementioned alarming level. More specifically, we construct a relevant optimization problem and derive an optimal alarming level as its solution", "start_char_pos": 609, "end_char_pos": 710 } ]
[ 0, 217, 315, 504, 567 ]
1701.04565
2
This article develops a new risk management framework for companies based on the leverage process (a ratio of company asset value over its debt). We approach this task by time reversal, last passage time, and h-transform of linear diffusions. For general diffusions with killing, we obtain the probability density of the last passage time to a certain alarming level , and analyze the distribution of the time left until killing after the last passage time to that level. We then apply these results to the leverage process of the company. Finally, we suggest how a company should determine the aforementioned alarming level. More specifically , we construct a relevant optimization problem and derive an optimal alarming level as its solution.
This article develops a new risk management framework for companies on the basis of the leverage process (a ratio of company asset value over its debt). We approach this task by time reversal, last passage time, and the h-transform of linear diffusions. For general diffusions with killing, we obtain the probability density of the last passage time to a certain alarming level and analyze the distribution of the time left until killing after the last passage time to that level. We then apply these results to the leverage process of the company. Finally, we suggest how a company should determine the aforementioned alarming level. Specifically , we construct a relevant optimization problem and derive an optimal alarming level as its solution.
[ { "type": "R", "before": "based on the", "after": "on the basis of the", "start_char_pos": 68, "end_char_pos": 80 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 209, "end_char_pos": 209 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 368, "end_char_pos": 369 }, { "type": "R", "before": "More specifically", "after": "Specifically", "start_char_pos": 627, "end_char_pos": 644 } ]
[ 0, 145, 243, 472, 540, 626 ]
1701.04565
3
This article develops a new risk management framework for companies on the basis of the leverage process (a ratio of company asset value over its debt). We approach this task by time reversal, last passage time, and the h-transform of linear diffusions. For general diffusions with killing, we obtain the probability density of the last passage time to a certain alarming level and analyze the distribution of the time left until killing after the last passage time to that level. We then apply these results to the leverage process of the company. Finally, we suggest how a company should determine the aforementioned alarming level . Specifically, we construct a relevant optimization problem and derive an optimal alarming level as its solution .
We study time reversal, last passage time, and h-transform of linear diffusions. For general diffusions with killing, we obtain the probability density of the last passage time to an arbitrary level and analyze the distribution of the time left until killing after the last passage time . With these tools, we develop a new risk management framework for companies based on the leverage process (the ratio of a company asset process over its debt) and its corresponding alarming level. We also suggest how a company can determine the alarming level for the leverage process by constructing a relevant optimization problem .
[ { "type": "R", "before": "This article develops a new risk management framework for companies on the basis of the leverage process (a ratio of company asset value over its debt). We approach this task by", "after": "We study", "start_char_pos": 0, "end_char_pos": 177 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 216, "end_char_pos": 219 }, { "type": "R", "before": "a certain alarming", "after": "an arbitrary", "start_char_pos": 353, "end_char_pos": 371 }, { "type": "R", "before": "to that level. We then apply these results to", "after": ". With these tools, we develop a new risk management framework for companies based on", "start_char_pos": 466, "end_char_pos": 511 }, { "type": "R", "before": "of the company. Finally, we", "after": "(the ratio of a company asset process over its debt) and its corresponding alarming level. We also", "start_char_pos": 533, "end_char_pos": 560 }, { "type": "R", "before": "should determine the aforementioned alarming level . Specifically, we construct", "after": "can determine the alarming level for the leverage process by constructing", "start_char_pos": 583, "end_char_pos": 662 }, { "type": "D", "before": "and derive an optimal alarming level as its solution", "after": null, "start_char_pos": 695, "end_char_pos": 747 } ]
[ 0, 152, 253, 480, 548 ]
1701.05091
1
Conditions for geometric ergodicity of multivariate ARCH processes, with the so-called BEKK parametrization, are considered. We show for a class of BEKK-ARCH processes that the invariant distribution is regularly varying. In order to account for the possibility of different tail indices of the marginals, we consider the notion of vector scaling regular variation, in the spirit of Perfekt (1997 ). The characterization of the tail behavior of the processes is used for deriving the asymptotic distribution of the sample covariance matrices.
Conditions for geometric ergodicity of multivariate autoregressive conditional heteroskedasticity (ARCH) processes, with the so-called BEKK (Baba, Engle, Kraft, and Kroner) parametrization, are considered. We show for a class of BEKK-ARCH processes that the invariant distribution is regularly varying. In order to account for the possibility of different tail indices of the marginals, we consider the notion of vector scaling regular variation, in the spirit of Perfekt (1997 , Advances in Applied Probability, 29, pp. 138-164 ). The characterization of the tail behavior of the processes is used for deriving the asymptotic properties of the sample covariance matrices.
[ { "type": "R", "before": "ARCH", "after": "autoregressive conditional heteroskedasticity (ARCH)", "start_char_pos": 52, "end_char_pos": 56 }, { "type": "A", "before": null, "after": "(Baba, Engle, Kraft, and Kroner)", "start_char_pos": 92, "end_char_pos": 92 }, { "type": "A", "before": null, "after": ", Advances in Applied Probability, 29, pp. 138-164", "start_char_pos": 398, "end_char_pos": 398 }, { "type": "R", "before": "distribution", "after": "properties", "start_char_pos": 497, "end_char_pos": 509 } ]
[ 0, 125, 222, 401 ]
1701.05967
1
We provide a variety of results for (quasi)convex, law-invariant functionals defined on a general Orlicz space, which extend well-known results in the setting of bounded random variables. First, we show that Delbaen's dual characterization of the Fatou property, which no longer holds in a general Orlicz space, continues to hold under the assumption of law-invariance. Second, we identify the range of Orlicz spaces where the characterization of the Fatou property in terms of norm lower semicontinuity by Jouini, Schachermayer and Touzi still holds . Third, we extend Kusuoka's dual representation to a general Orlicz space. Finally, we prove a version of the extension result by Filipovi\'{c} and Svindland by replacing norm lower semicontinuity with the (generally non-equivalent) Fatou property. Our results have natural applications to the theory of risk measures.
We provide a variety of results for (quasi)convex, law-invariant functionals defined on a general Orlicz space, which extend well-known results in the setting of bounded random variables. First, we show that Delbaen's representation of convex functionals with the Fatou property, which fails in a general Orlicz space, can be always achieved under the assumption of law-invariance. Second, we identify the range of Orlicz spaces where the characterization of the Fatou property in terms of norm lower semicontinuity by Jouini, Schachermayer and Touzi continues to hold . Third, we extend Kusuoka's representation to a general Orlicz space. Finally, we prove a version of the extension result by Filipovi\'{c} and Svindland by replacing norm lower semicontinuity with the (generally non-equivalent) Fatou property. Our results have natural applications to the theory of risk measures.
[ { "type": "R", "before": "dual characterization of", "after": "representation of convex functionals with", "start_char_pos": 218, "end_char_pos": 242 }, { "type": "R", "before": "no longer holds", "after": "fails", "start_char_pos": 269, "end_char_pos": 284 }, { "type": "R", "before": "continues to hold", "after": "can be always achieved", "start_char_pos": 312, "end_char_pos": 329 }, { "type": "R", "before": "still holds", "after": "continues to hold", "start_char_pos": 539, "end_char_pos": 550 }, { "type": "D", "before": "dual", "after": null, "start_char_pos": 580, "end_char_pos": 584 } ]
[ 0, 187, 369, 552, 626, 800 ]
1701.06001
1
We propose a novel and generic calibration technique for four-factor foreign-exchange hybrid local-stochastic volatility models with stochastic short rates. We build upon the particle method introduced by Guyon and Labord\`ere [Nonlinear Option Pricing, Chapter 11, Chapman and Hall, 2013] and combine it with new variance reduction techniques in order to accelerate convergence. We use control variates derived from a calibrated pure local volatility model, a two-factor Heston-type LSV model (both with deterministic rates), and the stochastic (CIR) short rates. Our numerical experiments show that because of the dramatic variance reduction we are able to calibrate the four-factor model at almost no extra computational cost when the corresponding calibrated two-factor model is at our disposal. The method can be applied to a large class of hybrid LSV models and is not restricted to our particular choice of the diffusion. The calibration procedure is performed on real-world market data for the EUR-USD currency pair .
We propose a novel and generic calibration technique for four-factor foreign-exchange hybrid local-stochastic volatility models with stochastic short rates. We build upon the particle method introduced by Guyon and Labord\`ere [Nonlinear Option Pricing, Chapter 11, Chapman and Hall, 2013] and combine it with new variance reduction techniques in order to accelerate convergence. We use control variates derived from a calibrated pure local volatility model, a two-factor Heston-type LSV model (both with deterministic rates), and the stochastic (CIR) short rates. The method can be applied to a large class of hybrid LSV models and is not restricted to our particular choice of the diffusion. The calibration procedure is performed on real-world market data for the EUR-USD currency pair and has a comparable run-time to the PDE calibration of a two-factor LSV model alone .
[ { "type": "D", "before": "Our numerical experiments show that because of the dramatic variance reduction we are able to calibrate the four-factor model at almost no extra computational cost when the corresponding calibrated two-factor model is at our disposal.", "after": null, "start_char_pos": 565, "end_char_pos": 799 }, { "type": "A", "before": null, "after": "and has a comparable run-time to the PDE calibration of a two-factor LSV model alone", "start_char_pos": 1024, "end_char_pos": 1024 } ]
[ 0, 156, 379, 564, 799, 928 ]
1701.06234
1
We propose a numerical recipe for risk evaluation defined by a backward stochastic differential equation. Using dual representation of the risk measure, we convert the risk valuation to a stochastic control problem where the control is a certain Radon-Nikodym derivative process. By exploring the maximum principle, we show that a piecewise-constant dual control provides a good approximation on a short interval. A dynamic programming algorithm extends the approximation to a finite time horizon. Finally, we illustrate the application of the procedure to risk management in conjunction with nested simulation .
We propose a numerical recipe for risk evaluation defined by a backward stochastic differential equation. Using dual representation of the risk measure, we convert the risk valuation to a stochastic control problem where the control is a certain Radon-Nikodym derivative process. By exploring the maximum principle, we show that a piecewise-constant dual control provides a good approximation on a short interval. A dynamic programming algorithm extends the approximation to a finite time horizon. Finally, we illustrate the application of the procedure to financial risk management in conjunction with nested simulation and on an multidimensional portfolio valuation problem .
[ { "type": "A", "before": null, "after": "financial", "start_char_pos": 557, "end_char_pos": 557 }, { "type": "A", "before": null, "after": "and on an multidimensional portfolio valuation problem", "start_char_pos": 612, "end_char_pos": 612 } ]
[ 0, 105, 279, 413, 497 ]
1701.06975
1
The theory of multilayer networks is in its early stages, and its development provides powerful and vital methods for understanding complex systems. Multilayer networks, in their multiplex form, have been introduced within the last three years to analysing the structure of financial systems, and existing studies have modelled and evaluated interdependencies of different type among financial institutions. The empirical studies, however, have considered the multiplex structure rather as an ensemble of single layer networks than as an interconnected multiplex or multilayer network. No mechanism of multichannel contagion has been modelled and empirically evaluated, and no multichannel stabilisation strategies for pre-emptive contagion containment have been designed. This paper formulates an interconnected multilayer structure, and a contagion mechanism among financial institutions due to bilateral exposures arising from institutions activity within different interconnected markets that compose the overall financial market. We introduce structural measures of absolute systemic risk and resilience, and relative systemic-risk indexes. The multiple-market systemic risk and resilience allow comparing the structural (in)stability of different financial system or the same system in different periods. The relative systemic-risk indexes of institutions acting in multiple markets allow comparing the institutions according to their relative contributions to overall structural instability within the same period. Based on the contagion mechanism and systemic-risk quantification, this study designs minimum-cost stabilisation strategies that act simultaneously on different markets and their interconnections, in order to effectively contain potential contagion progressing through the overall structure. The empirical analysis uses granular data now available to the Bank of England .
The theory of multilayer networks is in its early stages, and its development provides vital methods for understanding complex systems. Multilayer networks, in their multiplex form, have been introduced within the last three years to analysing the structure of financial systems, and existing studies have modelled and evaluated interdependencies of different type among financial institutions. These studies, however, have considered the structure as a non-interconnected multiplex - an ensemble of single layer networks comprising the same nodes - rather than as an interconnected multiplex network. No mechanism of multichannel contagion has been modelled and empirically evaluated, and no multichannel stabilisation strategies for pre-emptive contagion containment have been designed. This paper formulates an interconnected multiplex structure, and a contagion mechanism among financial institutions due to bilateral exposures arising from institutions activity within different interconnected markets that compose the overall financial market. We introduce structural measures of absolute systemic risk and resilience, and relative systemic-risk indexes. Based on the contagion mechanism and systemic-risk quantification, this study designs minimum-cost stabilisation strategies that act simultaneously on different markets and their interconnections, in order to effectively contain potential contagion progressing through the overall structure. The stabilisation strategies subtly affect the emergence process of structure to adaptively build in structural resilience and achieve pre-emptive stabilisation at a minimum cost for each institution and at no cost for the system as a whole. We empirically evaluate the new approach using large granular databases, maintained by the Prudential Regulatory Authority of the Bank of England . The capabilities of multichannel stabilisation are confirmed empirically .
[ { "type": "D", "before": "powerful and", "after": null, "start_char_pos": 87, "end_char_pos": 99 }, { "type": "R", "before": "The empirical", "after": "These", "start_char_pos": 408, "end_char_pos": 421 }, { "type": "R", "before": "multiplex structure rather as", "after": "structure as a non-interconnected multiplex -", "start_char_pos": 460, "end_char_pos": 489 }, { "type": "A", "before": null, "after": "comprising the same nodes - rather", "start_char_pos": 527, "end_char_pos": 527 }, { "type": "D", "before": "or multilayer", "after": null, "start_char_pos": 564, "end_char_pos": 577 }, { "type": "R", "before": "multilayer", "after": "multiplex", "start_char_pos": 814, "end_char_pos": 824 }, { "type": "D", "before": "The multiple-market systemic risk and resilience allow comparing the structural (in)stability of different financial system or the same system in different periods. The relative systemic-risk indexes of institutions acting in multiple markets allow comparing the institutions according to their relative contributions to overall structural instability within the same period.", "after": null, "start_char_pos": 1147, "end_char_pos": 1522 }, { "type": "R", "before": "empirical analysis uses granular data now available to the", "after": "stabilisation strategies subtly affect the emergence process of structure to adaptively build in structural resilience and achieve pre-emptive stabilisation at a minimum cost for each institution and at no cost for the system as a whole. We empirically evaluate the new approach using large granular databases, maintained by the Prudential Regulatory Authority of the", "start_char_pos": 1819, "end_char_pos": 1877 }, { "type": "A", "before": null, "after": ". The capabilities of multichannel stabilisation are confirmed empirically", "start_char_pos": 1894, "end_char_pos": 1894 } ]
[ 0, 148, 407, 586, 773, 1035, 1146, 1311, 1522, 1814 ]
1701.07011
1
P-values are being computed for increasingly complicated statistics but lacking evaluations on their quality. Meanwhile, accurate p-values enable significance comparison across batches of hypothesis tests and consequently unified false discover rate (FDR) control. This article discusses two related questions in this setting. First, we propose statistical tests to evaluate the quality of p-value and the cross-batch comparability of any other statistic. Second, we propose a lasso based variable selection statistic, based on when the predictor variable first becomes active, and compute its p-value to achieve unified FDR control across multiple selections. In the end, we apply our tests on covTest, selectiveInference, and our statistic, based on real and null datasets for network inference in normal and high-dimensional settings. Results demonstrate higher p-value quality from our statistic and reveal p-value errors from others hidden before. We implement our statistic as lassopv in R .
Bayesian networks can represent directed gene regulations and therefore are favored over co-expression networks. However, hardly any Bayesian network study concerns the false discovery control (FDC) of network edges, leading to low accuracies due to systematic biases from inconsistent false discovery levels in the same study. We design four empirical tests to examine the FDC of Bayesian networks from three p-value based lasso regression variable selections --- two existing and one we originate. Our method, lassopv, computes p-values for the critical regularization strength at which a predictor starts to contribute to lasso regression. Using null and Geuvadis datasets, we find that lassopv obtains optimal FDC in Bayesian gene networks, whilst existing methods have defective p-values. The FDC concept and tests extend to most network inference scenarios and will guide the design and improvement of new and existing methods. Our novel variable selection method with lasso regression also allows FDC on other datasets and questions, even beyond network inference and computational biology. Lassopv is implemented in R and freely available at URL and URL
[ { "type": "R", "before": "P-values are being computed for increasingly complicated statistics but lacking evaluations on their quality. Meanwhile, accurate p-values enable significance comparison across batches of hypothesis tests and consequently unified false discover rate (FDR) control. This article discusses two related questions in this setting. First, we propose statistical tests to evaluate the quality of", "after": "Bayesian networks can represent directed gene regulations and therefore are favored over co-expression networks. However, hardly any Bayesian network study concerns the false discovery control (FDC) of network edges, leading to low accuracies due to systematic biases from inconsistent false discovery levels in the same study. We design four empirical tests to examine the FDC of Bayesian networks from three", "start_char_pos": 0, "end_char_pos": 389 }, { "type": "A", "before": null, "after": "based lasso regression variable selections --- two existing and one we originate. Our method, lassopv, computes p-values for the critical regularization strength at which a predictor starts to contribute to lasso regression. Using null and Geuvadis datasets, we find that lassopv obtains optimal FDC in Bayesian gene networks, whilst existing methods have defective p-values. The FDC concept and tests extend to most network inference scenarios and will guide the design and improvement of new and existing methods. Our novel variable selection method with lasso regression also allows FDC on other datasets and questions, even beyond network inference", "start_char_pos": 398, "end_char_pos": 398 }, { "type": "R", "before": "the cross-batch comparability of any other statistic. Second, we propose a lasso based variable selection statistic, based on when the predictor variable first becomes active, and compute its p-value to achieve unified FDR control across multiple selections. In the end, we apply our tests on covTest, selectiveInference, and our statistic, based on real and null datasets for network inference in normal and high-dimensional settings. Results demonstrate higher p-value quality from our statistic and reveal p-value errors from others hidden before. We implement our statistic as lassopv in R .", "after": "computational biology. Lassopv is implemented in R and freely available at URL and URL", "start_char_pos": 403, "end_char_pos": 998 } ]
[ 0, 109, 264, 326, 456, 661, 838, 953 ]
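The record above rests on one computable ingredient: the critical regularization strength at which a predictor first becomes active along the lasso path. The sketch below illustrates only that ingredient; it is not the lassopv package (which, per the abstract, is implemented in R), it omits the p-value calibration step entirely, and the function name and the use of scikit-learn's lasso_path are choices made here for illustration.

```python
# Illustrative only: largest lasso penalty at which each predictor first enters the model.
import numpy as np
from sklearn.linear_model import lasso_path

def critical_alphas(X, y, n_alphas=200):
    # lasso_path returns penalties in decreasing order and the coefficient path
    # coefs with shape (n_features, n_alphas).
    alphas, coefs, _ = lasso_path(X, y, n_alphas=n_alphas)
    crit = np.full(X.shape[1], np.nan)
    for j in range(X.shape[1]):
        active = np.flatnonzero(coefs[j] != 0)
        if active.size:
            crit[j] = alphas[active[0]]  # first (largest) penalty with a nonzero coefficient
    return crit

# Tiny synthetic check: the informative first column should enter at a larger penalty.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X[:, 0] + 0.1 * rng.standard_normal(100)
print(critical_alphas(X, y))
```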
1701.08149
1
We extend the Granger-Johansen representation theory for I(1) vector autoregressive processes to accommodate processes that take values in an arbitrary complex separable Hilbert space. This more general setting is of central relevance for statistical applications involving functional time series. We obtain necessary and sufficient conditions for the existence of I(1) solutions to a given autoregressive law of motion generalizing the Johansen I(1) condition, and a characterization of such solutions. To accomplish this we obtain necessary and sufficient conditions for a pole in the inverse of a holomorphic index-zero Fredholm operator pencil to be simple, and a formula for its residue. In the case of first order autoregressive dynamics with a unit root our results take a particularly simple form , with the residue associated with the simple pole at one proportional to a Riesz projection .
We extend the Granger-Johansen representation theorems for I(1) and I(2) vector autoregressive processes to accommodate processes that take values in an arbitrary complex separable Hilbert space. This more general setting is of central relevance for statistical applications involving functional time series. We first obtain a range of necessary and sufficient conditions for a pole in the inverse of a holomorphic index-zero Fredholm operator pencil to be of first or second order. Those conditions form the basis for our development of I(1) and I(2) representations of autoregressive Hilbertian processes. Cointegrating and attractor subspaces are characterized in terms of the behavior of the autoregressive operator pencil in a neighborhood of one .
[ { "type": "R", "before": "theory", "after": "theorems", "start_char_pos": 46, "end_char_pos": 52 }, { "type": "A", "before": null, "after": "and I(2)", "start_char_pos": 62, "end_char_pos": 62 }, { "type": "R", "before": "obtain necessary and sufficient conditions for the existence of I(1) solutions to a given autoregressive law of motion generalizing the Johansen I(1) condition, and a characterization of such solutions. To accomplish this we obtain", "after": "first obtain a range of", "start_char_pos": 302, "end_char_pos": 533 }, { "type": "R", "before": "simple, and a formula for its residue. In the case of first order autoregressive dynamics with a unit root our results take a particularly simple form , with the residue associated with the simple pole at one proportional to a Riesz projection", "after": "of first or second order. Those conditions form the basis for our development of I(1) and I(2) representations of autoregressive Hilbertian processes. Cointegrating and attractor subspaces are characterized in terms of the behavior of the autoregressive operator pencil in a neighborhood of one", "start_char_pos": 655, "end_char_pos": 898 } ]
[ 0, 185, 298, 504, 693 ]
1701.08399
1
The main objective is to study no-arbitrage pricing of financial derivatives in the presence of funding costs, the counterparty credit risk and market frictions affecting the trading mechanism, such as collateralization and capital requirements. To achieve our goals, we extend in several respects the nonlinear pricing approach developed in El Karoui and Quenez (1997) and El Karoui et al. (1997 ).
The objective of this paper is to provide a comprehensive study no-arbitrage pricing of financial derivatives in the presence of funding costs, the counterparty credit risk and market frictions affecting the trading mechanism, such as collateralization and capital requirements. To achieve our goals, we extend in several respects the nonlinear pricing approach developed in El Karoui and Quenez (1997) and El Karoui et al. (1997 ), which was subsequently continued in Bielecki and Rutkowski (2015 ).
[ { "type": "R", "before": "main objective is to", "after": "objective of this paper is to provide a comprehensive", "start_char_pos": 4, "end_char_pos": 24 }, { "type": "A", "before": null, "after": "), which was subsequently continued in Bielecki and Rutkowski (2015", "start_char_pos": 397, "end_char_pos": 397 } ]
[ 0, 245 ]
1701.08861
1
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such control problem is proved to identify with the solution of a Z-constrained BSDE, with dynamics associated to a non singular underlying forward process. Due to the non-Markovian environment, our main argumentation relies on the use of comparison arguments for path dependent PDEs. Our representation allows in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to Z-constrained BSDEs. As an application, we study the utility maximization problem with transaction costs for non-Markovian dynamics.
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such control problem is proved to identify with the solution of a Z-constrained BSDE, with dynamics associated to a non singular underlying forward process. Due to the non-Markovian environment, our main argumentation relies on the use of comparison arguments for path dependent PDEs. Our representation allows in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to Z-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
[ { "type": "R", "before": "maximization", "after": "maximisation", "start_char_pos": 825, "end_char_pos": 837 } ]
[ 0, 140, 313, 441, 612, 784 ]
1702.00489
1
Many biochemical reactions involve a stream of chemical reactants (ligand molecules) flowing over a surface to which other reactants (receptors) are confined. Scientists measure rate constants associated with these reactions in an optical biosensor: an instrument in which ligand molecules are convected through a flow cell, over a surface on which receptors are immobilized. In applications such as DNA damage repair multiple simultaneous reactions occur on the surface of the biosensor. We quantify transport effects on such multiple-component reactions, which result in a nonlinear set of integrodifferential equations for the reacting species concentrations. In physically relevant parameter regimes, these integrodifferential equations further reduce to a nonlinear set of ordinary differential equations, which may be used to estimate rate constants from biosensor data. We verify our results with a semi-implicit finite difference algorithm .
Optical biosensors are often used to measure kinetic rate constants associated with chemical reactions. Such instruments operate in the surface-volume configuration, in which ligand molecules are convected through a fluid-filled volume over a surface to which receptors are confined. Currently, scientists are using optical biosenors to measure the kinetic rate constants associated with DNA translesion synthesis--a process critical to DNA damage repair. Biosensor experiments to study this process involve multiple interacting components on the sensor surface. This multiple-component biosensor experiment is modeled with a set of nonlinear integrodifferential equations (IDEs). It is shown that in physically relevant asymptotic limits these equations reduce to a much simpler set of Ordinary Differential Equations (ODEs). To verify the validity of our ODE approximation, a numerical method for the IDE system is developed and studied. Results from the ODE model agree with simulations of the IDE model, rendering our ODE model useful for parameter estimation .
[ { "type": "R", "before": "Many biochemical reactions involve a stream of chemical reactants (ligand molecules) flowing over a surface to which other reactants (receptors) are confined. Scientists measure", "after": "Optical biosensors are often used to measure kinetic", "start_char_pos": 0, "end_char_pos": 177 }, { "type": "R", "before": "these reactions", "after": "chemical reactions. Such instruments operate in the", "start_char_pos": 209, "end_char_pos": 224 }, { "type": "A", "before": null, "after": "surface-volume", "start_char_pos": 224, "end_char_pos": 224 }, { "type": "A", "before": null, "after": "configuration,", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "D", "before": "an optical biosensor: an instrument in", "after": null, "start_char_pos": 229, "end_char_pos": 267 }, { "type": "R", "before": "flow cell,", "after": "fluid-filled volume", "start_char_pos": 315, "end_char_pos": 325 }, { "type": "R", "before": "on", "after": "to", "start_char_pos": 341, "end_char_pos": 343 }, { "type": "R", "before": "immobilized. In applications such as DNA damage repair multiple simultaneous reactions occur on the surface of the biosensor. We quantify transport effects on such", "after": "confined. Currently, scientists are using optical biosenors to measure the kinetic rate constants associated with DNA translesion synthesis--a process critical to DNA damage repair. Biosensor experiments to study this process involve multiple interacting components on the sensor surface. This", "start_char_pos": 364, "end_char_pos": 527 }, { "type": "R", "before": "reactions, which result in a nonlinear set of integrodifferential", "after": "biosensor experiment is modeled with a set of nonlinear integrodifferential equations (IDEs). It is shown that in physically relevant asymptotic limits these", "start_char_pos": 547, "end_char_pos": 612 }, { "type": "D", "before": "for the reacting species concentrations. In physically relevant parameter regimes, these integrodifferential equations further", "after": null, "start_char_pos": 623, "end_char_pos": 749 }, { "type": "R", "before": "nonlinear set of ordinary differential equations, which may be used to estimate rate constants from biosensor data. We verify our results with a semi-implicit finite difference algorithm", "after": "much simpler set of Ordinary Differential Equations (ODEs). To verify the validity of our ODE approximation, a numerical method for the IDE system is developed and studied. Results from the ODE model agree with simulations of the IDE model, rendering our ODE model useful for parameter estimation", "start_char_pos": 762, "end_char_pos": 948 } ]
[ 0, 158, 376, 489, 663, 877 ]
1702.00632
1
Protein synthesis rates are determined, at the translational level, by properties of the transcript's sequence. The efficiency of an mRNA can be tuned by varying the ribosome binding sites controlling the recruitment of the ribosomes, or the codon usage establishing the speed of protein elongation. In this work we promote transcript length as a further key determinant of translation efficiency. Based on a physical model that considers the kinetics of ribosomes advancing on the mRNA and diffusing in its surrounding, we explain how the transcript length might play a central role in establishing ribosome recruitment and the overall translation rate of an mRNA. We also demonstrate how this process might be involved in shaping the experimental ribosome density-gene length dependence. Finally, we argue that cells could exploit this mechanism to adjust and balance the usage of its ribosomal resources.
Protein synthesis rates are determined, at the translational level, by properties of the transcript's sequence. The efficiency of an mRNA can be tuned by varying the ribosome binding sites controlling the recruitment of the ribosomes, or the codon usage establishing the speed of protein elongation. In this work we propose transcript length as a further key determinant of translation efficiency. Based on a physical model that considers the kinetics of ribosomes advancing on the mRNA and diffusing in its surrounding, as well as mRNA circularisation and ribosome drop-off, we explain how the transcript length may play a central role in establishing ribosome recruitment and the overall translation rate of an mRNA. We also demonstrate how this process may be involved in shaping the experimental ribosome density-gene length dependence. Finally, we argue that cells could exploit this mechanism to adjust and balance the usage of its ribosomal resources.
[ { "type": "R", "before": "promote", "after": "propose", "start_char_pos": 316, "end_char_pos": 323 }, { "type": "A", "before": null, "after": "as well as mRNA circularisation and ribosome drop-off,", "start_char_pos": 521, "end_char_pos": 521 }, { "type": "R", "before": "might", "after": "may", "start_char_pos": 559, "end_char_pos": 564 }, { "type": "R", "before": "might", "after": "may", "start_char_pos": 704, "end_char_pos": 709 } ]
[ 0, 111, 299, 397, 666, 790 ]
1702.00982
1
We treat utility maximization from terminal wealth for an agent dynamically investing in a continuous-time financial market and receiving a possibly unbounded random endowment. The utility function is assumed finite on the whole real line. We prove the existence of an optimal investment without introducing the associated dual problem in the case where the utility has a "moderate" tail at -\infty. We rely on a recent Koml\'os-type lemma of Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even global strict concavity can be relaxed so we can accommodate, in particular, the problem of minimizing expected loss for a wide class of loss functions . We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat the cases of frictionless markets with finitely many assets, markets with proportional transaction costs and large financial markets comprising a countably infinite number of assets .
We treat utility maximization from terminal wealth for an agent with utility function U:\mathbb{R}\to\mathbb{R} in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem in the case where the utility has a "moderate" tail at -\infty. We rely on a recent Koml\'os-type lemma of Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even strict concavity can be relaxed . We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat the cases of frictionless markets with finitely many assets, markets with proportional transaction costs and large financial markets .
[ { "type": "R", "before": "dynamically investing", "after": "with utility function U:\\mathbb{R", "start_char_pos": 64, "end_char_pos": 85 }, { "type": "A", "before": null, "after": "\\mathbb{R", "start_char_pos": 89, "end_char_pos": 89 }, { "type": "R", "before": "receiving", "after": "receives", "start_char_pos": 132, "end_char_pos": 141 }, { "type": "D", "before": "The utility function is assumed finite on the whole real line.", "after": null, "start_char_pos": 181, "end_char_pos": 243 }, { "type": "D", "before": "global", "after": null, "start_char_pos": 563, "end_char_pos": 569 }, { "type": "D", "before": "so we can accommodate, in particular, the problem of minimizing expected loss for a wide class of loss functions", "after": null, "start_char_pos": 602, "end_char_pos": 714 }, { "type": "D", "before": "comprising a countably infinite number of assets", "after": null, "start_char_pos": 1029, "end_char_pos": 1077 } ]
[ 0, 180, 243, 403, 511, 716, 812, 873 ]
1702.00982
2
We treat utility maximization from terminal wealth for an agent with utility function U:R\toR who dynamically invests in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem in the case where the utility has a "moderate" tail at -\infty . We rely on a recent Koml\'os-type lemma of Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even strict concavity can be relaxed. We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat the cases of frictionless markets with finitely many assets , markets with proportional transaction costs and large financial markets.
We treat utility maximization from terminal wealth for an agent with utility function U:R\toR who dynamically invests in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem . We rely on a recent Koml\'os-type lemma of Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even strict concavity can be relaxed. We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat frictionless markets with finitely many assets and large financial markets.
[ { "type": "D", "before": "in the case where the utility has a \"moderate\" tail at -\\infty", "after": null, "start_char_pos": 304, "end_char_pos": 366 }, { "type": "D", "before": "the cases of", "after": null, "start_char_pos": 740, "end_char_pos": 752 }, { "type": "D", "before": ", markets with proportional transaction costs", "after": null, "start_char_pos": 800, "end_char_pos": 845 } ]
[ 0, 207, 368, 476, 560, 656, 717 ]
1702.00982
3
We treat utility maximization from terminal wealth for an agent with utility function U:R\toR who dynamically invests in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem. We rely on a recent Koml\'os-type lemma of Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even strict concavity can be relaxed. We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat frictionless markets with finitely many assets and large financial markets.
We treat utility maximization from terminal wealth for an agent with utility function U:R\toR who dynamically invests in a continuous-time financial market and receives a possibly unbounded random endowment. We prove the existence of an optimal investment without introducing the associated dual problem. We rely on a recent result of Orlicz space theory, due to Delbaen and Owari which leads to a simple and transparent proof. Our results apply to non-smooth utilities and even strict concavity can be relaxed. We can handle certain random endowments with non-hedgeable risks, complementing earlier papers. Constraints on the terminal wealth can also be incorporated. As examples, we treat frictionless markets with finitely many assets and large financial markets.
[ { "type": "R", "before": "Koml\\'os-type lemma of", "after": "result of Orlicz space theory, due to", "start_char_pos": 325, "end_char_pos": 347 } ]
[ 0, 207, 304, 412, 496, 592, 653 ]
1702.01265
2
Background: During asymmetric division of the Caenorhabditis elegans nematode zygote, the polarity cues distribution and daughter cell fates depend on the correct positioning of the mitotic spindle which results from both centering and cortical pulling forces. Revealed by spindle rocking, these pulling forces are regulated by the force generator dynamics, which are related to mitosis progression. This may be combined with a second regulation, this one by the posterior spindle pole position, which can be seen when comparing related species. Results: After delaying anaphase onset, we identified a positional pulling force regulation in C. elegans, which we ascribed to microtubule dynamics at the cortex. Indeed, in mapping the contacts we found a correlation between the centrosome-cortex distance and the microtubule contact density. This density in turn modulates pulling force generator activity. We expanded our model of spindle rocking and predicted then experimentally validated that the oscillation onset position resists changes in cellular geometry and number of force generators. Consistent with final spindle position measurements, this new model accounts for a lower dependence on force generator dynamics and quantities than predicted by the previous model. Conclusion: The spindleposition regulates the rapid increase in forces needed for anaphase oscillation and positioning through the spatial modulation of microtubule-cortex contacts . This regulation superimposes that of force generator processivity , putatively linked to the cell cycle . This novel control confers resistance to variations in zygote geometry and dynamics of cortical force generators . Interestingly, this robustness originates in cell mechanics rather than biochemical networks.
During the asymmetric division of the Caenorhabditis elegans nematode zygote, the polarity cues distribution and daughter cell fates depend on the correct positioning of the mitotic spindle , which results from both centering and cortical pulling forces. Revealed by anaphase spindle rocking, these pulling forces are regulated by the force generator dynamics, which are in turn consequent of mitotic progression. We found a novel, additional, regulation of these forces by the spindle position. It controls astral microtubule availability at the cortex, on which the active force generators can pull. Importantly, this positional control relies on the polarity dependent LET-99 cortical band, which restricts or concentrates generators to a posterior crescent. We ascribed this control to the microtubule dynamics at the cortex. Indeed, in mapping the cortical contacts, we found a correlation between the centrosome-cortex distance and the microtubule contact density. In turn, it modulates pulling force generator activity. We modelled this control, predicting and experimentally validating that the posterior crescent extent controlled where the anaphase oscillations started, in addition to mitotic progression. Finally, we propose that spatially restricting force generator to a posterior crescent sets the spindle's final position, reflecting polarity through the LET-99 dependent restriction of force generators to a posterior crescent . This regulation superimposes that of force generator processivity . This novel control confers a low dependence on microtubule and active force generator exact numbers or dynamics, provided that they exceed the threshold needed for posterior displacement . Interestingly, this robustness originates in cell mechanics rather than biochemical networks.
[ { "type": "R", "before": "Background: During", "after": "During the", "start_char_pos": 0, "end_char_pos": 18 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 198, "end_char_pos": 198 }, { "type": "A", "before": null, "after": "anaphase", "start_char_pos": 274, "end_char_pos": 274 }, { "type": "R", "before": "related to mitosis progression. This may be combined with a second regulation, this one by the posterior spindle pole position, which can be seen when comparing related species. Results: After delaying anaphase onset, we identified a positional pulling force regulation in C. elegans, which we ascribed to", "after": "in turn consequent of mitotic progression. We found a novel, additional, regulation of these forces by the spindle position. It controls astral microtubule availability at the cortex, on which the active force generators can pull. Importantly, this positional control relies on the polarity dependent LET-99 cortical band, which restricts or concentrates generators to a posterior crescent. We ascribed this control to the", "start_char_pos": 370, "end_char_pos": 675 }, { "type": "R", "before": "contacts", "after": "cortical contacts,", "start_char_pos": 735, "end_char_pos": 743 }, { "type": "R", "before": "This density in turn", "after": "In turn, it", "start_char_pos": 843, "end_char_pos": 863 }, { "type": "R", "before": "expanded our model of spindle rocking and predicted then experimentally validated that the oscillation onset position resists changes in cellular geometry and number of force generators. Consistent with final spindle position measurements, this new model accounts for a lower dependence on force generator dynamics and quantities than predicted by the previous model. Conclusion: The spindleposition regulates the rapid increase in forces needed for anaphase oscillation and positioning through the spatial modulation of microtubule-cortex contacts", "after": "modelled this control, predicting and experimentally validating that the posterior crescent extent controlled where the anaphase oscillations started, in addition to mitotic progression. Finally, we propose that spatially restricting force generator to a posterior crescent sets the spindle's final position, reflecting polarity through the LET-99 dependent restriction of force generators to a posterior crescent", "start_char_pos": 911, "end_char_pos": 1459 }, { "type": "D", "before": ", putatively linked to the cell cycle", "after": null, "start_char_pos": 1528, "end_char_pos": 1565 }, { "type": "R", "before": "resistance to variations in zygote geometry and dynamics of cortical force generators", "after": "a low dependence on microtubule and active force generator exact numbers or dynamics, provided that they exceed the threshold needed for posterior displacement", "start_char_pos": 1595, "end_char_pos": 1680 } ]
[ 0, 261, 401, 547, 711, 842, 907, 1097, 1278, 1461, 1567, 1682 ]
1702.01522
1
Inverse problems in statistical physics are motivated by the challenges of `big data' in different fields, in particular high-throughput experiments in biology. In inverse problems, the usual procedure of statistical physics needs to be reversed: Instead of calculating observables on the basis of model parameters, we seek to infer parameters of a model based on observations. In this review, we focus on the inverse Ising problem and closely related problems, namely how to infer the interactions between spins given observed spin correlations, magnetisations, or other data. We review applications of the inverse Ising problem, including the reconstruction of neural interactions , protein structure determination, and the inference of gene regulatory networks. For the inverse Ising problem in equilibrium, a number of controlled and uncontrolled approximate solutions have been developed in the statistical mechanics community. A particularly strong method, pseudolikelihood, stems from statistics. We also review the inverse Ising problem in the non-equilibrium case, where the model parameters must be reconstructed based on non-equilibrium statistics.
Inverse problems in statistical physics are motivated by the challenges of `big data' in different fields, in particular high-throughput experiments in biology. In inverse problems, the usual procedure of statistical physics needs to be reversed: Instead of calculating observables on the basis of model parameters, we seek to infer parameters of a model based on observations. In this review, we focus on the inverse Ising problem and closely related problems, namely how to infer the coupling strengths between spins given observed spin correlations, magnetisations, or other data. We review applications of the inverse Ising problem, including the reconstruction of neural connections , protein structure determination, and the inference of gene regulatory networks. For the inverse Ising problem in equilibrium, a number of controlled and uncontrolled approximate solutions have been developed in the statistical mechanics community. A particularly strong method, pseudolikelihood, stems from statistics. We also review the inverse Ising problem in the non-equilibrium case, where the model parameters must be reconstructed based on non-equilibrium statistics.
[ { "type": "R", "before": "interactions", "after": "coupling strengths", "start_char_pos": 486, "end_char_pos": 498 }, { "type": "R", "before": "interactions", "after": "connections", "start_char_pos": 670, "end_char_pos": 682 } ]
[ 0, 160, 377, 577, 764, 932, 1003 ]
1702.01649
1
The 70 kDa Heat Shock Proteins Hsp70 have several essential functions in living systems, such as protecting proteins against protein aggregation, assisting protein folding, remodeling protein complexes and driving the translocation URLanelles. These functions require high affinity for non-specific amino-acid sequences that are ubiquitous in proteins. It has been recently shown that this high affinity, called ultra-affinity, depends on a process driven out of equilibrium by ATP hydrolysis. Here we establish the thermodynamic bounds for ultra-affinity, and further show that the same reaction scheme can in principle be used both to strengthen and to weaken affinities (leading in this case to infra-affinity) . Finally, biological implications are discussed.
The 70 kDa Heat Shock Proteins Hsp70 have several essential functions in living systems, such as protecting cells against protein aggregation, assisting protein folding, remodeling protein complexes and driving the translocation URLanelles. These functions require high affinity for non-specific amino-acid sequences that are ubiquitous in proteins. It has been recently shown that this high affinity, called ultra-affinity, depends on a process driven out of equilibrium by ATP hydrolysis. Here we establish the thermodynamic bounds for ultra-affinity, and further show that the same reaction scheme can in principle be used both to strengthen and to weaken affinities (leading in this case to infra-affinity) . We show that cofactors are essential to achieve affinity beyond the equilibrium range . Finally, biological implications are discussed.
[ { "type": "R", "before": "proteins", "after": "cells", "start_char_pos": 108, "end_char_pos": 116 }, { "type": "A", "before": null, "after": ". We show that cofactors are essential to achieve affinity beyond the equilibrium range", "start_char_pos": 714, "end_char_pos": 714 } ]
[ 0, 243, 352, 493, 716 ]
1702.01706
1
We prove the existence of a Radner equilibrium in a model with proportional transaction costs on an infinite time horizon . Two agents receive exogenous, unspanned income and choose between consumption and investing into an annuity. After establishing the existence of a discrete-time equilibrium, we show that the discrete-time equilibrium converges to a continuous-time equilibrium model. The continuous-time equilibrium provides an explicit formula for the equilibrium interest rate in terms of the transaction cost parameter. We show analytically that the interest rate can be either increasing or decreasing in the transaction costs depending on the agents' risk parameters .
We prove the existence of a Radner equilibrium in a model with proportional transaction costs on an infinite time horizon and analyze the effect of transaction costs on the endogenously determined interest rate . Two agents receive exogenous, unspanned income and choose between consumption and investing into an annuity. After establishing the existence of a discrete-time equilibrium, we show that the discrete-time equilibrium converges to a continuous-time equilibrium model. The continuous-time equilibrium provides an explicit formula for the equilibrium interest rate in terms of the transaction cost parameter. We analyze the impact of transaction costs on the equilibrium interest rate and welfare levels .
[ { "type": "A", "before": null, "after": "and analyze the effect of transaction costs on the endogenously determined interest rate", "start_char_pos": 122, "end_char_pos": 122 }, { "type": "R", "before": "show analytically that the interest rate can be either increasing or decreasing in the transaction costs depending on the agents' risk parameters", "after": "analyze the impact of transaction costs on the equilibrium interest rate and welfare levels", "start_char_pos": 534, "end_char_pos": 679 } ]
[ 0, 124, 233, 391, 530 ]
1702.01819
1
The key issue in selecting between equilibria in signalling games is determining how receivers will interpret deviations from the path of play. We develop a foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises from non-equilibrium learning by long-lived senders and receivers. In our model, non-equilibrium signals are sent by young senders as experiments to learn about receivers' behavior , and different types of senders have different incentives for these various experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint we call the "compatibility criterion" on the receiver's off-path beliefs and to the concept of a " type-compatible equilibrium. " We compare type-compatible equilibria to signalling-game refinements such as the Intuitive Criterion (Cho and Kreps, 1987) and divine equilibrium (Banks and Sobel, 1987) .
Equilibrium outcomes in signalling games can be very sensitive to the specification of how receivers interpret and thus respond to deviations from the path of play. We develop a micro-foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises through non-equilibrium learning by populations of patient and long-lived senders and receivers. In our model, young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals as experiments to learn about receivers' behavior . Differences in the payoff functions of the types of senders generate different incentives for these experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint on the receiver's off-path beliefs based on " type-compatibility " and hence a learning-based equilibrium selection .
[ { "type": "R", "before": "The key issue in selecting between equilibria in signalling games is determining how receivers will interpret", "after": "Equilibrium outcomes in signalling games can be very sensitive to the specification of how receivers interpret and thus respond to", "start_char_pos": 0, "end_char_pos": 109 }, { "type": "R", "before": "foundation", "after": "micro-foundation", "start_char_pos": 157, "end_char_pos": 167 }, { "type": "R", "before": "from", "after": "through", "start_char_pos": 274, "end_char_pos": 278 }, { "type": "A", "before": null, "after": "populations of patient and", "start_char_pos": 307, "end_char_pos": 307 }, { "type": "R", "before": "non-equilibrium signals are sent by young senders", "after": "young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals", "start_char_pos": 356, "end_char_pos": 405 }, { "type": "R", "before": ", and different", "after": ". Differences in the payoff functions of the", "start_char_pos": 456, "end_char_pos": 471 }, { "type": "R", "before": "have", "after": "generate", "start_char_pos": 489, "end_char_pos": 493 }, { "type": "D", "before": "various", "after": null, "start_char_pos": 525, "end_char_pos": 532 }, { "type": "D", "before": "we call the \"compatibility criterion\"", "after": null, "start_char_pos": 674, "end_char_pos": 711 }, { "type": "R", "before": "and to the concept of a", "after": "based on", "start_char_pos": 747, "end_char_pos": 770 }, { "type": "R", "before": "type-compatible equilibrium.", "after": "type-compatibility", "start_char_pos": 773, "end_char_pos": 801 }, { "type": "R", "before": "We compare type-compatible equilibria to signalling-game refinements such as the Intuitive Criterion (Cho and Kreps, 1987) and divine equilibrium (Banks and Sobel, 1987)", "after": "and hence a learning-based equilibrium selection", "start_char_pos": 804, "end_char_pos": 973 } ]
[ 0, 143, 341, 545, 801 ]
1702.01819
2
Equilibrium outcomes in signalling games can be very sensitive to the specification of how receivers interpret and thus respond to deviations from the path of play. We develop a micro-foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises through non-equilibrium learning by populations of patient and long-lived senders and receivers. In our model, young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals as experiments to learn about receivers' behavior . Differences in the payoff functions of the types of senders generate different incentives for these experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint on the receiver's off-path beliefs based on " type-compatibility " and hence a learning-based equilibrium selection.
Which equilibria will arise in signaling games depends on how the receiver interprets deviations from the path of play. We develop a micro-foundation for these off-path beliefs, and an associated equilibrium refinement, in a model where equilibrium arises through non-equilibrium learning by populations of patient and long-lived senders and receivers. In our model, young senders are uncertain about the prevailing distribution of play, so they rationally send out-of-equilibrium signals as experiments to learn about the behavior of the population of receivers . Differences in the payoff functions of the types of senders generate different incentives for these experiments. Using the Gittins index (Gittins, 1979), we characterize which sender types use each signal more often, leading to a constraint on the receiver's off-path beliefs based on " type compatibility " and hence a learning-based equilibrium selection.
[ { "type": "R", "before": "Equilibrium outcomes in signalling games can be very sensitive to the specification of how receivers interpret and thus respond to", "after": "Which equilibria will arise in signaling games depends on how the receiver interprets", "start_char_pos": 0, "end_char_pos": 130 }, { "type": "R", "before": "receivers' behavior", "after": "the behavior of the population of receivers", "start_char_pos": 564, "end_char_pos": 583 }, { "type": "R", "before": "type-compatibility", "after": "type compatibility", "start_char_pos": 873, "end_char_pos": 891 } ]
[ 0, 164, 397, 698 ]
1702.01936
1
We study the existence of portfolios of traded assets making a given financial institution pass some pre-specified (internal or external) regulatory test. In particular, we are interested in the existence of optimal portfolios , i.e. portfolios that allow to pass the test at the lowest cost, and in their sensitivity to changes in the underlying capital position. This naturally leads to investigate the continuity properties of the set-valued map associating to each capital position the corresponding set of optimal portfolios. We pay special attention to inner semicontinuity, which is the key continuity property from a financial perspective. This property is always satisfied if the test is based on a polyhedral risk measure such as Expected Shortfall, but it generally fails , even in a convex world, if we depart from polyhedrality . In this case, the optimal portfolio map may even fail to admit a continuous selection. Our results have applications to capital adequacy, pricingand hedging, and capital allocation . In particular, we allow for regulatory tests designed to capture systemic risk .
In a capital adequacy framework, risk measures are used to determine the minimal amount of capital that a financial institution has to raise and invest in a portfolio of pre-specified eligible assets in order to pass a given capital adequacy test. From a capital efficiency perspective, it is important to identify the set of portfolios of eligible assets that allow to pass the test by raising the least amount of capital. We study the existence and uniqueness of such optimal portfolios as well as their sensitivity to changes in the underlying capital position. This naturally leads to investigating the continuity properties of the set-valued map associating to each capital position the corresponding set of optimal portfolios. We pay special attention to lower semicontinuity, which is the key continuity property from a financial perspective. This "stability" property is always satisfied if the test is based on a polyhedral risk measure but it generally fails once we depart from polyhedrality even when the reference risk measure is convex. However, lower semicontinuity can be often achieved if one if one is willing to focuses on portfolios that are close to being optimal. Besides capital adequacy, our results have a variety of natural applications to pricing, hedging, and capital allocation problems .
[ { "type": "R", "before": "We study the existence of portfolios of traded assets making a given financial institution pass some", "after": "In a capital adequacy framework, risk measures are used to determine the minimal amount of capital that a financial institution has to raise and invest in a portfolio of", "start_char_pos": 0, "end_char_pos": 100 }, { "type": "R", "before": "(internal or external) regulatory test. In particular, we are interested in the existence of optimal portfolios , i.e. portfolios", "after": "eligible assets in order to pass a given capital adequacy test. From a capital efficiency perspective, it is important to identify the set of portfolios of eligible assets", "start_char_pos": 115, "end_char_pos": 244 }, { "type": "R", "before": "at the lowest cost, and in", "after": "by raising the least amount of capital. We study the existence and uniqueness of such optimal portfolios as well as", "start_char_pos": 273, "end_char_pos": 299 }, { "type": "R", "before": "investigate", "after": "investigating", "start_char_pos": 389, "end_char_pos": 400 }, { "type": "R", "before": "inner", "after": "lower", "start_char_pos": 559, "end_char_pos": 564 }, { "type": "A", "before": null, "after": "\"stability\"", "start_char_pos": 653, "end_char_pos": 653 }, { "type": "D", "before": "such as Expected Shortfall,", "after": null, "start_char_pos": 733, "end_char_pos": 760 }, { "type": "R", "before": ", even in a convex world, if", "after": "once", "start_char_pos": 784, "end_char_pos": 812 }, { "type": "R", "before": ". In this case, the optimal portfolio map may even fail to admit a continuous selection. Our results have applications to capital adequacy, pricingand", "after": "even when the reference risk measure is convex. However, lower semicontinuity can be often achieved if one if one is willing to focuses on portfolios that are close to being optimal. Besides capital adequacy, our results have a variety of natural applications to pricing,", "start_char_pos": 842, "end_char_pos": 992 }, { "type": "R", "before": ". In particular, we allow for regulatory tests designed to capture systemic risk", "after": "problems", "start_char_pos": 1025, "end_char_pos": 1105 } ]
[ 0, 154, 364, 530, 647, 843, 930, 1026 ]
1702.02076
1
As the most widely used antimalarial agent, chloroquine (CQ) has been used for more than half century. However, the mechanism of CQ action and resistance in Plasmodium falciparum remains elusive. Based on further analysis our published experimental results, we propose that the mechanism of CQ action and resistance might be closely linked with cell-cycle-associated amplified genomic-DNA fragments (CAGFs, singular form = CAGF) as CQ induces CAGF production in P. falciparum, which could affect multiple biological processes of the parasite, and thus might contribute to parasite death and CQ resistance. Recently, we found that CQ induced one of CAGFs, UB1- CAGF, might downregulate a probable P. falciparum cystine transporter (Pfct) gene expression, which could be used to understand the mechanism of CQ action and resistance in P. falciparum.
As a cheap and safe antimalarial agent, chloroquine (CQ) has been used in the battle against malaria for more than half century. However, the mechanism of CQ action and resistance in Plasmodium falciparum remains elusive. Based on further analysis of our published experimental results, we propose that the mechanism of CQ action and resistance might be closely linked with cell-cycle-associated amplified genomic-DNA fragments (CAGFs, singular form = CAGF) as CQ induces CAGF production in P. falciparum, which could affect multiple biological processes of the parasite, and thus might contribute to parasite death and CQ resistance. Recently, we found that CQ induced one of CAGFs, UB1- CAGF, might downregulate a probable P. falciparum cystine transporter (Pfct) gene expression, which could be used to understand the mechanism of CQ action and resistance in P. falciparum.
[ { "type": "R", "before": "the most widely used", "after": "a cheap and safe", "start_char_pos": 3, "end_char_pos": 23 }, { "type": "A", "before": null, "after": "in the battle against malaria", "start_char_pos": 75, "end_char_pos": 75 }, { "type": "A", "before": null, "after": "of", "start_char_pos": 223, "end_char_pos": 223 } ]
[ 0, 103, 196, 607 ]
1702.02087
1
We introduce the notion of a conditional Davis price and study its properties. Our ultimate goal is to use utility theory to price non-replicable contingent claims in the case when the investor 's portfolio already contains a non-replicable component. We show that even in the simplest of settings - such as Samuelson's model - conditional Davis prices are typically not unique and form a non-trivial subinterval of the set of all no-arbitrage prices. Our main result characterizes this set and provides simple conditions under which its two endpoints can be effectively computed. We illustrate the theory with several examples.
We study the set of marginal utility-based prices of a financial derivative in the case where the investor has a non-replicable random endowment. We provide an example showing that even in the simplest of settings - such as Samuelson's geometric Brownian motion model - the interval of marginal utility-based prices can be a non-trivial strict subinterval of the set of all no-arbitrage prices. This is in stark contrast to the case with a replicable endowment where non- uniqueness is exceptional. We provide formulas for the end points for these prices and illustrate the theory with several examples.
[ { "type": "R", "before": "introduce the notion of a conditional Davis price and study its properties. Our ultimate goal is to use utility theory to price non-replicable contingent claims", "after": "study the set of marginal utility-based prices of a financial derivative", "start_char_pos": 3, "end_char_pos": 163 }, { "type": "R", "before": "when the investor 's portfolio already contains", "after": "where the investor has", "start_char_pos": 176, "end_char_pos": 223 }, { "type": "R", "before": "component. We show", "after": "random endowment. We provide an example showing", "start_char_pos": 241, "end_char_pos": 259 }, { "type": "A", "before": null, "after": "geometric Brownian motion", "start_char_pos": 320, "end_char_pos": 320 }, { "type": "R", "before": "conditional Davis prices are typically not unique and form", "after": "the interval of marginal utility-based prices can be", "start_char_pos": 329, "end_char_pos": 387 }, { "type": "A", "before": null, "after": "strict", "start_char_pos": 402, "end_char_pos": 402 }, { "type": "R", "before": "Our main result characterizes this set and provides simple conditions under which its two endpoints can be effectively computed. We", "after": "This is in stark contrast to the case with a replicable endowment where non- uniqueness is exceptional. We provide formulas for the end points for these prices and", "start_char_pos": 454, "end_char_pos": 585 } ]
[ 0, 78, 251, 453, 582 ]
1702.02715
1
Estimating multiple sparse Gaussian Graphical Models (sGGMs) jointly for many related tasks (large K) under a high-dimensional (large p) situation is an important task. Most previous studies for the joint estimation of multiple sGGMs rely on penalized log-likelihood estimators that involve expensive and difficult non-smooth optimizations. We propose a novel approach, FASJEM for fast and scalable joint structure-estimation of multiple sGGMs at a large scale. As the first study of joint sGGM using the M-estimator framework, our work has three major contributions: (1) We solve FASJEM through an entry-wise manner which is parallelizable. (2) We choose a proximal algorithm to optimize FASJEM. This improves the computational efficiency from O(Kp^3) to O(Kp^2) and reduces the memory requirement from O(Kp^2) to O(K). (3) We theoretically prove that FASJEM achieves a consistent estimation with a convergence rate of O(\log(Kp)/n_{tot}). On several synthetic and four real-world datasets, FASJEM shows significant improvements over baselines on accuracy, computational complexity and memory costs.
Estimating multiple sparse Gaussian Graphical Models (sGGMs) jointly for many related tasks (large K) under a high-dimensional (large p) situation is an important task. Most previous studies for the joint estimation of multiple sGGMs rely on penalized log-likelihood estimators that involve expensive and difficult non-smooth optimizations. We propose a novel approach, FASJEM for fast and scalable joint structure-estimation of multiple sGGMs at a large scale. As the first study of joint sGGM using the Elementary Estimator framework, our work has three major contributions: (1) We solve FASJEM through an entry-wise manner which is parallelizable. (2) We choose a proximal algorithm to optimize FASJEM. This improves the computational efficiency from O(Kp^3) to O(Kp^2) and reduces the memory requirement from O(Kp^2) to O(K). (3) We theoretically prove that FASJEM achieves a consistent estimation with a convergence rate of O(\log(Kp)/n_{tot}). On several synthetic and four real-world datasets, FASJEM shows significant improvements over baselines on accuracy, computational complexity , and memory costs.
[ { "type": "R", "before": "M-estimator", "after": "Elementary Estimator", "start_char_pos": 505, "end_char_pos": 516 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1083, "end_char_pos": 1083 } ]
[ 0, 168, 340, 461, 641, 696, 820, 940 ]
1702.02896
1
There has been considerable interest across several fields in methods that reduce the problem of learning good treatment assignment policies to the problem of accurate policy evaluation. Given a class of candidate policies, these methods first effectively evaluate each policy individually, and then learn a policy by optimizing the estimated value function; such approaches are guaranteed to be risk-consistent whenever the policy value estimates are uniformly consistent . However, despite the wealth of proposed methods, the literature remains largely silent on questions of statistical efficiency: there are only limited results characterizing which policy evaluation strategies lead to better learned policies than others, or what the optimal policy evaluation strategies are . In this paper, we build on classical results in semiparametric efficiency theory to develop quasi-optimal methods for policy learning; in particular, we propose a class of policy value estimators that, when optimized, yield regret bounds for the learned policy that scale with the semiparametric efficient variance for policy evaluation. On a practical level, our result suggests new methods for policy learning motivated by semiparametric efficiency theory .
We consider the problem of using observational data to learn treatment assignment policies that satisfy certain constraints specified by a practitioner, such as budget, fairness, or functional form constraints. This problem has previously been studied in economics, statistics, and computer science, and several regret-consistent methods have been proposed . However, several key analytical components are missing, including a characterization of optimal methods for policy learning, and sharp bounds for minimax regret . In this paper, we derive lower bounds for the minimax regret of policy learning under constraints, and propose a method that attains this bound asymptotically up to a constant factor. Whenever the class of policies under consideration has a bounded Vapnik-Chervonenkis dimension, we show that the problem of minimax-regret policy learning can be asymptotically reduced to first efficiently evaluating how much each candidate policy improves over a randomized baseline, and then maximizing this value estimate. Our analysis relies on uniform generalizations of classical semiparametric efficiency results for average treatment effect estimation, paired with sharp concentration bounds for weighted empirical risk minimization that may be of independent interest .
[ { "type": "R", "before": "There has been considerable interest across several fields in methods that reduce", "after": "We consider", "start_char_pos": 0, "end_char_pos": 81 }, { "type": "R", "before": "learning good", "after": "using observational data to learn", "start_char_pos": 97, "end_char_pos": 110 }, { "type": "R", "before": "to the problem of accurate policy evaluation. Given a class of candidate policies, these methods first effectively evaluate each policy individually, and then learn a policy by optimizing the estimated value function; such approaches are guaranteed to be risk-consistent whenever the policy value estimates are uniformly consistent", "after": "that satisfy certain constraints specified by a practitioner, such as budget, fairness, or functional form constraints. This problem has previously been studied in economics, statistics, and computer science, and several regret-consistent methods have been proposed", "start_char_pos": 141, "end_char_pos": 472 }, { "type": "R", "before": "despite the wealth of proposed methods, the literature remains largely silent on questions of statistical efficiency: there are only limited results characterizing which policy evaluation strategies lead to better learned policies than others, or what the optimal policy evaluation strategies are", "after": "several key analytical components are missing, including a characterization of optimal methods for policy learning, and sharp bounds for minimax regret", "start_char_pos": 484, "end_char_pos": 780 }, { "type": "R", "before": "build on classical results in semiparametric efficiency theory to develop quasi-optimal methods for policy learning; in particular, we propose a class of policy value estimators that, when optimized, yield regret bounds for the learned policy that scale with the semiparametric efficient variance for policy evaluation. On a practical level, our result suggests new methods for policy learning motivated by semiparametric efficiency theory", "after": "derive lower bounds for the minimax regret of policy learning under constraints, and propose a method that attains this bound asymptotically up to a constant factor. Whenever the class of policies under consideration has a bounded Vapnik-Chervonenkis dimension, we show that the problem of minimax-regret policy learning can be asymptotically reduced to first efficiently evaluating how much each candidate policy improves over a randomized baseline, and then maximizing this value estimate. Our analysis relies on uniform generalizations of classical semiparametric efficiency results for average treatment effect estimation, paired with sharp concentration bounds for weighted empirical risk minimization that may be of independent interest", "start_char_pos": 801, "end_char_pos": 1240 } ]
[ 0, 186, 358, 474, 782, 917, 1120 ]
1702.02896
2
We consider the problem of using observational data to learn treatment assignment policies that satisfy certain constraintsspecified by a practitioner , such as budget, fairness, or functional form constraints. This problem has previously been studied in economics, statistics, and computer science, and several regret-consistent methods have been proposed. However, several key analytical components are missing, including a characterization of optimal methods for policy learning, and sharp bounds for minimax regret. In this paper, we derive lower bounds for the minimax regret of policy learning under constraints, and propose a method that attains this bound asymptotically up to a constant factor. Whenever the class of policies under consideration has a bounded Vapnik-Chervonenkis dimension, we show that the problem of minimax-regret policy learning can be asymptotically reduced to first efficiently evaluating how much each candidate policy improves over a randomized baseline, and then maximizing this value estimate. Our analysis relies on uniform generalizations of classical semiparametric efficiency results for average treatment effect estimation, paired with sharp concentration bounds for weighted empirical risk minimization that may be of independent interest .
In many areas, practitioners seek to use observational data to learn a treatment assignment policy that satisfies application-specific constraints , such as budget, fairness, simplicity, or other functional form constraints. For example, policies may be restricted to take the form of decision trees based on a limted set of easily observable individual characteristics. We propose a new approach to this problem motivated by the theory of semiparametrically efficient estimation. Our approach can be used to optimize either binary treatments or infinitesimal nudges to continuous treatments, and can leverage observational data where causal effects are identified using a variety of strategies, including selection-on-observables and instrumental variables. Given a doubly robust estimator of the causal effect of assigning everyone to treatment, we develop an algorithm for choosing whom to treat, and establish strong guarantees for the asymptotic utilitarian regret of the resulting policy .
[ { "type": "R", "before": "We consider the problem of using", "after": "In many areas, practitioners seek to use", "start_char_pos": 0, "end_char_pos": 32 }, { "type": "R", "before": "treatment assignment policies that satisfy certain constraintsspecified by a practitioner", "after": "a treatment assignment policy that satisfies application-specific constraints", "start_char_pos": 61, "end_char_pos": 150 }, { "type": "R", "before": "or", "after": "simplicity, or other", "start_char_pos": 179, "end_char_pos": 181 }, { "type": "R", "before": "This problem has previously been studied in economics, statistics, and computer science, and several regret-consistent methods have been proposed. However, several key analytical components are missing, including a characterization of optimal methods for policy learning, and sharp bounds for minimax regret. In this paper, we derive lower bounds for the minimax regret of policy learning under constraints, and propose a method that attains this bound asymptotically up to a constant factor. Whenever the class of policies under consideration has a bounded Vapnik-Chervonenkis dimension, we show that the problem of minimax-regret policy learning can be asymptotically reduced to first efficiently evaluating how much each candidate policy improves over a randomized baseline, and then maximizing this value estimate. Our analysis relies on uniform generalizations of classical semiparametric efficiency results for average treatment effect estimation, paired with sharp concentration bounds for weighted empirical risk minimization that may be of independent interest", "after": "For example, policies may be restricted to take the form of decision trees based on a limted set of easily observable individual characteristics. We propose a new approach to this problem motivated by the theory of semiparametrically efficient estimation. Our approach can be used to optimize either binary treatments or infinitesimal nudges to continuous treatments, and can leverage observational data where causal effects are identified using a variety of strategies, including selection-on-observables and instrumental variables. Given a doubly robust estimator of the causal effect of assigning everyone to treatment, we develop an algorithm for choosing whom to treat, and establish strong guarantees for the asymptotic utilitarian regret of the resulting policy", "start_char_pos": 211, "end_char_pos": 1280 } ]
[ 0, 210, 357, 519, 703, 1029 ]
1702.02896
3
In many areas, practitioners seek to use observational data to learn a treatment assignment policy that satisfies application-specific constraints, such as budget, fairness, simplicity, or other functional form constraints. For example, policies may be restricted to take the form of decision trees based on a limted set of easily observable individual characteristics. We propose a new approach to this problem motivated by the theory of semiparametrically efficient estimation. Our approach can be used to optimize either binary treatments or infinitesimal nudges to continuous treatments, and can leverage observational data where causal effects are identified using a variety of strategies, including selection-on-observables and instrumental variables. Given a doubly robust estimator of the causal effect of assigning everyone to treatment, we develop an algorithm for choosing whom to treat, and establish strong guarantees for the asymptotic utilitarian regret of the resulting policy.
In many areas, practitioners seek to use observational data to learn a treatment assignment policy that satisfies application-specific constraints, such as budget, fairness, simplicity, or other functional form constraints. For example, policies may be restricted to take the form of decision trees based on a limited set of easily observable individual characteristics. We propose a new approach to this problem motivated by the theory of semiparametrically efficient estimation. Our method can be used to optimize either binary treatments or infinitesimal nudges to continuous treatments, and can leverage observational data where causal effects are identified using a variety of strategies, including selection on observables and instrumental variables. Given a doubly robust estimator of the causal effect of assigning everyone to treatment, we develop an algorithm for choosing whom to treat, and establish strong guarantees for the asymptotic utilitarian regret of the resulting policy.
[ { "type": "R", "before": "limted", "after": "limited", "start_char_pos": 310, "end_char_pos": 316 }, { "type": "R", "before": "approach", "after": "method", "start_char_pos": 484, "end_char_pos": 492 }, { "type": "R", "before": "selection-on-observables", "after": "selection on observables", "start_char_pos": 705, "end_char_pos": 729 } ]
[ 0, 223, 369, 479, 757 ]
1702.03916
1
We introduce a simple mechanical model for adherent cells that quantitatively relates cell shape, internal cell stresses and cell forces as generated by an anisotropic cytoskeleton. We perform experiments on the shape and traction forces of different types of cells with anisotropic morphologies, cultured on microfabricated elastomeric pillar arrays. We demonstrate that , irrespectively of the cell type, the shape of the cell edge between focal adhesions is accurately described by elliptical arcs, whose eccentricity expresses the ratio between directed and isotropic stresses. Our work paves the way toward the reconstruction of cellular forces from geometrical data available via optical microscopy .
We investigate the geometrical and mechanical properties of adherent cells characterized by a highly anisotropic actin cytoskeleton. Using a combination of theoretical work and experiments on micropillar arrays, we demonstrate that the shape of the cell edge is accurately described by elliptical arcs, whose eccentricity expresses the degree of anisotropy of the internal cell stresses. This results in a spatially varying tension along the cell edge, that significantly affects the traction forces exerted by the cell on the substrate. Our work highlights the strong interplay between cell mechanics and geometry and paves the way towards the reconstruction of cellular forces from geometrical data .
[ { "type": "R", "before": "introduce a simple mechanical model for adherent cells that quantitatively relates cell shape, internal cell stresses and cell forces as generated by an anisotropic cytoskeleton. We perform experiments on the shape and traction forces of different types of cells with anisotropic morphologies, cultured on microfabricated elastomeric pillar arrays. We demonstrate that , irrespectively of the cell type, the", "after": "investigate the geometrical and mechanical properties of adherent cells characterized by a highly anisotropic actin cytoskeleton. Using a combination of theoretical work and experiments on micropillar arrays, we demonstrate that the", "start_char_pos": 3, "end_char_pos": 410 }, { "type": "D", "before": "between focal adhesions", "after": null, "start_char_pos": 434, "end_char_pos": 457 }, { "type": "R", "before": "ratio between directed and isotropic stresses. Our work", "after": "degree of anisotropy of the internal cell stresses. This results in a spatially varying tension along the cell edge, that significantly affects the traction forces exerted by the cell on the substrate. Our work highlights the strong interplay between cell mechanics and geometry and", "start_char_pos": 535, "end_char_pos": 590 }, { "type": "R", "before": "toward", "after": "towards", "start_char_pos": 605, "end_char_pos": 611 }, { "type": "D", "before": "available via optical microscopy", "after": null, "start_char_pos": 672, "end_char_pos": 704 } ]
[ 0, 181, 351, 581 ]
1702.04053
1
This article studies derivatives pricing when privately financed by non-cash collateral . The liability-side posting collateralmust weigh in haircuts stipulated in collateral agreements against those prevalent in the repo market and find the cheapest funded securities to post. The haircut difference is synthesized in the pricing PDE's discount rateand the impact of repo financing cost is captured by collateral liquidity value adjustment (LVA) . Because a derivatives netting set's time horizon is much longer than repo tenors, a break-even repo formulae is employed to forecast the repo curve beyond three month tenor. Collateral optimization is formulated as an LVA driven linear programming problem .
Cash collateral is perfect in that it provides simultaneous counterparty credit risk protection and derivatives funding. Securities are imperfect collateral, because of collateral segregation or differences in CSA haircuts and repo haircuts. Moreover, the collateral rate term structure is not observable in the repo market , for derivatives netting sets are perpetual while repo tenors are typically in months. This article synthesizes these effects into a derivative financing rate that replaces the risk-free discount rate. A break-even repo formulae is employed to supply non-observable collateral rates, enabling collateral liquidity value adjustment (LVA) to be computed. A linear programming problem of maximizing LVA under liquidity coverage ratio (LCR) constraint is formulated as a core algorithm of collateral optimization. Numerical examples show that LVA could be sizable for long average duration, deep in or out of the money swap portfolios .
[ { "type": "R", "before": "This article studies derivatives pricing when privately financed by non-cash collateral . The liability-side posting collateralmust weigh in haircuts stipulated in collateral agreements against those prevalent", "after": "Cash collateral is perfect in that it provides simultaneous counterparty credit risk protection and derivatives funding. Securities are imperfect collateral, because of collateral segregation or differences in CSA haircuts and repo haircuts. Moreover, the collateral rate term structure is not observable", "start_char_pos": 0, "end_char_pos": 209 }, { "type": "R", "before": "and find the cheapest funded securities to post. The haircut difference is synthesized in the pricing PDE's discount rateand the impact of repo financing cost is captured by collateral", "after": ", for derivatives netting sets are perpetual while repo tenors are typically in months. This article synthesizes these effects into a derivative financing rate that replaces the risk-free discount rate. A break-even repo formulae is employed to supply non-observable collateral rates, enabling collateral", "start_char_pos": 229, "end_char_pos": 413 }, { "type": "R", "before": ". Because a derivatives netting set's time horizon is much longer than repo tenors, a break-even repo formulae is employed to forecast the repo curve beyond three month tenor. Collateral optimization", "after": "to be computed. A linear programming problem of maximizing LVA under liquidity coverage ratio (LCR) constraint", "start_char_pos": 447, "end_char_pos": 646 }, { "type": "R", "before": "an LVA driven linear programming problem", "after": "a core algorithm of collateral optimization. Numerical examples show that LVA could be sizable for long average duration, deep in or out of the money swap portfolios", "start_char_pos": 664, "end_char_pos": 704 } ]
[ 0, 89, 277, 622 ]
1702.04183
1
Chemotaxis, a basic and universal phenomenon among URLanisms, directly controls the transport kinetics of active fluids such as swarming bacteria, but has not been considered when utilizing passive tracer to probe the nonequilibrium properties of such fluids. Here we present the first theoretical investigation of the diffusion dynamics of a chemoattractant-coated tracer in bacterial suspension, by developing a molecular dynamics model of bacterial chemotaxis . We demonstrate that the non-Gaussian statistics of full-coated tracer arises from the noises exerted by bacteria, which is athermal and exponentially correlated . Moreover, half-coated (Janus ) tracer performs a composite random walk combining power-law-tail distributed L\'{e}vy flights with Brownian jiggling at low coating concentration, but undergoes an enhanced directional transport when coating concentration is high. Particularly, such transition is identified to be second-order, with a critical exponent 1.5 independent of bacterial density . Our findings reveal the fundamental nonequilibrium physics of active matter under external stimuli, and underscore the crucial role of asymmetrical environment in regulating the transport processes in biological systems.
By developing a molecular dynamics model of bacterial chemotaxis, we present the first investigation of tracer statistics in bacterial suspensions where chemotactic effects are considered . We demonstrate that the non-Gaussian statistics of full-coated tracer arises from the athermal bacterial noise . Moreover, Janus ( half-coated ) tracer performs a composite random walk combining power-law-tail distributed L\'{e}vy flights with Brownian jiggling at low coating concentration, but turns to an enhanced directional transport (EDT) when coating concentration is high. Unlike conventional self-propelled particles, upon increasing coating concentration, the direction of EDT counterintuitively reverses from along to against the tracer orientation. Both these transitions are identified to be second-order, with the phase boundaries meeting at a triple point. A theoretical modeling that reveals the origin of such anomalous transport behaviors is proposed . Our findings reveal the fundamental nonequilibrium physics of active matter under external stimuli, and underscore the crucial role of asymmetrical environment in regulating the transport processes in biological systems.
[ { "type": "R", "before": "Chemotaxis, a basic and universal phenomenon among URLanisms, directly controls the transport kinetics of active fluids such as swarming bacteria, but has not been considered when utilizing passive tracer to probe the nonequilibrium properties of such fluids. Here", "after": "By developing a molecular dynamics model of bacterial chemotaxis,", "start_char_pos": 0, "end_char_pos": 264 }, { "type": "R", "before": "theoretical investigation of the diffusion dynamics of a chemoattractant-coated tracer in bacterial suspension, by developing a molecular dynamics model of bacterial chemotaxis", "after": "investigation of tracer statistics in bacterial suspensions where chemotactic effects are considered", "start_char_pos": 286, "end_char_pos": 462 }, { "type": "R", "before": "noises exerted by bacteria, which is athermal and exponentially correlated", "after": "athermal bacterial noise", "start_char_pos": 551, "end_char_pos": 625 }, { "type": "A", "before": null, "after": "Janus (", "start_char_pos": 638, "end_char_pos": 638 }, { "type": "D", "before": "(Janus", "after": null, "start_char_pos": 651, "end_char_pos": 657 }, { "type": "R", "before": "undergoes", "after": "turns to", "start_char_pos": 811, "end_char_pos": 820 }, { "type": "A", "before": null, "after": "(EDT)", "start_char_pos": 855, "end_char_pos": 855 }, { "type": "R", "before": "Particularly, such transition is", "after": "Unlike conventional self-propelled particles, upon increasing coating concentration, the direction of EDT counterintuitively reverses from along to against the tracer orientation. Both these transitions are", "start_char_pos": 892, "end_char_pos": 924 }, { "type": "R", "before": "a critical exponent 1.5 independent of bacterial density", "after": "the phase boundaries meeting at a triple point. A theoretical modeling that reveals the origin of such anomalous transport behaviors is proposed", "start_char_pos": 961, "end_char_pos": 1017 } ]
[ 0, 259, 464, 627, 891, 1019 ]
1702.04443
1
A Hawkes process model with a time-varying background rate is developed for analyzing the high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with variable-width basis functions, and the parameters are estimated by a Bayesian method. We find that the data are explained significantly better by our model as compared to the Hawkes model with a stationary background rate, which is commonly used in the field of quantitative finance. Our model can capture not only the slow time-variation, such as in the intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. We also demonstrate that the level of the endogeneity of markets, quantified by the branching ratio of the Hawkes process, is overestimated if the time-variation is not considered .
A Hawkes process model with a time-varying background rate is developed for analyzing the high-frequency financial data. In our model, the logarithm of the background rate is modeled by a linear model with a relatively large number of variable-width basis functions, and the parameters are estimated by a Bayesian method. Our model can capture not only the slow time-variation, such as in the intraday seasonality, but also the rapid one, which follows a macroeconomic news announcement. By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the level of the endogeneity of markets, estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation .
[ { "type": "A", "before": null, "after": "a relatively large number of", "start_char_pos": 206, "end_char_pos": 206 }, { "type": "D", "before": "We find that the data are explained significantly better by our model as compared to the Hawkes model with a stationary background rate, which is commonly used in the field of quantitative finance.", "after": null, "start_char_pos": 294, "end_char_pos": 491 }, { "type": "R", "before": "We also demonstrate that the", "after": "By analyzing the tick data of the Nikkei 225 mini, we find that (i) our model is better fitted to the data than the Hawkes models with a constant background rate or a slowly varying background rate, which have been commonly used in the field of quantitative finance; (ii) the improvement in the goodness-of-fit to the data by our model is significant especially for sessions where considerable fluctuation of the background rate is present; and (iii) our model is statistically consistent with the data. The branching ratio, which quantifies the", "start_char_pos": 658, "end_char_pos": 686 }, { "type": "R", "before": "quantified by the branching ratio of the Hawkes process, is overestimated if the time-variation is not considered", "after": "estimated by our model is 0.41, suggesting the relative importance of exogenous factors in the market dynamics. We also demonstrate that it is critically important to appropriately model the time-dependent background rate for the branching ratio estimation", "start_char_pos": 724, "end_char_pos": 837 } ]
[ 0, 120, 293, 491, 657 ]
1702.04642
1
Groups of Small and Medium Enterprises (SME) back each other and form guarantee network to obtain loan from banks . The risk over the networked enterprises may cause significant contagious damage. To dissolve such risks , we propose a hybrid feature representation, which is feeded into a gradient boosting model for credit risk assessment of guarantee network. Empirical study is performed on a ten-year guarantee loan record from commercial banks. We find that often hundreds or thousands of enterprises back each other and constitute a sparse complex network. We study the risk of various structures of loan guarantee network, and observe the high correlation between defaults with centrality, and with the communities of the network. In particular , our quantitative risk evaluation model shows promising prediction performance on real-world data, which can be useful to both regulators and stakeholders.
Networked-guarantee loans may cause the systemic risk related concern of the government and banks in China. The prediction of default of enterprise loans is a typical extremely imbalanced prediction problem, and the networked-guarantee make this problem more difficult to solve. Since the guaranteed loan is a debt obligation promise, if one enterprise in the guarantee network falls into a financial crisis, the debt risk may spread like a virus across the guarantee network, even lead to a systemic financial crisis. In this paper , we propose an imbalanced network risk diffusion model to forecast the enterprise default risk in a short future. Positive weighted k-nearest neighbors (p-wkNN) algorithm is developed for the stand-alone case -- when there is no default contagious; then a data-driven default diffusion model is integrated to further improve the prediction accuracy. We perform the empirical study on a real-world three-years loan record from a major commercial bank. The results show that our proposed method outperforms conventional credit risk methods in terms of AUC. In summary , our quantitative risk evaluation model shows promising prediction performance on real-world data, which could be useful to both regulators and stakeholders.
[ { "type": "R", "before": "Groups of Small and Medium Enterprises (SME) back each other and form guarantee network to obtain loan from banks . The risk over the networked enterprises may cause significant contagious damage. To dissolve such risks", "after": "Networked-guarantee loans may cause the systemic risk related concern of the government and banks in China. The prediction of default of enterprise loans is a typical extremely imbalanced prediction problem, and the networked-guarantee make this problem more difficult to solve. Since the guaranteed loan is a debt obligation promise, if one enterprise in the guarantee network falls into a financial crisis, the debt risk may spread like a virus across the guarantee network, even lead to a systemic financial crisis. In this paper", "start_char_pos": 0, "end_char_pos": 219 }, { "type": "A", "before": null, "after": "an imbalanced network risk diffusion model to forecast the enterprise default risk in a short future. Positive weighted k-nearest neighbors (p-wkNN) algorithm is developed for the stand-alone case -- when there is no default contagious; then a data-driven default diffusion model is integrated to further improve the prediction accuracy. We perform the empirical study on", "start_char_pos": 233, "end_char_pos": 233 }, { "type": "R", "before": "hybrid feature representation, which is feeded into a gradient boosting model for credit risk assessment of guarantee network. Empirical study is performed on a ten-year guarantee", "after": "real-world three-years", "start_char_pos": 236, "end_char_pos": 415 }, { "type": "R", "before": "commercial banks. We find that often hundreds or thousands of enterprises back each other and constitute a sparse complex network. We study the risk of various structures of loan guarantee network, and observe the high correlation between defaults with centrality, and with the communities of the network. In particular", "after": "a major commercial bank. The results show that our proposed method outperforms conventional credit risk methods in terms of AUC. In summary", "start_char_pos": 433, "end_char_pos": 752 }, { "type": "R", "before": "can", "after": "could", "start_char_pos": 859, "end_char_pos": 862 } ]
[ 0, 115, 196, 362, 450, 563, 738 ]
1702.04719
1
Trace alignment , a procedure for finding common activities and deviations in process executions, does not have a well-established framework for evaluation. A common alignment evaluation tool-reference - based methods - is not applicable if a reference alignment, or ground truth, is not available. On the other hand, reference -free evaluation methods currently are not able to adequately and comprehensively assess alignment quality. We analyze and compare the existing evaluation methods, identify their limitations, and propose improvements . We introduce modifications to two reference-free evaluation methods , improving their ability to assess alignment quality. We summarize the parameter selection for these modified methods and analyze their results. We also tested these evaluation methods on the alignment of a trauma resuscitation process log .
Trace alignment algorithms have been used in process mining for discovering the consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results. No widely-adopted method exists for evaluating the results of trace alignment. Existing reference-free evaluation methods cannot adequately and comprehensively assess the alignment quality. We analyzed and compared the existing evaluation methods, identifying their limitations, and introduced improvements in two reference-free evaluation methods . Our approach assesses the alignment result globally instead of locally, and therefore helps the algorithm to optimize overall alignment quality. We also introduced a novel metric to measure the alignment complexity, which can be used as a constraint on alignment algorithm optimization. We tested our evaluation methods on a trauma resuscitation dataset and provided the medical explanation of the activities and patterns identified as deviations using our proposed evaluation methods .
[ { "type": "R", "before": ", a procedure for finding common activities and deviations in process executions, does not have a well-established framework for evaluation. A common alignment evaluation tool-reference - based methods - is not applicable if a reference alignment, or ground truth, is not available. On the other hand, reference -free evaluation methods currently are not able to", "after": "algorithms have been used in process mining for discovering the consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results. No widely-adopted method exists for evaluating the results of trace alignment. Existing reference-free evaluation methods cannot", "start_char_pos": 16, "end_char_pos": 378 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 417, "end_char_pos": 417 }, { "type": "R", "before": "analyze and compare", "after": "analyzed and compared", "start_char_pos": 440, "end_char_pos": 459 }, { "type": "R", "before": "identify", "after": "identifying", "start_char_pos": 493, "end_char_pos": 501 }, { "type": "R", "before": "propose improvements . We introduce modifications to", "after": "introduced improvements in", "start_char_pos": 525, "end_char_pos": 577 }, { "type": "R", "before": ", improving their ability to assess", "after": ". Our approach assesses the alignment result globally instead of locally, and therefore helps the algorithm to optimize overall", "start_char_pos": 616, "end_char_pos": 651 }, { "type": "R", "before": "summarize the parameter selection for these modified methods and analyze their results. We also tested these", "after": "also introduced a novel metric to measure the alignment complexity, which can be used as a constraint on alignment algorithm optimization. We tested our", "start_char_pos": 674, "end_char_pos": 782 }, { "type": "D", "before": "the alignment of", "after": null, "start_char_pos": 805, "end_char_pos": 821 }, { "type": "R", "before": "process log", "after": "dataset and provided the medical explanation of the activities and patterns identified as deviations using our proposed evaluation methods", "start_char_pos": 845, "end_char_pos": 856 } ]
[ 0, 156, 298, 436, 547, 670, 761 ]
1702.05434
1
This note complements the inspiring work on dimensional analysis and market microstructure by Kyle and Obizhaeva [kyle2016dimensional] shows by a similar argument as usually applied in physics the following remarkable fact. If the market impact of a meta-order only depends on four well-defined and financially meaningful variables, then -- up to a constant -- there is only one possible form of this dependence. In particular, the market impact is proportional to the square root of the size of the meta-order. This theorem can be regarded as a special case of a more general result of Kyle and Obizhaeva. These authors consider five variables which might have an influence on the size of the market impact. In this case one finds a richer variety of functions which we precisely characterize. We also discuss the relation to classical arguments from physics, such as the period of a pendulum.
This note complements the inspiring work on dimensional analysis and market microstructure by Kyle and Obizhaeva 18 . Following closely these authors, our main result shows by a similar argument as usually applied in physics the following remarkable fact. If the market impact of a meta-order only depends on four well-defined and financially meaningful variables, then -- up to a constant -- there is only one possible form of this dependence. In particular, the market impact is proportional to the square-root of the size of the meta-order. This theorem can be regarded as a special case of a more general result of Kyle and Obizhaeva. These authors consider five variables which might have an influence on the size of the market impact. In this case one finds a richer variety of possible functional relations which we precisely characterize. We also discuss the analogies to classical arguments from physics, such as the period of a pendulum.
[ { "type": "R", "before": "\\mbox{%DIFAUXCMD kyle2016dimensional", "after": "18", "start_char_pos": 113, "end_char_pos": 149 }, { "type": "A", "before": null, "after": ". Following closely these authors, our main result", "start_char_pos": 150, "end_char_pos": 150 }, { "type": "R", "before": "square root", "after": "square-root", "start_char_pos": 485, "end_char_pos": 496 }, { "type": "R", "before": "functions", "after": "possible functional relations", "start_char_pos": 768, "end_char_pos": 777 }, { "type": "R", "before": "relation", "after": "analogies", "start_char_pos": 831, "end_char_pos": 839 } ]
[ 0, 239, 428, 527, 622, 724, 810 ]
1702.05468
1
The stochastic dynamics of networks of biochemical reactions in living cells are typically modelled using chemical master equations (CMEs ). The stationary distributions of CMEs are seldom solvable analytically, and few methods exist that yield numerical estimates with computable error bounds. Here, we present two such methods based on mathematical programming techniques . First, we use semidefinite programming to obtain increasingly tighter upper and lower bounds on the moments of the stationary distribution for networks with rational propensities. Second, we employ linear programming to compute convergent upper and lower bounds on the stationary distributions themselves. The bounds obtained provide a computational test for the uniqueness of the stationary distribution . In the unique case, the bounds collectively form an approximation of the stationary distribution accompanied with a computable \ell^1-error bound . In the non-unique case, we explain how to adapt our approach so that it yields approximations of the ergodic distributions , also accompanied with computable error bounds . We illustrate our methodology through two biological examples : Schl\"ogl's model and a toggle switch model.
The stochastic dynamics of biochemical networks is usually modelled with the chemical master equation (CME ). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. To fill this gap, we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy . First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves. The bounds obtained provide a computational test for the uniqueness of these distributions . In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error . In the non-unique case, our approach yields converging approximations of the ergodic distributions . We illustrate our methodology through two biochemical networks that exhibit bifurcations to multimodal behaviour : Schl\"ogl's model and a toggle switch model.
[ { "type": "R", "before": "networks of biochemical reactions in living cells are typically modelled using chemical master equations (CMEs", "after": "biochemical networks is usually modelled with the chemical master equation (CME", "start_char_pos": 27, "end_char_pos": 137 }, { "type": "R", "before": "few methods exist that yield numerical estimates with computable error bounds. Here, we present two such methods based on mathematical programming techniques", "after": "numerical methods typically produce estimates with uncontrolled errors. To fill this gap, we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy", "start_char_pos": 216, "end_char_pos": 373 }, { "type": "R", "before": "obtain", "after": "compute", "start_char_pos": 418, "end_char_pos": 424 }, { "type": "R", "before": "distribution", "after": "distributions", "start_char_pos": 502, "end_char_pos": 514 }, { "type": "R", "before": "employ linear programming to compute", "after": "use these moment bounds to formulate linear programs that yield", "start_char_pos": 567, "end_char_pos": 603 }, { "type": "R", "before": "the stationary distribution", "after": "these distributions", "start_char_pos": 753, "end_char_pos": 780 }, { "type": "D", "before": "collectively", "after": null, "start_char_pos": 814, "end_char_pos": 826 }, { "type": "D", "before": "accompanied", "after": null, "start_char_pos": 880, "end_char_pos": 891 }, { "type": "R", "before": "\\ell^1-error bound", "after": "bound on its error", "start_char_pos": 910, "end_char_pos": 928 }, { "type": "R", "before": "we explain how to adapt our approach so that it yields", "after": "our approach yields converging", "start_char_pos": 955, "end_char_pos": 1009 }, { "type": "D", "before": ", also accompanied with computable error bounds", "after": null, "start_char_pos": 1054, "end_char_pos": 1101 }, { "type": "R", "before": "biological examples", "after": "biochemical networks that exhibit bifurcations to multimodal behaviour", "start_char_pos": 1146, "end_char_pos": 1165 } ]
[ 0, 140, 294, 375, 555, 681, 782, 930, 1103 ]
1702.05468
2
The stochastic dynamics of biochemical networks is usually modelled with the chemical master equation (CME). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. To fill this gap , we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy. First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves . The bounds obtained provide a computational test for the uniqueness of these distributions . In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error. In the non-unique case, our approach yields converging approximations of the ergodic distributions. We illustrate our methodology through two biochemical networks that exhibit bifurcations to multimodal behaviour : Schl\"ogl's model and a toggle switch model .
The stochastic dynamics of biochemical networks are usually modelled with the chemical master equation (CME). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. Here , we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy. First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves , their marginals and stationary averages . The bounds obtained also provide a computational test for the uniqueness of the distribution . In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error. In the non-unique case, our approach yields converging approximations of the ergodic distributions. We illustrate our methodology through several biochemical examples taken from the literature : Schl\"ogl's model for a chemical bifurcation, a two-dimensional toggle switch, and a model for bursty gene expression .
[ { "type": "R", "before": "is", "after": "are", "start_char_pos": 48, "end_char_pos": 50 }, { "type": "R", "before": "To fill this gap", "after": "Here", "start_char_pos": 256, "end_char_pos": 272 }, { "type": "A", "before": null, "after": ", their marginals and stationary averages", "start_char_pos": 785, "end_char_pos": 785 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 808, "end_char_pos": 808 }, { "type": "R", "before": "these distributions", "after": "the distribution", "start_char_pos": 860, "end_char_pos": 879 }, { "type": "R", "before": "two biochemical networks that exhibit bifurcations to multimodal behaviour", "after": "several biochemical examples taken from the literature", "start_char_pos": 1142, "end_char_pos": 1216 }, { "type": "R", "before": "and a toggle switch model", "after": "for a chemical bifurcation, a two-dimensional toggle switch, and a model for bursty gene expression", "start_char_pos": 1237, "end_char_pos": 1262 } ]
[ 0, 108, 255, 450, 632, 787, 881, 1003, 1103 ]
1702.05468
3
The stochastic dynamics of biochemical networks are usually modelled with the chemical master equation (CME). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. Here, we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy. First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves, their marginals and stationary averages. The bounds obtained also provide a computational test for the uniqueness of the distribution. In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error. In the non-unique case, our approach yields converging approximations of the ergodic distributions. We illustrate our methodology through several biochemical examples taken from the literature: Schl\"ogl's model for a chemical bifurcation, a two-dimensional toggle switch, and a model for bursty gene expression .
The stochastic dynamics of biochemical networks are usually modelled with the chemical master equation (CME). The stationary distributions of CMEs are seldom solvable analytically, and numerical methods typically produce estimates with uncontrolled errors. Here, we introduce mathematical programming approaches that yield approximations of these distributions with computable error bounds which enable the verification of their accuracy. First, we use semidefinite programming to compute increasingly tighter upper and lower bounds on the moments of the stationary distributions for networks with rational propensities. Second, we use these moment bounds to formulate linear programs that yield convergent upper and lower bounds on the stationary distributions themselves, their marginals and stationary averages. The bounds obtained also provide a computational test for the uniqueness of the distribution. In the unique case, the bounds form an approximation of the stationary distribution with a computable bound on its error. In the non-unique case, our approach yields converging approximations of the ergodic distributions. We illustrate our methodology through several biochemical examples taken from the literature: Schl\"ogl's model for a chemical bifurcation, a two-dimensional toggle switch, a model for bursty gene expression , and a dimerisation model with multiple stationary distributions .
[ { "type": "D", "before": "and", "after": null, "start_char_pos": 1304, "end_char_pos": 1307 }, { "type": "A", "before": null, "after": ", and a dimerisation model with multiple stationary distributions", "start_char_pos": 1343, "end_char_pos": 1343 } ]
[ 0, 109, 256, 438, 620, 814, 908, 1030, 1130 ]
1702.06671
1
From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennae is discussed based upon the shape of naturally designed systems to harvest light. The modular design of nature is found to make the antenna non-reciprocal , hence more efficient . We further explain the reason that the light harvester must be a ring instead of a ball, the function of the notch at the LH1-RC complex, the non-heme iron at the reaction center, the chlorophylls are dielectric instead of conductor, a mechanism to prevent damages from excess sunlight, the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed. Our model has the required structural information automatically built in. We comment about how our prediction might be verified experimentally.
From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennae is discussed based upon the shape of naturally designed systems to harvest light. The non-heme iron at the reaction center are, in particular, found to make the antenna non-reciprocal . We further explain the reason that the function of the notch at the complex, the function of the polypeptide termed PufX presented at the notch, the function of the special pair, the shape of the light harvestor must not be spherical, the cross section of the light harvestor must not be circular, the chlorophylls are dielectric instead of conductor, a mechanism to prevent damages from excess sunlight, the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed. Our model has the required structural information automatically built in. We further comment about how our prediction might be verified experimentally.
[ { "type": "R", "before": "modular design of nature is", "after": "non-heme iron at the reaction center are, in particular,", "start_char_pos": 198, "end_char_pos": 225 }, { "type": "D", "before": ", hence more efficient", "after": null, "start_char_pos": 267, "end_char_pos": 289 }, { "type": "D", "before": "light harvester must be a ring instead of a ball, the", "after": null, "start_char_pos": 331, "end_char_pos": 384 }, { "type": "D", "before": "LH1-RC", "after": null, "start_char_pos": 414, "end_char_pos": 420 }, { "type": "R", "before": "non-heme iron at the reaction center, the", "after": "function of the polypeptide termed PufX presented at the notch, the function of the special pair, the shape of the light harvestor must not be spherical, the cross section of the light harvestor must not be circular, the", "start_char_pos": 434, "end_char_pos": 475 }, { "type": "A", "before": null, "after": "further", "start_char_pos": 773, "end_char_pos": 773 } ]
[ 0, 193, 291, 695, 769 ]
1702.06671
2
From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennae is discussed based upon the shape of naturally designed systems to harvest light. The non-heme iron at the reaction center are, in particular, found to make the antenna non-reciprocal. We further explain the reason that the function of the notch at the complex, the function of the polypeptide termed PufX presented at the notch, the function of the special pair, the shape of the light harvestor must not be spherical, the cross section of the light harvestor must not be circular, the chlorophylls are dielectric instead of conductor, a mechanism to prevent damages from excess sunlight , the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed . Our model has the required structural information automatically built in. We further comment about how our prediction might be verified experimentally.
From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennae is discussed based upon the shape of naturally designed systems to harvest light. We explain the reason that the function of the notch at the complex, the function of the PufX presented at the notch, the function of the special pair, the bacteriochlorophylls are dielectric instead of conductor, and a mechanism to prevent damages from excess sunlight . The non-heme iron at the reaction center, the toroidal shape of the light harvestor, the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed suggest non-reciprocity . Our model has the required structural information automatically built in. We further comment about how our prediction might be verified experimentally.
[ { "type": "R", "before": "The non-heme iron at the reaction center are, in particular, found to make the antenna non-reciprocal. We further", "after": "We", "start_char_pos": 194, "end_char_pos": 307 }, { "type": "D", "before": "polypeptide termed", "after": null, "start_char_pos": 394, "end_char_pos": 412 }, { "type": "R", "before": "shape of the light harvestor must not be spherical, the cross section of the light harvestor must not be circular, the chlorophylls", "after": "bacteriochlorophylls", "start_char_pos": 480, "end_char_pos": 611 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 649, "end_char_pos": 649 }, { "type": "R", "before": ", the", "after": ". The non-heme iron at the reaction center, the toroidal shape of the light harvestor, the", "start_char_pos": 702, "end_char_pos": 707 }, { "type": "A", "before": null, "after": "suggest non-reciprocity", "start_char_pos": 820, "end_char_pos": 820 } ]
[ 0, 193, 296, 373, 822, 896 ]
1702.06671
3
From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennae is discussed based upon the shape of naturally designed systems to harvest light . We explain the reason that the function of the notch at the complex, the function of the PufX presented at the notch, the function of the special pair, the bacteriochlorophylls are dielectric instead of conductor, and a mechanism to prevent damages from excess sunlight. The non-heme iron at the reaction center, the toroidal shape of the light harvestor , the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed suggest non-reciprocity. Our model has the required structural information automatically built in. We further comment about how our prediction might be verified experimentally.
Most of our current understanding of mechanisms of photosynthesis comes from spectroscopy. However, classical definition of radio-antenna can be extended to optical regime to discuss the function of light-harvesting antennae. Further to our previously proposed model of a loop antenna we provide several more physical explanations on considering the non-reciprocal properties of the light harvesters of bacteria. We explained the function of the non-heme iron at the reaction center, and presented reasons for each module of the light harvester being composed of one carotenoid, two short \alpha-helical polypeptides and three bacteriochlorophylls; we explained also the toroidal shape of the light harvester, the upper bound of the characteristic length of the light harvester , the functional role played by the long-lasting spectrometric signal observed, and the photon anti-bunching observed . Based on these analyses, two mechanisms might be used by radiation-durable bacteria, Deinococcus radiodurans; and the non-reciprocity of an archaeon, Haloquadratum walsbyi, are analyzed. The physical lessons involved are useful for designing artificial light harvesters, optical sensors, wireless power chargers, passive super-Planckian heat radiators, photocatalytic hydrogen generators, and radiation protective cloaks. In particular it can predict what kind of particles should be used to separate sunlight into a photovoltaically and thermally useful range to enhance the efficiency of solar cells .
[ { "type": "R", "before": "From a point of view of classical electrodynamics, the performance of two-dimensional shape-simplified antennaeis discussed based upon the shape of naturally designed systems to harvest light . We explain the reason that the", "after": "Most of our current understanding of mechanisms of photosynthesis comes from spectroscopy. However, classical definition of radio-antenna can be extended to optical regime to discuss the function of light-harvesting antennae. Further to our previously proposed model of a loop antenna we provide several more physical explanations on considering the non-reciprocal properties of the light harvesters of bacteria. We explained the", "start_char_pos": 0, "end_char_pos": 224 }, { "type": "D", "before": "notch at the complex, the function of the PufX presented at the notch, the function of the special pair, the bacteriochlorophylls are dielectric instead of conductor, and a mechanism to prevent damages from excess sunlight. The", "after": null, "start_char_pos": 241, "end_char_pos": 468 }, { "type": "R", "before": "reaction center, the toroidal shape", "after": "reaction center, and presented reasons for each module of the light harvester being composed of one carotenoid, two short \\alpha-helical polypeptides and three bacteriochlorophylls; we explained also the toroidal shape of the light harvester, the upper bound of the characteristic length", "start_char_pos": 490, "end_char_pos": 525 }, { "type": "R", "before": "harvestor", "after": "harvester", "start_char_pos": 539, "end_char_pos": 548 }, { "type": "R", "before": "suggest non-reciprocity. Our model has the required structural information automatically built in. We further comment about how our prediction might be verified experimentally", "after": ". Based on these analyses, two mechanisms might be used by radiation-durable bacteria,", "start_char_pos": 667, "end_char_pos": 842 }, { "type": "A", "before": null, "after": "Deinococcus radiodurans", "start_char_pos": 847, "end_char_pos": 847 }, { "type": "A", "before": null, "after": "; and the non-reciprocity of an archaeon,", "start_char_pos": 848, "end_char_pos": 848 }, { "type": "A", "before": null, "after": "Haloquadratum walsbyi", "start_char_pos": 853, "end_char_pos": 853 }, { "type": "A", "before": null, "after": ", are analyzed. The physical lessons involved are useful for designing artificial light harvesters, optical sensors, wireless power chargers, passive super-Planckian heat radiators, photocatalytic hydrogen generators, and radiation protective cloaks. In particular it can predict what kind of particles should be used to separate sunlight into a photovoltaically and thermally useful range to enhance the efficiency of solar cells", "start_char_pos": 854, "end_char_pos": 854 } ]
[ 0, 193, 464, 691, 765 ]
1702.06939
1
Myxobacteria are social bacteria, that can glide in 2D and form counter-propagating, interacting waves. Here we present a novel age-structured, continuous macroscopic model for the movement of myxobacteria. The derivation is based on microscopic interaction rules that can be formulated as a particle-based model and set within the SOH (Self-Organized Hydrodynamics) framework. The strength of this combined approach is that microscopic knowledge or data can be incorporated easily into the particle model, whilst the continuous model allows for easy numerical analysis of the different effects. This allows to analyze the influence of a refractory (insensitivity) period following a reversal of movement. Our analysis reveals that the refractory period is not necessary for wave formation, but essential to wave synchronization, indicating separate molecular mechanisms.
Myxobacteria are social bacteria, that can glide in 2D and form counter-propagating, interacting waves. Here we present a novel age-structured, continuous macroscopic model for the movement of myxobacteria. The derivation is based on microscopic interaction rules that can be formulated as a particle-based model and set within the SOH (Self-Organized Hydrodynamics) framework. The strength of this combined approach is that microscopic knowledge or data can be incorporated easily into the particle model, whilst the continuous model allows for easy numerical analysis of the different effects. However we found that the derived macroscopic model lacks a diffusion term in the density equations, which is necessary to control the number of waves, indicating that a higher order approximation during the derivation is crucial. Upon ad-hoc addition of the diffusion term, we found very good agreement between the age-structured model and the biology. In particular we analyzed the influence of a refractory (insensitivity) period following a reversal of movement. Our analysis reveals that the refractory period is not necessary for wave formation, but essential to wave synchronization, indicating separate molecular mechanisms.
[ { "type": "R", "before": "This allows to analyze", "after": "However we found that the derived macroscopic model lacks a diffusion term in the density equations, which is necessary to control the number of waves, indicating that a higher order approximation during the derivation is crucial. Upon ad-hoc addition of the diffusion term, we found very good agreement between the age-structured model and the biology. In particular we analyzed", "start_char_pos": 596, "end_char_pos": 618 } ]
[ 0, 103, 206, 377, 595, 705 ]
1702.07460
1
Allosteric molecules serve as regulators of cellular activity across all domains of life . We present a general theory of allosteric transcriptional regulation that permits quantitative predictions for how physiological responses are tuned to environmental stimuli. To test the model 's predictive power, we apply it to the specific case of the ubiquitous simple repression motif in bacteria . We measure the fold-change in gene expression at different inducer concentrations in a collection of strains that span a range of repressor copy numbers and operator binding strengths . After inferring the inducer dissociation constants using data from one of these strains, we show the broad reach of the model by predicting the induction profiles of all other strains . Finally, we derive an expression for the free energy of allosteric transcription factors which enables us to collapse the data from all of our experiments onto a single master curve , capturing the diverse phenomenology of the induction profiles.
Allosteric regulation is found across all domains of life , yet we still lack simple, predictive theories that directly link the experimentally tunable parameters of a system to its input-output response. To that end, we present a general theory of allosteric transcriptional regulation using the Monod-Wyman-Changeux model. We rigorously test this model using the ubiquitous simple repression motif in bacteria by first predicting the behavior of strains that span a large range of repressor copy numbers and DNA binding strengths and then constructing and measuring their response. Our model not only accurately captures the induction profiles of these strains but also enables us to derive analytic expressions for key properties such as the dynamic range and EC_{50}. Finally, we derive an expression for the free energy of allosteric repressors which enables us to collapse our experimental data onto a single master curve that captures the diverse phenomenology of the induction profiles.
[ { "type": "R", "before": "molecules serve as regulators of cellular activity", "after": "regulation is found", "start_char_pos": 11, "end_char_pos": 61 }, { "type": "R", "before": ". We", "after": ", yet we still lack simple, predictive theories that directly link the experimentally tunable parameters of a system to its input-output response. To that end, we", "start_char_pos": 89, "end_char_pos": 93 }, { "type": "R", "before": "that permits quantitative predictions for how physiological responses are tuned to environmental stimuli. To test the model 's predictive power, we apply it to the specific case of the", "after": "using the Monod-Wyman-Changeux model. We rigorously test this model using the", "start_char_pos": 160, "end_char_pos": 344 }, { "type": "R", "before": ". We measure the fold-change in gene expression at different inducer concentrations in a collection", "after": "by first predicting the behavior", "start_char_pos": 392, "end_char_pos": 491 }, { "type": "A", "before": null, "after": "large", "start_char_pos": 515, "end_char_pos": 515 }, { "type": "R", "before": "operator binding strengths . After inferring the inducer dissociation constants using data from one of these strains, we show the broad reach of the model by predicting", "after": "DNA binding strengths and then constructing and measuring their response. Our model not only accurately captures", "start_char_pos": 552, "end_char_pos": 720 }, { "type": "R", "before": "all other strains", "after": "these strains but also enables us to derive analytic expressions for key properties such as the dynamic range and", "start_char_pos": 747, "end_char_pos": 764 }, { "type": "A", "before": null, "after": "EC_{50", "start_char_pos": 765, "end_char_pos": 765 }, { "type": "R", "before": "transcription factors", "after": "repressors", "start_char_pos": 835, "end_char_pos": 856 }, { "type": "R", "before": "the data from all of our experiments", "after": "our experimental data", "start_char_pos": 886, "end_char_pos": 922 }, { "type": "R", "before": ", capturing", "after": "that captures", "start_char_pos": 950, "end_char_pos": 961 } ]
[ 0, 90, 265, 393, 767 ]
1702.07556
1
We derive an explicit closed-form representation of mean-variance hedging strategies for models whose asset price follows an exponential additive process. Our representation is given in terms of Malliavin calculus for L\'evy processes . In addition, we develop an approximation method to compute mean-variance hedging strategies for exponential L\'evy models , and illustrate numerical results .
We focus on mean-variance hedging problem for models whose asset price follows an exponential additive process. Some representations of mean-variance hedging strategies for jump type models have already been suggested, but none is suited to develop numerical methods of the values of strategies for any given time up to the maturity. In this paper, we aim to derive a new explicit closed-form representation, which enables us to develop an efficient numerical method using the fast Fourier transforms. Note that our representation is described in terms of Malliavin derivatives . In addition, we illustrate numerical results for exponential L\'evy models .
[ { "type": "R", "before": "derive an explicit closed-form representation of", "after": "focus on", "start_char_pos": 3, "end_char_pos": 51 }, { "type": "R", "before": "strategies", "after": "problem", "start_char_pos": 74, "end_char_pos": 84 }, { "type": "R", "before": "Our representation is given", "after": "Some representations of mean-variance hedging strategies for jump type models have already been suggested, but none is suited to develop numerical methods of the values of strategies for any given time up to the maturity. In this paper, we aim to derive a new explicit closed-form representation, which enables us to develop an efficient numerical method using the fast Fourier transforms. Note that our representation is described", "start_char_pos": 155, "end_char_pos": 182 }, { "type": "R", "before": "calculus for L\\'evy processes", "after": "derivatives", "start_char_pos": 205, "end_char_pos": 234 }, { "type": "R", "before": "develop an approximation method to compute mean-variance hedging strategies", "after": "illustrate numerical results", "start_char_pos": 253, "end_char_pos": 328 }, { "type": "D", "before": ", and illustrate numerical results", "after": null, "start_char_pos": 359, "end_char_pos": 393 } ]
[ 0, 154, 236 ]
1702.07936
1
This paper provides a general framework for modeling financial contagion in a system with obligations in multiple illiquid assets . In so doing, we develop a multi-layered financial network that extends the single network of [EN01]. In particular, we develop a financial contagion model with fire sales that allows institutions to both buy and sell assets to cover their liabilities and act as utility maximizers. We also emphasize the value of this general framework in studying a dynamic or multiple maturity setting for financial contagion. We prove that, under standard assumptions, equilibrium portfolio holdings and market prices exist which clear the multi-layered financial system. However, these clearing solutions are not unique in general. We extend the existence results to consider monotonicity, uniqueness, and sensitivity results under fixed exchange rates between assets . We further provide mathematical formulations for regulatory and utility functions satisfying the necessary conditions for these existence and uniqueness results. We demonstrate the value of our model through illustrative numerical case studies. In particular, we study a counterfactual scenario on the event that Greece re-instituted the drachma on a dataset from the European Banking Authority.
This paper provides a general framework for modeling financial contagion in a system with obligations in multiple illiquid assets (e.g., currencies). In so doing, we develop a multi-layered financial network that extends the single network of Eisenberg and Noe (2001) . In particular, we develop a financial contagion model with fire sales that allows institutions to both buy and sell assets to cover their liabilities in the different assets and act as utility maximizers. We also emphasize the value of this general framework in studying a dynamic or multiple maturity setting for financial contagion. We prove that, under standard assumptions, equilibrium portfolio holdings and market prices exist which clear the multi-layered financial system. However, these clearing solutions are not unique in general. We extend the existence results to consider monotonicity, uniqueness, and sensitivity results under fixed exchange rates between assets . We additionally provide results on the sensitivity of the equilibrium portfolio holdings under misspecification of the system parameters (i.e., initial endowments and interbank liabilities) . We further provide mathematical formulations for regulatory and utility functions satisfying the necessary conditions for these existence and uniqueness results. We demonstrate the value of our model through illustrative numerical case studies. In particular, we study a counterfactual scenario on the event that Greece re-instituted the drachma on a dataset from the European Banking Authority.
[ { "type": "R", "before": ".", "after": "(e.g., currencies).", "start_char_pos": 130, "end_char_pos": 131 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD EN01", "after": "Eisenberg and Noe (2001)", "start_char_pos": 225, "end_char_pos": 246 }, { "type": "A", "before": null, "after": "in the different assets", "start_char_pos": 399, "end_char_pos": 399 }, { "type": "A", "before": null, "after": ". We additionally provide results on the sensitivity of the equilibrium portfolio holdings under misspecification of the system parameters (i.e., initial endowments and interbank liabilities)", "start_char_pos": 904, "end_char_pos": 904 } ]
[ 0, 131, 248, 430, 560, 706, 767, 906, 1068, 1151 ]
1702.07936
2
This paper provides a general framework for modeling financial contagion in a system with obligations in multiple illiquid assets (e.g., currencies). In so doing, we develop a multi-layered financial network that extends the single network of Eisenberg and Noe (2001). In particular, we develop a financial contagion model with fire sales that allows institutions to both buy and sell assets to cover their liabilities in the different assets and act as utility maximizers. We also emphasize the value of this general framework in studying a dynamic or multiple maturity setting for financial contagion. We prove that, under standard assumptions , equilibrium portfolio holdings and market prices exist which clear the multi-layered financial system. However, these clearing solutions are not unique in general . We extend the existence results to consider monotonicity, uniqueness, and sensitivity results under fixed exchange rates between assets. We additionally provide results on the sensitivity of the equilibrium portfolio holdings under misspecification of the system parameters (i.e., initial endowments and interbank liabilities) . We further provide mathematical formulations for regulatory and utility functions satisfying the necessary conditions for these existence and uniqueness results. We demonstrate the value of our model through illustrative numerical case studies. In particular, we study a counterfactual scenario on the event that Greece re-instituted the drachma on a dataset from the European Banking Authority.
This paper provides a general framework for modeling financial contagion in a system with obligations in multiple illiquid assets (e.g., currencies). In so doing, we develop a multi-layered financial network that extends the single network of Eisenberg and Noe (2001). In particular, we develop a financial contagion model with fire sales that allows institutions to both buy and sell assets to cover their liabilities in the different assets and act as utility maximizers. We prove that, under standard assumptions and without market impacts , equilibrium portfolio holdings exist and are unique. However, with market impacts, we prove that equilibrium portfolio holdings and market prices exist which clear the multi-layered financial system. In general, though, these clearing solutions are not unique . We extend this result by considering the t\^atonnement process to find the unique attained equilibrium . The attained equilibrium need not be continuous with respect to the initial shock; these points of discontinuity match those stresses in which a financial crisis becomes a systemic crisis. We further provide mathematical formulations for payment rules and utility functions satisfying the necessary conditions for these existence and uniqueness results. We demonstrate the value of our model through illustrative numerical case studies. In particular, we study a counterfactual scenario on the event that Greece re-instituted the drachma on a dataset from the European Banking Authority.
[ { "type": "D", "before": "also emphasize the value of this general framework in studying a dynamic or multiple maturity setting for financial contagion. We", "after": null, "start_char_pos": 477, "end_char_pos": 606 }, { "type": "A", "before": null, "after": "and without market impacts", "start_char_pos": 646, "end_char_pos": 646 }, { "type": "R", "before": "and market", "after": "exist and are unique. However, with market impacts, we prove that equilibrium portfolio holdings and market", "start_char_pos": 680, "end_char_pos": 690 }, { "type": "R", "before": "However,", "after": "In general, though,", "start_char_pos": 752, "end_char_pos": 760 }, { "type": "D", "before": "in general", "after": null, "start_char_pos": 801, "end_char_pos": 811 }, { "type": "R", "before": "the existence results to consider monotonicity, uniqueness, and sensitivity results under fixed exchange rates between assets. We additionally provide results on the sensitivity of the equilibriumportfolio holdings under misspecification of the system parameters (i. e., initial endowments and interbank liabilities)", "after": "this result by considering the t\\^atonnement process to find the unique attained equilibrium", "start_char_pos": 824, "end_char_pos": 1140 }, { "type": "A", "before": null, "after": "The attained equilibrium need not be continuous with respect to the initial shock; these points of discontinuity match those stresses in which a financial crisis becomes a systemic crisis.", "start_char_pos": 1143, "end_char_pos": 1143 }, { "type": "R", "before": "regulatory", "after": "payment rules", "start_char_pos": 1193, "end_char_pos": 1203 } ]
[ 0, 149, 268, 473, 603, 751, 813, 950, 1142, 1305, 1388 ]
1702.08267
1
Using state-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools leaded to the significant progress in detailing chromosome architecture of organisms. However, a gap still remains between the rapidly growing structural data on the chromosome folding and the large-scale organization. Could a part of information on the chromosome folding be obtained directly from underlying genomic DNA sequences abundantly stored in the databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied to both genomic DNA sequences and corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to the replication and transcription and can also be used for the assessment of temperature or supercoiling effects on the chromosome folding. We tested the method on the genome of Escherichia coli K-12 and found good correspondence with the annotated domains/units established experimentally. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge on the chromosome architecture and organization.
Using state-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools leaded to the significant progress in detailing chromosome architecture of organisms. However, a gap still remains between the rapidly growing structural data on the chromosome folding and the large-scale organization. Could a part of information on the chromosome folding be obtained directly from underlying genomic DNA sequences abundantly stored in the databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied to both genomic DNA sequences and corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to the replication and transcription and can also be used for the assessment of temperature or supercoiling effects on the chromosome folding. We tested the method on the genome of Escherichia coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of further abilities of DDFT, the study of large-scale organization for bacteriophage PHIX174 and bacterium Caulobacter crescentus was also added. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge on the chromosome architecture and organization.
[ { "type": "A", "before": null, "after": "As a brief illustration of further abilities of DDFT, the study of large-scale URLanization for bacteriophage PHIX174 and bacterium Caulobacter crescentus was also added.", "start_char_pos": 1209, "end_char_pos": 1209 } ]
[ 0, 187, 320, 469, 561, 716, 869, 1057, 1208 ]
1702.08867
1
Bond rating Transition Probability Matrices (TPMs) are built over a one-year time-frame and for many practical purposes, like the assessment of risk in portfolios , one needs to compute the TPM for a smaller time interval. In the context of continuous time Markov chains (CTMC) several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm by [BladtSorensen2005] for a CTMC with an absorbing state for such estimation. This work's contribution is fourfold . Firstly, we provide directly computable closed form expressions for quantities appearing in the EM algorithm . Previously, these quantities had to be estimated numerically and considerable computational speedups have been gained. Secondly, we prove convergence to a single set of parameters under reasonable conditions . Thirdly, we derive a closed-form expression for the error estimate in the EM algorithm allowing to approximate confidence intervals for the estimation . Finally, we provide a numerical benchmark of our results against other known algorithms, in particular, on several problems related to credit risk. The EM algorithm we propose, padded with the new formulas (and error criteria), is very competitive and outperforms other known algorithms in several metrics .
Bond rating Transition Probability Matrices (TPMs) are built over a one-year time-frame and for many practical purposes, like the assessment of risk in portfolios or the computation of banking Capital Requirements (e.g. the new IFRS 9 regulation) , one needs to compute the TPM and probabilities of default over a smaller time interval. In the context of continuous time Markov chains (CTMC) several deterministic and statistical algorithms have been proposed to estimate the generator matrix. We focus on the Expectation-Maximization (EM) algorithm by Bladt and Sorensen (2005) for a CTMC with an absorbing state for such estimation. This work's contribution is threefold . Firstly, we provide directly computable closed-form expressions for quantities appearing in the EM algorithm and associated information matrix, allowing to easily approximate confidence intervals . Previously, these quantities had to be estimated numerically and considerable computational speedups have been gained. Secondly, we prove convergence to a single set of parameters under very weak conditions (for the TPM problem) . Finally, we provide a numerical benchmark of our results against other known algorithms, in particular, on several problems related to credit risk. The EM algorithm we propose, padded with the new formulas (and error criteria), outperforms other known algorithms in several metrics , in particular, with much less overestimation of probabilities of default in higher ratings than other statistical algorithms .
[ { "type": "A", "before": null, "after": "or the computation of banking Capital Requirements (e.g. the new IFRS 9 regulation)", "start_char_pos": 163, "end_char_pos": 163 }, { "type": "R", "before": "for", "after": "and probabilities of default over", "start_char_pos": 195, "end_char_pos": 198 }, { "type": "R", "before": "\\mbox{%DIFAUXCMD BladtSorensen2005", "after": "Bladt and Sorensen (2005)", "start_char_pos": 440, "end_char_pos": 474 }, { "type": "R", "before": "fourfold", "after": "threefold", "start_char_pos": 559, "end_char_pos": 567 }, { "type": "R", "before": "closed form", "after": "closed-form", "start_char_pos": 610, "end_char_pos": 621 }, { "type": "A", "before": null, "after": "and associated information matrix, allowing to easily approximate confidence intervals", "start_char_pos": 679, "end_char_pos": 679 }, { "type": "R", "before": "reasonable conditions . Thirdly, we derive a closed-form expression for the error estimate in the EM algorithm allowing to approximate confidence intervals for the estimation", "after": "very weak conditions (for the TPM problem)", "start_char_pos": 868, "end_char_pos": 1042 }, { "type": "D", "before": "is very competitive and", "after": null, "start_char_pos": 1273, "end_char_pos": 1296 }, { "type": "A", "before": null, "after": ", in particular, with much less overestimation of probabilities of default in higher ratings than other statistical algorithms", "start_char_pos": 1351, "end_char_pos": 1351 } ]
[ 0, 223, 380, 530, 569, 800, 891, 1044, 1192 ]
1702.08901
1
Under Solvency II the computation of capital requirements is based on value at risk (V@R). V@R is a quantile-based risk measure and neglects extreme risks in the tail. V@R belongs to the family of distortion risk measures. A serious deficiency of V@R is that firms can hide their total downside risk in corporate groups. They can largely reduce their total capital requirements via appropriate transfer agreements within a group structure consisting of sufficiently many entities and thereby circumvent capital regulation. We prove several versions of such a result for general distortion risk measures of V@R-type, explicitly construct suitable allocations of the group portfolio, and finally demonstrate how these findings can be extended beyond distortion risk measures .
Under Solvency II the computation of capital requirements is based on value at risk (V@R). V@R is a quantile-based risk measure and neglects extreme risks in the tail. V@R belongs to the family of distortion risk measures. A serious deficiency of V@R is that firms can hide their total downside risk in corporate networks, unless a consolidated solvency balance sheet is required for each economic scenario. In this case, they can largely reduce their total capital requirements via appropriate transfer agreements within a network structure consisting of sufficiently many entities and thereby circumvent capital regulation. We prove several versions of such a result for general distortion risk measures of V@R-type, explicitly construct suitable allocations of the network portfolio, and finally demonstrate how these findings can be extended beyond distortion risk measures . We also discuss why consolidation requirements cannot completely eliminate this problem. Capital regulation should thus be based on coherent or convex risk measures like average value at risk or expectiles .
[ { "type": "R", "before": "groups. They", "after": "networks, unless a consolidated solvency balance sheet is required for each economic scenario. In this case, they", "start_char_pos": 313, "end_char_pos": 325 }, { "type": "R", "before": "group", "after": "network", "start_char_pos": 423, "end_char_pos": 428 }, { "type": "R", "before": "group", "after": "network", "start_char_pos": 665, "end_char_pos": 670 }, { "type": "A", "before": null, "after": ". We also discuss why consolidation requirements cannot completely eliminate this problem. Capital regulation should thus be based on coherent or convex risk measures like average value at risk or expectiles", "start_char_pos": 773, "end_char_pos": 773 } ]
[ 0, 90, 167, 222, 320, 522 ]
1703.00259
1
In this paper, we investigate conditions to represent derivative price under XVA explicitly. As long as we consider different borrowing/lending rates, XVA problem becomes a non-linear equation and this makes finding explicit solution of XVA difficult. It is shown that the associated valuation problem is actually linear under some proper conditions so that we can have the same complexity in pricing as classical pricing theory. Moreover, the conditions mentioned above is mild in the sense that it can be obtained by choosing adequate covenants between the investor and counterparty .
In this paper, we investigate conditions to convert nonlinear equations of XVA to linear equations. As we consider different borrowing/lending rates, pricing derivatives encounters a non-linear equation and this makes finding an analytic solution for the price with the XVA difficult. Thus, in general, we have to resort to expensive numerical computation. In most previous works, the attempts to find an analytic solution were conducted on a restrictive assumption that the borrowing/lending rates are same. We find conditions to relax the assumption. Moreover, the conditions are mild in the sense that it is often satisfied in practice .
[ { "type": "R", "before": "represent derivative price under XVA explicitly. As long as", "after": "convert nonlinear equations of XVA to linear equations. As", "start_char_pos": 44, "end_char_pos": 103 }, { "type": "R", "before": "XVA problem becomes", "after": "pricing derivatives encounters", "start_char_pos": 151, "end_char_pos": 170 }, { "type": "R", "before": "equa- tion", "after": "equation", "start_char_pos": 184, "end_char_pos": 194 }, { "type": "R", "before": "explicit solution of", "after": "an analytic solution for the price with the", "start_char_pos": 218, "end_char_pos": 238 }, { "type": "R", "before": "It is shown that the associated valuation problem is actually linear under some proper conditions so that we can have the same complexity in pricing as classical pricing theory.", "after": "Thus, in general, we have to resort to expensive numerical computation. In most previous works, the attempts to find an analytic solution were conducted on a restrictive assumption that the borrowing/lending rates are same. We find conditions to relax the assumption.", "start_char_pos": 254, "end_char_pos": 431 }, { "type": "R", "before": "mentioned above is", "after": "are", "start_char_pos": 457, "end_char_pos": 475 }, { "type": "R", "before": "can be obtained by choosing adequate covenants between the investor and counterparty", "after": "is often satisfied in practice", "start_char_pos": 502, "end_char_pos": 586 } ]
[ 0, 92, 253, 431 ]
1703.00259
2
In this paper, we investigate conditionsto convert nonlinear equations of XVA to linear equations. As we consider different borrowing/lendingrates, pricing derivatives encounters a non-linear equation and this makes finding an analytic solution for the price with the XVA difficult. Thus, in general, we have to resort to expensive numerical computation. In most previous works, the attempts to find an analytic solution were conducted on a restrictive assumption that the borrowing / lending rates are same. We find conditions to relax the assumption. Moreover, the conditions are mild in the sense that it is often satisfied in practice .
We discuss a binary nature of funding impacts. Under some conditions, funding is either cost or benefit, i.e., one of the lending / borrowing rates does not play any role in pricing derivatives. When we price derivatives, considering different lending/borrowing rates leads to semi-linear BSDEs and PDEs, so we need to solve the equations numerically. However, once we can guarantee that only one of the rates affects pricing, we can recover linear equations and derive analytic formulae. Moreover, as a byproduct, our results explain how debt value adjustment (DVA) and funding benefits are different. It is often believed that DVA and funding benefits are overlapped but it will be shown that the two components are affected by different mathematical structures of derivative transactions. We will see later that FBA occurs where the payoff is non-increasing, but this relationship becomes weaken as the funding choices of underlying assets are transferred to repo markets .
[ { "type": "R", "before": "In this paper, we investigate conditionsto convert nonlinear equations of XVA to linear equations. As we consider different borrowing/lendingrates, pricing derivatives encounters a non-linear equation and this makes finding an analytic solution for the price with the XVA difficult. Thus, in general, we have to resort to expensive numerical computation. In most previous works, the attempts to find an analytic solution were conducted on a restrictive assumption that the borrowing", "after": "We discuss a binary nature of funding impacts. Under some conditions, funding is either cost or benefit, i.e., one of the lending", "start_char_pos": 0, "end_char_pos": 482 }, { "type": "R", "before": "lending rates are same. We find conditions to relax the assumption. Moreover, the conditions are mild in the sense that it is often satisfied in practice", "after": "borrowing rates does not play any role in pricing derivatives. When we price derivatives, considering different lending/borrowing rates leads to semi-linear BSDEs and PDEs, so we need to solve the equations numerically. However, once we can guarantee that only one of the rates affects pricing, we can recover linear equations and derive analytic formulae. Moreover, as a byproduct, our results explain how debt value adjustment (DVA) and funding benefits are different. It is often believed that DVA and funding benefits are overlapped but it will be shown that the two components are affected by different mathematical structures of derivative transactions. We will see later that FBA occurs where the payoff is non-increasing, but this relationship becomes weaken as the funding choices of underlying assets are transferred to repo markets", "start_char_pos": 485, "end_char_pos": 638 } ]
[ 0, 98, 282, 354, 508, 552 ]
1703.00485
1
This document is a preliminary version of an in-depth review on the state of the art of clustering financial time series and the study of correlation networks. This preliminary document is intended for researchers in this field so that they can feedback to allow amendments, corrections and addition of new material unknown to the authors of this review. The aim of the document is to gather in one place the relevant material that can help the researcher in the field to have a bigger picture, the quantitative researcher to play with this alternative modeling of the financial time series, and the decision maker to leverage the insights obtained from these methods. We hope that this document will form a basis for implementation of an open toolbox of standard tools to study correlations, hierarchies, networks and clustering in financial markets.
This document is an ongoing review on the state of the art of clustering financial time series and the study of correlation and other interaction networks. This preliminary document is intended for researchers in this field so that they can feedback to allow amendments, corrections and addition of new material unknown to the authors of this review. The aim of the document is to gather in one place the relevant material that can help the researcher in the field to have a bigger picture, the quantitative researcher to play with this alternative modeling of the financial time series, and the decision maker to leverage the insights obtained from these methods. We hope that this document will form a basis for implementation of an open toolbox of standard tools to study correlations, hierarchies, networks and clustering in financial markets.
[ { "type": "R", "before": "a preliminary version of an in-depth", "after": "an ongoing", "start_char_pos": 17, "end_char_pos": 53 }, { "type": "A", "before": null, "after": "and other interaction", "start_char_pos": 150, "end_char_pos": 150 } ]
[ 0, 160, 355, 669 ]
1703.00485
2
This document is an ongoing review on the state of the art of clustering financial time series and the study of correlation and other interaction networks. This preliminary document is intended for researchers in this field so that they can feedback to allow amendments, corrections and addition of new material unknown to the authors of this review. The aim of the document is to gather in one place the relevant material that can help the researcher in the field to have a bigger picture, the quantitative researcher to play with this alternative modeling of the financial time series , and the decision maker to leverage the insightsobtained from these methods. We hope that this document will form a basis for implementation of an open toolbox of standard tools to study correlations, hierarchies, networks and clustering in financial markets.
We review the state of the art of clustering financial time series and the study of their correlations alongside other interaction networks. The aim of this review is to gather in one place the relevant material from different fields, e.g. machine learning, information geometry, econophysics, statistical physics, econometrics, behavioral finance. We hope it will help researchers to use more effectively this alternative modeling of the financial time series . Decision makers and quantitative researchers may also be able to leverage its insights. Finally, we also hope that this review will form the basis of an open toolbox to study correlations, hierarchies, networks and clustering in financial markets.
[ { "type": "R", "before": "This document is an ongoing review on", "after": "We review", "start_char_pos": 0, "end_char_pos": 37 }, { "type": "R", "before": "correlation and", "after": "their correlations alongside", "start_char_pos": 112, "end_char_pos": 127 }, { "type": "D", "before": "This preliminary document is intended for researchers in this field so that they can feedback to allow amendments, corrections and addition of new material unknown to the authors of this review.", "after": null, "start_char_pos": 156, "end_char_pos": 350 }, { "type": "R", "before": "the document", "after": "this review", "start_char_pos": 362, "end_char_pos": 374 }, { "type": "R", "before": "that can help the researcher in the field to have a bigger picture, the quantitative researcher to play with", "after": "from different fields, e.g. machine learning, information geometry, econophysics, statistical physics, econometrics, behavioral finance. We hope it will help researchers to use more effectively", "start_char_pos": 423, "end_char_pos": 531 }, { "type": "R", "before": ", and the decision maker to leverage the insightsobtained from these methods. We", "after": ". Decision makers and quantitative researchers may also be able to leverage its insights. Finally, we also", "start_char_pos": 587, "end_char_pos": 667 }, { "type": "R", "before": "document will form a basis for implementation", "after": "review will form the basis", "start_char_pos": 683, "end_char_pos": 728 }, { "type": "D", "before": "of standard tools", "after": null, "start_char_pos": 748, "end_char_pos": 765 } ]
[ 0, 155, 350, 664 ]
1703.00703
1
We present *K-means clustering algorithm and source code by expanding statistical clustering methods applied in URL to quantitative finance. *K-means is essentially deterministic without specifying initial centers, etc. We apply *K-means to extracting cancer signatures from genome data without using nonnegative matrix factorization (NMF). *K-means' computational cots is a faction of NMF's. Using 1,389 published samples for 14 cancer types, we find that 3 cancers (liver cancer, lung cancer and renal cell carcinoma) stand out and do not have cluster-like structures. Two clusters have especially high within-cluster correlations with 11 other cancers indicating common underlying structures. Our approach opens a novel avenue for studying such structures. *K-means is universal and can be applied in other fields. We discuss some potential applications in quantitative finance.
We present *K-means clustering algorithm and source code by expanding statistical clustering methods applied in URL to quantitative finance. *K-means is statistically deterministic without specifying initial centers, etc. We apply *K-means to extracting cancer signatures from genome data without using nonnegative matrix factorization (NMF). *K-means' computational cost is a fraction of NMF's. Using 1,389 published samples for 14 cancer types, we find that 3 cancers (liver cancer, lung cancer and renal cell carcinoma) stand out and do not have cluster-like structures. Two clusters have especially high within-cluster correlations with 11 other cancers indicating common underlying structures. Our approach opens a novel avenue for studying such structures. *K-means is universal and can be applied in other fields. We discuss some potential applications in quantitative finance.
[ { "type": "R", "before": "essentially", "after": "statistically", "start_char_pos": 153, "end_char_pos": 164 }, { "type": "R", "before": "cots is a faction", "after": "cost is a fraction", "start_char_pos": 365, "end_char_pos": 382 } ]
[ 0, 140, 219, 392, 570, 695, 759, 817 ]
1703.00785
1
The rapid urbanization of developing countries coupled with explosion in construction of high rising buildings and the high power usage in them calls for conservation and efficient energy program. Such a programme require monitoring of end-use appliances energy consumption in real-time . The worldwide recent adoption of smart-meter in smart-grid, has led to the rise of Non-Intrusive Load Monitoring (NILM); which enables estimation of appliance-specific power consumption from building's aggregate power consumption reading. NILM provides households with cost-effective real-time monitoring of end-use appliances to help them understand their consumption pattern and become part and parcel of energy conservation strategy . The worldwide recent adoption of smart-meter in smart-grid, has led to the rise of Non-Intrusive Load Monitoring (NILM); which enables estimation of appliance-specific power consumption from building's aggregate power consumption reading. NILM provides households with cost-effective real-time monitoring of end-use appliances to help them understand their consumption pattern and become part and parcel of energy conservation strategy. This paper presents an up to date overview of NILM system and its associated methods and techniques for energy disaggregation problem. This is followed by the review of the state-of-the art NILM algorithms. Furthermore, we review several performance metrics used by NILM researcher to evaluate NILM algorithms and discuss existing benchmarking framework for direct comparison of the state of the art NILM algorithms. Finally, the paper discuss potential NILM use-cases, presents an overview of the public available dataset and highlight challenges and future research directions.
The rapid urbanization of developing countries coupled with explosion in construction of high rising buildings and the high power usage in them calls for conservation and efficient energy program. Such a program require monitoring of end-use appliances energy consumption in real-time . The worldwide recent adoption of smart-meter in smart-grid, has led to the rise of Non-Intrusive Load Monitoring (NILM); which enables estimation of appliance-specific power consumption from building's aggregate power consumption reading. NILM provides households with cost-effective real-time monitoring of end-use appliances to help them understand their consumption pattern and become part and parcel of energy conservation strategy. This paper presents an up to date overview of NILM system and its associated methods and techniques for energy disaggregation problem. This is followed by the review of the state-of-the art NILM algorithms. Furthermore, we review several performance metrics used by NILM researcher to evaluate NILM algorithms and discuss existing benchmarking framework for direct comparison of the state of the art NILM algorithms. Finally, the paper discuss potential NILM use-cases, presents an overview of the public available dataset and highlight challenges and future research directions.
[ { "type": "R", "before": "programme", "after": "program", "start_char_pos": 204, "end_char_pos": 213 }, { "type": "D", "before": ". The worldwide recent adoption of smart-meter in smart-grid, has led to the rise of Non-Intrusive Load Monitoring (NILM); which enables estimation of appliance-specific power consumption from building's aggregate power consumption reading. NILM provides households with cost-effective real-time monitoring of end-use appliances to help them understand their consumption pattern and become part and parcel of energy conservation strategy", "after": null, "start_char_pos": 287, "end_char_pos": 724 } ]
[ 0, 196, 288, 409, 527, 726, 847, 965, 1163, 1298, 1370, 1580 ]
1703.00789
1
The role of proton tunneling in biological catalysis remains an open question usually addressed with the tools of biochemistry. Here, we map the proton motion in a hydrogen-bonded system into a problem of pseudo-spins to allow us to approach the problem using quantum information theory and thermodynamics. We investigate the dynamics of the quantum correlations generated through two hydrogen bonds between a prototypical enzyme and a substrate , and discuss the possibility of utilizing these correlationsas a resource in the catalytic power of the enzyme . In particular, we show that classical changes induced in the binding site of the enzyme spreads the quantum correlations among all of the four hydrogen-bonded atoms . If the enzyme suddenly returns to its initial state after the binding stage, the substrate ends in a quantum superposition state. Environmental effects can then naturally drive the reaction in the forward direction from the substrate to the product without needing the additional catalytic stage that is usually assumed to follow the binding stage . We find that in this scenario the enzyme lowers the activation energy to a much lower value than expected in biochemical reactions .
The role of proton tunneling in biological catalysis is investigated here within the frameworks of quantum information theory and thermodynamics. We consider the quantum correlations generated through two hydrogen bonds between a substrate and a prototypical enzyme that first catalyzes the tautomerization of the substrate to move on to a subsequent catalysis , and discuss how the enzyme can derive its catalytic potency from these correlations . In particular, we show that classical changes induced in the binding site of the enzyme spreads the quantum correlations among all of the four hydrogen-bonded atoms thanks to the directionality of hydrogen bonds . If the enzyme rapidly returns to its initial state after the binding stage, the substrate ends in a new transition state corresponding to a quantum superposition. Open quantum system dynamics can then naturally drive the reaction in the forward direction from the major tautomeric form to the minor tautomeric form without needing any additional catalytic activity . We find that in this scenario the enzyme lowers the activation energy so much that there is no energy barrier left in the tautomerization, even if the quantum correlations quickly decay .
[ { "type": "R", "before": "remains an open question usually addressed with the tools of biochemistry. Here, we map the proton motion in a hydrogen-bonded system into a problem of pseudo-spins to allow us to approach the problem using", "after": "is investigated here within the frameworks of", "start_char_pos": 53, "end_char_pos": 259 }, { "type": "R", "before": "investigate the dynamics of the", "after": "consider the", "start_char_pos": 310, "end_char_pos": 341 }, { "type": "R", "before": "prototypical enzyme and a substrate", "after": "substrate and a prototypical enzyme that first catalyzes the tautomerization of the substrate to move on to a subsequent catalysis", "start_char_pos": 410, "end_char_pos": 445 }, { "type": "R", "before": "the possibility of utilizing these correlationsas a resource in the catalytic power of the enzyme", "after": "how the enzyme can derive its catalytic potency from these correlations", "start_char_pos": 460, "end_char_pos": 557 }, { "type": "A", "before": null, "after": "thanks to the directionality of hydrogen bonds", "start_char_pos": 725, "end_char_pos": 725 }, { "type": "R", "before": "suddenly", "after": "rapidly", "start_char_pos": 742, "end_char_pos": 750 }, { "type": "R", "before": "quantum superposition state. Environmental effects", "after": "new transition state corresponding to a quantum superposition. Open quantum system dynamics", "start_char_pos": 829, "end_char_pos": 879 }, { "type": "R", "before": "substrate to the product without needing the additional catalytic stage that is usually assumed to follow the binding stage", "after": "major tautomeric form to the minor tautomeric form without needing any additional catalytic activity", "start_char_pos": 952, "end_char_pos": 1075 }, { "type": "R", "before": "to a much lower value than expected in biochemical reactions", "after": "so much that there is no energy barrier left in the tautomerization, even if the quantum correlations quickly decay", "start_char_pos": 1148, "end_char_pos": 1208 } ]
[ 0, 127, 306, 559, 727, 857, 1077 ]
1703.01329
1
We propose a method to assess the intrinsic risk carried by a financial position when the agent faces uncertainty about the pricing rule providing its present value. Our construction), where p is the observed initial value of X or is assigned by \pi _{P}(X) under the probability P. }\newline is inspired by a new interpretation of the quasiconvex duality and naturally leads to the introduction of the general class of Value\&Risk measures .
We propose a method to assess the intrinsic risk carried by a financial position X when the agent faces uncertainty about the pricing rule \pi_{\mathbb{P its present value. We introduce a general class of Value\&Risk (V\&R) measures as a function of (p,X,\mathbb{P), where p is the observed initial value of X or is assigned by \pi _{P}(X) under the probability P. }\newline Our approach is inspired by a new interpretation of the quasiconvex duality in a Knightian setting, where a family of probability measures replaces the single reference probability and is then applied to value financial positions. Diametrically, our construction of V\&R measures is based on the selection of a basket of claims to test the reliability of models .
[ { "type": "A", "before": null, "after": "X", "start_char_pos": 81, "end_char_pos": 81 }, { "type": "R", "before": "providing", "after": "\\pi_{\\mathbb{P", "start_char_pos": 138, "end_char_pos": 147 }, { "type": "R", "before": "Our construction", "after": "We introduce a general class of Value\\&Risk (V\\&R) measures as a function of (p,X,\\mathbb{P", "start_char_pos": 167, "end_char_pos": 183 }, { "type": "A", "before": null, "after": "Our approach", "start_char_pos": 294, "end_char_pos": 294 }, { "type": "R", "before": "and naturally leads to the introduction of the general class of Value\\&Risk measures", "after": "in a Knightian setting, where a family of probability measures replaces the single reference probability and is then applied to value financial positions. Diametrically, our construction of V\\&R measures is based on the selection of a basket of claims to test the reliability of models", "start_char_pos": 358, "end_char_pos": 442 } ]
[ 0, 166 ]
1703.01329
2
We propose a method to assess the intrinsic risk carried by a financial position X when the agent faces uncertainty about the pricing rule \pi_{\mathbb{P assigning its present value. We introduce a general class of Value\&Risk (V\&R) measures as a function of (p,X,\mathbb{P), where p is the observed initial value of X or is assigned by \pi _{P}(X) under the probability P. }%DIFDELCMD < \newline %%% Our approach is inspired by a new interpretation of the quasiconvex duality in a Knightian setting, where a family of probability measures replaces the single reference probability and is then applied to value financial positions. Diametrically, our construction of V\&R measures is based on the selection of a basket of claims to test the reliability of models .
We propose a method to assess the intrinsic risk carried by a financial position X when the agent faces uncertainty about the pricing rule assigning its present value. ), where p is the observed initial value of X or is assigned by \pi _{P}(X) under the probability P. }%DIFDELCMD < \newline %%% Our approach is inspired by a new interpretation of the quasiconvex duality in a Knightian setting, where a family of probability measures replaces the single reference probability and is then applied to value financial positions. Diametrically, our construction of Value\&Risk measures is based on the selection of a basket of claims to test the reliability of models . We compare a random payoff X with a given class of derivatives written on X , and use these derivatives to \textquotedblleft test\textquotedblright\ the pricing measures. We further introduce and study a general class of Value\&Risk measures \% R(p,X,\mathbb{P .
[ { "type": "D", "before": "\\pi_{\\mathbb{P", "after": null, "start_char_pos": 139, "end_char_pos": 153 }, { "type": "D", "before": "We introduce a general class of Value\\&Risk (V\\&R) measures as a function of (p,X,\\mathbb{P", "after": null, "start_char_pos": 183, "end_char_pos": 274 }, { "type": "R", "before": "V\\&R", "after": "Value\\&Risk", "start_char_pos": 668, "end_char_pos": 672 }, { "type": "A", "before": null, "after": ". We compare a random payoff X with a given class of derivatives written on X , and use these derivatives to \\textquotedblleft test\\textquotedblright\\ the pricing measures. We further introduce and study a general class of Value\\&Risk measures \\% R(p,X,\\mathbb{P", "start_char_pos": 764, "end_char_pos": 764 } ]
[ 0, 182, 632 ]
1703.01574
1
This paper studies an optimal investment problem under M-CEV with power utility function. Using Laplace transform we obtain explicit expression for optimal strategy in terms of confluent hypergeometric functions. For obtained representations we derive asymptotic and approximation formulas contains only elementary functions and continued fractions. These formulas allow to make analysis of impact of model's parameters and effects of parameters misspecification. In addition we propose some extensions of obtained results that can be applicable for pair trading algorithmic strategies.
This paper studies an optimal investment problem under M-CEV with power utility function. Using Laplace transform we obtain explicit expression for optimal strategy in terms of confluent hypergeometric functions. For obtained representations we derive asymptotic and approximation formulas contains only elementary functions and continued fractions. These formulas allow to make analysis of impact of model's parameters and effects of parameters misspecification. In addition we propose some extensions of obtained results that can be applicable for algorithmic strategies.
[ { "type": "D", "before": "pair trading", "after": null, "start_char_pos": 550, "end_char_pos": 562 } ]
[ 0, 89, 212, 349, 463 ]
1703.02105
1
We study a model of sequential learning with naive agents on a network. The key behavioral assumption is that agents wrongly believe their predecessors act based on only private information, so that correlation between observed actions is ignored. We provide a simple linear formula characterizing agents' actions in terms of paths in the network and use this formula to determine when society learns correctly in the long-run. Because early agents are disproportionately influential, standard network structures can lead to herding on incorrect beliefs . The probability of mislearning increases when link densities are higher and when networks are more integrated. When actions can only communicate limited information, segregated networks often lead to persistent disagreement between groups.
We study a sequential learning model featuring naive agents on a network. The key behavioral assumption is that agents wrongly believe their predecessors act based only on private information, so correlation between observed actions is ignored. We provide a simple linear formula characterizing agents' actions in terms of network paths and use this formula to determine when society eventually learns correctly. Disproportionately influential early agents can cause herding on incorrect beliefs and we compute comparative statics of the probability of herding with respect to network parameters. When networks are segregated, divergent early signals can lead to persistent disagreement between groups.
[ { "type": "R", "before": "model of sequential learning with", "after": "sequential learning model featuring", "start_char_pos": 11, "end_char_pos": 44 }, { "type": "R", "before": "on only", "after": "only on", "start_char_pos": 162, "end_char_pos": 169 }, { "type": "D", "before": "that", "after": null, "start_char_pos": 194, "end_char_pos": 198 }, { "type": "R", "before": "paths in the network", "after": "network paths", "start_char_pos": 326, "end_char_pos": 346 }, { "type": "R", "before": "learns correctly in the long-run. Because early agents are disproportionately influential, standard network structures can lead to", "after": "eventually learns correctly. Disproportionately influential early agents can cause", "start_char_pos": 394, "end_char_pos": 524 }, { "type": "R", "before": ". The probability of mislearning increases when link densities are higher and when networks are more integrated. When actions can only communicate limited information, segregated networks often", "after": "and we compute comparative statics of the probability of herding with respect to network parameters. When networks are segregated, divergent early signals can", "start_char_pos": 554, "end_char_pos": 747 } ]
[ 0, 71, 247, 427, 555, 666 ]
1703.02105
2
We study a sequential learning model featuring naive agents on a network. The key behavioral assumption is that agents wrongly believe their predecessors act based only on private information, so correlation between observed actions is ignored . We provide a simple linear formula characterizing agents' actions in terms of network paths and use this formula to determine when society eventually learns correctly . Disproportionately influential early agents can cause herding on incorrect beliefs and we compute comparative statics of the probability of herding with respect to network parameters. When networks are segregated , divergent early signals can lead to persistent disagreement between groups .
We study a sequential learning model featuring naive agents on a network. Agents wrongly believe their predecessors act solely on private information, so they ignore correlation between observed actions . We provide a simple linear formula expressing agents' actions in terms of network paths and use this formula to completely characterize the set of networks allowing eventual correct learning . Disproportionately influential early agents can cause herding on incorrect beliefs ; we compute comparative statics of the probability of incorrect herding with respect to network parameters. The probability of mislearning increases when link densities are higher and when networks are more integrated. In segregated networks , divergent early signals can lead to persistent disagreement between groups . We conduct an experiment and find that the accuracy gain from social learning is twice as large on sparser networks, which is consistent with our behavioral assumption but inconsistent with the rational learning model .
[ { "type": "R", "before": "The key behavioral assumption is that agents", "after": "Agents", "start_char_pos": 74, "end_char_pos": 118 }, { "type": "R", "before": "based only", "after": "solely", "start_char_pos": 158, "end_char_pos": 168 }, { "type": "A", "before": null, "after": "they ignore", "start_char_pos": 196, "end_char_pos": 196 }, { "type": "D", "before": "is ignored", "after": null, "start_char_pos": 234, "end_char_pos": 244 }, { "type": "R", "before": "characterizing", "after": "expressing", "start_char_pos": 282, "end_char_pos": 296 }, { "type": "R", "before": "determine when society eventually learns correctly", "after": "completely characterize the set of networks allowing eventual correct learning", "start_char_pos": 363, "end_char_pos": 413 }, { "type": "R", "before": "and", "after": ";", "start_char_pos": 499, "end_char_pos": 502 }, { "type": "A", "before": null, "after": "incorrect", "start_char_pos": 556, "end_char_pos": 556 }, { "type": "R", "before": "When networks are segregated", "after": "The probability of mislearning increases when link densities are higher and when networks are more integrated. In segregated networks", "start_char_pos": 601, "end_char_pos": 629 }, { "type": "A", "before": null, "after": ". We conduct an experiment and find that the accuracy gain from social learning is twice as large on sparser networks, which is consistent with our behavioral assumption but inconsistent with the rational learning model", "start_char_pos": 707, "end_char_pos": 707 } ]
[ 0, 73, 246, 600 ]
1703.02105
3
We study a sequential learning model featuring naive agents on a network. Agents wrongly believe their predecessors act solely on private information, so they ignore correlation between observed actions. We provide a simple linear formula expressing agents' actions in terms of network paths and use this formula to completely characterize the set of networks allowing eventual correct learning. Disproportionately influential early agents can cause herding on incorrect beliefs; we compute comparative statics of the probability of incorrect herding with respect to network parameters . The probability of mislearning increases when link densities are higher and when networks are more integrated. In segregated networks, divergent early signals can lead to persistent disagreement between groups. We conduct an experiment and find that the accuracy gain from social learning is twice as large on sparser networks, which is consistent with our behavioral assumption but inconsistent with the rational learning model.
We study a sequential learning model featuring naive agents on a network. Agents wrongly believe their predecessors act solely on private information, so they neglect redundancies among observed actions. We provide a simple linear formula expressing agents' actions in terms of network paths and use this formula to completely characterize the set of networks guaranteeing eventual correct learning. This characterization shows that on almost all networks, disproportionately influential early agents can cause herding on incorrect actions. Going beyond existing social-learning results, we compute the probability of such mislearning exactly. This lets us compare likelihoods of incorrect herding, and hence expected welfare losses, across network structures . The probability of mislearning increases when link densities are higher and when networks are more integrated. In partially segregated networks, divergent early signals can lead to persistent disagreement between groups. We conduct an experiment and find that the accuracy gain from social learning is twice as large on sparser networks, which is consistent with naive inference but inconsistent with the rational-learning model.
[ { "type": "R", "before": "ignore correlation between", "after": "neglect redundancies among", "start_char_pos": 159, "end_char_pos": 185 }, { "type": "R", "before": "allowing", "after": "guaranteeing", "start_char_pos": 360, "end_char_pos": 368 }, { "type": "R", "before": "Disproportionately", "after": "This characterization shows that on almost all networks, disproportionately", "start_char_pos": 396, "end_char_pos": 414 }, { "type": "R", "before": "beliefs; we compute comparative statics of", "after": "actions. Going beyond existing social-learning results, we compute", "start_char_pos": 471, "end_char_pos": 513 }, { "type": "R", "before": "incorrect herding with respect to network parameters", "after": "such mislearning exactly. This lets us compare likelihoods of incorrect herding, and hence expected welfare losses, across network structures", "start_char_pos": 533, "end_char_pos": 585 }, { "type": "A", "before": null, "after": "partially", "start_char_pos": 702, "end_char_pos": 702 }, { "type": "R", "before": "our behavioral assumption", "after": "naive inference", "start_char_pos": 942, "end_char_pos": 967 }, { "type": "R", "before": "rational learning", "after": "rational-learning", "start_char_pos": 994, "end_char_pos": 1011 } ]
[ 0, 73, 203, 395, 479, 587, 698, 799 ]