Columns:
  doc_id           stringlengths 2-10
  revision_depth   stringclasses (5 values)
  before_revision  stringlengths 3-309k
  after_revision   stringlengths 5-309k
  edit_actions     list
  sents_char_pos   list

Rows follow in that field order: doc_id, revision_depth, before_revision, after_revision, edit_actions, sents_char_pos.
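Below is a minimal sketch (not part of the dataset release) of how these fields might be consumed. The semantics are inferred from the visible rows: each edit action carries a type ("R" = replace, "A" = add, "D" = delete) plus character offsets into before_revision, and sents_char_pos appears to hold sentence start offsets into before_revision. The function names are hypothetical, and because the actions look token-aligned, a naive character-level replay may differ in whitespace from the released after_revision strings.

import json


def apply_edit_actions(before: str, actions: list[dict]) -> str:
    """Rebuild an approximation of after_revision from before_revision.

    Assumes actions are non-overlapping and indexed against `before`.
    """
    out, cursor = [], 0
    for act in sorted(actions, key=lambda a: a["start_char_pos"]):
        out.append(before[cursor:act["start_char_pos"]])  # copy unchanged span
        if act.get("after") is not None:                   # "R" and "A" contribute new text
            out.append(act["after"])
        cursor = act["end_char_pos"]                        # skip replaced/deleted span ("A" has start == end)
    out.append(before[cursor:])                             # copy unchanged suffix
    return "".join(out)


def split_sentences(before: str, sents_char_pos: list[int]) -> list[str]:
    """Split before_revision at the recorded sentence offsets."""
    bounds = list(sents_char_pos) + [len(before)]
    return [before[s:e].strip() for s, e in zip(bounds, bounds[1:])]


# Example usage, assuming one JSON record per row:
# row = json.loads(line)
# rebuilt = apply_edit_actions(row["before_revision"], row["edit_actions"])
# sentences = split_sentences(row["before_revision"], row["sents_char_pos"])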
1602.00782
1
We empirically show the power of the equally weighted S P 500 portfolio over Sharpe's market capitalization weighted S P 500 portfolio. We proceed to consider the MaxMedian rule, a nonproprietary rule which was designed for the investor who wishes to do his/her own investing on a laptop with the purchase of only 20 stocks. Shockingly, the rule beats equal weight by a factor of 1.24 and posts annual returns that exceed even those once allegedly promised by Bernie Madoff .
We empirically show the superiority of the equally weighted S \& P 500 portfolio over Sharpe's market capitalization weighted S \& P 500 portfolio. We proceed to consider the MaxMedian rule, a non-proprietary rule designed for the investor who wishes to do his/her own investing on a laptop with the purchase of only 20 stocks. Rather surprisingly, over the 1958-2016 horizon, the cumulative returns of MaxMedian beat those of the equally weighted S\&P 500 portfolio by a factor of 1.15 .
[ { "type": "R", "before": "power", "after": "superiority", "start_char_pos": 24, "end_char_pos": 29 }, { "type": "A", "before": null, "after": "\\&", "start_char_pos": 56, "end_char_pos": 56 }, { "type": "A", "before": null, "after": "\\&", "start_char_pos": 120, "end_char_pos": 120 }, { "type": "R", "before": "nonproprietary rule which was", "after": "non-proprietary rule", "start_char_pos": 183, "end_char_pos": 212 }, { "type": "R", "before": "Shockingly, the rule beats equal weight", "after": "Rather surprisingly, over the 1958-2016 horizon, the cumulative returns of MaxMedian beat those of the equally weighted S\\&P 500 portfolio", "start_char_pos": 327, "end_char_pos": 366 }, { "type": "R", "before": "1.24 and posts annual returns that exceed even those once allegedly promised by Bernie Madoff", "after": "1.15", "start_char_pos": 382, "end_char_pos": 475 } ]
[ 0, 137, 326 ]
1602.00931
1
Should employers pay their employees better? Although this question might appear provoking because lowering production costs remains a cornerstone of the contemporary economy, we present new evidence highlighting the benefits a company might reap by paying its employees better. We introduce an original methodology that uses firm economic and financial indicators to build factors that are more uncorrelated than in the classical Fama and French setting. As a result, we uncover a new anomaly in asset pricing that is linked to the average employee remuneration: the more a company spends on salaries and benefits per employee, the better its stock performs, on average. We ensure that the abnormal performance associated with employee remuneration is not explained by other factors, such as stock indexes, capitalization, book-to-market value , or momentum. A plausible rational explanation of the remuneration anomaly involves the positive correlation between pay and employee performance.
We uncover a new anomaly in asset pricing that is linked to the remuneration: the more a company spends on salaries and benefits per employee, the better its stock performs, on average. Moreover, the companies adopting similar remuneration policies share a common risk, which is comparable to that of the value premium. For this purpose,we set up an original methodology that uses firm financial characteristics to build factors that are less correlated than in the standard asset pricing methodology. We quantify the importance of these factors from an asset pricing perspective by introducing the factor correlation level as a directly accessible proxy of eigenvalues of the correlation matrix. A rational explanation of the remuneration anomaly involves the positive correlation between pay and employee performance.
[ { "type": "R", "before": "Should employers pay their employees better? Although this question might appear provoking because lowering production costs remains a cornerstone of the contemporary economy, we present new evidence highlighting the benefits a company might reap by paying its employees better. We introduce an original methodology that uses firm economic and financial indicators to build factors that are more uncorrelated than in the classical Fama and French setting. As a result, we", "after": "We", "start_char_pos": 0, "end_char_pos": 471 }, { "type": "D", "before": "average employee", "after": null, "start_char_pos": 533, "end_char_pos": 549 }, { "type": "R", "before": "We ensure that the abnormal performance associated with employee remuneration is not explained by other factors, such as stock indexes, capitalization, book-to-market value , or momentum. A plausible", "after": "Moreover, the companies adopting similar remuneration policies share a common risk, which is comparable to that of the value premium. For this purpose,we set up an original methodology that uses firm financial characteristics to build factors that are less correlated than in the standard asset pricing methodology. We quantify the importance of these factors from an asset pricing perspective by introducing the factor correlation level as a directly accessible proxy of eigenvalues of the correlation matrix. A", "start_char_pos": 672, "end_char_pos": 871 } ]
[ 0, 44, 278, 455, 671, 859 ]
1602.01578
1
We discuss the distribution of commuting distances and its relation to income. Using data from Great Britain, US and Denmark , we show that the commuting distance is (i) broadly distributed with a tail decaying typically as 1/r^\gamma with \gamma \approx 3 and (ii) an average growing slowly as a power law with an exponent less than one that depends on the country considered. The classical theory for job search is based on the idea that workers evaluate potential jobs on the wage as they arrive sequentially through time . Extending this model with space, we obtain predictions that are strongly contradicted by our empirical findings. We then propose an alternative model that is based on the idea that workers evaluate potential jobs based on a quality aspect and that workers search for jobs sequentially across space. We assume that the density of potential jobs depends on the skills of the worker and decreases with the wage. The predicted distribution of commuting distances decays as 1/r ^3 and is independent of the distribution of the quality of jobs. We find our alternative model to be in agreement with our data. This type of approach opens new perspectives for the modeling of urban phenomena .
We discuss the distribution of commuting distances and its relation to income. Using data from Denmark, the UK, and the US , we show that the commuting distance is (i) broadly distributed with a slow decaying tail that can be fitted by a power law with exponent \gamma \approx 3 and (ii) an average growing slowly as a power law with an exponent less than one that depends on the country considered. The classical theory for job search is based on the idea that workers evaluate the wage of potential jobs as they arrive sequentially through time , and extending this model with space, we obtain predictions that are strongly contradicted by our empirical findings. We propose an alternative model that is based on the idea that workers evaluate potential jobs based on a quality aspect and that workers search for jobs sequentially across space. We also assume that the density of potential jobs depends on the skills of the worker and decreases with the wage. The predicted distribution of commuting distances decays as 1/r ^{3 and is independent of the distribution of the quality of jobs. We find our alternative model to be in agreement with our data. This type of approach opens new perspectives for the modeling of mobility .
[ { "type": "R", "before": "Great Britain, US and Denmark", "after": "Denmark, the UK, and the US", "start_char_pos": 95, "end_char_pos": 124 }, { "type": "R", "before": "tail decaying typically as 1/r^\\gamma with", "after": "slow decaying tail that can be fitted by a power law with exponent", "start_char_pos": 197, "end_char_pos": 239 }, { "type": "R", "before": "potential jobs on the wage", "after": "the wage of potential jobs", "start_char_pos": 457, "end_char_pos": 483 }, { "type": "R", "before": ". Extending", "after": ", and extending", "start_char_pos": 525, "end_char_pos": 536 }, { "type": "D", "before": "then", "after": null, "start_char_pos": 643, "end_char_pos": 647 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 829, "end_char_pos": 829 }, { "type": "R", "before": "^3", "after": "^{3", "start_char_pos": 1001, "end_char_pos": 1003 }, { "type": "R", "before": "urban phenomena", "after": "mobility", "start_char_pos": 1196, "end_char_pos": 1211 } ]
[ 0, 78, 377, 639, 825, 936, 1066, 1130 ]
1602.02185
1
Estimation of the covariance matrix of asset returns from high frequency data is complicated by asynchronous returns, market microstructure noise and jumps. One technique for addressing both asynchronous returns and market microstructure is the Kalman-EM (KEM) algorithm. However the KEM approach assumes log-normal prices and does not address jumps in the return process which can corrupt estimation of the covariance matrix. In this paper we extend the KEM algorithm to price models that include jumps. We propose two sparse Kalman filtering approaches to this problem. In the first approach we develop a Kalman Expectation Conditional Maximization (KECM) algorithm to determine the unknown covariance as well as detecting the jumps. For this algorithm we consider Laplace and the spike and slab jump models, both of which promote sparse estimates of the jumps. In the second method we take a Bayesian approach and use Gibbs sampling to sample from the posterior distribution of the covariance matrix under the spike and slab jump model. Numerical results using simulated data show that each of these approaches provide for improved covariance estimation relative to the KEM method in a variety of settings where jumps occur.
Estimation of the covariance matrix of asset returns from high frequency data is complicated by asynchronous returns, market mi- crostructure noise and jumps. One technique for addressing both asynchronous returns and market microstructure is the Kalman-EM (KEM) algorithm. However the KEM approach assumes log-normal prices and does not address jumps in the return process which can corrupt estimation of the covariance matrix. In this paper we extend the KEM algorithm to price models that include jumps. We propose two sparse Kalman filtering approaches to this problem. In the first approach we develop a Kalman Expectation Conditional Maximization (KECM) algorithm to determine the un- known covariance as well as detecting the jumps. For this algorithm we consider Laplace and the spike and slab jump models, both of which promote sparse estimates of the jumps. In the second method we take a Bayesian approach and use Gibbs sampling to sample from the posterior distribution of the covariance matrix under the spike and slab jump model. Numerical results using simulated data show that each of these approaches provide for improved covariance estima- tion relative to the KEM method in a variety of settings where jumps occur.
[ { "type": "R", "before": "microstructure", "after": "mi- crostructure", "start_char_pos": 125, "end_char_pos": 139 }, { "type": "R", "before": "unknown", "after": "un- known", "start_char_pos": 685, "end_char_pos": 692 }, { "type": "R", "before": "estimation", "after": "estima- tion", "start_char_pos": 1146, "end_char_pos": 1156 } ]
[ 0, 156, 271, 426, 504, 571, 735, 863, 1039 ]
1602.02192
1
We obtain a lower asymptotic bound on the decay rate of the probability of a portfolio's underperformance against a benchmark over a large time horizon. It is assumed that the prices of the securities are governed by geometric Brownian motions with the coefficients depending on an economic factor, possibly nonlinearly. The bound is tight so that there exists a portfolio that optimises the decay rate. That portfolio is also risk-sensitive optimal .
We obtain a lower asymptotic bound on the decay rate of the probability of a portfolio's underperformance against a benchmark over a large time horizon. It is assumed that the prices of the securities are governed by geometric Brownian motions with the coefficients depending on an economic factor, possibly nonlinearly. The economic factor is modelled with a general Ito equation. The bound is shown to be tight. More specifically, epsilon-optimal portfolios are obtained under additional conditions .
[ { "type": "R", "before": "bound is tight so that there exists a portfolio that optimises the decay rate. That portfolio is also risk-sensitive optimal", "after": "economic factor is modelled with a general Ito equation. The bound is shown to be tight. More specifically, epsilon-optimal portfolios are obtained under additional conditions", "start_char_pos": 325, "end_char_pos": 449 } ]
[ 0, 152, 320, 403 ]
1602.03214
1
Multiple myeloma , a type of plasma cell cancer, is associated with many health challenges, including damage to the kidney by tubulointerstitial fibrosis. We develop an ordinary differential equation (ODE) model which captures the qualitative behavior of the cell populations involved. Specifically, we model the interaction between cells in the proximal tubule of the kidney and free light chains produced by the myeloma monoclonal protein .
Multiple myeloma (MM), a plasma cell cancer, is associated with many health challenges, including damage to the kidney by tubulointerstitial fibrosis. We develop a mathematical model which captures the qualitative behavior of the cell and protein populations involved. Specifically, we model the interaction between cells in the proximal tubule of the kidney , free light chains , renal fibroblasts, and myeloma cells. We analyze the model for steady-state solutions to find a mathematically and biologically relevant stable steady-state solution. This foundational model provides a representation of dynamics between key populations in tubulointerstitial fibrosis that demonstrates how these populations interact to affect patient prognosis in patients with MM and renal impairment .
[ { "type": "R", "before": ", a type of", "after": "(MM), a", "start_char_pos": 17, "end_char_pos": 28 }, { "type": "R", "before": "an ordinary differential equation (ODE)", "after": "a mathematical", "start_char_pos": 166, "end_char_pos": 205 }, { "type": "A", "before": null, "after": "and protein", "start_char_pos": 264, "end_char_pos": 264 }, { "type": "R", "before": "and", "after": ",", "start_char_pos": 377, "end_char_pos": 380 }, { "type": "R", "before": "produced by the myeloma monoclonal protein", "after": ", renal fibroblasts, and myeloma cells. We analyze the model for steady-state solutions to find a mathematically and biologically relevant stable steady-state solution. This foundational model provides a representation of dynamics between key populations in tubulointerstitial fibrosis that demonstrates how these populations interact to affect patient prognosis in patients with MM and renal impairment", "start_char_pos": 399, "end_char_pos": 441 } ]
[ 0, 154, 286 ]
1602.05477
1
We study comonotonicity of regulatory risk measures in terms of the primitives of the theory of risk measures: acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. These findings seem to call for a renewed discussion about the meaning and the role of comonotonicity within the theory of regulatory risk measures .
We study comonotonicity of risk measures in the context of capital adequacy in terms of the primitives of the theory of risk measures: acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. These findings seem to call for a renewed discussion about the meaning and the role of comonotonicity within a capital adequacy framework .
[ { "type": "D", "before": "regulatory", "after": null, "start_char_pos": 27, "end_char_pos": 37 }, { "type": "A", "before": null, "after": "the context of capital adequacy in", "start_char_pos": 55, "end_char_pos": 55 }, { "type": "R", "before": "the theory of regulatory risk measures", "after": "a capital adequacy framework", "start_char_pos": 513, "end_char_pos": 551 } ]
[ 0, 148, 303, 403 ]
1602.05477
2
We study comonotonicity of risk measures in the context of capital adequacy in terms of the primitives of the theory of risk measures : acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. These findings severely question the assumption of comonotonicity in a world of "discounted" capital positions and seem to call for a renewed discussion about the meaning and the role of comonotonicity within a capital adequacy framework .
We study comonotonicity of risk measures in terms of the primitives of the theory : acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. The incompatibility with risky eligible assets is systematic whenever the acceptability criterion is based on Value at Risk or any convex distortion risk measures such as Expected Shortfall. These findings show the limitations of the concept of comonotonicity in a world without risk-free assets and raise questions about the meaning and the role of comonotonicity within a capital adequacy framework . We also point out some potential traps when using comonotonicity for "discounted" capital positions .
[ { "type": "D", "before": "the context of capital adequacy in", "after": null, "start_char_pos": 44, "end_char_pos": 78 }, { "type": "D", "before": "of risk measures", "after": null, "start_char_pos": 117, "end_char_pos": 133 }, { "type": "R", "before": "These findings severely question the assumption of", "after": "The incompatibility with risky eligible assets is systematic whenever the acceptability criterion is based on Value at Risk or any convex distortion risk measures such as Expected Shortfall. These findings show the limitations of the concept of", "start_char_pos": 428, "end_char_pos": 478 }, { "type": "R", "before": "of \"discounted\" capital positions and seem to call for a renewed discussion", "after": "without risk-free assets and raise questions", "start_char_pos": 505, "end_char_pos": 580 }, { "type": "A", "before": null, "after": ". We also point out some potential traps when using comonotonicity for \"discounted\" capital positions", "start_char_pos": 666, "end_char_pos": 666 } ]
[ 0, 172, 327, 427 ]
1602.05477
3
We study comonotonicity of risk measures in terms of the primitives of the theory: acceptance sets and eligible assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. The incompatibility with risky eligible assets is systematic whenever the acceptability criterion is based on Value at Risk or any convex distortion risk measures such as Expected Shortfall. These findings show the limitations of the concept of comonotonicity in a world without risk-free assets and raise questions about the meaning and the role of comonotonicity within a capital adequacy framework. We also point out some potential traps when using comonotonicity for "discounted" capital positions .
Within the context of capital adequacy, we study comonotonicity of risk measures in terms of the primitives of the theory: acceptance sets and eligible , or reference, assets. We show that comonotonicity cannot be characterized by the properties of the acceptance set alone and heavily depends on the choice of the eligible asset. In fact, in many important cases, comonotonicity is only compatible with risk-free eligible assets. The incompatibility with risky eligible assets is systematic whenever the acceptability criterion is based on Value at Risk or any convex distortion risk measure such as Expected Shortfall. These findings qualify and arguably call for a critical appraisal of the meaning and the role of comonotonicity within a capital adequacy context .
[ { "type": "R", "before": "We", "after": "Within the context of capital adequacy, we", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "A", "before": null, "after": ", or reference,", "start_char_pos": 112, "end_char_pos": 112 }, { "type": "R", "before": "measures", "after": "measure", "start_char_pos": 530, "end_char_pos": 538 }, { "type": "R", "before": "show the limitations of the concept of comonotonicity in a world without risk-free assets and raise questions about the", "after": "qualify and arguably call for a critical appraisal of the", "start_char_pos": 582, "end_char_pos": 701 }, { "type": "R", "before": "framework. We also point out some potential traps when using comonotonicity for \"discounted\" capital positions", "after": "context", "start_char_pos": 767, "end_char_pos": 877 } ]
[ 0, 120, 275, 375, 566, 777 ]
1602.05489
1
We study how co-jumps influence covariance and correlation in currency markets. We propose a new wavelet-based estimator of quadratic covariation that is able to disentangle the continuous part of quadratic covariation from co-jumps . The proposed estimator is able to identify the statistically significant co-jumps that impact covariance structures by using bootstrapped test statistics. Empirical findings reveal the behavior of co-jumps during Asian, European and U.S. trading sessions. Our results show that the impact of co-jumps on correlations increased during the years 2012 - 2015. Hence appropriately estimating co-jumps is becoming a crucial step in understanding dependence in currency markets.
We quantify how co-jumps impact correlations in currency markets. To disentangle the continuous part of quadratic covariation from co-jumps , and study the influence of co-jumps on correlations, we propose a new wavelet-based estimator . The proposed estimation framework is able to localize the co-jumps very precisely through wavelet coefficients and identify the statistically significant co-jumps using bootstrapped test statistics. Empirical findings reveal the different behaviors of co-jumps during Asian, European and U.S. trading sessions. Importantly, we document that co-jumps significantly inflate correlation in currency markets.
[ { "type": "R", "before": "study", "after": "quantify", "start_char_pos": 3, "end_char_pos": 8 }, { "type": "R", "before": "influence covariance and correlation", "after": "impact correlations", "start_char_pos": 22, "end_char_pos": 58 }, { "type": "R", "before": "We propose a new wavelet-based estimator of quadratic covariation that is able to", "after": "To", "start_char_pos": 80, "end_char_pos": 161 }, { "type": "A", "before": null, "after": ", and study the influence of co-jumps on correlations, we propose a new wavelet-based estimator", "start_char_pos": 233, "end_char_pos": 233 }, { "type": "R", "before": "estimator", "after": "estimation framework", "start_char_pos": 249, "end_char_pos": 258 }, { "type": "A", "before": null, "after": "localize the co-jumps very precisely through wavelet coefficients and", "start_char_pos": 270, "end_char_pos": 270 }, { "type": "D", "before": "that impact covariance structures by", "after": null, "start_char_pos": 319, "end_char_pos": 355 }, { "type": "R", "before": "behavior", "after": "different behaviors", "start_char_pos": 422, "end_char_pos": 430 }, { "type": "R", "before": "Our results show that the impact of", "after": "Importantly, we document that", "start_char_pos": 493, "end_char_pos": 528 }, { "type": "R", "before": "on correlations increased during the years 2012 - 2015. Hence appropriately estimating co-jumps is becoming a crucial step in understanding dependence", "after": "significantly inflate correlation", "start_char_pos": 538, "end_char_pos": 688 } ]
[ 0, 79, 235, 391, 492, 593 ]
1602.05489
2
We quantify how co-jumps impact correlations in currency markets. To disentangle the continuous part of quadratic covariation from co-jumps, and study the influence of co-jumps on correlations, we propose a new wavelet-based estimator. The proposed estimation framework is able to localize the co-jumps very precisely through wavelet coefficients and identify the statistically significant co-jumps using bootstrapped test statistics . Empirical findings reveal the different behaviors of co-jumps during Asian, European and U.S. trading sessions. Importantly, we document that co-jumps significantly inflate correlation in currency markets.
We quantify how co-jumps impact correlations in currency markets. To disentangle the continuous part of quadratic covariation from co-jumps, and study the influence of co-jumps on correlations, we propose a new wavelet-based estimator. The proposed estimation framework is able to localize the co-jumps very precisely through wavelet coefficients and identify statistically significant co-jumps . Empirical findings reveal the different behaviors of co-jumps during Asian, European and U.S. trading sessions. Importantly, we document that co-jumps significantly influence correlation in currency markets.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 360, "end_char_pos": 363 }, { "type": "D", "before": "using bootstrapped test statistics", "after": null, "start_char_pos": 399, "end_char_pos": 433 }, { "type": "R", "before": "inflate", "after": "influence", "start_char_pos": 601, "end_char_pos": 608 } ]
[ 0, 65, 235, 547 ]
1602.05758
1
We consider a classical market model of mathematical economics with infinitely many assets: the Arbitrage Pricing Model. We study optimal investment under an expected utility criterion and prove the existence of optimal strategies. Previous results require a certain restrictive hypothesis on the non-triviality of the tails of asset return distributions. Using a different method, we manage to remove this hypothesis in the present article , at the price of stronger assumptions on the moments of asset returns . We thus complement earlier results .
We consider a popular model of microeconomics with countably many assets: the Arbitrage Pricing Model. We study the problem of optimal investment under an expected utility criterion and look for conditions ensuring the existence of optimal strategies. Previous results required a certain restrictive hypothesis on the tails of asset return distributions. Using a different method, we manage to remove this hypothesis , at the price of stronger assumptions on the moments of asset returns .
[ { "type": "R", "before": "classical market model of mathematical economics with infinitely", "after": "popular model of microeconomics with countably", "start_char_pos": 14, "end_char_pos": 78 }, { "type": "A", "before": null, "after": "the problem of", "start_char_pos": 130, "end_char_pos": 130 }, { "type": "R", "before": "prove", "after": "look for conditions ensuring", "start_char_pos": 190, "end_char_pos": 195 }, { "type": "R", "before": "require", "after": "required", "start_char_pos": 250, "end_char_pos": 257 }, { "type": "D", "before": "non-triviality of the", "after": null, "start_char_pos": 298, "end_char_pos": 319 }, { "type": "D", "before": "in the present article", "after": null, "start_char_pos": 419, "end_char_pos": 441 }, { "type": "D", "before": ". We thus complement earlier results", "after": null, "start_char_pos": 513, "end_char_pos": 549 } ]
[ 0, 120, 232, 356, 514 ]
1602.05883
1
There is growing consensus that processes of market integration and risk diversification may come at the price of more systemic risk. Indeed, financial institutions are interconnected in a network of contracts where distress can either be amplified or dampened . However, a mathematical understanding of instability in relation to the network topology is still lacking. In a model financial network, we show that the origin of instabilityresides in the presence of specific types of cyclical structures , regardless of many of the details of the distress propagation mechanism. In particular, we show the existence of trajectories in the space of graphs along which a complex network turns from stable to unstable, although at each point along the trajectory its nodes satisfy constraints that would apparently make them individually stable. In the financial context, our findings have important implications for policies aimed at increasing financial stability . We illustrate the propositions on a sample dataset for the top 50 EU listed banks between 2008 and 2013. More in general, our results shed light on previous findings on the instability of model ecosystems and are relevant for a broad class of dynamical processes on complex networks .
Following the financial crisis of 2007-2008, a deep analogy between the origins of instability in financial systems and complex ecosystems has been pointed out: in both cases, topological features of network structures influence how easily distress can spread within the system . However, in financial network models, the details of how financial institutions interact typically play a decisive role, and a general understanding of precisely how network topology creates instability remains lacking. Here we show how processes that are widely believed to stabilise the financial system, i.e. market integration and diversification, can actually drive it towards instability, as they contribute to create cyclical structures which tend to amplify financial distress, thereby undermining systemic stability and making large crises more likely. This result holds irrespective of the details of how institutions interact, showing that policy-relevant analysis of the factors affecting financial stability can be carried out while abstracting away from such details .
[ { "type": "R", "before": "There is growing consensus that processes of market integration and risk diversification may come at the price of more systemic risk. Indeed, financial institutions are interconnected in a network of contracts where distress can either be amplified or dampened", "after": "Following the financial crisis of 2007-2008, a deep analogy between the origins of instability in financial systems and complex ecosystems has been pointed out: in both cases, topological features of network structures influence how easily distress can spread within the system", "start_char_pos": 0, "end_char_pos": 260 }, { "type": "R", "before": "a mathematical understanding of instability in relation to the network topology is still lacking. In a model financial network, we show that", "after": "in financial network models, the details of how financial institutions interact typically play a decisive role, and a general understanding of precisely how network topology creates instability remains lacking. Here we show how processes that are widely believed to stabilise the financial system, i.e. market integration and diversification, can actually drive it towards instability, as they contribute to create cyclical structures which tend to amplify financial distress, thereby undermining systemic stability and making large crises more likely. This result holds irrespective of", "start_char_pos": 272, "end_char_pos": 412 }, { "type": "D", "before": "origin of instabilityresides in the presence of specific types of cyclical structures , regardless of many of the", "after": null, "start_char_pos": 417, "end_char_pos": 530 }, { "type": "R", "before": "the distress propagation mechanism. In particular, we show the existence of trajectories in the space of graphs along which a complex network turns from stable to unstable, although at each point along the trajectory its nodes satisfy constraints that would apparently make them individually stable. In the financial context, our findings have important implications for policies aimed at increasing financial stability . We illustrate the propositions on a sample dataset for the top 50 EU listed banks between 2008 and 2013. More in general, our results shed light on previous findings on the instability of model ecosystems and are relevant for a broad class of dynamical processes on complex networks", "after": "how institutions interact, showing that policy-relevant analysis of the factors affecting financial stability can be carried out while abstracting away from such details", "start_char_pos": 542, "end_char_pos": 1246 } ]
[ 0, 133, 262, 369, 577, 841, 963, 1068 ]
1602.05998
1
This paper specializes a number of earlier contributions to the theory of valuation of financial products in presence of credit risk, repurchase agreements and funding costs. Earlier works, including our own, pointed to the need of tools such as Backward Stochastic Differential Equations (BSDEs) or semi-linear Partial Differential Equations (PDEs), which in practice translate to ad-hoc numerical methods that are time-consuming and which render the full valuation and risk analysis difficult. We specialize here the valuation framework to benchmark derivatives and we show that, under a number of simplifying assumptions, the valuation paradigm can be recast as a Black-Scholes model with dividends. In turn, this allows for a detailed valuation analysis, stress testing and risk analysis via sensitivities . We refer to the full paper 5%DIFDELCMD < ] %%% for a more complete mathematical treatment .
We take the holistic approach of computing an OTC claim value that incorporates credit and funding liquidity risks and their interplays, instead of forcing individual price adjustments: CVA, DVA, FVA, KVA. The resulting nonlinear mathematical problem features semilinear PDEs and FBSDEs. We show that for the benchmark vulnerable claim there is an analytical solution, and we express it in terms of the Black-Scholes formula with dividends. This allows for a detailed valuation analysis, stress testing and risk analysis via sensitivities %DIFDELCMD < ] %%% .
[ { "type": "R", "before": "This paper specializes a number of earlier contributions to the theory of valuation of financial products in presence of credit risk, repurchase agreements and funding costs. Earlier works, including our own, pointed to the need of tools such as Backward Stochastic Differential Equations (BSDEs) or semi-linear Partial Differential Equations (PDEs), which in practice translate to ad-hoc numerical methods that are time-consuming and which render the full valuation and risk analysis difficult. We specialize here the valuation framework to benchmark derivatives and we show that, under a number of simplifying assumptions, the valuation paradigm can be recast as a", "after": "We take the holistic approach of computing an OTC claim value that incorporates credit and funding liquidity risks and their interplays, instead of forcing individual price adjustments: CVA, DVA, FVA, KVA. The resulting nonlinear mathematical problem features semilinear PDEs and FBSDEs. We show that for the benchmark vulnerable claim there is an analytical solution, and we express it in terms of the", "start_char_pos": 0, "end_char_pos": 666 }, { "type": "R", "before": "model", "after": "formula", "start_char_pos": 681, "end_char_pos": 686 }, { "type": "R", "before": "In turn, this", "after": "This", "start_char_pos": 703, "end_char_pos": 716 }, { "type": "D", "before": ". We refer to the full paper", "after": null, "start_char_pos": 810, "end_char_pos": 838 }, { "type": "D", "before": "5", "after": null, "start_char_pos": 839, "end_char_pos": 840 }, { "type": "D", "before": "for a more complete mathematical treatment", "after": null, "start_char_pos": 859, "end_char_pos": 901 } ]
[ 0, 174, 495, 702, 811 ]
1602.06186
1
When optimizing the Sharpe ratio over a k-dimensional parameter space the thus obtained in-sample Sharpe ratio tends to be higher than what will be captured out-of-sample. For two reasons: the estimated parameter will be skewed towards the noise in the in-sample data (noise fitting) and, second, the estimated parameter will deviate from the optimal parameter (estimation error). This article derives a simple correction for both. Selecting a model with the highest corrected Sharpe selects the model with the highest expected out-of-sample Sharpe in the same way as selection by Akaike Information Criterion does for the log-likelihood as measure of fit.
We derive (1) an unbiased estimator for the out-of-sample Sharpe ratio when the in-sample Sharpe ratio is obtained by optimizing over a k-dimensional parameter space . The estimator corrects the in-sample Sharpe ratio for both: noise fit and estimation error. We then show (2) how to use the corrected Sharpe ratio as model selection criterion analogous to the Akaike Information Criterion (AIC). Selecting a model with the highest corrected Sharpe ratio selects the model with the highest estimated out-of-sample Sharpe ratio in the same way as selection by AIC does for the log-likelihood as measure of fit.
[ { "type": "R", "before": "When optimizing the Sharpe ratio", "after": "We derive (1) an unbiased estimator for the out-of-sample Sharpe ratio when the in-sample Sharpe ratio is obtained by optimizing", "start_char_pos": 0, "end_char_pos": 32 }, { "type": "R", "before": "the thus obtained", "after": ". The estimator corrects the", "start_char_pos": 70, "end_char_pos": 87 }, { "type": "R", "before": "tends to be higher than what will be captured out-of-sample. For two reasons: the estimated parameter will be skewed towards the noise in the in-sample data (noise fitting) and, second, the estimated parameter will deviate from the optimal parameter (estimation error). This article derives a simple correction for both.", "after": "for both: noise fit and estimation error. We then show (2) how to use the corrected Sharpe ratio as model selection criterion analogous to the Akaike Information Criterion (AIC).", "start_char_pos": 111, "end_char_pos": 431 }, { "type": "A", "before": null, "after": "ratio", "start_char_pos": 484, "end_char_pos": 484 }, { "type": "R", "before": "expected", "after": "estimated", "start_char_pos": 520, "end_char_pos": 528 }, { "type": "A", "before": null, "after": "ratio", "start_char_pos": 550, "end_char_pos": 550 }, { "type": "R", "before": "Akaike Information Criterion", "after": "AIC", "start_char_pos": 583, "end_char_pos": 611 } ]
[ 0, 171, 380, 431 ]
1602.06186
2
We derive (1) an unbiased estimator for the out-of-sample Sharpe ratio when the in-sample Sharpe ratio is obtained by optimizing over a k-dimensional parameter space . The estimator corrects the in-sample Sharpe ratio for both : noise fit and estimation error. We then show (2) how to use the corrected Sharpe ratio as model selection criterion analogous to the Akaike Information Criterion (AIC). Selecting a model with the highest corrected Sharpe ratio selects the model with the highest estimated out-of-sample Sharpe ratio in the same way as selection by AIC does for the log-likelihood as measure of fit.
When the in-sample Sharpe ratio is obtained by optimizing over a k-dimensional parameter space , it is a biased estimator for what can be expected on unseen data (out-of-sample). We derive (1) an unbiased estimator adjusting for both sources of bias : noise fit and estimation error. We then show (2) how to use the adjusted Sharpe ratio as model selection criterion analogously to the Akaike Information Criterion (AIC). Selecting a model with the highest adjusted Sharpe ratio selects the model with the highest estimated out-of-sample Sharpe ratio in the same way as selection by AIC does for the log-likelihood as measure of fit.
[ { "type": "R", "before": "We derive (1) an unbiased estimator for the out-of-sample Sharpe ratio when the", "after": "When the", "start_char_pos": 0, "end_char_pos": 79 }, { "type": "R", "before": ". The estimator corrects the in-sample Sharpe ratio for both", "after": ", it is a biased estimator for what can be expected on unseen data (out-of-sample). We derive (1) an unbiased estimator adjusting for both sources of bias", "start_char_pos": 166, "end_char_pos": 226 }, { "type": "R", "before": "corrected", "after": "adjusted", "start_char_pos": 293, "end_char_pos": 302 }, { "type": "R", "before": "analogous", "after": "analogously", "start_char_pos": 345, "end_char_pos": 354 }, { "type": "R", "before": "corrected", "after": "adjusted", "start_char_pos": 433, "end_char_pos": 442 } ]
[ 0, 167, 260, 397 ]
1602.06589
1
One of the greatest effort of computational scientists is to translate the mathematical model describing a class of physical phenomena into large and complex codes. Many of these codes face the difficulty of implementing the mathematical operations in the model in terms of low level optimized kernels offering both performance and portability. Legacy codes suffers from the additional curse of rigid design choices based on outdated performance metrics (e.g. minimization of memory footprint). Using a representative code from the Materials Science community, we propose a methodology to restructure the most expensive operations in terms of an optimized combination of dense linear algebra kernels. The resulting algorithm guarantees an increased performance and an extended life span of this code enabling larger scale simulations.
One of the greatest efforts of computational scientists is to translate the mathematical model describing a class of physical phenomena into large and complex codes. Many of these codes face the difficulty of implementing the mathematical operations in the model in terms of low level optimized kernels offering both performance and portability. Legacy codes suffer from the additional curse of rigid design choices based on outdated performance metrics (e.g. minimization of memory footprint). Using a representative code from the Materials Science community, we propose a methodology to restructure the most expensive operations in terms of an optimized combination of dense linear algebra kernels. The resulting algorithm guarantees an increased performance and an extended life span of this code enabling larger scale simulations.
[ { "type": "R", "before": "effort", "after": "efforts", "start_char_pos": 20, "end_char_pos": 26 }, { "type": "R", "before": "suffers", "after": "suffer", "start_char_pos": 358, "end_char_pos": 365 } ]
[ 0, 164, 344, 494, 700 ]
1602.06765
1
This paper studies an optimal irreversible extraction problem of an exhaustible commodity in presence of regime shifts. A company extracts a natural resource from a reserve with finite capacity, and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected, discounted net cash flow. The problem is set up as a finite-fuel two-dimensional degenerate singular stochastic control problem over an infinite time horizon, and we provide explicit expressions both for the value function and for the optimal control. The latter prescribes a Skorokhod reflection of the optimally controlled state process at a certain state and price dependent threshold. This curve is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching. The techniques are those of stochastic calculus and stochastic optimal control theory.
This paper studies an optimal irreversible extraction problem of an exhaustible commodity in presence of regime shifts. A company extracts a natural resource from a reserve with finite capacity, and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected, discounted net cash flow. The problem is set up as a finite-fuel two-dimensional degenerate singular stochastic control problem over an infinite time-horizon. We provide explicit expressions both for the value function and for the optimal control. We show that the latter prescribes a Skorokhod reflection of the optimally controlled state process at a certain state and price dependent threshold. This curve is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching. The techniques are those of stochastic calculus and stochastic optimal control theory.
[ { "type": "R", "before": "time horizon, and we", "after": "time-horizon. We", "start_char_pos": 576, "end_char_pos": 596 }, { "type": "R", "before": "The", "after": "We show that the", "start_char_pos": 683, "end_char_pos": 686 } ]
[ 0, 119, 336, 456, 682, 819, 965 ]
1602.06765
2
This paper studies an optimal irreversible extraction problem of an exhaustible commodity in presence of regime shifts . A company extracts a natural resource from a reserve with finite capacity, and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected , discounted net cash flow. The problem is set up as a finite-fuel two-dimensional degenerate singular stochastic control problem over an infinite time-horizon. We provide explicit expressions both for the value function and for the optimal control. We show that the latter prescribes a Skorokhod reflection of the optimally controlled state process at a certain state and price dependent threshold. This curve is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching . The techniques are those of stochastic calculus and stochastic optimal control theory .
This paper studies a finite-fuel two-dimensional degenerate singular stochastic control problem under regime switching that is motivated by the optimal irreversible extraction problem of an exhaustible commodity . A company extracts a natural resource from a reserve with finite capacity, and sells it in the market at a spot price that evolves according to a Brownian motion with volatility modulated by a two-state Markov chain. In this setting, the company aims at finding the extraction rule that maximizes its expected discounted cash flow, net of the costs of extraction and maintenance of the reserve. We provide expressions both for the value function and for the optimal control. On the one hand, if the running cost for the maintenance of the reserve is a convex function of the reserve level, the optimal extraction rule prescribes a Skorokhod reflection of the (optimally) controlled state process at a certain state and price dependent threshold. On the other hand, in presence of a concave running cost function it is optimal to instantaneously deplet the reserve at the time at which the commodity's price exceeds an endogenously determined critical level. In both cases, the threshold triggering the optimal control is given in terms of the optimal stopping boundary of an auxiliary family of perpetual optimal selling problems with regime switching .
[ { "type": "R", "before": "an", "after": "a finite-fuel two-dimensional degenerate singular stochastic control problem under regime switching that is motivated by the", "start_char_pos": 19, "end_char_pos": 21 }, { "type": "D", "before": "in presence of regime shifts", "after": null, "start_char_pos": 90, "end_char_pos": 118 }, { "type": "R", "before": "two state", "after": "two-state", "start_char_pos": 314, "end_char_pos": 323 }, { "type": "R", "before": ", discounted net cash flow. The problem is set up as a finite-fuel two-dimensional degenerate singular stochastic control problem over an infinite time-horizon. We provide explicit", "after": "discounted cash flow, net of the costs of extraction and maintenance of the reserve. We provide", "start_char_pos": 431, "end_char_pos": 611 }, { "type": "R", "before": "We show that the latter", "after": "On the one hand, if the running cost for the maintenance of the reserve is a convex function of the reserve level, the optimal extraction rule", "start_char_pos": 681, "end_char_pos": 704 }, { "type": "R", "before": "optimally", "after": "(optimally)", "start_char_pos": 746, "end_char_pos": 755 }, { "type": "R", "before": "This curve is", "after": "On the other hand, in presence of a concave running cost function it is optimal to instantaneously deplet the reserve at the time at which the commodity's price exceeds an endogenously determined critical level. In both cases, the threshold triggering the optimal control is", "start_char_pos": 831, "end_char_pos": 844 }, { "type": "D", "before": ". The techniques are those of stochastic calculus and stochastic optimal control theory", "after": null, "start_char_pos": 976, "end_char_pos": 1063 } ]
[ 0, 120, 337, 458, 591, 680, 830, 977 ]
1602.06998
1
We consider an optimal investment/consumption problem to maximize expected utility from consumption. In this market model, the investor is allowed to choose a portfolio which consists of one bond, one liquid risky asset and one illiquid risky asset (proportional transaction costs). Using the shadow price approach, we fully characterize the optimal trading and consumption strategies in terms of the solution of a free boundary ODE with an integral constraint. In the analysis, there is no technical assumption (except a natural one) on the model parameters. We also prove an asymptotic expansion result for small transaction costs .
We consider an optimal investment/consumption problem to maximize expected utility from consumption. In this market model, the investor is allowed to choose a portfolio which consists of one bond, one liquid risky asset (no transaction costs) and one illiquid risky asset (proportional transaction costs). We fully characterize the optimal trading and consumption strategies in terms of the solution of the free boundary ODE with an integral constraint. We find an explicit characterization of model parameters for the well-posedness of the problem, and show that the problem is well-posed if and only if there exists a shadow price process. Finally, we describe how the investor's optimal strategy is affected by the additional opportunity of trading the liquid risky asset, compared to the simpler model with one bond and one illiquid risky asset .
[ { "type": "A", "before": null, "after": "(no transaction costs)", "start_char_pos": 220, "end_char_pos": 220 }, { "type": "R", "before": "Using the shadow price approach, we", "after": "We", "start_char_pos": 284, "end_char_pos": 319 }, { "type": "R", "before": "a", "after": "the", "start_char_pos": 414, "end_char_pos": 415 }, { "type": "R", "before": "In the analysis, there is no technical assumption (except a natural one) on the model parameters. We also prove an asymptotic expansion result for small transaction costs", "after": "We find an explicit characterization of model parameters for the well-posedness of the problem, and show that the problem is well-posed if and only if there exists a shadow price process. Finally, we describe how the investor's optimal strategy is affected by the additional opportunity of trading the liquid risky asset, compared to the simpler model with one bond and one illiquid risky asset", "start_char_pos": 463, "end_char_pos": 633 } ]
[ 0, 100, 283, 462, 560 ]
1602.06998
2
We consider an optimal investment /consumption problem to maximize expected utility from consumption. In this market model, the investor is allowed to choose a portfolio which consists of one bond, one liquid risky asset (no transaction costs) and one illiquid risky asset (proportional transaction costs). We fully characterize the optimal trading and consumption strategies in terms of the solution of the free boundary ODE with an integral constraint. We find an explicit characterization of model parameters for the well-posedness of the problem, and show that the problem is well-posed if and only if there exists a shadow price process. Finally, we describe how the investor's optimal strategy is affected by the additional opportunity of trading the liquid risky asset, compared to the simpler model with one bond and one illiquid risky asset.
We consider an optimal consumption/ investment problem to maximize expected utility from consumption. In this market model, the investor is allowed to choose a portfolio which consists of one bond, one liquid risky asset (no transaction costs) and one illiquid risky asset (proportional transaction costs). We fully characterize the optimal consumption and trading strategies in terms of the solution of the free boundary ODE with an integral constraint. We find an explicit characterization of model parameters for the well-posedness of the problem, and show that the problem is well-posed if and only if there exists a shadow price process. Finally, we describe how the investor's optimal strategy is affected by the additional opportunity of trading the liquid risky asset, compared to the simpler model with one bond and one illiquid risky asset.
[ { "type": "A", "before": null, "after": "consumption/", "start_char_pos": 23, "end_char_pos": 23 }, { "type": "D", "before": "/consumption", "after": null, "start_char_pos": 35, "end_char_pos": 47 }, { "type": "R", "before": "trading and consumption", "after": "consumption and trading", "start_char_pos": 342, "end_char_pos": 365 } ]
[ 0, 102, 307, 455, 643 ]
1602.07104
1
Recently, IEEE 802.11ax Task Group has proposed new guidelines for the use of OFDMA-based medium access control. In this new framework, it has been decided that the transmission for all the users in an multi-user OFDMA should end end at the same time , and the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. We investigate resource allocation problems where the scheduling duration (i.e., time ) is optimized through Lyapunov optimization techniques by taking into account the padding overhead, airtime fairness and energy consumption of the users. Also, being aware of the complexity of the existing OFDMA solutions, we propose lightweight and agile algorithms with the consideration of their overhead and implementation issues. We show that our resource allocation algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate .
Recently, IEEE 802.11ax Task Group has adapted OFDMA as a new technique for enabling multi-user transmission. It has been also decided that the scheduling duration should be same for all the users in a multi-user OFDMA so that the transmission of the users should end at the same time . In order to realize that condition, the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. In this work, for OFDMA based 802.11 WLANs we first propose practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate . We also calculate the overhead of our algorithms in a realistic set-up and propose solutions for the implementation issues .
[ { "type": "R", "before": "proposed new guidelines for the use of OFDMA-based medium access control. In this new framework, it has been", "after": "adapted OFDMA as a new technique for enabling multi-user transmission. It has been also", "start_char_pos": 39, "end_char_pos": 147 }, { "type": "R", "before": "transmission", "after": "scheduling duration should be same", "start_char_pos": 165, "end_char_pos": 177 }, { "type": "R", "before": "an", "after": "a", "start_char_pos": 199, "end_char_pos": 201 }, { "type": "R", "before": "should end end", "after": "so that the transmission of the users should end", "start_char_pos": 219, "end_char_pos": 233 }, { "type": "R", "before": ", and", "after": ". In order to realize that condition,", "start_char_pos": 251, "end_char_pos": 256 }, { "type": "R", "before": "We investigate resource allocation problems where", "after": "In this work, for OFDMA based 802.11 WLANs we first propose practical algorithm in which", "start_char_pos": 592, "end_char_pos": 641 }, { "type": "R", "before": "(i.e., time ) is optimized through Lyapunov optimization techniques", "after": "is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework", "start_char_pos": 666, "end_char_pos": 733 }, { "type": "R", "before": "Also, being aware of the complexity of the existing OFDMA solutions, we propose lightweight and agile algorithms with the consideration of their overhead and implementation issues. We show that", "after": "We analytically investigate", "start_char_pos": 833, "end_char_pos": 1026 }, { "type": "A", "before": null, "after": "problems through Lyapunov optimization techniques and show that our", "start_char_pos": 1051, "end_char_pos": 1051 }, { "type": "A", "before": null, "after": ". We also calculate the overhead of our algorithms in a realistic set-up and propose solutions for the implementation issues", "start_char_pos": 1153, "end_char_pos": 1153 } ]
[ 0, 112, 352, 591, 832, 1013 ]
1602.07891
1
A prototype of Loongson IoT ZigBee gateway is already designed and implemented. However, it is not perfect . And a lot of things should be done to improve it , such as adding IEEE 802.11 function, using fully open source ZigBee protocol stack or using fully open source embedded operating system , and implementing multiple interfaces.
A prototype of Loongson IoT (Internet of Things) ZigBee gateway is already designed and implemented. However, this prototype is not perfect enough because of the lack of a number of functions . And a lot of things should be done to improve this prototype , such as adding widely used IEEE 802.11 function, using a fully open source ZigBee protocol stack to get rid of proprietary implement or using a fully open source embedded operating system to support 6LoWPAN , and implementing multiple interfaces.
[ { "type": "A", "before": null, "after": "(Internet of Things)", "start_char_pos": 28, "end_char_pos": 28 }, { "type": "R", "before": "it", "after": "this prototype", "start_char_pos": 90, "end_char_pos": 92 }, { "type": "A", "before": null, "after": "enough because of the lack of a number of functions", "start_char_pos": 108, "end_char_pos": 108 }, { "type": "R", "before": "it", "after": "this prototype", "start_char_pos": 157, "end_char_pos": 159 }, { "type": "A", "before": null, "after": "widely used", "start_char_pos": 177, "end_char_pos": 177 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 206, "end_char_pos": 206 }, { "type": "R", "before": "or using", "after": "to get rid of proprietary implement or using a", "start_char_pos": 247, "end_char_pos": 255 }, { "type": "A", "before": null, "after": "to support 6LoWPAN", "start_char_pos": 300, "end_char_pos": 300 } ]
[ 0, 80, 110, 187 ]
1602.08297
1
The optimization of a large random portfolio under the Expected Shortfall risk measure with an \ell_2 regularizer is carried out by analytical calculation. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number of different assets in the portfolio is much less than the length of the available time series, the regularizer plays a negligible role , while in the (much more frequently occurring in practice) opposite limit, where the samples are comparable or even small compared to the number of different assets, the optimum is almost entirely determined by the regularizer. Our results clearly show that the transition region between these two extremes is relatively narrow, and it is only here that one can meaningfully speak of a trade-off between fluctuations and bias .
The optimization of a large random portfolio under the Expected Shortfall risk measure with an \ell_2 regularizer is carried out by analytical calculation. The regularizer reins in the large sample fluctuations and the concomitant divergent estimation error, and eliminates the phase transition where this error would otherwise blow up. In the data-dominated region, where the number N of different assets in the portfolio is much less than the length T of the available time series, the regularizer plays a negligible role even if its strength \eta is large , while in the opposite limit, where the size of samples is comparable to, or even smaller than the number of assets, the optimum is almost entirely determined by the regularizer. We construct the contour map of estimation error on the N/T vs. \eta plane and find that for a given value of the estimation error the gain in N/T due to the regularizer can reach a factor of about 4 for a sufficiently strong regularizer .
[ { "type": "A", "before": null, "after": "N", "start_char_pos": 384, "end_char_pos": 384 }, { "type": "A", "before": null, "after": "T", "start_char_pos": 451, "end_char_pos": 451 }, { "type": "A", "before": null, "after": "even if its strength \\eta is large", "start_char_pos": 522, "end_char_pos": 522 }, { "type": "D", "before": "(much more frequently occurring in practice)", "after": null, "start_char_pos": 538, "end_char_pos": 582 }, { "type": "R", "before": "samples are comparable or even small compared to", "after": "size of samples is comparable to, or even smaller than", "start_char_pos": 609, "end_char_pos": 657 }, { "type": "D", "before": "different", "after": null, "start_char_pos": 672, "end_char_pos": 681 }, { "type": "R", "before": "Our results clearly show that the transition region between these two extremes is relatively narrow, and it is only here that one can meaningfully speak of a trade-off between fluctuations and bias", "after": "We construct the contour map of estimation error on the N/T vs. \\eta plane and find that for a given value of the estimation error the gain in N/T due to the regularizer can reach a factor of about 4 for a sufficiently strong regularizer", "start_char_pos": 752, "end_char_pos": 949 } ]
[ 0, 155, 336, 751 ]
1602.08894
1
We derive upper and lower bounds on the expectation of f(S) under dependence uncertainty, i.e. when the marginal distributions of the random vector S=(S_1,\dots,S_d) are known but their dependence structure is partially unknown. We solve the problem by providing improved Fr\'echet--Hoeffding bounds on the copula of S that account for additional information. In particular, we derive bounds when the values of the copula are given on a compact subset of [0,1]^d, the value of a functional of the copula is prescribed or different types of information are available on the lower dimensional marginals of the copula. We then show that, in contrast to the two-dimensional case, the bounds are quasi-copulas but fail to be copulas if d>2. Thus, in order to translate the improved Fr\'echet--Hoeffding bounds into bounds on the expectation of f(S), we develop an alternative representation of multivariate integrals with respect to copulas that admits also quasi-copulas as integrators. By means of this representation, we provide an integral characterization of orthant orders on the set of quasi-copulas which relates the improved Fr\'echet--Hoeffding bounds to bounds on the expectation of f(S). Finally, we apply these results to compute model-free bounds on the prices of multi-asset options that take partial information on the dependence structure into account, such as correlations or market prices of other traded derivatives. The numerical results show that the additional information leads to a significant improvement of option price bounds compared to the situation where only the marginal distributions are known.
We derive upper and lower bounds on the expectation of f(S) under dependence uncertainty, i.e. when the marginal distributions of the random vector S=(S_1,\dots,S_d) are known but their dependence structure is partially unknown. We solve the problem by providing improved Fr\'echet-Hoeffding bounds on the copula of S that account for additional information. In particular, we derive bounds when the values of the copula are given on a compact subset of [0,1]^d, the value of a functional of the copula is prescribed or different types of information are available on the lower dimensional marginals of the copula. We then show that, in contrast to the two-dimensional case, the bounds are quasi-copulas but fail to be copulas if d>2. Thus, in order to translate the improved Fr\'echet-Hoeffding bounds into bounds on the expectation of f(S), we develop an alternative representation of multivariate integrals with respect to copulas that admits also quasi-copulas as integrators. By means of this representation, we provide an integral characterization of orthant orders on the set of quasi-copulas which relates the improved Fr\'echet-Hoeffding bounds to bounds on the expectation of f(S). Finally, we apply these results to compute model-free bounds on the prices of multi-asset options that take partial information on the dependence structure into account, such as correlations or market prices of other traded derivatives. The numerical results show that the additional information leads to a significant improvement of the option price bounds compared to the situation where only the marginal distributions are known.
[ { "type": "R", "before": "Fr\\'echet--Hoeffding", "after": "Fr\\'echet-Hoeffding", "start_char_pos": 272, "end_char_pos": 292 }, { "type": "R", "before": "Fr\\'echet--Hoeffding", "after": "Fr\\'echet-Hoeffding", "start_char_pos": 777, "end_char_pos": 797 }, { "type": "R", "before": "Fr\\'echet--Hoeffding", "after": "Fr\\'echet-Hoeffding", "start_char_pos": 1129, "end_char_pos": 1149 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1529, "end_char_pos": 1529 } ]
[ 0, 228, 359, 615, 735, 982, 1194, 1431 ]
1602.08894
2
We derive upper and lower bounds on the expectation of f(S) under dependence uncertainty, i.e. when the marginal distributions of the random vector S=(S_1,\dots,S_d) are known but their dependence structure is partially unknown. We solve the problem by providing improved Fr\'echet-Hoeffding \FH bounds on the copula of S that account for additional information. In particular, we derive bounds when the values of the copula are given on a compact subset of [0,1]^d, the value of a functional of the copula is prescribed or different types of information are available on the lower dimensional marginals of the copula. We then show that, in contrast to the two-dimensional case, the bounds are quasi-copulas but fail to be copulas if d>2. Thus, in order to translate the improved Fr\'echet-Hoeffding \FH bounds into bounds on the expectation of f(S), we develop an alternative representation of multivariate integrals with respect to copulas that admits also quasi-copulas as integrators. By means of this representation, we provide an integral characterization of orthant orders on the set of quasi-copulas which relates the improved Fr\'echet-Hoeffding \FH bounds to bounds on the expectation of f(S). Finally, we apply these results to compute model-free bounds on the prices of multi-asset options that take partial information on the dependence structure into account, such as correlations or market prices of other traded derivatives. The numerical results show that the additional information leads to a significant improvement of the option price bounds compared to the situation where only the marginal distributions are known.
We derive upper and lower bounds on the expectation of f(S) under dependence uncertainty, i.e. when the marginal distributions of the random vector S=(S_1,\dots,S_d) are known but their dependence structure is partially unknown. We solve the problem by providing improved \FH bounds on the copula of S that account for additional information. In particular, we derive bounds when the values of the copula are given on a compact subset of [0,1]^d, the value of a functional of the copula is prescribed or different types of information are available on the lower dimensional marginals of the copula. We then show that, in contrast to the two-dimensional case, the bounds are quasi-copulas but fail to be copulas if d>2. Thus, in order to translate the improved \FH bounds into bounds on the expectation of f(S), we develop an alternative representation of multivariate integrals with respect to copulas that admits also quasi-copulas as integrators. By means of this representation, we provide an integral characterization of orthant orders on the set of quasi-copulas which relates the improved \FH bounds to bounds on the expectation of f(S). Finally, we apply these results to compute model-free bounds on the prices of multi-asset options that take partial information on the dependence structure into account, such as correlations or market prices of other traded derivatives. The numerical results show that the additional information leads to a significant improvement of the option price bounds compared to the situation where only the marginal distributions are known.
[ { "type": "D", "before": "Fr\\'echet-Hoeffding", "after": null, "start_char_pos": 272, "end_char_pos": 291 }, { "type": "D", "before": "Fr\\'echet-Hoeffding", "after": null, "start_char_pos": 780, "end_char_pos": 799 }, { "type": "D", "before": "Fr\\'echet-Hoeffding", "after": null, "start_char_pos": 1135, "end_char_pos": 1154 } ]
[ 0, 228, 362, 618, 738, 988, 1203, 1440 ]
1603.00527
1
We propose a flexible and tractable approach based on affine processes to model multiple yield curves. More precisely, we model a numeraire process and multiplicative spreads between Libor rates and simply compounded OIS rates as functions of an underlying affine process. Besides allowing for ordered spreads and an exact fit to the initially observed term structures, this general framework leads to tractable valuation formulas for caplets and swaptions and embeds most of the existing multi-curve models based on affine processes. In particular, in the case of a model driven by a Wishart process, we derive a closed-form pricing formula for caplets. The empirical performance of some specifications of our framework is illustrated by calibration to market data.
We provide a general and tractable framework under which all multiple yield curve modeling approaches based on affine processes , be it short rate, Libor market, or HJM modeling, can be consolidated. We model a numeraire process and multiplicative spreads between Libor rates and simply compounded OIS rates as functions of an underlying affine process. Besides allowing for ordered spreads and an exact fit to the initially observed term structures, this general framework leads to tractable valuation formulas for caplets and swaptions and embeds all existing multi-curve affine models. The proposed approach also gives rise to new developments, such as a short rate type model driven by a Wishart process, for which we derive a closed-form pricing formula for caplets. The empirical performance of two specifications of our framework is illustrated by calibration to market data.
[ { "type": "R", "before": "propose a flexible and tractable approach", "after": "provide a general and tractable framework under which all multiple yield curve modeling approaches", "start_char_pos": 3, "end_char_pos": 44 }, { "type": "R", "before": "to model multiple yield curves. More precisely, we", "after": ", be it short rate, Libor market, or HJM modeling, can be consolidated. We", "start_char_pos": 71, "end_char_pos": 121 }, { "type": "R", "before": "most of the", "after": "all", "start_char_pos": 468, "end_char_pos": 479 }, { "type": "R", "before": "models based on affine processes. In particular, in the case of a", "after": "affine models. The proposed approach also gives rise to new developments, such as a short rate type", "start_char_pos": 501, "end_char_pos": 566 }, { "type": "A", "before": null, "after": "for which", "start_char_pos": 602, "end_char_pos": 602 }, { "type": "R", "before": "some", "after": "two", "start_char_pos": 685, "end_char_pos": 689 } ]
[ 0, 102, 272, 534, 655 ]
1603.00736
1
Galor discovered many mysteries of growth . He lists them in his Unified Growth Theory and wonders how they can be explained. Close inspection of his mysteries reveals that they are of his own creation. They do not exist. He created them by his habitually crude or distorted presentation of data and by failing to carry out their scientific analysis . One of his self-created mysteries is the mystery of the alleged sudden spurt in the growth rate of income per capita. The sudden spurt never happened. In order to understand the growth rate of income per capita, its mathematical properties are now explored and explained. The explanation is illustrated using the historical world economic growth. Galor also wonders about the sudden spurt in the growth rate of population. We show that this sudden spurt is also of his own creation . The mechanism of the historical economic growth and of the growth of human population is yet to be explained but it would be a waste of time, money and human resources to try to explain the non-existing , self-created mysteries described in the Unified Growth Theory .
Galor discovered many mysteries of the growth process . He lists them in his Unified Growth Theory and wonders how they can be explained. Close inspection of his mysteries reveals that they are of his own creation. They do not exist. He created them by his habitually distorted presentation of data . One of his self-created mysteries is the mystery of the alleged sudden spurt in the growth rate of income per capita. This sudden spurt never happened. In order to understand the growth rate of income per capita, its mathematical properties are now explored and explained. The explanation is illustrated using the historical world economic growth. Galor also wonders about the sudden spurt in the growth rate of population. We show that this sudden spurt was also created by the distorted presentation of data . The mechanism of the historical economic growth and of the growth of human population is yet to be explained but it would be unproductive to try to explain the non-existing and self-created mysteries of the growth process described in the scientifically unacceptable Unified Growth Theory . However, the problem is much deeper than just the examination of this theory. Demographic Growth Theory is based on the incorrect but deeply entrenched postulates developed by accretion over many years and now generally accepted in the economic and demographic research, postulates revolving around the concept of Malthusian stagnation and around a transition from stagnation to growth. The study presented here and earlier similar publications show that these postulates need to be replaced by interpretations based on the mathematical analysis of data and on the correct understanding of hyperbolic distributions .
[ { "type": "R", "before": "growth", "after": "the growth process", "start_char_pos": 35, "end_char_pos": 41 }, { "type": "D", "before": "crude or", "after": null, "start_char_pos": 256, "end_char_pos": 264 }, { "type": "D", "before": "and by failing to carry out their scientific analysis", "after": null, "start_char_pos": 296, "end_char_pos": 349 }, { "type": "R", "before": "The", "after": "This", "start_char_pos": 470, "end_char_pos": 473 }, { "type": "R", "before": "is also of his own creation", "after": "was also created by the distorted presentation of data", "start_char_pos": 806, "end_char_pos": 833 }, { "type": "R", "before": "a waste of time, money and human resources", "after": "unproductive", "start_char_pos": 961, "end_char_pos": 1003 }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 1039, "end_char_pos": 1040 }, { "type": "A", "before": null, "after": "of the growth process", "start_char_pos": 1064, "end_char_pos": 1064 }, { "type": "A", "before": null, "after": "scientifically unacceptable", "start_char_pos": 1082, "end_char_pos": 1082 }, { "type": "A", "before": null, "after": ". However, the problem is much deeper than just the examination of this theory. Demographic Growth Theory is based on the incorrect but deeply entrenched postulates developed by accretion over many years and now generally accepted in the economic and demographic research, postulates revolving around the concept of Malthusian stagnation and around a transition from stagnation to growth. The study presented here and earlier similar publications show that these postulates need to be replaced by interpretations based on the mathematical analysis of data and on the correct understanding of hyperbolic distributions", "start_char_pos": 1105, "end_char_pos": 1105 } ]
[ 0, 43, 125, 202, 221, 351, 469, 502, 623, 698, 774, 835 ]
1603.00835
1
Single-molecule magnetic tweezers experiments performed in the past few years report a clear deviation of the effective torsional stiffness of DNA from the predictions of the twistable worm-like chain model. Here we show that this discrepancy can be resolved if a coupling term between bending and twisting is introduced. \pm Although the existence of such an interaction was predicted more than two decades ago (Marko and Siggia, Macromol. 27, 981 (1994)), its effect on the static and dynamical properties of DNA has been largely unexplored. Our analysis yields a twist-bend coupling constant of G=50+-10 nm. We show that the introduction of twist-bend coupling requires a re-tuning of the other elastic parameters of DNA , in particular for the intrinsic bending stiffness .
Recent magnetic tweezers experiments have reported systematic deviations of the twist response of double-stranded DNA from the predictions of the twistable worm-like chain model. Here we show , by means of analytical results and computer simulations, that these discrepancies can be resolved if a coupling between twist and bend is introduced. We obtain an estimate of 40\pm 10 nm for the twist-bend coupling constant. Our simulations are in good agreement with high-resolution, magnetic-tweezers torque data. Although the existence of twist-bend coupling was predicted long ago (Marko and Siggia, Macromolecules 27, 981 (1994)), its effects on the mechanical properties of DNA have been so far largely unexplored. We expect that this coupling plays an important role in several aspects of DNA statics and dynamics .
[ { "type": "R", "before": "Single-molecule", "after": "Recent", "start_char_pos": 0, "end_char_pos": 15 }, { "type": "R", "before": "performed in the past few years report a clear deviation of the effective torsional stiffness of", "after": "have reported systematic deviations of the twist response of double-stranded", "start_char_pos": 46, "end_char_pos": 142 }, { "type": "R", "before": "that this discrepancy", "after": ", by means of analytical results and computer simulations, that these discrepancies", "start_char_pos": 221, "end_char_pos": 242 }, { "type": "R", "before": "term between bending and twisting", "after": "between twist and bend", "start_char_pos": 273, "end_char_pos": 306 }, { "type": "A", "before": null, "after": "We obtain an estimate of 40", "start_char_pos": 322, "end_char_pos": 322 }, { "type": "A", "before": null, "after": "10 nm for the twist-bend coupling constant. Our simulations are in good agreement with high-resolution, magnetic-tweezers torque data.", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "R", "before": "such an interaction was predicted more than two decades", "after": "twist-bend coupling was predicted long", "start_char_pos": 353, "end_char_pos": 408 }, { "type": "R", "before": "Macromol.", "after": "Macromolecules", "start_char_pos": 432, "end_char_pos": 441 }, { "type": "R", "before": "effect on the static and dynamical", "after": "effects on the mechanical", "start_char_pos": 463, "end_char_pos": 497 }, { "type": "R", "before": "has been", "after": "have been so far", "start_char_pos": 516, "end_char_pos": 524 }, { "type": "R", "before": "Our analysis yields a twist-bend coupling constant of G=50+-10 nm. We show that the introduction of twist-bend coupling requires a re-tuning of the other elastic parameters of DNA , in particular for the intrinsic bending stiffness", "after": "We expect that this coupling plays an important role in several aspects of DNA statics and dynamics", "start_char_pos": 545, "end_char_pos": 776 } ]
[ 0, 207, 321, 441, 544, 611 ]
1603.00987
1
The objective is either to design an appropriate securities lending auction mechanismor to come up with a strategy for placing bids , depending on which side of the fence a participant sits . There are two pieces to this puzzle. One is the valuation of the portfolio being auctioned subject to the available information set. The other piece would be to come up with the best strategy from an auction perspective once a valuation has been obtained. We derive valuations under different assumptions and show a weighting scheme that converges to the true valuation. We extend auction theory results to be more applicable to financial securities and intermediaries. All the propositions are new results and they refer to existing results which are given as Lemmas without proof. Lastly, we run simulations to establish numerical examples for the set of valuations and for various bidding strategies corresponding to the different auction settings.
We derive valuations of a portfolio of financial instruments from a securities lending perspective, under different assumptions, and show a weighting scheme that converges to the true valuation. This valuation can be useful either to derive a bidding strategy for an exclusive auction or to design an appropriate auction mechanism , depending on which side of the fence a participant sits (whether the interest is to procure the rights to use a portfolio for making stock loans such as for a lending desk, or, to obtain additional revenue from a portfolio such as from the point of view of a long only asset management firm). Lastly, we run simulations to establish numerical examples for the set of valuations and for various bidding strategies corresponding to different auction settings.
[ { "type": "R", "before": "The objective is either to", "after": "We derive valuations of a portfolio of financial instruments from a securities lending perspective, under different assumptions, and show a weighting scheme that converges to the true valuation. This valuation can be useful either to derive a bidding strategy for an exclusive auction or to", "start_char_pos": 0, "end_char_pos": 26 }, { "type": "R", "before": "securities lending auction mechanismor to come up with a strategy for placing bids", "after": "auction mechanism", "start_char_pos": 49, "end_char_pos": 131 }, { "type": "R", "before": ". There are two pieces to this puzzle. One is the valuation of the portfolio being auctioned subject to the available information set. The other piece would be to come up with the best strategy from an auction perspective once a valuation has been obtained. We derive valuations under different assumptions and show a weighting scheme that converges to the true valuation. We extend auction theory results to be more applicable to financial securities and intermediaries. All the propositions are new results and they refer to existing results which are given as Lemmas without proof.", "after": "(whether the interest is to procure the rights to use a portfolio for making stock loans such as for a lending desk, or, to obtain additional revenue from a portfolio such as from the point of view of a long only asset management firm).", "start_char_pos": 190, "end_char_pos": 774 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 912, "end_char_pos": 915 } ]
[ 0, 191, 228, 324, 447, 562, 661, 774 ]
1603.00987
2
We derive valuations of a portfolio of financial instruments from a securities lending perspective, under different assumptions, and show a weighting scheme that converges to the true valuation. This valuation can be useful either to derive a bidding strategy for an exclusive auction or to design an appropriate auction mechanism, depending on which side of the fence a participant sits (whether the interest is to procure the rights to use a portfolio for making stock loans such as for a lending desk, or, to obtain additional revenue from a portfolio such as from the point of view of a long only asset management firm). Lastly, we run simulations to establish numerical examples for the set of valuations and for various bidding strategies corresponding to different auction settings.
We derive valuations of a portfolio of financial instruments from a securities lending perspective, under different assumptions, and show a weighting scheme that converges to the true valuation. We illustrate conditions under which our alternative weighting scheme converges faster to the true valuation when compared to the minimum variance weighting. This weighting scheme is applicable in any situation where multiple forecasts are made and we need a methodology to combine them. Our valuations can be useful either to derive a bidding strategy for an exclusive auction or to design an appropriate auction mechanism, depending on which side of the fence a participant sits (whether the interest is to procure the rights to use a portfolio for making stock loans such as for a lending desk, or, to obtain additional revenue from a portfolio such as from the point of view of a long only asset management firm). Lastly, we run simulations to establish numerical examples for the set of valuations and for various bidding strategies corresponding to different auction settings.
[ { "type": "R", "before": "This valuation", "after": "We illustrate conditions under which our alternative weighting scheme converges faster to the true valuation when compared to the minimum variance weighting. This weighting scheme is applicable in any situation where multiple forecasts are made and we need a methodology to combine them. Our valuations", "start_char_pos": 195, "end_char_pos": 209 } ]
[ 0, 194, 624 ]
1603.01316
1
This paper assesses the performance of mobile messaging and VoIP connections. We investigate the CPU requirements of WhatsApp and IMO under different scenarios. This analysis also enabled a comparison of the performance of these applications on two Android operating system (OS) versions: KitKat or Lollipop. Two models of smartphones were considered, viz. Galaxy Note 4 and Galaxy S4. The applications behavior was statistically investigated for both sending and receiving VoIP calls. Connections have been examined over 3G and WiFi. The handset model plays a decisive role in CPU requirements of the application. t-tests showed that IMO has a statistical better performance that WhatsApp whatever be the Android at a significance level 1\%, on Galaxy Note 4. In contrast, WhatsApp requires less CPU than IMO on Galaxy S4 whatever be the OS and access (3G/WiFi). Galaxy Note 4 using WiFi always outperformed S4 in terms of processing efficiency.
This paper assesses the performance of mobile messaging and VoIP connections. We investigate the CPU usage of WhatsApp and IMO under different scenarios. This analysis also enabled a comparison of the performance of these applications on two Android operating system (OS) versions: KitKat or Lollipop. Two models of smartphones were considered, viz. Galaxy Note 4 and Galaxy S4. The applications behavior was statistically investigated for both sending and receiving VoIP calls. Connections have been examined over 3G and WiFi. The handset model plays a decisive role in CPU usage of the application. t-tests showed that IMO has a better performance that WhatsApp whatever be the Android at a significance level 1\%, on Galaxy Note 4. In contrast, WhatsApp requires less CPU than IMO on Galaxy S4 whatever be the OS and access (3G/WiFi). Galaxy Note 4 using WiFi always outperformed S4 in terms of processing efficiency.
[ { "type": "R", "before": "requirements", "after": "usage", "start_char_pos": 101, "end_char_pos": 113 }, { "type": "R", "before": "requirements", "after": "usage", "start_char_pos": 582, "end_char_pos": 594 }, { "type": "D", "before": "statistical", "after": null, "start_char_pos": 645, "end_char_pos": 656 } ]
[ 0, 77, 160, 308, 356, 385, 485, 534, 760, 863 ]
1603.01404
1
We study a parallel service queueing system with servers of types s_1,\ldots,s_J, customers of types c_1,\ldots,c_I, bipartite compatibility graph \mathcal{G . For a general renewal stream of arriving customers and general service time distributions, the behavior of such systems is very complicated , in particular the calculation of matching rates r_{c_i,s_j , the fraction of services of customers of typec_i by servers of type s_j , is intractable. We suggest through a heuristic argument that if the number of servers becomes large, the matching rates are well approximated by matching rates calculated from the tractable FCFS bipartite infinite matching model. We present simulation evidence to support this heuristic argument, and show how this can be used to design systems for given performance requirements.
We study a parallel queueing system with multiple types of servers and customers. A bipartite graph describes which pairs of customer-server types are compatible. We consider the service policy that always assigns servers to the first, longest waiting compatible customer, and that always assigns customers to the longest idle compatible server if on arrival, multiple compatible servers are available . For a general renewal stream of arriving customers and general service time distributions, the behavior of such systems is very complicated . In particular, the calculation of matching rates , the fraction of services of customer-server type , is intractable. We suggest through a heuristic argument that if the number of servers becomes large, the matching rates are well approximated by matching rates calculated from the tractable bipartite infinite matching model. We present simulation evidence to support this heuristic argument, and show how this can be used to design systems with desired performance requirements.
[ { "type": "D", "before": "service", "after": null, "start_char_pos": 20, "end_char_pos": 27 }, { "type": "R", "before": "servers of types s_1,\\ldots,s_J, customers of types c_1,\\ldots,c_I, bipartite compatibility graph \\mathcal{G", "after": "multiple types of servers and customers. A bipartite graph describes which pairs of customer-server types are compatible. We consider the service policy that always assigns servers to the first, longest waiting compatible customer, and that always assigns customers to the longest idle compatible server if on arrival, multiple compatible servers are available", "start_char_pos": 49, "end_char_pos": 157 }, { "type": "R", "before": ", in particular", "after": ". In particular,", "start_char_pos": 300, "end_char_pos": 315 }, { "type": "D", "before": "r_{c_i,s_j", "after": null, "start_char_pos": 350, "end_char_pos": 360 }, { "type": "R", "before": "customers of typec_i by servers of type s_j", "after": "customer-server type", "start_char_pos": 391, "end_char_pos": 434 }, { "type": "D", "before": "FCFS", "after": null, "start_char_pos": 627, "end_char_pos": 631 }, { "type": "R", "before": "for given", "after": "with desired", "start_char_pos": 782, "end_char_pos": 791 } ]
[ 0, 452, 666 ]
1603.01489
1
Profiling is a prominent technique for finding the location of performance "bottlenecks" in code. Profiling can be performed by adding code to a program which increments a counter for each line of code each time it is executed. Any lines of code which have a large execution count relative to other lines in the program can be considered a bottleneck. Though code profiling can determine the location of a performance issue or bottleneck, we posit that the code change required to improve performance may not always be found at the same location. Developers must frequently trace back through a program to understand what code is contributing to a bottleneck. We seek to highlight code which is likely causing or has the most effect on the overall execution cost of a program . In this document we compare different methods for localising potential performance improvements .
Performance becomes an issue particularly when execution cost hinders the functionality of a program. Typically a profiler can be used to find program code execution which represents a large portion of the overall execution cost of a program. Pinpointing where a performance issue exists provides a starting point for tracing cause back through a program. While profiling shows where a performance issue manifests, we use mutation analysis to show where a performance improvement is likely to exist. We find that mutation analysis can indicate locations within a program which are highly impactful to the overall execution cost of a program yet are executed relatively infrequently. By better locating potential performance improvements in programs we hope to make performance improvement more amenable to automation .
[ { "type": "R", "before": "Profiling is a prominent technique for finding the location of performance \"bottlenecks\" in code. Profiling can be performed by adding code to a program which increments a counter for each line of code each time it is executed. Any lines of code which have a large execution count relative to other lines in the program can be considered a bottleneck. Though code profiling can determine the location of", "after": "Performance becomes an issue particularly when execution cost hinders the functionality of a program. Typically a profiler can be used to find program code execution which represents a large portion of the overall execution cost of a program. Pinpointing where a performance issue exists provides a starting point for tracing cause back through a program. While profiling shows where", "start_char_pos": 0, "end_char_pos": 403 }, { "type": "R", "before": "or bottleneck, we posit that the code change required to improve performance may not always be found at the same location. Developers must frequently trace back through a program to understand what code is contributing to a bottleneck. We seek to highlight code which is likely causing or has the most effect on the", "after": "manifests, we use mutation analysis to show where a performance improvement is likely to exist. We find that mutation analysis can indicate locations within a program which are highly impactful to the", "start_char_pos": 424, "end_char_pos": 739 }, { "type": "R", "before": ". In this document we compare different methods for localising", "after": "yet are executed relatively infrequently. By better locating", "start_char_pos": 776, "end_char_pos": 838 }, { "type": "A", "before": null, "after": "in programs we hope to make performance improvement more amenable to automation", "start_char_pos": 874, "end_char_pos": 874 } ]
[ 0, 97, 227, 351, 546, 659, 777 ]
1603.01685
1
Data describing historical growth of income per capita [Gross Domestic Product per capita (GDP/cap)] for the world economic growth and for the growth in Western Europe, Eastern Europe, Asia, former USSR, Africa and Latin America are analyzed . They follow closely the linearly-modulated hyperbolic distributions represented by the ratios of hyperbolic distributions obtained by fitting the GDP and population data. Results of this analysis demonstrate that within the range of mathematically-analyzable data, epoch of Malthusian stagnation did not exist and the dramatic escapes from the Malthusian trap never happened because there was no trap. Unified Growth Theory is fundamentally incorrect because its central postulates are contradicted repeatedly by data, which were used but never analyzed during the formulation of this theory. Data of Maddison open new avenues for the economic and demographic research .
Data describing historical growth of income per capita [Gross Domestic Product per capita (GDP/cap)] for the world economic growth and for the growth in Western Europe, Eastern Europe, Asia, former USSR, Africa and Latin America are analysed . They follow closely the linearly-modulated hyperbolic distributions represented by the ratios of hyperbolic distributions obtained by fitting the GDP and population data. Results of this analysis demonstrate that income per capita was increasing monotonically. There was no stagnation and there were no transitions from stagnation to growth. The usually postulated dramatic escapes from the Malthusian trap never happened because there was no trap. Unified Growth Theory is fundamentally incorrect because its central postulates are contradicted repeatedly by data, which were used but never analysed during the formulation of this theory. The large body of readily-available data opens new avenues for the economic and demographic research . They show that certain fundamental postulates revolving around the concept of Malthusian stagnation need to be replaced by the evidence-based interpretations. Within the range of analysable data, which for the growth of population extends down to 10,000 BC, growth of human population and economic growth were hyperbolic. There was no Malthusian stagnation and there were no transitions to distinctly faster trajectories. Industrial Revolution had no impact on changing growth trajectories .
[ { "type": "R", "before": "analyzed", "after": "analysed", "start_char_pos": 233, "end_char_pos": 241 }, { "type": "R", "before": "within the range of mathematically-analyzable data, epoch of Malthusian stagnation did not exist and the", "after": "income per capita was increasing monotonically. There was no stagnation and there were no transitions from stagnation to growth. The usually postulated", "start_char_pos": 457, "end_char_pos": 561 }, { "type": "R", "before": "analyzed", "after": "analysed", "start_char_pos": 789, "end_char_pos": 797 }, { "type": "R", "before": "Data of Maddison open", "after": "The large body of readily-available data opens", "start_char_pos": 837, "end_char_pos": 858 }, { "type": "A", "before": null, "after": ". They show that certain fundamental postulates revolving around the concept of Malthusian stagnation need to be replaced by the evidence-based interpretations. Within the range of analysable data, which for the growth of population extends down to 10,000 BC, growth of human population and economic growth were hyperbolic. There was no Malthusian stagnation and there were no transitions to distinctly faster trajectories. Industrial Revolution had no impact on changing growth trajectories", "start_char_pos": 913, "end_char_pos": 913 } ]
[ 0, 243, 414, 645, 836 ]
1603.01789
1
Dynamical structural correlationsare essential for proteins to realize allostery. Based on analysis of extensive molecular dynamics (MD) simulation trajectories of eleven proteins with different sizes and folds, we found that significant spatially long-range backbone torsional pair correlations exist extensively in some proteins and are dominantly executed by loop residues . Further examinations suggest that such correlations are inherently non-linear and are associated with aharmonic torsional state transitions . Correspondingly, they occur on widely different and relatively longer time scales. In contrast, pair correlations between backbone torsions in stable \alpha helices and \beta strands are dominantly short-ranged, inherently linear, and are associated mainly with harmonic torsional dynamics. Challenges and implications inspired by these observations are discussed .
Protein allostery requires dynamical structural correlations. Physical origin of which, however, remain elusive despite intensive studies during last two decades. Based on analysis of molecular dynamics (MD) simulation trajectories for ten proteins with different sizes and folds, we found that nonlinear backbone torsional pair (BTP) correlations, which are spatially more long-ranged and are mainly executed by loop residues , exist extensively in most analyzed proteins. Examination of torsional motion for correlated BTPs suggested that aharmonic torsional state transitions are essential for such non-linear correlations, which correspondingly occur on widely different and relatively longer time scales. In contrast, BTP correlations between backbone torsions in stable \alpha helices and \beta strands are mainly linear and spatially more short-ranged, and are more likely to associate with intra-well torsional dynamics. Further analysis revealed that the direct cause of non-linear contributions are heterogeneous, and in extreme cases canceling, linear correlations associated with different torsional states of participating torsions. Therefore, torsional state transitions of participating torsions for a correlated BTP are only necessary but not sufficient condition for significant non-linear contributions. These findings implicate a general search strategy for novel allosteric modulation of protein activities. Meanwhile, it was suggested that ensemble averaged correlation calculation and static contact network analysis, while insightful, are not sufficient to elucidate mechanisms underlying allosteric signal transmission in general, dynamical and time scale resolved analysis are essential .
[ { "type": "R", "before": "Dynamical structural correlationsare essential for proteins to realize allostery.", "after": "Protein allostery requires dynamical structural correlations. Physical origin of which, however, remain elusive despite intensive studies during last two decades.", "start_char_pos": 0, "end_char_pos": 81 }, { "type": "D", "before": "extensive", "after": null, "start_char_pos": 103, "end_char_pos": 112 }, { "type": "R", "before": "of eleven", "after": "for ten", "start_char_pos": 161, "end_char_pos": 170 }, { "type": "R", "before": "significant spatially long-range", "after": "nonlinear", "start_char_pos": 226, "end_char_pos": 258 }, { "type": "R", "before": "correlations exist extensively in some proteins and are dominantly", "after": "(BTP) correlations, which are spatially more long-ranged and are mainly", "start_char_pos": 283, "end_char_pos": 349 }, { "type": "R", "before": ". Further examinations suggest that such correlations are inherently non-linear and are associated with", "after": ", exist extensively in most analyzed proteins. Examination of torsional motion for correlated BTPs suggested that", "start_char_pos": 376, "end_char_pos": 479 }, { "type": "R", "before": ". Correspondingly, they", "after": "are essential for such non-linear correlations, which correspondingly", "start_char_pos": 518, "end_char_pos": 541 }, { "type": "R", "before": "pair", "after": "BTP", "start_char_pos": 616, "end_char_pos": 620 }, { "type": "R", "before": "dominantly", "after": "mainly linear and spatially more", "start_char_pos": 707, "end_char_pos": 717 }, { "type": "R", "before": "inherently linear, and are associated mainly with harmonic", "after": "and are more likely to associate with intra-well", "start_char_pos": 732, "end_char_pos": 790 }, { "type": "R", "before": "Challenges and implications inspired by these observations are discussed", "after": "Further analysis revealed that the direct cause of non-linear contributions are heterogeneous, and in extreme cases canceling, linear correlations associated with different torsional states of participating torsions. Therefore, torsional state transitions of participating torsions for a correlated BTP are only necessary but not sufficient condition for significant non-linear contributions. These findings implicate a general search strategy for novel allosteric modulation of protein activities. Meanwhile, it was suggested that ensemble averaged correlation calculation and static contact network analysis, while insightful, are not sufficient to elucidate mechanisms underlying allosteric signal transmission in general, dynamical and time scale resolved analysis are essential", "start_char_pos": 811, "end_char_pos": 883 } ]
[ 0, 81, 377, 602, 810 ]
1603.02094
1
Guaranteeing accurate worst-case bounds on the end-to-end delay that data flows experience in communication networks is required for a variety of safety-critical systems , for instance in avionics . Deterministic Network Calculus (DNC) is a widely used method to derive such bounds. The DNC theory has been advanced in recent years to provide ever tighter delay bounds, though this turned out a hard problem in the general feed-forward network case. Currently, the only analysisto achieve tight delay bounds, i. e. , best possible ones, is based on an optimization formulation instead of the usual algebraic DNC analysis. However, it has also been shown to be NP-hard and was accompanied by a similar, yet relaxed optimization that trades tightness against computational effort. In our article, we derive a novel , fast algebraic delay analysis that nevertheless retains a high degree of accuracy . We show in extensive numerical experiments that our solution enables the analysis of large-scale networks by reducing the computation time by several orders of magnitude in contrast to the optimization analysis. Moreover, in networks where optimization is still feasible, our delay bounds stay within close range, deviating on average by only 1.16\% in our experiments .
Networks are integral parts of modern safety-critical systems and certification demands the provision of guarantees for data transmissions . Deterministic Network Calculus (DNC) can compute a worst-case bound on a data flow's end-to-end delay. Accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis and the more recent optimization-based analysis. The optimization-based branch provides a theoretical solution for tight bounds. Its computational cost grows, however, (possibly super-)exponentially with the network size. Consequently, a heuristic optimization formulation trading accuracy against computational costs was proposed. In this paper, we challenge optimization-based DNC with a new algebraic DNC algorithm. We show that: (i) no current optimization formulation scales well with the network size and (ii) algebraic DNC can be considerably improved in both aspects, accuracy and computational cost. To that end, we contribute a novel DNC algorithm that transfers the optimization's search for best attainable delay bounds to algebraic DNC. It achieves a high degree of accuracy and our novel efficiency improvements reduce the cost of the analysis dramatically. In extensive numerical experiments , we observe that our delay bounds deviate from the optimization-based ones by only 1.142\% on average while computation times simultaneously decrease by several orders of magnitude .
[ { "type": "R", "before": "Guaranteeing accurate worst-case bounds on the end-to-end delay that data flows experience in communication networks is required for a variety of", "after": "Networks are integral parts of modern", "start_char_pos": 0, "end_char_pos": 145 }, { "type": "R", "before": ", for instance in avionics", "after": "and certification demands the provision of guarantees for data transmissions", "start_char_pos": 170, "end_char_pos": 196 }, { "type": "R", "before": "is a widely used method to derive such bounds. The DNC theory has been advanced in recent years to provide ever tighter delay bounds, though this turned out a hard problem in the general feed-forward network case. Currently, the only analysisto achieve tight delay bounds, i. e. , best possible ones, is based on an optimization formulation instead of the usual algebraic DNC analysis. However, it has also been shown to be NP-hard and was accompanied by a similar, yet relaxed optimization that trades tightness against computational effort. In our article, we derive a novel , fast algebraic delay analysis that nevertheless retains", "after": "can compute a worst-case bound on a data flow's end-to-end delay. Accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis and the more recent optimization-based analysis. The optimization-based branch provides a theoretical solution for tight bounds. Its computational cost grows, however, (possibly super-)exponentially with the network size. Consequently, a heuristic optimization formulation trading accuracy against computational costs was proposed. In this paper, we challenge optimization-based DNC with a new algebraic DNC algorithm. We show that: (i) no current optimization formulation scales well with the network size and (ii) algebraic DNC can be considerably improved in both aspects, accuracy and computational cost. To that end, we contribute a novel DNC algorithm that transfers the optimization's search for best attainable delay bounds to algebraic DNC. It achieves", "start_char_pos": 236, "end_char_pos": 870 }, { "type": "R", "before": ". We show in", "after": "and our novel efficiency improvements reduce the cost of the analysis dramatically. In", "start_char_pos": 897, "end_char_pos": 909 }, { "type": "R", "before": "that our solution enables the analysis of large-scale networks by reducing the computation time", "after": ", we observe that our delay bounds deviate from the optimization-based ones by only 1.142\\% on average while computation times simultaneously decrease", "start_char_pos": 942, "end_char_pos": 1037 }, { "type": "D", "before": "in contrast to the optimization analysis. Moreover, in networks where optimization is still feasible, our delay bounds stay within close range, deviating on average by only 1.16\\% in our experiments", "after": null, "start_char_pos": 1069, "end_char_pos": 1267 } ]
[ 0, 282, 449, 621, 778, 898, 1110 ]
1603.02393
1
The proliferation of connected low-power devices on the Internet of Things will result in a data explosion that will significantly increase data transmission costs with respect to energy consumption and latency. Edge computing reduces these costs by performing computations at the edge nodes prior to data transmission to interpret and/or utilize the data. While much research has focused on the IoT's connected nature and communication challenges, the challenges of IoT embedded computing with respect to device microprocessors and optimizations has received much less attention. This article explores IoT applications' execution characteristics from a microarchitectural perspective and the microarchitectural characteristics that will enable efficient and effective edge computing. To tractably represent a wide variety of next-generation IoT applications, we present a broad IoT application classification methodology based on application functions . Using this classification, we model and analyze the microarchitectural characteristics of a wide range of state-of-the-art embedded system microprocessors, and evaluate the microprocessors' applicability to IoT edge computing. Using these analysis as foundation, we discuss the tradeoffs of potential microarchitectural optimizations that will enable the design of right-provisioned microprocessors that are efficient, configurable, extensible, and scalable for next-generation IoT devices. Our work provides insights into the impacts of microarchitectural characteristics on microprocessors' energy consumption, performance, and efficiency for various IoT application execution requirements. Our work also provides a foundation for the analysis and design of a diverse set of microprocessor architectures for edge computing in next-generation IoT devices.
The Internet of Things (IoT) refers to a pervasive presence of interconnected and uniquely identifiable physical devices. These devices' goal is to gather data and drive actions in order to improve productivity, and ultimately reduce or eliminate reliance on human intervention for data acquisition, interpretation, and use. The proliferation of these connected low-power devices will result in a data explosion that will significantly increase data transmission costs with respect to energy consumption and latency. Edge computing reduces these costs by performing computations at the edge nodes , prior to data transmission , to interpret and/or utilize the data. While much research has focused on the IoT's connected nature and communication challenges, the challenges of IoT embedded computing with respect to device microprocessors has received much less attention. This paper explores IoT applications' execution characteristics from a microarchitectural perspective and the microarchitectural characteristics that will enable efficient and effective edge computing. To tractably represent a wide variety of next-generation IoT applications, we present a broad IoT application classification methodology based on application functions , to enable quicker workload characterizations for IoT microprocessors. We then survey and discuss potential microarchitectural optimizations and computing paradigms that will enable the design of right-provisioned microprocessors that are efficient, configurable, extensible, and scalable . This paper provides a foundation for the analysis and design of a diverse set of microprocessor architectures for next-generation IoT devices.
[ { "type": "R", "before": "proliferation of", "after": "Internet of Things (IoT) refers to a pervasive presence of interconnected and uniquely identifiable physical devices. These devices' goal is to gather data and drive actions in order to improve productivity, and ultimately reduce or eliminate reliance on human intervention for data acquisition, interpretation, and use. The proliferation of these", "start_char_pos": 4, "end_char_pos": 20 }, { "type": "D", "before": "on the Internet of Things", "after": null, "start_char_pos": 49, "end_char_pos": 74 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 292, "end_char_pos": 292 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 320, "end_char_pos": 320 }, { "type": "D", "before": "and optimizations", "after": null, "start_char_pos": 531, "end_char_pos": 548 }, { "type": "R", "before": "article", "after": "paper", "start_char_pos": 588, "end_char_pos": 595 }, { "type": "R", "before": ". Using this classification, we model and analyze the microarchitectural characteristics of a wide range of state-of-the-art embedded system microprocessors, and evaluate the microprocessors' applicability to IoT edge computing. Using these analysis as foundation, we discuss the tradeoffs of", "after": ", to enable quicker workload characterizations for IoT microprocessors. We then survey and discuss", "start_char_pos": 955, "end_char_pos": 1247 }, { "type": "A", "before": null, "after": "and computing paradigms", "start_char_pos": 1291, "end_char_pos": 1291 }, { "type": "R", "before": "for next-generation IoT devices. Our work provides insights into the impacts of microarchitectural characteristics on microprocessors' energy consumption, performance, and efficiency for various IoT application execution requirements. Our work also provides", "after": ". This paper provides", "start_char_pos": 1416, "end_char_pos": 1673 }, { "type": "D", "before": "edge computing in", "after": null, "start_char_pos": 1768, "end_char_pos": 1785 } ]
[ 0, 211, 358, 582, 786, 1183, 1448, 1650 ]
1603.02438
1
The paper models foreign capital inflow from the developed to the developing countries in a stochastic dynamic programming framework. The model is solved by numerical technique because of the non-linearity of the functions. A number of comparative dynamic analyses explore the impact of parameters of the model on dynamic paths of capital inflow, interest rate in the international loan market and the exchange rate . The model also explores the possibility of financial crisis originating either in the developed country or in the developing country. The explanation of crisis in this structure is based on trade theoretic terms in a dynamic terms of trade framework rather than due to informational imperfections .
The paper models foreign capital inflow from the developed to the developing countries in a stochastic dynamic programming (SDP) framework. Under some regularity conditions, the existence of the solutions to the SDP problem is proved and they are then obtained by numerical technique because of the non-linearity of the related functions. A number of comparative dynamic analyses explore the impact of parameters of the model on dynamic paths of capital inflow, interest rate in the international loan market and the exchange rate .
[ { "type": "R", "before": "framework. The model is solved", "after": "(SDP) framework. Under some regularity conditions, the existence of the solutions to the SDP problem is proved and they are then obtained", "start_char_pos": 123, "end_char_pos": 153 }, { "type": "A", "before": null, "after": "related", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "D", "before": ". The model also explores the possibility of financial crisis originating either in the developed country or in the developing country. The explanation of crisis in this structure is based on trade theoretic terms in a dynamic terms of trade framework rather than due to informational imperfections", "after": null, "start_char_pos": 417, "end_char_pos": 715 } ]
[ 0, 133, 224, 418, 552 ]
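The 1603.02438 record above solves a stochastic dynamic programming problem numerically because its functions are non-linear. A minimal sketch of the generic technique (value-function iteration on a discretized state, control and shock grid) follows; the grids, reward and transition rule are hypothetical placeholders, not the paper's capital-inflow model.

import numpy as np

# Generic value-function iteration for a discretized stochastic dynamic program.
# Reward and transition below are illustrative placeholders only.
beta = 0.95
states = np.linspace(0.1, 10.0, 60)        # state grid
actions = np.linspace(0.0, 1.0, 20)        # control grid
shocks = np.array([0.9, 1.0, 1.1])         # i.i.d. multiplicative shocks
probs = np.array([0.25, 0.5, 0.25])

def reward(s, a):
    return np.log(1e-8 + a * s)            # placeholder non-linear objective

def transition(s, a, z):
    nxt = z * (1.0 - a) * s + 0.1          # placeholder non-linear law of motion
    return np.clip(nxt, states[0], states[-1])

V = np.zeros_like(states)
for _ in range(1000):
    V_new = np.empty_like(V)
    for i, s in enumerate(states):
        candidates = []
        for a in actions:
            cont = sum(p * np.interp(transition(s, a, z), states, V)
                       for z, p in zip(shocks, probs))
            candidates.append(reward(s, a) + beta * cont)
        V_new[i] = max(candidates)
    if np.max(np.abs(V_new - V)) < 1e-8:   # contraction guarantees convergence
        V = V_new
        break
    V = V_new
print("value at the smallest state:", V[0])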
1603.02615
1
The estimation of risk measured in terms of a risk measure is typically done in two steps: in the first step, the distribution is estimated by statistical methods, either parametric or non-parametric. In the second step, the estimated distribution is considered as true distribution and the targeted risk-measure is computed. In the parametric case this is achieved by using the formula for the risk-measure in the model and inserting the estimated parameters. It is well-known that this procedure is not efficient because the highly nonlinear mapping from model parameters to the risk-measure introduces an additional biases. Statistical experiments show that this bias leads to a systematic underestimation of risk. In this regard we introduce the concept of unbiasedness to the estimation of risk . We show that an appropriate bias correction is available for many well known estimators. In particular, we consider value-at-risk and tail value-at-risk (expected shortfall ). In the special case of normal distributions, closed-formed solutions for unbiased estimators are given. For the general case we propose a bootstrapping algorithm and illustrate the outcomes by several data experiments .
The estimation of risk measures recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which leads to a systematic underestimation of risk. In this regard , we introduce a novel notion of unbiasedness to the estimation of risk which is motivated from economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk ). In the special case of normal distributions, closed-formed solutions for unbiased estimators can be obtained. We present a number of motivating examples which show the outperformance of unbiased estimators in many circumstances. The unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties .
[ { "type": "R", "before": "measured in terms of a risk measure is typically done in two steps: in the first step, the distribution is estimated by statistical methods, either parametric or non-parametric. In the second step, the estimated distribution is considered as true distribution and the targeted risk-measure is computed. In the parametric case this is achieved by using the formula for the risk-measure in the model and inserting the estimated parameters. It is well-known that this procedure is not efficient because the highly nonlinear mapping from model parameters to the risk-measure introduces an additional biases. Statistical experiments show that this bias", "after": "measures recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which", "start_char_pos": 23, "end_char_pos": 670 }, { "type": "R", "before": "we introduce the concept", "after": ", we introduce a novel notion", "start_char_pos": 733, "end_char_pos": 757 }, { "type": "R", "before": ".", "after": "which is motivated from economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness.", "start_char_pos": 800, "end_char_pos": 801 }, { "type": "R", "before": "well known", "after": "well-known", "start_char_pos": 868, "end_char_pos": 878 }, { "type": "R", "before": "tail", "after": "expected shortfall (tail", "start_char_pos": 936, "end_char_pos": 940 }, { "type": "D", "before": "(expected shortfall", "after": null, "start_char_pos": 955, "end_char_pos": 974 }, { "type": "R", "before": "are given. For the general case we propose a bootstrapping algorithm and illustrate the outcomes by several data experiments", "after": "can be obtained. We present a number of motivating examples which show the outperformance of unbiased estimators in many circumstances. The unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties", "start_char_pos": 1071, "end_char_pos": 1195 } ]
[ 0, 200, 325, 460, 626, 717, 801, 890, 977, 1081 ]
1603.02615
2
The estimation of risk measures recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which leads to a systematic underestimation of risk. In this regard, we introduce a novel notion of unbiasedness to the estimation of risk which is motivated from economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk). In the special case of normal distributions, closed-formed solutions for unbiased estimators can be obtained. We present a number of motivating examples which show the outperformance of unbiased estimators in many circumstances. The unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties.
The estimation of risk measures recently gained a lot of attention, partly because of the backtesting issues of expected shortfall related to elicitability. In this work we shed a new and fundamental light on optimal estimation procedures of risk measures in terms of bias. We show that once the parameters of a model need to be estimated, one has to take additional care when estimating risks. The typical plug-in approach, for example, introduces a bias which leads to a systematic underestimation of risk. In this regard, we introduce a novel notion of unbiasedness to the estimation of risk which is motivated by economic principles. In general, the proposed concept does not coincide with the well-known statistical notion of unbiasedness. We show that an appropriate bias correction is available for many well-known estimators. In particular, we consider value-at-risk and expected shortfall (tail value-at-risk). In the special case of normal distributions, closed-formed solutions for unbiased estimators can be obtained. We present a number of motivating examples which show the outperformance of unbiased estimators in many circumstances. The unbiasedness has a direct impact on backtesting and therefore adds a further viewpoint to established statistical properties.
[ { "type": "A", "before": null, "after": "of risk measures", "start_char_pos": 239, "end_char_pos": 239 }, { "type": "R", "before": "from", "after": "by", "start_char_pos": 598, "end_char_pos": 602 } ]
[ 0, 156, 257, 378, 492, 623, 730, 819, 905, 1015, 1134 ]
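Both 1603.02615 records above state that the plug-in approach to risk estimation systematically underestimates risk. The Monte Carlo sketch below illustrates that effect for Gaussian value-at-risk by checking how often a fresh loss exceeds the plug-in estimate; the sample size, confidence level and experiment design are illustrative choices, not taken from the paper.

import numpy as np
from scipy.stats import norm

# Breach frequency of the Gaussian plug-in value-at-risk estimator: with estimated
# mean and standard deviation, a new loss exceeds the estimated VaR more often than
# the nominal 1 - alpha, i.e. the plug-in estimator underestimates risk.
rng = np.random.default_rng(0)
mu, sigma, n, alpha = 0.0, 1.0, 20, 0.95
trials = 50_000

breaches = 0
for _ in range(trials):
    x = rng.normal(mu, sigma, size=n)
    var_hat = -(x.mean() + x.std(ddof=1) * norm.ppf(1 - alpha))  # plug-in VaR of the loss -X
    breaches += (-rng.normal(mu, sigma) > var_hat)               # fresh out-of-sample loss
print("nominal breach probability :", round(1 - alpha, 3))
print("observed breach frequency  :", breaches / trials)         # noticeably above 1 - alpha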
1603.02896
1
We compute a sharp small-time estimate for the price of a basket call under a bi-variate SABR model with both \beta parameters equal to 1 and three correlation parameters, which extends the work of Bayer,Friz&Laurence [BFL14] for the multivariate Black-Scholes flat vol model , and we show that the BFL14%DIFDELCMD < ] %%% result has to be adjusted for strikes above a certain critical value K^* where the convolution density admits a higher order saddlepoint . The result follows from the heat kernel on hyperbolic space for n=3 combined with the Bellaiche [Bel81] heat kernel expansion and Laplace's method, and we give numerical results which corroborate our asymptotic formulae. Similar to the Black-Scholes case, we find that there is a phase transition from one "most-likely" path to two most-likely paths beyond some critical K^*.
We compute a sharp small-time estimate for the price of a basket call under a bi-variate SABR model with both \beta parameters equal to 1 and three correlation parameters, which extends the work of Bayer,Friz&Laurence [BFL14] for the multivariate Black-Scholes flat vol model %DIFDELCMD < ] %%% . The result follows from the heat kernel on hyperbolic space for n=3 combined with the Bellaiche [Bel81] heat kernel expansion and Laplace's method, and we give numerical results which corroborate our asymptotic formulae. Similar to the Black-Scholes case, we find that there is a phase transition from one "most-likely" path to two most-likely paths beyond some critical K^*.
[ { "type": "D", "before": ", and we show that the", "after": null, "start_char_pos": 276, "end_char_pos": 298 }, { "type": "D", "before": "BFL14", "after": null, "start_char_pos": 299, "end_char_pos": 304 }, { "type": "D", "before": "result has to be adjusted for strikes above a certain critical value K^* where the convolution density admits a higher order saddlepoint", "after": null, "start_char_pos": 323, "end_char_pos": 459 } ]
[ 0, 461, 682 ]
1603.03198
1
We consider the problem of modelling the term structure of bonds subject to default risk , under minimal assumptions on the default time. In particular, we do not assume the existence of a default intensity and we therefore allow for the possibility of default at predictable times. It turns out that this requires the introduction of an additional term to the forward-rate approach by Heath, Jarrow and Morton (1992). This term is driven by a random measure encoding information about those times where default can happen with positive probability. In this framework, we derive necessary and sufficient conditions for a reference probability measure to be a local martingale measure for the large financial market of credit risky bonds, also considering general recovery schemes . To this end, we establish a new Fubini theorem with respect to a random measure by means of enlargement of filtrations techniques .
We consider the problem of modelling the term structure of defaultable bonds , under minimal assumptions on the default time. In particular, we do not assume the existence of a default intensity and we therefore allow for the possibility of default at predictable times. It turns out that this requires the introduction of an additional term in the forward rate approach by Heath, Jarrow and Morton (1992). This term is driven by a random measure encoding information about those times where default can happen with positive probability. In this framework, we derive necessary and sufficient conditions for a reference probability measure to be a local martingale measure for the large financial market of credit risky bonds, also considering general recovery schemes .
[ { "type": "R", "before": "bonds subject to default risk", "after": "defaultable bonds", "start_char_pos": 59, "end_char_pos": 88 }, { "type": "R", "before": "to the forward-rate", "after": "in the forward rate", "start_char_pos": 354, "end_char_pos": 373 }, { "type": "D", "before": ". To this end, we establish a new Fubini theorem with respect to a random measure by means of enlargement of filtrations techniques", "after": null, "start_char_pos": 780, "end_char_pos": 911 } ]
[ 0, 137, 282, 418, 549, 781 ]
1603.03355
1
Living cells display a remarkable capacity to compartmentalize their functional biochemistry spatially . A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve crossing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Together with associated soluble proteins, the nuclear pores constitute an important transport system. Beyond simply gating diffusion, this system is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production . In contrast to conventional engineering approaches to demixing such as reverse osmosis, the biological system operates continuously, without application of cyclic changes in pressure or other intrinsic thermodynamic parameters .
Living cells display a remarkable capacity to compartmentalize their functional biochemistry . A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve crossing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Beyond simply gating diffusion, the system of nuclear pores and associated transport receptors is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solution exchange. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production .
[ { "type": "D", "before": "spatially", "after": null, "start_char_pos": 93, "end_char_pos": 102 }, { "type": "D", "before": "Together with associated soluble proteins, the nuclear pores constitute an important transport system.", "after": null, "start_char_pos": 434, "end_char_pos": 536 }, { "type": "R", "before": "this system", "after": "the system of nuclear pores and associated transport receptors", "start_char_pos": 569, "end_char_pos": 580 }, { "type": "A", "before": null, "after": "In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solution exchange.", "start_char_pos": 707, "end_char_pos": 707 }, { "type": "D", "before": ". In contrast to conventional engineering approaches to demixing such as reverse osmosis, the biological system operates continuously, without application of cyclic changes in pressure or other intrinsic thermodynamic parameters", "after": null, "start_char_pos": 1058, "end_char_pos": 1286 } ]
[ 0, 104, 160, 289, 433, 536, 706, 826, 1059 ]
1603.03355
2
Living cells display a remarkable capacity to compartmentalize their functional biochemistry. A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve crossing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Beyond simply gating diffusion, the system of nuclear pores and associated transport receptors is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solution exchange. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production.
Living cells display a remarkable capacity to compartmentalize their functional biochemistry. A particularly fascinating example is the cell nucleus. Exchange of macromolecules between the nucleus and the surrounding cytoplasm does not involve traversing a lipid bilayer membrane. Instead, large protein channels known as nuclear pores cross the nuclear envelope and regulate the passage of other proteins and RNA molecules. Beyond simply gating diffusion, the system of nuclear pores and associated transport receptors is able to generate substantial concentration gradients, at the energetic expense of guanosine triphosphate (GTP) hydrolysis. In contrast to conventional approaches to demixing such as reverse osmosis or dialysis, the biological system operates continuously, without application of cyclic changes in pressure or solvent exchange. Abstracting the biological paradigm, we examine this transport system as a thermodynamic machine of solution demixing. Building on the construct of free energy transduction and biochemical kinetics, we find conditions for stable operation and optimization of the concentration gradients as a function of dissipation in the form of entropy production.
[ { "type": "R", "before": "crossing", "after": "traversing", "start_char_pos": 244, "end_char_pos": 252 }, { "type": "R", "before": "solution", "after": "solvent", "start_char_pos": 830, "end_char_pos": 838 } ]
[ 0, 93, 149, 278, 422, 643, 848, 967 ]
1603.03577
1
We consider the Combinatorial RNA Design problem, a minimal instance of RNA design where one must produce an RNA sequence that adopts a given secondary structure as its minimal free-energy structure. We consider two free-energy models where the contributions of base pairs are additive and independent: the purely combinatorial Watson-Crick model, which only allows equally-contributing A -- U and C -- G base pairs, and the real-valued Nussinov-Jacobson model, which associates arbitrary energies to A--U , C -- G and G -- U base pairs. We first provide a complete characterization of designable structures using restricted alphabets and, in the four-letter alphabet, provide a complete characterization for designable structures without unpaired bases. When unpaired bases are allowed, we characterize extensive classes of (non-)designable structures, and prove the closure of the set of designable structures under the stutter operation. Membership of a given structure to any of the classes can be tested in \Theta(n) time, including the generation of a solution sequence for 2 Jozef Hale\v{s positive instances. Finally, we consider a structure-approximating relaxation of the design, and provide a \Theta(n) algorithm which, given a structure S that avoids two trivially non-designable motifs, transforms S into a designable structure constructively by adding at most one base-pair to each of its stems . Acknowledgements The authors would like to thank C\'edric Chauve (Simon Fraser University) for fruitful discussions and constructive criticisms. YP is greatly indebted to the French Centre National de la Recherche Scientifique and the Pacific Institute for the Mathematical Sciences for funding an extended visit at the Simon Fraser University .
We consider the Combinatorial RNA Design problem, a minimal instance of RNA design where one must produce an RNA sequence that adopts a given secondary structure as its minimal free-energy structure. We consider two free-energy models where the contributions of base pairs are additive and independent: the purely combinatorial Watson-Crick model, which only allows equally-contributing A -- U and C -- G base pairs, and the real-valued Nussinov-Jacobson model, which associates arbitrary energies to A -- U , C -- G and G -- U base pairs. We first provide a complete characterization of designable structures using restricted alphabets and, in the four-letter alphabet, provide a complete characterization for designable structures without unpaired bases. When unpaired bases are allowed, we characterize extensive classes of (non-)designable structures, and prove the closure of the set of designable structures under the stutter operation. Membership of a given structure to any of the classes can be tested in \Theta(n) time, including the generation of a solution sequence for positive instances. Finally, we consider a structure-approximating relaxation of the design, and provide a \Theta(n) algorithm which, given a structure S that avoids two trivially non-designable motifs, transforms S into a designable structure constructively by adding at most one base-pair to each of its stems .
[ { "type": "R", "before": "A--U", "after": "A -- U", "start_char_pos": 501, "end_char_pos": 505 }, { "type": "D", "before": "2 Jozef Hale\\v{s", "after": null, "start_char_pos": 1080, "end_char_pos": 1096 }, { "type": "D", "before": ". Acknowledgements The authors would like to thank C\\'edric Chauve (Simon Fraser University) for fruitful discussions and constructive criticisms. YP is greatly indebted to the French Centre National de la Recherche Scientifique and the Pacific Institute for the Mathematical Sciences for funding an extended visit at the Simon Fraser University", "after": null, "start_char_pos": 1409, "end_char_pos": 1754 } ]
[ 0, 199, 537, 754, 940, 1116, 1555 ]
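The 1603.03577 record above works with the Watson-Crick and Nussinov-Jacobson base-pairing models (A-U, C-G and G-U pairs with additive, independent contributions). For background, a minimal sketch of the classical Nussinov dynamic program that maximizes the number of such pairs is shown below, with an assumed minimum hairpin-loop length of 3; it illustrates the folding model only, not the paper's design (inverse-folding) algorithms.

# Classical Nussinov dynamic program: maximize the number of allowed base pairs
# (A-U, C-G, G-U wobble) in an RNA sequence, with at least MIN_LOOP unpaired
# bases inside every hairpin (a common convention, chosen here for illustration).
PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}
MIN_LOOP = 3

def nussinov(seq):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(MIN_LOOP + 1, n):          # distance j - i
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case: j left unpaired
            for k in range(i, j - MIN_LOOP):     # case: j paired with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(nussinov("GGGAAAUCC"))   # 3 pairs for this toy hairpin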
1603.04364
1
We obtain general, exact formulas for the overlaps between the eigenvectors of large correlated random matrices, with additive or multiplicative noise. These results have potential applications in many different contexts, from quantum thermalisation to high dimensional statistics. We apply our results to the case of empirical correlation matrices, that allow us to estimate reliably the width of the spectrum of the 'true ' underlying correlation matrix, even when the latter is very close to the identity matrix . We illustrate our results on the example of stock returns correlations, that clearly reveal a non trivial structure for the bulk eigenvalues .
We obtain general, exact formulas for the overlaps between the eigenvectors of large correlated random matrices, with additive or multiplicative noise. These results have potential applications in many different contexts, from quantum thermalisation to high dimensional statistics. We find that the overlaps only depend on measurable quantities, and do not require the knowledge of the underlying "true" (noiseless) matrices. We apply our results to the case of empirical correlation matrices, that allow us to estimate reliably the width of the spectrum of the true correlation matrix, even when the latter is very close to the identity . We illustrate our results on the example of stock returns correlations, that clearly reveal a non trivial structure for the bulk eigenvalues . We also apply our results to the problem of matrix denoising in high dimension .
[ { "type": "A", "before": null, "after": "find that the overlaps only depend on measurable quantities, and do not require the knowledge of the underlying \"true\" (noiseless) matrices. We", "start_char_pos": 285, "end_char_pos": 285 }, { "type": "R", "before": "'true ' underlying", "after": "true", "start_char_pos": 419, "end_char_pos": 437 }, { "type": "D", "before": "matrix", "after": null, "start_char_pos": 509, "end_char_pos": 515 }, { "type": "A", "before": null, "after": ". We also apply our results to the problem of matrix denoising in high dimension", "start_char_pos": 659, "end_char_pos": 659 } ]
[ 0, 151, 281, 517 ]
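The 1603.04364 record above gives exact formulas for eigenvector overlaps between noisy and true correlation matrices. The toy experiment below only measures such overlaps empirically, for a one-factor correlation matrix observed through a finite sample; it does not implement the paper's analytical results.

import numpy as np

# Squared overlaps between eigenvectors of an empirical correlation matrix and
# those of the known population matrix it was sampled from (toy one-factor model).
rng = np.random.default_rng(1)
N, T = 100, 400                                   # dimension and sample size

beta = 0.3 * np.ones(N)
C_true = np.outer(beta, beta) + (1 - 0.3**2) * np.eye(N)   # valid correlation matrix

L = np.linalg.cholesky(C_true)
X = rng.standard_normal((T, N)) @ L.T             # rows ~ N(0, C_true)
E = np.corrcoef(X, rowvar=False)                  # empirical correlation matrix

w_true, V_true = np.linalg.eigh(C_true)           # ascending eigenvalues
w_emp, V_emp = np.linalg.eigh(E)

overlaps = (V_emp.T @ V_true[:, -1]) ** 2         # top true vs. all empirical eigenvectors
print("overlap of top empirical with top true eigenvector:", overlaps[-1])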
1603.04477
1
We discuss the transition paths in a coupled bistable system consisting of interacting multiple identical bistable motifs. We propose a simple model of coupled bistable gene circuits as an example, and show that its transition paths are bifurcating. We then derive a criterion to predict the bifurcation of transition paths in a generalized coupled bistable system. We confirm the validity of the theory for the example system by numerical simulation. We also demonstrate in the example system that, if the steady states of individual gene circuits are not changed by the coupling, the bifurcation pattern is not dependent on the number of gene circuits. We further show that the transition rate exponentially decreases with the number of gene circuits when the transition path does not bifurcate, while a bifurcation softens this decrease. Finally we show that multiplicative noises facilitate the bifurcation of transition paths .
We discuss the transition paths in a coupled bistable system consisting of interacting multiple identical bistable motifs. We propose a simple model of coupled bistable gene circuits as an example, and show that its transition paths are bifurcating. We then derive a criterion to predict the bifurcation of transition paths in a generalized coupled bistable system. We confirm the validity of the theory for the example system by numerical simulation. We also demonstrate in the example system that, if the steady states of individual gene circuits are not changed by the coupling, the bifurcation pattern is not dependent on the number of gene circuits. We further show that the transition rate exponentially decreases with the number of gene circuits when the transition path does not bifurcate, while a bifurcation facilitates the transition by lowering the quasi-potential energy barrier .
[ { "type": "R", "before": "softens this decrease. Finally we show that multiplicative noises facilitate the bifurcation of transition paths", "after": "facilitates the transition by lowering the quasi-potential energy barrier", "start_char_pos": 818, "end_char_pos": 930 } ]
[ 0, 122, 249, 365, 451, 654, 840 ]
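The 1603.04477 record above studies transitions between the steady states of coupled bistable motifs. As a much simpler stand-in, the sketch below runs an Euler-Maruyama simulation of a single overdamped particle in a double-well potential and counts well-to-well transitions; the potential, noise level and thresholds are illustrative and unrelated to the paper's gene-circuit model.

import numpy as np

# Euler-Maruyama simulation of an overdamped particle in the double-well potential
# U(x) = x^4/4 - x^2/2, a minimal caricature of a single bistable motif.
rng = np.random.default_rng(2)
dt, steps, D = 1e-3, 500_000, 0.35

x = -1.0                                  # start in the left well
state, transitions = -1, 0
for _ in range(steps):
    x += -(x**3 - x) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    if state == -1 and x > 0.8:           # reached the right well
        state, transitions = 1, transitions + 1
    elif state == 1 and x < -0.8:         # returned to the left well
        state, transitions = -1, transitions + 1
print("well-to-well transitions observed:", transitions)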
1603.05700
1
In this paper, we give a general time-varying parameter model, where the multidimensional parameter follows a continuous local martingale. As such, we call it the locally parametric model . The quantity of interest is defined as the integrated value over time of the parameter process \Theta := T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator of \Theta based on the original (non time-varying) parametric model estimator and conditions under which we can show consistency and the corresponding limit distribution. We show that the LPM class contains some models that come from popular problems in the high-frequency financial econometrics literature (estimating volatility, high-frequency covariance, integrated betas, leverage effect, volatility of volatility), as well as a new general asset-price diffusion model which allows for endogenous observations and time-varying noise which can be auto-correlated and correlated with the efficient price and the sampling times. Finally, as an example of how to apply the limit theory provided in this paper, we build a time-varying friction parameter extension of the (semiparametric) model with uncertainty zones (Robert and Rosenbaum (2012)) , which is noisy and endogenous, and we show that we can verify the conditions for the estimation of integrated volatility .
In this paper, we give a general time-varying parameter model, where the multidimensional parameter follows a continuous local martingale. As such, we call it the locally parametric model (LPM) . The quantity of interest is defined as the integrated value over time of the parameter process \Theta := T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator (LPE) of \Theta based on the original (non time-varying) parametric model estimator and conditions under which we can show the central limit theorem. As an example of how to apply the limit theory provided in this paper, we build a time-varying friction parameter extension of the (semiparametric) model with uncertainty zones (Robert and Rosenbaum (2012)) and we show that we can verify the conditions for the estimation of integrated volatility . Moreover, practical applications in time series, such as the optimal block length and local bias-correction, are discussed and numerical simulations are carried on the local MLE of a time-varying parameter MA(1) model to illustrate them .
[ { "type": "A", "before": null, "after": "(LPM)", "start_char_pos": 188, "end_char_pos": 188 }, { "type": "A", "before": null, "after": "(LPE)", "start_char_pos": 367, "end_char_pos": 367 }, { "type": "R", "before": "consistency and the corresponding limit distribution. We show that the LPM class contains some models that come from popular problems in the high-frequency financial econometrics literature (estimating volatility, high-frequency covariance, integrated betas, leverage effect, volatility of volatility), as well as a new general asset-price diffusion model which allows for endogenous observations and time-varying noise which can be auto-correlated and correlated with the efficient price and the sampling times. Finally, as", "after": "the central limit theorem. As", "start_char_pos": 485, "end_char_pos": 1009 }, { "type": "R", "before": ", which is noisy and endogenous, and", "after": "and", "start_char_pos": 1214, "end_char_pos": 1250 }, { "type": "A", "before": null, "after": ". Moreover, practical applications in time series, such as the optimal block length and local bias-correction, are discussed and numerical simulations are carried on the local MLE of a time-varying parameter MA(1) model to illustrate them", "start_char_pos": 1337, "end_char_pos": 1337 } ]
[ 0, 138, 190, 326, 538, 997 ]
1603.05700
2
In this paper, we give a general time-varying parameter model, where the multidimensional parameter follows a continuous local martingale. As such, we call it the locally parametric model (LPM). The quantity of interest is defined as the integrated value over time of the parameter process \Theta : = T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator (LPE) of \Theta based on the original (non time-varying) parametric model estimator and conditions under which we can show the central limit theorem. As an example of how to apply the limit theory provided in this paper, we build a time-varying friction parameter extension of the (semiparametric) model with uncertainty zones (Robert and Rosenbaum (2012)) and we show that we can verify the conditions for the estimation of integrated volatility. Moreover, practical applications in time series, such as the optimal block length and local bias-correction, are discussed and numerical simulations are carried on the local MLE of a time-varying parameter MA(1) model to illustrate them .
In this paper, we give a general time-varying parameter model, where the multidimensional parameter , which possibly includes jumps, follows a local martingale. The quantity of interest is defined as the integrated value over time of the parameter process \Theta = T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator (LPE) of \Theta and conditions under which we can show the central limit theorem. The framework is restricted to the specific convergence rate n^{1/2 time-varying model with uncertainty zones and the time-varying MA(1) .
[ { "type": "R", "before": "follows a continuous", "after": ", which possibly includes jumps, follows a", "start_char_pos": 100, "end_char_pos": 120 }, { "type": "D", "before": "As such, we call it the locally parametric model (LPM).", "after": null, "start_char_pos": 139, "end_char_pos": 194 }, { "type": "D", "before": ":", "after": null, "start_char_pos": 297, "end_char_pos": 298 }, { "type": "D", "before": "based on the original (non time-varying) parametric model estimator", "after": null, "start_char_pos": 388, "end_char_pos": 455 }, { "type": "R", "before": "As an example of how to apply the limit theory provided in this paper, we build a time-varying friction parameter extension of", "after": "The framework is restricted to", "start_char_pos": 522, "end_char_pos": 648 }, { "type": "R", "before": "(semiparametric) model with uncertainty zones (Robert and Rosenbaum (2012)) and we show that we can verify the conditions for the estimation of integrated volatility. Moreover, practical applications in time series, such as the optimal block length and local bias-correction, are discussed and numerical simulations are carried on the local MLE of a", "after": "specific convergence rate n^{1/2", "start_char_pos": 653, "end_char_pos": 1002 }, { "type": "R", "before": "parameter", "after": "model with uncertainty zones and the time-varying", "start_char_pos": 1016, "end_char_pos": 1025 }, { "type": "D", "before": "model to illustrate them", "after": null, "start_char_pos": 1032, "end_char_pos": 1056 } ]
[ 0, 138, 194, 331, 521, 819 ]
1603.05700
3
In this paper, we give a general time-varying parameter model, where the multidimensional parameter , which possibly includes jumps , follows a local martingale . The quantity of interest is defined as the integrated value over time of the parameter process \Theta = T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator (LPE) of \Theta and conditions under which we can show the central limit theorem. The framework is restricted to the specific convergence rate n^{1/2}. Several examples of LPE , in which the conditions are shown to be satisfied assuming that the microstructure noise is O_p(1/\sqrt{n : estimation of volatility, powers of volatility, high-frequency covariance and volatility when incorporating trading information . The LPE considered in those cases are variations of the maximum likelihood estimator. We also treat the case of time-varying model with uncertainty zones and the time-varying MA(1).
In this paper, we give a general time-varying parameter model, where the multidimensional parameter possibly includes jumps . The quantity of interest is defined as the integrated value over time of the parameter process \Theta = T^{-1} \int_0^T \theta_t^* dt. We provide a local parametric estimator (LPE) of \Theta and conditions under which we can show the central limit theorem. Roughly speaking those conditions correspond to some uniform limit theory in the parametric version of the problem. The framework is restricted to the specific convergence rate n^{1/2}. Several examples of LPE are studied : estimation of volatility, powers of volatility, volatility when incorporating trading information and time-varying MA(1).
[ { "type": "D", "before": ", which", "after": null, "start_char_pos": 100, "end_char_pos": 107 }, { "type": "D", "before": ", follows a local martingale", "after": null, "start_char_pos": 132, "end_char_pos": 160 }, { "type": "A", "before": null, "after": "Roughly speaking those conditions correspond to some uniform limit theory in the parametric version of the problem.", "start_char_pos": 420, "end_char_pos": 420 }, { "type": "R", "before": ", in which the conditions are shown to be satisfied assuming that the microstructure noise is O_p(1/\\sqrt{n", "after": "are studied", "start_char_pos": 515, "end_char_pos": 622 }, { "type": "D", "before": "high-frequency covariance and", "after": null, "start_char_pos": 673, "end_char_pos": 702 }, { "type": "R", "before": ". The LPE considered in those cases are variations of the maximum likelihood estimator. We also treat the case of time-varying model with uncertainty zones and the", "after": "and", "start_char_pos": 753, "end_char_pos": 916 } ]
[ 0, 162, 297, 419, 490, 754, 840 ]
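The three 1603.05700 records above estimate the integrated value \Theta = T^{-1} \int_0^T \theta_t^* dt of a time-varying parameter by combining local parametric fits; volatility estimation is one of the listed examples. The sketch below applies that idea in its simplest form, fitting a constant variance on consecutive blocks of simulated high-frequency returns and averaging the block estimates; the data-generating process and block length are illustrative choices only (no microstructure noise or jumps).

import numpy as np

# Toy local-parametric estimate of the integrated variance (1/T) * int_0^T sigma_t^2 dt:
# split the increments into blocks, fit a constant variance on each block
# (local Gaussian MLE), then average the block-level estimates.
rng = np.random.default_rng(3)
n, T = 23_400, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
sigma = 0.2 + 0.1 * np.sin(2 * np.pi * t[:-1])          # time-varying volatility
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)       # high-frequency increments

block = 390                                             # illustrative block length
estimates = []
for start in range(0, n, block):
    chunk = dX[start:start + block]
    estimates.append(np.mean(chunk ** 2) / dt)          # local variance estimate
theta_hat = np.mean(estimates)                          # average over blocks

true_theta = np.mean(sigma ** 2)                        # Riemann approximation of the integral
print("estimated integrated variance:", theta_hat)
print("true integrated variance     :", true_theta)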
1603.05914
1
We propose a similarity measure between portfolios with possibly very different numbers of assets and apply it to a historical database of institutional holdings ranging from 1999 to the end of 2013. The resulting portfolio similarity measure increased steadily before the global financial crisis , and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that liquidation risk from fire sales was maximal at that time. After a sharp drop in 2008, portfolio similarity resumed its growth in 2009, with a notable acceleration in 2013, reaching levels not seen since 2007.
Common asset holding by financial institutions, namely portfolio overlap, is nowadays regarded as an important channel for financial contagion with the potential to trigger fire sales and thus severe losses at the systemic level. In this paper we propose a method to assess the statistical significance of the overlap between pairs of heterogeneously diversified portfolios, which then allows us to build a validated network of financial institutions where links indicate potential contagion channels due to realized portfolio overlaps. The method is implemented on a historical database of institutional holdings ranging from 1999 to the end of 2013, but can be in general applied to any bipartite network where the presence of similar sets of neighbors is of interest. We find that the proportion of validated network links (i.e., of statistically significant overlaps) increased steadily before the 2007-2008 global financial crisis and reached a maximum when the crisis occurred. We argue that the nature of this measure implies that systemic risk from fire sales liquidation was maximal at that time. After a sharp drop in 2008, systemic risk resumed its growth in 2009, with a notable acceleration in 2013, reaching levels not seen since 2007. We finally show that market trends tend to be amplified in the portfolios identified by the algorithm, such that it is possible to have an informative signal about financial institutions that are about to suffer (enjoy) the most significant losses (gains).
[ { "type": "R", "before": "We propose a similarity measure between portfolios with possibly very different numbers of assets and apply it to", "after": "Common asset holding by financial institutions, namely portfolio overlap, is nowadays regarded as an important channel for financial contagion with the potential to trigger fire sales and thus severe losses at the systemic level. In this paper we propose a method to assess the statistical significance of the overlap between pairs of heterogeneously diversified portfolios, which then allows us to build a validated network of financial institutions where links indicate potential contagion channels due to realized portfolio overlaps. The method is implemented on", "start_char_pos": 0, "end_char_pos": 113 }, { "type": "R", "before": "2013. The resulting portfolio similarity measure", "after": "2013, but can be in general applied to any bipartite network where the presence of similar sets of neighbors is of interest. We find that the proportion of validated network links (i.e., of statistically significant overlaps)", "start_char_pos": 194, "end_char_pos": 242 }, { "type": "A", "before": null, "after": "2007-2008", "start_char_pos": 273, "end_char_pos": 273 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 298, "end_char_pos": 299 }, { "type": "R", "before": "liquidation", "after": "systemic", "start_char_pos": 402, "end_char_pos": 413 }, { "type": "A", "before": null, "after": "liquidation", "start_char_pos": 435, "end_char_pos": 435 }, { "type": "R", "before": "portfolio similarity", "after": "systemic risk", "start_char_pos": 490, "end_char_pos": 510 }, { "type": "A", "before": null, "after": "We finally show that market trends tend to be amplified in the portfolios identified by the algorithm, such that it is possible to have an informative signal about financial institutions that are about to suffer (enjoy) the most significant losses (gains).", "start_char_pos": 613, "end_char_pos": 613 } ]
[ 0, 199, 347, 461 ]
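The 1603.05914 record above builds a validated network from the overlap of institutional portfolios. The snippet below only computes a naive pairwise overlap matrix (cosine similarity of portfolio weight vectors) from a toy holdings matrix; the statistical-significance validation against heterogeneous diversification, which is the paper's contribution, is not reproduced.

import numpy as np

# Pairwise overlap between portfolios from a (institutions x assets) matrix of
# dollar positions, measured here as cosine similarity of the weight vectors.
rng = np.random.default_rng(4)
holdings = rng.exponential(scale=1.0, size=(5, 20))     # toy positions
holdings[rng.random(holdings.shape) < 0.6] = 0.0        # make portfolios sparse

weights = holdings / holdings.sum(axis=1, keepdims=True)
unit = weights / np.linalg.norm(weights, axis=1, keepdims=True)
overlap = unit @ unit.T                                  # overlap[i, j] in [0, 1]
print(np.round(overlap, 2))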
1603.06050
2
Over the past half century, portfolio managers have carefully documented the advantages of the equally weighted S\&P 500 portfolio as well as the often overlooked disadvantages of the market capitalization weighted S\&P 500 portfolio (see Bloom, Uppal, Jacobs, Treynor). However, rather surprisingly, portfolio allocation based on the seven simple transformations of John Tukey's ladder are nowhere to be found in the literature. In this work, we consider the S\&P 500 portfolio over the 1958-2015 time horizon weighted using John Tukey's transformational ladder (Tukey2): 1/x^2,\,\, 1/x,\,\, 1/x,\,\, \text{log}(x),\,\, \sqrt{x},\,\, x, \\ \,\, \,\, } x^2, where x is the market capitalization weighted portfolio. We find that the 1/x^2 weighting strategy produces cumulative returns which significantly dominates all other portfolios, achieving an annual geometric mean return of 20.889\\%DIF < . Further, the 1/x^2 weighting strategy is superior to a 1/x weighting strategy, which is in turn superior to a 1/\sqrt{x} weighted portfolio, and so forth, culminating with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of John Tukey's transformational ladder. Rather shockingly, the order of cumulative returns precisely follows that of John Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon.\end{abstract} %DIF > over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. Indeed, the order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon, and we take care to differentiate it from the well-known "small-firm effect."
Over the past half-century, the empirical finance community has produced vast literature on the advantages of the equally weighted S\&P 500 portfolio as well as the often overlooked disadvantages of the market capitalization weighted Standard and Poor's ( S\&P 500 ) portfolio (see Bloom, Uppal, Jacobs, Treynor). However, portfolio allocation based on the transformations of John Tukey's ladder , rather surprisingly, have remained absent from the literature. In this work, we consider the S\&P 500 portfolio over the 1958-2015 time horizon weighted by Tukey's transformational ladder (Tukey2): 1/x^2,\,\, 1/x,\,\, 1/x,\,\, \text{log}(x),\,\, \sqrt{x},\,\, x, \\ \,\, \text{and \,\, } x^2, where x is the market capitalization weighted S\&P 500 portfolio. We find that the 1/x^2 weighting strategy produces cumulative returns that significantly dominates all other portfolios, achieving a compound annual growth rate of 20.889\\%DIF < . Further, the 1/x^2 weighting strategy is superior to a 1/x weighting strategy, which is in turn superior to a 1/\sqrt{x} weighted portfolio, and so forth, culminating with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of John Tukey's transformational ladder. Rather shockingly, the order of cumulative returns precisely follows that of John Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon.\end{abstract} %DIF > over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. Indeed, the order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon, and we take care to differentiate it from the well-known "small-firm effect."
[ { "type": "R", "before": "half century, portfolio managers have carefully documented the", "after": "half-century, the empirical finance community has produced vast literature on the", "start_char_pos": 14, "end_char_pos": 76 }, { "type": "A", "before": null, "after": "Standard and Poor's (", "start_char_pos": 215, "end_char_pos": 215 }, { "type": "A", "before": null, "after": ")", "start_char_pos": 225, "end_char_pos": 225 }, { "type": "D", "before": "rather surprisingly,", "after": null, "start_char_pos": 282, "end_char_pos": 302 }, { "type": "D", "before": "seven simple", "after": null, "start_char_pos": 337, "end_char_pos": 349 }, { "type": "R", "before": "are nowhere to be found in", "after": ", rather surprisingly, have remained absent from", "start_char_pos": 389, "end_char_pos": 415 }, { "type": "R", "before": "using John", "after": "by", "start_char_pos": 522, "end_char_pos": 532 }, { "type": "A", "before": null, "after": "\\text{and", "start_char_pos": 648, "end_char_pos": 648 }, { "type": "A", "before": null, "after": "S\\&P 500", "start_char_pos": 707, "end_char_pos": 707 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 789, "end_char_pos": 794 }, { "type": "R", "before": "an annual geometric mean return", "after": "a compound annual growth rate", "start_char_pos": 851, "end_char_pos": 882 } ]
[ 0, 272, 431, 718, 1208, 1323, 1399, 1448, 1851, 1950 ]
1603.06050
3
Over the past half-century, the empirical finance community has produced vast literature on the advantages of the equally weighted S\&P 500 portfolio as well as the often overlooked disadvantages of the market capitalization weighted Standard and Poor's (S\&P 500) portfolio (see Bloom, Uppal, Jacobs, Treynor). However, portfolio allocation based on the transformations of John Tukey's ladder , rather surprisingly, have remained absent from the literature. In this work, we consider the S\&P 500 portfolio over the 1958-2015 time horizon weighted by Tukey's transformational ladder (Tukey2): 1/x^2,\,\, 1/x,\,\, 1/x,\,\, log(x),\,\, x,\,\, x, %DIFDELCMD < \\%%% \,\, and \,\, x^2, where x is the market capitalization weighted S\&P 500 portfolio. We find that the 1/x^2 weighting strategy produces cumulative returns that significantly dominates all other portfolios, achieving a compound annual growth rate of 20.889 \\%DIF < over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. Indeed, the order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon, and we take care to differentiate it from the well-known "small-firm effect."\end{abstract} %DIF > over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. The order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon.
Over the past half-century, the empirical finance community has produced vast literature on the advantages of the equally weighted S\&P 500 portfolio as well as the often overlooked disadvantages of the market capitalization weighted Standard and Poor's (S\&P 500) portfolio (see Bloom, Uppal, Jacobs, Treynor). However, portfolio allocation based on Tukey's transformational ladde have , rather surprisingly, remained absent from the literature. In this work, we consider the S\&P 500 portfolio over the 1958-2015 time horizon weighted by Tukey's transformational ladder (Tukey2): 1/x^2,\,\, 1/x,\,\, 1/x,\,\, log(x),\,\, x,\,\, x, %DIFDELCMD < \\%%% \,\, and \,\, x^2, where x is defined as the market capitalization weighted S\&P 500 portfolio. Accounting for dividends and transaction fees, we find that the 1/x^2 weighting strategy produces cumulative returns that significantly dominates all other portfolios, achieving a compound annual growth rate of 18 \\%DIF < over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. Indeed, the order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon, and we take care to differentiate it from the well-known "small-firm effect."\end{abstract} %DIF > over the 1958-2015 horizon. Our story is furthered by a startling phenomenon: both the cumulative and annual returns of the 1/x^2 weighting strategy are superior to those of the 1/x weighting strategy, which are in turn superior to those of the 1/\sqrt{x} weighted portfolio, and so forth, ending with the x^2 transformation, whose cumulative returns are the lowest of the seven transformations of Tukey's transformational ladder. The order of cumulative returns precisely follows that of Tukey's transformational ladder. To the best of our knowledge, we are the first to discover this phenomenon.
[ { "type": "D", "before": "the transformations of John", "after": null, "start_char_pos": 351, "end_char_pos": 378 }, { "type": "R", "before": "ladder", "after": "transformational ladde have", "start_char_pos": 387, "end_char_pos": 393 }, { "type": "D", "before": "have", "after": null, "start_char_pos": 417, "end_char_pos": 421 }, { "type": "A", "before": null, "after": "defined as", "start_char_pos": 694, "end_char_pos": 694 }, { "type": "R", "before": "We", "after": "Accounting for dividends and transaction fees, we", "start_char_pos": 750, "end_char_pos": 752 }, { "type": "R", "before": "20.889", "after": "18", "start_char_pos": 914, "end_char_pos": 920 } ]
[ 0, 311, 458, 749, 957, 1360, 1459, 1612, 1662, 2065, 2156 ]
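The two 1603.06050 records above weight the S\&P 500 constituents by the seven rungs of Tukey's transformational ladder applied to the market-capitalization weights: 1/x^2, 1/x, 1/sqrt(x), log(x), sqrt(x), x and x^2. The helper below simply turns a vector of market caps into the seven normalized weight vectors; rebalancing, dividends and transaction fees, which the records discuss, are ignored, and the toy caps are chosen so that log(x) stays positive.

import numpy as np

# Build the seven Tukey-ladder weightings of a market-capitalization vector x,
# each normalized to sum to one.
LADDER = {
    "1/x^2":     lambda x: 1.0 / x**2,
    "1/x":       lambda x: 1.0 / x,
    "1/sqrt(x)": lambda x: 1.0 / np.sqrt(x),
    "log(x)":    lambda x: np.log(x),      # assumes caps > 1 in the chosen unit
    "sqrt(x)":   lambda x: np.sqrt(x),
    "x":         lambda x: x,
    "x^2":       lambda x: x**2,
}

def ladder_weights(market_caps):
    x = np.asarray(market_caps, dtype=float)
    return {name: f(x) / f(x).sum() for name, f in LADDER.items()}

caps = np.array([750.0, 120.0, 45.0, 8.0, 2.5])   # toy market caps (billions)
for name, w in ladder_weights(caps).items():
    print(f"{name:>9}: {np.round(w, 3)}")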
1603.06498
1
We solve explicitly a two-dimensional singular control problem of finite fuel type in infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with multiplicative price impact with stochastic resilience . The optimal control is obtained as a diffusion process reflected at a non-constant free boundary. To solve the variational inequality and prove optimality, we show new results of independent interest on constructive approximations and Laplace transforms of the inverse local times for diffusions reflected at elastic boundaries.
We solve explicitly a two-dimensional singular control problem of finite fuel type for infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with multiplicative and transient price impact. Liquidity is stochastic in that the volume effect process, which determines the inter-temporal resilience of the market in spirit of Predoiu, Shaikhet and Shreve (2011), is taken to be stochastic, being driven by own random noise . The optimal control is obtained as the local time of a diffusion process reflected at a non-constant free boundary. To solve the HJB variational inequality and prove optimality, we need a combination of probabilistic arguments and calculus of variations methods, involving Laplace transforms of inverse local times for diffusions reflected at elastic boundaries.
[ { "type": "R", "before": "in", "after": "for", "start_char_pos": 83, "end_char_pos": 85 }, { "type": "R", "before": "price impact with stochastic resilience", "after": "and transient price impact. Liquidity is stochastic in that the volume effect process, which determines the inter-temporal resilience of the market in spirit of Predoiu, Shaikhet and Shreve (2011), is taken to be stochastic, being driven by own random noise", "start_char_pos": 219, "end_char_pos": 258 }, { "type": "A", "before": null, "after": "the local time of", "start_char_pos": 296, "end_char_pos": 296 }, { "type": "A", "before": null, "after": "HJB", "start_char_pos": 373, "end_char_pos": 373 }, { "type": "R", "before": "show new results of independent interest on constructive approximations and", "after": "need a combination of probabilistic arguments and calculus of variations methods, involving", "start_char_pos": 422, "end_char_pos": 497 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 520, "end_char_pos": 523 } ]
[ 0, 108, 260, 359 ]
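Each record in this dump pairs the original and revised versions of an abstract with a list of edit actions whose "type" ("R", "D", "A"), "before", "after", "start_char_pos" and "end_char_pos" fields appear to describe character spans in the original text, followed by a list of sentence-start character offsets. Assuming those semantics (inferred from the records shown here rather than from any documented schema), a minimal Python sketch for replaying the actions onto an original text, with a hypothetical two-action example:

def apply_edit_actions(before: str, actions: list) -> str:
    """Replay span edits on `before`.

    Assumed semantics, inferred from the records above:
      * "R" (replace): substitute before[start:end] with `after`
      * "D" (delete):  `after` is null, so the span is removed
      * "A" (add):     `before` is null and start == end, so `after` is inserted
    Applying the actions from the last span to the first keeps the character
    offsets of the remaining actions valid.
    """
    text = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = act["start_char_pos"], act["end_char_pos"]
        text = text[:start] + (act["after"] or "") + text[end:]
    return text

# Hypothetical record, not taken from the dump.
before = "We study a singular control problem in infinite time horizon."
actions = [
    {"type": "R", "before": "in", "after": "for", "start_char_pos": 36, "end_char_pos": 38},
    {"type": "A", "before": None, "after": "explicitly ", "start_char_pos": 9, "end_char_pos": 9},
]
print(apply_edit_actions(before, actions))
# -> We study explicitly a singular control problem for infinite time horizon.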
1603.06825
1
The paper studies the first order backward stochastic partial differential equations suggested earlier for one-dimensional state space. Some examples of similar equations are obtained for a multidimensional state space . These equations represent analogs of Hamilton-Jacobi-Bellman equations for the value functions of optimal control problems in non-Markovian setting arising in financial modelling.
The paper studies the First Order BSPDEs (Backward Stochastic Partial Differential Equations) suggested earlier for a case of multidimensional state domain with a boundary . These equations represent analogs of Hamilton-Jacobi-Bellman equations and allow to construct the value function for stochastic optimal control problems with unspecified dynamics where the underlying processes do not necessarily satisfy stochastic differential equations of a known kind with a given structure. The problems considered arise in financial modelling.
[ { "type": "R", "before": "first order backward stochastic partial differential equations", "after": "First Order BSPDEs (Backward Stochastic Partial Differential Equations)", "start_char_pos": 22, "end_char_pos": 84 }, { "type": "R", "before": "one-dimensional state space. Some examples of similar equations are obtained for a multidimensional state space", "after": "a case of multidimensional state domain with a boundary", "start_char_pos": 107, "end_char_pos": 218 }, { "type": "R", "before": "for the value functions of", "after": "and allow to construct the value function for stochastic", "start_char_pos": 292, "end_char_pos": 318 }, { "type": "R", "before": "in non-Markovian setting arising", "after": "with unspecified dynamics where the underlying processes do not necessarily satisfy stochastic differential equations of a known kind with a given structure. The problems considered arise", "start_char_pos": 344, "end_char_pos": 376 } ]
[ 0, 135, 220 ]
1603.06986
1
Bacteria communicate using external chemical signals called autoinducers (AI) in a process known as quorum sensing (QS). QS efficiency is reduced by both limitations of AI diffusion and potential interference from neighboring strains. There is thus a need for theoretical approaches that yield nontrivial quantitative predictions of how spatial community structure shapes information processing in complex microbial ecosystems. As a step in this direction, we apply a reaction-diffusion model to study autoinducer signaling dynamics in a growing bacterial community as a function of the density of metapopulations, or spatially dispersed colonies , in the total system. We predict a non-equilibrium phase transition between a local quorum sensing (LQS) regime at low dispersal, with AI signaling dynamics primarily controlled by the local population density of colonies, and a global quorum sensing (GQS) regime at high dispersal , with the dynamics being governed by the collective metapopulation density. In addition, we propose an observable order parameter for this system, termed the Neighbor Interference Fraction (NIF), which accounts for the ratio of neighbor-produced to self-produced signal at a colony. The transition between LQS to GQS is intimately connected to a tradeoff between the signaling network's latency, or speed of activation, and its throughput, or the total spatial range over which all the components of the system communicate . Levels of dispersal near the phase boundary provide an optimal compromise that enables simultaneously high latency and throughput in a given environment .
Bacteria communicate using external chemical signals called autoinducers (AI) in a process known as quorum sensing (QS). QS efficiency is reduced by both limitations of AI diffusion and potential interference from neighboring strains. There is thus a need for predictive theories of how spatial community structure shapes information processing in complex microbial ecosystems. As a step in this direction, we apply a reaction-diffusion model to study autoinducer signaling dynamics in a single-species community as a function of the spatial distribution of colonies in the system. We predict a dynamical transition between a local quorum sensing (LQS) regime , with the AI signaling dynamics primarily controlled by the local population densities of individual colonies, and a global quorum sensing (GQS) regime , with the dynamics being dependent on collective inter-colony diffusive interactions. The crossover between LQS to GQS is intimately connected to a tradeoff between the signaling network's latency, or speed of activation, and its throughput, or the total spatial range over which all the components of the system communicate .
[ { "type": "R", "before": "theoretical approaches that yield nontrivial quantitative predictions", "after": "predictive theories", "start_char_pos": 260, "end_char_pos": 329 }, { "type": "R", "before": "growing bacterial", "after": "single-species", "start_char_pos": 538, "end_char_pos": 555 }, { "type": "R", "before": "density of metapopulations, or spatially dispersed colonies , in the total", "after": "spatial distribution of colonies in the", "start_char_pos": 587, "end_char_pos": 661 }, { "type": "R", "before": "non-equilibrium phase", "after": "dynamical", "start_char_pos": 683, "end_char_pos": 704 }, { "type": "R", "before": "at low dispersal, with", "after": ", with the", "start_char_pos": 760, "end_char_pos": 782 }, { "type": "R", "before": "density of", "after": "densities of individual", "start_char_pos": 850, "end_char_pos": 860 }, { "type": "D", "before": "at high dispersal", "after": null, "start_char_pos": 912, "end_char_pos": 929 }, { "type": "R", "before": "governed by the collective metapopulation density. In addition, we propose an observable order parameter for this system, termed the Neighbor Interference Fraction (NIF), which accounts for the ratio of neighbor-produced to self-produced signal at a colony. The transition", "after": "dependent on collective inter-colony diffusive interactions. The crossover", "start_char_pos": 956, "end_char_pos": 1228 }, { "type": "D", "before": ". Levels of dispersal near the phase boundary provide an optimal compromise that enables simultaneously high latency and throughput in a given environment", "after": null, "start_char_pos": 1454, "end_char_pos": 1608 } ]
[ 0, 120, 234, 427, 669, 1006, 1213, 1455 ]
1603.07020
1
Oil markets influence profoundly world economies through determination of prices of energy and transports. Using novel methodology devised in frequency domain, we study the information transmission mechanisms in oil-based commodity markets. Taking crude oil as a supply-side benchmark and heating oil and gasoline as demand-side benchmarks, we document new stylized facts about cyclical properties of transmission mechanism . Our first key finding is that shocks with shorter than one week response are increasingly important to the transmission mechanism over studied period. Second, demand-side shocks are becoming increasingly important in creating the short-run connectedness. Third, the supply-side shocks resonating in both long-run and short-run are important sources of connectedness.
Oil markets profoundly influence world economies through determination of prices of energy and transports. Using novel methodology devised in frequency domain, we study the information transmission mechanisms in oil-based commodity markets. Taking crude oil as a supply-side benchmark and heating oil and gasoline as demand-side benchmarks, we document new stylized facts about cyclical properties of the transmission mechanism generated by volatility shocks with heterogeneous frequency responses . Our first key finding is that shocks to volatility with response shorter than one week are increasingly important to the transmission mechanism over the studied period. Second, demand-side shocks to volatility are becoming increasingly important in creating short-run connectedness. Third, the supply-side shocks to volatility resonating in both the long run and short run are important sources of connectedness.
[ { "type": "R", "before": "influence profoundly", "after": "profoundly influence", "start_char_pos": 12, "end_char_pos": 32 }, { "type": "R", "before": "transmission mechanism", "after": "the transmission mechanism generated by volatility shocks with heterogeneous frequency responses", "start_char_pos": 401, "end_char_pos": 423 }, { "type": "R", "before": "with", "after": "to volatility with response", "start_char_pos": 463, "end_char_pos": 467 }, { "type": "D", "before": "response", "after": null, "start_char_pos": 490, "end_char_pos": 498 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 561, "end_char_pos": 561 }, { "type": "A", "before": null, "after": "to volatility", "start_char_pos": 605, "end_char_pos": 605 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 654, "end_char_pos": 657 }, { "type": "A", "before": null, "after": "to volatility", "start_char_pos": 713, "end_char_pos": 713 }, { "type": "R", "before": "long-run and short-run", "after": "the long run and short run", "start_char_pos": 733, "end_char_pos": 755 } ]
[ 0, 106, 240, 425, 577, 682 ]
1603.07322
1
In modern computer systems, long-running jobs are divided into a large number of short tasks and executed in parallel. Experience in practical systems suggests that task service times are highly random and the job service latency is bottlenecked by the slowest straggling task. One common solution for straggler mitigation is to replicate a task on multiple servers and wait for one replica of the task to finish early. The delay performance of replications depends heavily on the scheduling decisions of when to replicate, which servers to replicate on, and which task to serve first. So far, little is understood on how to optimize these scheduling decisions for minimizing the job service latency . In this paper, we present a comprehensive theoretical analysis on delay-optimal scheduling in queueing systemswith replications. In particular, low-complexity replication policies are designed , and are rigorously proven to be delay-optimal or near delay-optimal among all non-preemptive and causal policies. These theoretical results are established for very general system settings and delay metrics which allow for arbitrary arrival process , arbitrary job sizes, arbitrary soft deadlines , and heterogeneous servers with data locality constraints. In order to prove these results , novel sufficient conditions are developed for sample-path delay optimality and near delay optimality, which can be applied to any queueing system and are not limited to the study of replications .
In modern computer systems, jobs are divided into short tasks and executed in parallel. Empirical observations in practical systems suggest that the task service times are highly random and the job service time is bottlenecked by the slowest straggling task. One common solution for straggler mitigation is to replicate a task on multiple servers and wait for one replica of the task to finish early. The delay performance of replications depends heavily on the scheduling decisions of when to replicate, which servers to replicate on, and which job to serve first. So far, little is understood on how to optimize these scheduling decisions for minimizing the delay to complete the jobs . In this paper, we present a comprehensive study on delay-optimal scheduling of replications in both centralized and distributed multi-server systems. Low-complexity scheduling policies are designed and are proven to be delay-optimal or near delay-optimal in stochastic ordering among all causal and non-preemptive policies. These theoretical results are established for general system settings and delay metrics that allow for arbitrary arrival processes , arbitrary job sizes, arbitrary due times , and heterogeneous servers with data locality constraints. Novel sample-path tools are developed to prove these results .
[ { "type": "D", "before": "long-running", "after": null, "start_char_pos": 28, "end_char_pos": 40 }, { "type": "D", "before": "a large number of", "after": null, "start_char_pos": 63, "end_char_pos": 80 }, { "type": "R", "before": "Experience", "after": "Empirical observations", "start_char_pos": 119, "end_char_pos": 129 }, { "type": "R", "before": "suggests that", "after": "suggest that the", "start_char_pos": 151, "end_char_pos": 164 }, { "type": "R", "before": "latency", "after": "time", "start_char_pos": 222, "end_char_pos": 229 }, { "type": "R", "before": "task", "after": "job", "start_char_pos": 565, "end_char_pos": 569 }, { "type": "R", "before": "job service latency", "after": "delay to complete the jobs", "start_char_pos": 680, "end_char_pos": 699 }, { "type": "R", "before": "theoretical analysis", "after": "study", "start_char_pos": 744, "end_char_pos": 764 }, { "type": "R", "before": "in queueing systemswith replications. In particular, low-complexity replication", "after": "of replications in both centralized and distributed multi-server systems. Low-complexity scheduling", "start_char_pos": 793, "end_char_pos": 872 }, { "type": "R", "before": ", and are rigorously", "after": "and are", "start_char_pos": 895, "end_char_pos": 915 }, { "type": "R", "before": "among all", "after": "in stochastic ordering among all causal and", "start_char_pos": 965, "end_char_pos": 974 }, { "type": "D", "before": "and causal", "after": null, "start_char_pos": 990, "end_char_pos": 1000 }, { "type": "D", "before": "very", "after": null, "start_char_pos": 1057, "end_char_pos": 1061 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 1104, "end_char_pos": 1109 }, { "type": "R", "before": "process", "after": "processes", "start_char_pos": 1138, "end_char_pos": 1145 }, { "type": "R", "before": "soft deadlines", "after": "due times", "start_char_pos": 1179, "end_char_pos": 1193 }, { "type": "R", "before": "In order", "after": "Novel sample-path tools are developed", "start_char_pos": 1254, "end_char_pos": 1262 }, { "type": "D", "before": ", novel sufficient conditions are developed for sample-path delay optimality and near delay optimality, which can be applied to any queueing system and are not limited to the study of replications", "after": null, "start_char_pos": 1286, "end_char_pos": 1482 } ]
[ 0, 118, 277, 419, 585, 701, 830, 1010, 1253 ]
1603.07532
1
We present an explicit and parsimonious probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena, having for sole parameter the median "true" p- value . P-values are extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon . The convenience of formula allows the investigation of scientific results, particularly meta-analyses.
We present an explicit and parsimonious probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena, having for sole parameter the median "true" p-value, as well as the distribution of the minimum p-value among m independents tests . P-values are extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon ; such volatility makes the minimum p value diverge significantly from the "true" one . The convenience of the formulas allows the investigation of scientific results, particularly meta-analyses.
[ { "type": "R", "before": "p- value", "after": "p-value, as well as the distribution of the minimum p-value among m independents tests", "start_char_pos": 197, "end_char_pos": 205 }, { "type": "A", "before": null, "after": "; such volatility makes the minimum p value diverge significantly from the \"true\" one", "start_char_pos": 399, "end_char_pos": 399 }, { "type": "R", "before": "formula", "after": "the formulas", "start_char_pos": 421, "end_char_pos": 428 } ]
[ 0, 401 ]
1603.07532
2
We present an explicit and parsimonious probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena, having for sole parameter the median "true" p-value, as well as the distribution of the minimum p-value among m independents tests. P-values are extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum p value diverge significantly from the "true" one. The convenience of the formulas allows the investigation of scientific results , particularly meta-analyses .
We present an exact probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena, as well as the distribution of the minimum p-value among m independents tests. We derive the distribution for small samples 2<n \leq n^*\approx 30 as well as the limiting one as the sample size n becomes large. We also look at the properties of the "power" of a test through the distribution of its inverse for a given p-value and parametrization. P-values are shown to be extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum p value diverge significantly from the "true" one. Setting the power is shown to offer little remedy unless sample size is increased markedly or the p-value is lowered by at least one order of magnitude. The formulas allow the investigation of the stability of the reproduction of results and "p-hacking" and other aspects of meta-analysis. From a probabilistic standpoint, neither a p-value of .05 nor a "power" at .9 appear to make the slightest sense .
[ { "type": "R", "before": "explicit and parsimonious", "after": "exact", "start_char_pos": 14, "end_char_pos": 39 }, { "type": "D", "before": "having for sole parameter the median \"true\" p-value,", "after": null, "start_char_pos": 153, "end_char_pos": 205 }, { "type": "A", "before": null, "after": "We derive the distribution for small samples 2<n \\leq n^*\\approx 30 as well as the limiting one as the sample size n becomes large. We also look at the properties of the \"power\" of a test through the distribution of its inverse for a given p-value and parametrization.", "start_char_pos": 285, "end_char_pos": 285 }, { "type": "A", "before": null, "after": "shown to be", "start_char_pos": 299, "end_char_pos": 299 }, { "type": "R", "before": "The convenience of the formulas allows", "after": "Setting the power is shown to offer little remedy unless sample size is increased markedly or the p-value is lowered by at least one order of magnitude. The formulas allow", "start_char_pos": 564, "end_char_pos": 602 }, { "type": "R", "before": "scientific results , particularly meta-analyses", "after": "the stability of the reproduction of results and \"p-hacking\" and other aspects of meta-analysis. From a probabilistic standpoint, neither a p-value of .05 nor a \"power\" at .9 appear to make the slightest sense", "start_char_pos": 624, "end_char_pos": 671 } ]
[ 0, 284, 478, 563 ]
1603.07532
3
We present an exact probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena , as well as the distribution of the minimum p-value among m independents tests . We derive the distribution for small samples 2<n \leq n^*\approx 30 as well as the limiting one as the sample size n becomes large. We also look at the properties of the "power" of a test through the distribution of its inverse for a given p-value and parametrization. P-values are shown to be extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum p value diverge significantly from the "true" one. Setting the power is shown to offer little remedy unless sample size is increased markedly or the p-value is lowered by at least one order of magnitude . The formulas allow the investigation of the stability of the reproduction of results and "p-hacking" and other aspects of meta-analysis. From a probabilistic standpoint, neither a p-value of .05 nor a "power" at .9 appear to make the slightest sense .
We present the expected values from p-value hacking as a choice of the minimum p-value among m independents tests, which can be considerably lower than the "true" p-value, even with a single trial, owing to the extreme skewness of the meta-distribution. We first present an exact probability distribution (meta-distribution) for p-values across ensembles of statistically identical phenomena . We derive the distribution for small samples 2<n \leq n^*\approx 30 as well as the limiting one as the sample size n becomes large. We also look at the properties of the "power" of a test through the distribution of its inverse for a given p-value and parametrization. The formulas allow the investigation of the stability of the reproduction of results and "p-hacking" and other aspects of meta-analysis. P-values are shown to be extremely skewed and volatile, regardless of the sample size n, and vary greatly across repetitions of exactly same protocols under identical stochastic copies of the phenomenon; such volatility makes the minimum p value diverge significantly from the "true" one. Setting the power is shown to offer little remedy unless sample size is increased markedly or the p-value is lowered by at least one order of magnitude .
[ { "type": "A", "before": null, "after": "the expected values from p-value hacking as a choice of the minimum p-value among m independents tests, which can be considerably lower than the \"true\" p-value, even with a single trial, owing to the extreme skewness of the meta-distribution. We first present", "start_char_pos": 11, "end_char_pos": 11 }, { "type": "D", "before": ", as well as the distribution of the minimum p-value among m independents tests", "after": null, "start_char_pos": 133, "end_char_pos": 212 }, { "type": "A", "before": null, "after": "The formulas allow the investigation of the stability of the reproduction of results and \"p-hacking\" and other aspects of meta-analysis.", "start_char_pos": 484, "end_char_pos": 484 }, { "type": "D", "before": ". The formulas allow the investigation of the stability of the reproduction of results and \"p-hacking\" and other aspects of meta-analysis. From a probabilistic standpoint, neither a p-value of .05 nor a \"power\" at .9 appear to make the slightest sense", "after": null, "start_char_pos": 926, "end_char_pos": 1177 } ]
[ 0, 214, 346, 483, 688, 773, 927, 1064 ]
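The 1603.07532 records above study the distribution of the minimum p-value among m independent tests as a model of p-hacking. As a sanity check only, under the extra assumption that the null holds and every p-value is Uniform(0,1) (the paper's meta-distribution is more general and is parametrized by the median "true" p-value), the minimum of m such p-values has CDF 1-(1-p)^m and expectation 1/(m+1); a short Python illustration:

import numpy as np

rng = np.random.default_rng(0)

def min_p_cdf(p, m):
    """P(min of m independent Uniform(0,1) p-values <= p) = 1 - (1 - p)^m."""
    return 1.0 - (1.0 - p) ** m

for m in (1, 5, 10, 20):
    sims = rng.uniform(size=(100_000, m)).min(axis=1)   # Monte Carlo check of the closed form
    print(f"m={m:2d}  P(min p <= 0.05) = {min_p_cdf(0.05, m):.3f} "
          f"(simulated {np.mean(sims <= 0.05):.3f}),  "
          f"E[min p] = {1 / (m + 1):.3f} (simulated {sims.mean():.3f})")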
1603.08344
1
The so-called great divergence in the income per capita is described in the Unified Growth Theory as one of the mind-boggling and unresolved mysteries about the growth process. This mystery has now been solved: the great divergence never happened. It was created by the manipulation of data. Economic growth in various regions is at different levels of development but it follows similar, non-divergent trajectories. Unified Growth Theory is not only scientifically unacceptablebut also potentially dangerous because by promoting erroneous conceptsit diverts attention from the urgent need to control the fast-increasing growth of income per capita . The distorted presentation of data supporting the concept of the great divergence shows that most regions follow the gently-increasing trajectories describing the growth of income per capita but mathematical analysis of data and even their undistorted presentations show that these trajectories are now increasing approximately vertically with time. So, while the distorted presentation of data used in the Unified Growth Theory suggests sustainable and secure economic growth, the undistorted presentation of data demonstrates that the growth is unsustainable and insecure. The concept of takeoffs from stagnation to the sustained-growth regime promoted in the Unified Growth Theory is also dangerously misleading because it suggests a sustainable and prosperous future while the mathematical analysis of data shows that the current economic growth is dangerously insecure and unsustainable.
The so-called great divergence in the income per capita is described in the Unified Growth Theory as the mind-boggling and unresolved mystery about the growth process. This mystery has now been solved: the great divergence never happened. It was created by the manipulation of data. Economic growth in various regions is at different levels of development but it follows similar, non-divergent trajectories. Unified Growth Theory is shown yet again to be incorrect and scientifically unacceptable. It promotes incorrect and even potentially dangerous concepts . The distorted presentation of data supporting the concept of the great divergence shows that economic growth is now developing along moderately-increasing trajectories but mathematical analysis of the same data and even their undistorted presentation shows that these trajectories are now increasing approximately vertically with time. So, while the distorted presentation of data used in the Unified Growth Theory suggests generally sustainable and secure economic growth, the undistorted presentation of data demonstrates that the growth is unsustainable and insecure. The concept of takeoffs from stagnation to the sustained-growth regime promoted in the Unified Growth Theory is also dangerously misleading because it suggests a sustainable and prosperous future while the mathematical analysis of data shows that the current economic growth is insecure and unsustainable.
[ { "type": "D", "before": "one of", "after": null, "start_char_pos": 101, "end_char_pos": 107 }, { "type": "R", "before": "mysteries", "after": "mystery", "start_char_pos": 141, "end_char_pos": 150 }, { "type": "R", "before": "not only scientifically unacceptablebut also potentially dangerous because by promoting erroneous conceptsit diverts attention from the urgent need to control the fast-increasing growth of income per capita", "after": "shown yet again to be incorrect and scientifically unacceptable. It promotes incorrect and even potentially dangerous concepts", "start_char_pos": 442, "end_char_pos": 648 }, { "type": "R", "before": "most regions follow the gently-increasing trajectories describing the growth of income per capita", "after": "economic growth is now developing along moderately-increasing trajectories", "start_char_pos": 744, "end_char_pos": 841 }, { "type": "A", "before": null, "after": "the same", "start_char_pos": 871, "end_char_pos": 871 }, { "type": "R", "before": "presentations show", "after": "presentation shows", "start_char_pos": 904, "end_char_pos": 922 }, { "type": "A", "before": null, "after": "generally", "start_char_pos": 1090, "end_char_pos": 1090 }, { "type": "D", "before": "dangerously", "after": null, "start_char_pos": 1506, "end_char_pos": 1517 } ]
[ 0, 176, 247, 291, 416, 650, 1001, 1227 ]
1603.08666
1
The paper addresses the question of how to test the validity of candidate systems for general odor reproduction . A novel method and three variants of tests are proposedfor this, which involve ideas from recognition and imitation and take advantage of the availability of near-perfect reproduction methods for sight and sound.
In 1950 Alan Turing proposed his imitation game, better known as the Turing test, for determining whether a computer system claimed to adequately exhibit intelligence indeed does so. This work was carried out although no such system was anywhere in sight at the time, and we are still far from it now, many decades later. The current paper raises the similarly tantalizing question of how to test the validity of a candidate system for general odor reproduction , despite such systems still being far from viable. The reasons for the question being nontrivial are discussed, and a novel method is proposed, which involves ideas from recognition and imitation , taking advantage of the availability of near-perfect reproduction methods for sight and sound.
[ { "type": "R", "before": "The paper addresses the", "after": "In 1950 Alan Turing proposed his imitation game, better known as the Turing test, for determining whether a computer system claimed to adequately exhibit intelligence indeed does so. This work was carried out although no such system was anywhere in sight at the time, and we are still far from it now, many decades later. The current paper raises the similarly tantalizing", "start_char_pos": 0, "end_char_pos": 23 }, { "type": "R", "before": "candidate systems", "after": "a candidate system", "start_char_pos": 64, "end_char_pos": 81 }, { "type": "R", "before": ". A novel method and three variants of tests are proposedfor this, which involve", "after": ", despite such systems still being far from viable. The reasons for the question being nontrivial are discussed, and a novel method is proposed, which involves", "start_char_pos": 112, "end_char_pos": 192 }, { "type": "R", "before": "and take", "after": ", taking", "start_char_pos": 230, "end_char_pos": 238 } ]
[ 0, 113 ]
1603.08666
2
In 1950 Alan Turing proposed his imitation game, better known as the Turing test, for determining whether a computer system claimed to adequately exhibit intelligence indeed does so. This work was carried out although no such system was anywhere in sight at the time, and we are still far from it now, many decades later. The current paper raises the similarly tantalizing question of how to test the validity of a candidate system for general odor reproduction, despite such systems still being far from viable. The reasons for the question being nontrivialare discussed , and a novel method is proposed , which involves ideas from recognition and imitation, taking advantage of the availability of near-perfect reproduction methods for sight and sound.
In a 1950 article in Mind, decades before the existence of anything resembling an artificial intelligence system, Alan Turing addressed the question of how to test whether machines can think, or in modern terminology, whether a computer claimed to exhibit intelligence indeed does so. The current paper raises the analogous issue for olfaction: how to test the validity of a system claimed to reproduce arbitrary odors artificially, in a way recognizable to humans, in face of the unavailability of a general naming method for odors. Although odor reproduction systems are still far from being viable, the question of how to test candidates thereof is claimed to be interesting and nontrivial , and a novel method is proposed . To some extent, the method is inspired by Turing`s test for AI, in that it involves a human challenger and the real and artificial entities, yet it is very different: our test is conditional, requiring from the artificial no more than is required from the original, and it employs a novel method of immersion that takes advantage of the availability of near-perfect reproduction methods for sight and sound.
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 3, "end_char_pos": 3 }, { "type": "R", "before": "Alan Turing proposed his imitation game, better known as the Turing test, for determining", "after": "article in Mind, decades before the existence of anything resembling an artificial intelligence system, Alan Turing addressed the question of how to test whether machines can think, or in modern terminology,", "start_char_pos": 9, "end_char_pos": 98 }, { "type": "R", "before": "system claimed to adequately", "after": "claimed to", "start_char_pos": 118, "end_char_pos": 146 }, { "type": "D", "before": "This work was carried out although no such system was anywhere in sight at the time, and we are still far from it now, many decades later.", "after": null, "start_char_pos": 184, "end_char_pos": 322 }, { "type": "R", "before": "similarly tantalizing question of", "after": "analogous issue for olfaction:", "start_char_pos": 352, "end_char_pos": 385 }, { "type": "R", "before": "candidate system for general odor reproduction, despite such systems still being far from viable. The reasons for the question being nontrivialare discussed", "after": "system claimed to reproduce arbitrary odors artificially, in a way recognizable to humans, in face of the unavailability of a general naming method for odors. Although odor reproduction systems are still far from being viable, the question of how to test candidates thereof is claimed to be interesting and nontrivial", "start_char_pos": 416, "end_char_pos": 572 }, { "type": "R", "before": ", which involves ideas from recognition and imitation, taking", "after": ". To some extent, the method is inspired by Turing`s test for AI, in that it involves a human challenger and the real and artificial entities, yet it is very different: our test is conditional, requiring from the artificial no more than is required from the original, and it employs a novel method of immersion that takes", "start_char_pos": 606, "end_char_pos": 667 } ]
[ 0, 183, 322, 513 ]
1603.08828
1
We study in detail and explicitly solve the version of Kyle's model introduced in \mbox{%DIFAUXCMD BB where the trading horizon is given by an exponentially distributed random time. The first part of the paper is devoted to the analysis of time-homogeneous equilibria using tools from the theory of one-dimensional diffusions. It turns out that such an equilibrium is only possible if the finaly payoff is Bernoulli distributed as in BB. We show in the second part that the signal that the market makers use in the general case is a time-changed version of the one that they would use if the final pay-off had a Bernoulli distribution. In both cases we characterise explicitly the equilibrium price process and the optimal strategy of the informed trader. Contrary to the original Kyle model it is found that the reciprocal of market's depth, i.e. Kyle's lambda, is a uniformly integrable supermartingale. While Kyle's lambda is a potential, i.e. converges to 0, for the Bernoulli distribured final payoff, its limit in general is different than 0. Also, differently from \mbox{%DIFAUXCMD BB
We study in detail and explicitly solve the version of Kyle's model introduced in a specific case in \mbox{%DIFAUXCMD BB where the trading horizon is given by an exponentially distributed random time. The first part of the paper is devoted to the analysis of time-homogeneous equilibria using tools from the theory of one-dimensional diffusions. It turns out that such an equilibrium is only possible if the final payoff is Bernoulli distributed as in BB. We show in the second part that the signal of the market makers use in the general case is a time-changed version of the one that they would have used had the final payoff had a Bernoulli distribution. In both cases we characterise explicitly the equilibrium price process and the optimal strategy of the informed trader. Contrary to the original Kyle model it is found that the reciprocal of market's depth, i.e. Kyle's lambda, is a uniformly integrable supermartingale. While Kyle's lambda is a potential, i.e. converges to 0, for the Bernoulli distribured final payoff, its limit in general is different than 0.
[ { "type": "R", "before": "\\mbox{%DIFAUXCMD BB", "after": "a specific case in \\mbox{%DIFAUXCMD BB", "start_char_pos": 82, "end_char_pos": 101 }, { "type": "R", "before": "finaly", "after": "final", "start_char_pos": 389, "end_char_pos": 395 }, { "type": "R", "before": "that", "after": "of", "start_char_pos": 481, "end_char_pos": 485 }, { "type": "R", "before": "use if the final pay-off", "after": "have used had the final payoff", "start_char_pos": 581, "end_char_pos": 605 }, { "type": "D", "before": "Also, differently from \\mbox{%DIFAUXCMD BB", "after": null, "start_char_pos": 1049, "end_char_pos": 1091 } ]
[ 0, 181, 326, 437, 635, 755, 905 ]
1603.09030
1
In this work we give a comprehensive overview of the time consistency property of dynamic risk and performance measures, with focus on discrete time setup. The two key operational concepts used throughout are the notion of the LM-measure and the notion of the update rule that, we believe, are the key tools for studying the time consistency in a unified framework.
In this work we give a comprehensive overview of the time consistency property of dynamic risk and performance measures, focusing on a the discrete time setup. The two key operational concepts used throughout are the notion of the LM-measure and the notion of the update rule that, we believe, are the key tools for studying time consistency in a unified framework.
[ { "type": "R", "before": "with focus on", "after": "focusing on a the", "start_char_pos": 121, "end_char_pos": 134 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 321, "end_char_pos": 324 } ]
[ 0, 155 ]
1603.09149
1
In the literature, researchers have widely addressed the portfolio optimization problem with various stock price models, and by choosing different kinds of optimization criteria. Models driven by Jump processes seem to provide a flexible class of models which capture statistical and economical properties of market data. In particular, we consider a portfolio optimization problem, without any consumption and transaction cost, where the market consisting of stock prices is modelled by a multi dimensional jump diffusion process with semi-Markov modulated coefficients. We study risk sensitive portfolio optimization on finite time horizon. We address the above mentioned problem by using a probabilistic approach to establish the existence and uniqueness of the classical solution of corresponding Hamilton-Jacobi-Bellman (HJB) equation. We also implement a numerical scheme to see the behavior of solutions for different values of initial portfolio wealth, maturity and risk of aversion parameter.
This article studies a portfolio optimization problem, without any consumption and transaction cost, where the market consisting of several stocks is modeled by a multi-dimensional jump diffusion process with age-dependent semi-Markov modulated coefficients. We study risk sensitive portfolio optimization on the finite time horizon. We study the problem by using a probabilistic approach to establish the existence and uniqueness of the classical solution to the corresponding Hamilton-Jacobi-Bellman (HJB) equation. We also implement a numerical scheme to investigate the behavior of solutions for different values of the initial portfolio wealth, the maturity and the risk of aversion parameter.
[ { "type": "R", "before": "In the literature, researchers have widely addressed the portfolio optimization problem with various stock price models, and by choosing different kinds of optimization criteria. Models driven by Jump processes seem to provide a flexible class of models which capture statistical and economical properties of market data. In particular, we consider a", "after": "This article studies a", "start_char_pos": 0, "end_char_pos": 350 }, { "type": "R", "before": "stock prices is modelled by a multi dimensional", "after": "several stocks is modeled by a multi-dimensional", "start_char_pos": 460, "end_char_pos": 507 }, { "type": "A", "before": null, "after": "age-dependent", "start_char_pos": 536, "end_char_pos": 536 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 623, "end_char_pos": 623 }, { "type": "R", "before": "address the above mentioned", "after": "study the", "start_char_pos": 648, "end_char_pos": 675 }, { "type": "R", "before": "of", "after": "to the", "start_char_pos": 786, "end_char_pos": 788 }, { "type": "R", "before": "see", "after": "investigate", "start_char_pos": 883, "end_char_pos": 886 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 937, "end_char_pos": 937 }, { "type": "R", "before": "maturity and", "after": "the maturity and the", "start_char_pos": 964, "end_char_pos": 976 } ]
[ 0, 178, 321, 572, 644, 842 ]
1603.09149
2
This article studies a portfolio optimization problem, without any consumption and transaction cost, where the market consisting of several stocks is modeled by a multi-dimensional jump diffusion process with age-dependent semi-Markov modulated coefficients. We study risk sensitive portfolio optimization on the finite time horizon. We study the problem by using a probabilistic approach to establish the existence and uniqueness of the classical solution to the corresponding Hamilton-Jacobi-Bellman (HJB) equation. We also implement a numerical scheme to investigate the behavior of solutions for different values of the initial portfolio wealth, the maturity and the risk of aversion parameter.
This article studies a portfolio optimization problem, where the market consisting of several stocks is modeled by a multi-dimensional jump-diffusion process with age-dependent semi-Markov modulated coefficients. We study risk sensitive portfolio optimization on the finite time horizon. We study the problem by using a probabilistic approach to establish the existence and uniqueness of the classical solution to the corresponding Hamilton-Jacobi-Bellman (HJB) equation. We also implement a numerical scheme to investigate the behavior of solutions for different values of the initial portfolio wealth, the maturity , and the risk of aversion parameter.
[ { "type": "D", "before": "without any consumption and transaction cost,", "after": null, "start_char_pos": 55, "end_char_pos": 100 }, { "type": "R", "before": "jump diffusion", "after": "jump-diffusion", "start_char_pos": 181, "end_char_pos": 195 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 663, "end_char_pos": 663 } ]
[ 0, 258, 333, 517 ]
1603.09491
1
Recently, financial industry and regulators have enhanced the debate on the good properties of a risk measure. A fundamental issue is the evaluation of the quality of a risk estimation. On one hand a backtesting procedure is desirable for assessing the accuracy of such an estimation and this can be naturally achieved by elicitable risk measures. For the same objective an alternative approach has been introduced by Davis ( 2013 ) through the so-called consistency property. On the other hand a risk estimation should be less sensitive with respect to small changes in the available data set and exhibit qualitative robustness. A new risk measure, the Lambda value at risk (Lambda VaR), has been recently proposed by Frittelli et al. (2014), as a generalization of VaR , with the ability of discriminating the risk among P&L distributions with different tail behaviour. In this article, we show that Lambda VaR also satisfies the properties of robustness, elicitability and consistency under some conditions.
Recently, financial industry and regulators have enhanced the debate on the good properties of a risk measure. A fundamental issue is the evaluation of the quality of a risk estimation. On the one hand, a backtesting procedure is desirable for assessing the accuracy of such an estimation and this can be naturally achieved by elicitable risk measures. For the same objective , an alternative approach has been introduced by Davis ( 2016 ) through the so-called consistency property. On the other hand , a risk estimation should be less sensitive with respect to small changes in the available data set and exhibit qualitative robustness. A new risk measure, the Lambda value at risk (Lambda VaR), has been recently proposed by Frittelli et al. (2014), as a generalization of VaR with the ability to discriminate the risk among P&L distributions with different tail behaviour. In this article, we show that Lambda VaR also satisfies the properties of robustness, elicitability and consistency under some conditions.
[ { "type": "R", "before": "one hand", "after": "the one hand,", "start_char_pos": 189, "end_char_pos": 197 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 371, "end_char_pos": 371 }, { "type": "R", "before": "2013", "after": "2016", "start_char_pos": 427, "end_char_pos": 431 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 496, "end_char_pos": 496 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 773, "end_char_pos": 774 }, { "type": "R", "before": "of discriminating", "after": "to discriminate", "start_char_pos": 792, "end_char_pos": 809 } ]
[ 0, 110, 185, 347, 477, 631, 873 ]
1604.00103
1
In Bitcoin system, transactions are prioritized according to attributes such as the remittance amount and transaction fees, and transactions with low priority are likely to wait for confirmation. Because the demand of micro payment in Bitcoin is expected to increase due to low remittance cost, it is important to quantitatively investigate how the priority mechanism of Bitcoin affects the transaction-confirmation time. In this paper, we analyze the transaction-confirmation time by queueing theory. We model the transaction priority mechanism of Bitcoin as a priority queueing system with batch service, deriving the mean transaction-confirmation time. Numerical examples show how the demand of transactions of low remittance amount affects the transaction-confirmation time. We also consider the effect of the maximum block size on the transaction-confirmation time.
In Bitcoin system, transactions are prioritized according to transaction fees. Transactions without fees are given low priority and likely to wait for confirmation. Because the demand of micro payment in Bitcoin is expected to increase due to low remittance cost, it is important to quantitatively investigate how transactions with small fees of Bitcoin affect the transaction-confirmation time. In this paper, we analyze the transaction-confirmation time by queueing theory. We model the transaction-confirmation process of Bitcoin as a priority queueing system with batch service, deriving the mean transaction-confirmation time. Numerical examples show how the demand of transactions with low fees affects the transaction-confirmation time. We also consider the effect of the maximum block size on the transaction-confirmation time.
[ { "type": "R", "before": "attributes such as the remittance amount and transaction fees, and transactions with low priority are", "after": "transaction fees. Transactions without fees are given low priority and", "start_char_pos": 61, "end_char_pos": 162 }, { "type": "R", "before": "the priority mechanism of Bitcoin affects", "after": "transactions with small fees of Bitcoin affect", "start_char_pos": 345, "end_char_pos": 386 }, { "type": "R", "before": "transaction priority mechanism", "after": "transaction-confirmation process", "start_char_pos": 515, "end_char_pos": 545 }, { "type": "R", "before": "of low remittance amount", "after": "with low fees", "start_char_pos": 711, "end_char_pos": 735 } ]
[ 0, 195, 421, 501, 655, 778 ]
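The 1604.00103 record above models transaction confirmation as a priority queue with batch service: pending transactions wait in a pool and each new block confirms a batch, fee-paying transactions first. The paper's treatment is analytical; the Python sketch below is only an illustrative discrete-event simulation, and the arrival rates, block interval and block capacity are assumptions of this sketch rather than figures from the paper:

import heapq
import random

random.seed(1)

BLOCK_INTERVAL = 600.0           # mean seconds between blocks (assumption: ~10 minutes)
BLOCK_CAPACITY = 2000            # max transactions per block (assumption)
RATE_FEE, RATE_FREE = 2.0, 1.3   # arrivals per second with / without fees (assumptions)
HORIZON = 100_000.0              # length of the simulated period in seconds

def poisson_times(rate, horizon):
    """Arrival times of a Poisson process with the given rate on [0, horizon]."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

arrivals = [(t, "fee") for t in poisson_times(RATE_FEE, HORIZON)]
arrivals += [(t, "free") for t in poisson_times(RATE_FREE, HORIZON)]
arrivals.sort()

pool = []                        # heap of (class, arrival time): fee-paying first, FIFO within a class
delays = {"fee": [], "free": []}
i = 0
for block_time in poisson_times(1.0 / BLOCK_INTERVAL, HORIZON):
    # every transaction that arrived before this block joins the pool
    while i < len(arrivals) and arrivals[i][0] <= block_time:
        t, kind = arrivals[i]
        heapq.heappush(pool, (0 if kind == "fee" else 1, t, kind))
        i += 1
    # the block confirms up to BLOCK_CAPACITY of the highest-priority transactions
    for _ in range(min(BLOCK_CAPACITY, len(pool))):
        _, t, kind = heapq.heappop(pool)
        delays[kind].append(block_time - t)

for kind in ("fee", "free"):
    d = delays[kind]
    print(f"{kind:4s}: {len(d)} confirmed, mean confirmation time {sum(d) / len(d):8.1f} s")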
1604.00105
1
Recent empirical studies suggest that the volatility of an underlying price process may have correlations that decay relatively slowly under certain market conditions. In this paper, the volatility is modeled as a stationary process with long-range correlation properties to capture such a situation and we consider European option pricing. This means that the volatility process is neither a Markov process nor a martingale. However, by exploiting the fact that the price process still is a semimartingale and accordingly using the martingale method, one can get an analytical expression for the option price in the regime when the volatility process is fast mean reverting . The volatility process is here modeled as a smooth and bounded function of a fractional Ornstein Uhlenbeck processand we give the expression for the implied volatility which has a fractional term structure.
Recent empirical studies suggest that the volatility of an underlying price process may have correlations that decay slowly under certain market conditions. In this paper, the volatility is modeled as a stationary process with long-range correlation properties in order to capture such a situation , and we consider European option pricing. This means that the volatility process is neither a Markov process nor a martingale. However, by exploiting the fact that the price process is still a semimartingale and accordingly using the martingale method, we can obtain an analytical expression for the option price in the regime where the volatility process is fast mean-reverting . The volatility process is modeled as a smooth and bounded function of a fractional Ornstein-Uhlenbeck process. We give the expression for the implied volatility , which has a fractional term structure.
[ { "type": "D", "before": "relatively", "after": null, "start_char_pos": 117, "end_char_pos": 127 }, { "type": "A", "before": null, "after": "in order", "start_char_pos": 272, "end_char_pos": 272 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 301, "end_char_pos": 301 }, { "type": "R", "before": "still is", "after": "is still", "start_char_pos": 483, "end_char_pos": 491 }, { "type": "R", "before": "one can get", "after": "we can obtain", "start_char_pos": 554, "end_char_pos": 565 }, { "type": "R", "before": "when", "after": "where", "start_char_pos": 626, "end_char_pos": 630 }, { "type": "R", "before": "mean reverting", "after": "mean-reverting", "start_char_pos": 662, "end_char_pos": 676 }, { "type": "D", "before": "here", "after": null, "start_char_pos": 705, "end_char_pos": 709 }, { "type": "R", "before": "Ornstein Uhlenbeck processand we", "after": "Ornstein-Uhlenbeck process. We", "start_char_pos": 767, "end_char_pos": 799 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 847, "end_char_pos": 847 } ]
[ 0, 167, 342, 427, 678 ]
1604.00148
1
This paper examines the integration process of the Japanese major rice markets (Tokyo and Osaka) from 1881 to 1932. Using a non-Bayesian time-varying vector error correction (VEC) model, we argue that the process strongly depended on the government's policy on the network system of telegram and telephone; rice traders with an intention in using the modern communication tools were usually affected by the changes of the policy. We find that (i) the Japanese rice markets had been integrated in the 1910s; (ii) the increasing use of telegraphs had accelerated the rice market integration since the Meiji period in Japan; (iii) the local phone , which reduced the urban users' time for sending and receiving telegrams, promoted the market integration.
This paper examines the integration process of the Japanese major rice markets (Tokyo and Osaka) from 1881 to 1932. Using a non-Bayesian time-varying vector error correction model, we argue that the process strongly depended on the government's policy on the network system of the telegram and telephone; rice traders with an intention to use modern communication tools were usually affected by the changes in policy. We find that (i) the Japanese rice markets had been integrated in the 1910s; (ii) increasing use of telegraphs had accelerated rice market integration from the Meiji period in Japan; and (iii) local telephone system , which reduced the time spent by urban users sending and receiving telegrams, promoted market integration.
[ { "type": "D", "before": "(VEC)", "after": null, "start_char_pos": 174, "end_char_pos": 179 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 283, "end_char_pos": 283 }, { "type": "R", "before": "in using the", "after": "to use", "start_char_pos": 339, "end_char_pos": 351 }, { "type": "R", "before": "of the", "after": "in", "start_char_pos": 416, "end_char_pos": 422 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 513, "end_char_pos": 516 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 562, "end_char_pos": 565 }, { "type": "R", "before": "since", "after": "from", "start_char_pos": 590, "end_char_pos": 595 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 623, "end_char_pos": 623 }, { "type": "R", "before": "the local phone", "after": "local telephone system", "start_char_pos": 630, "end_char_pos": 645 }, { "type": "R", "before": "urban users' time for", "after": "time spent by urban users", "start_char_pos": 666, "end_char_pos": 687 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 730, "end_char_pos": 733 } ]
[ 0, 115, 307, 430, 507, 622 ]
1604.00412
1
Diverse molecules induce general anesthesia with potency strongly correlated both with their hydrophobicity and their effects on certain ion channels. We recently observed that several anesthetics inhibit heterogeneity in plasma membrane derived vesicles by lowering the critical temperature (T_c) for phase separation. Here we exploit conditions that stabilize membrane heterogeneity to test the correlation between the anesthetic potency of n-alcohols and effects on T_c. First we show that hexadecanol acts oppositely to anesthetics on membrane mixing and antagonizes ethanol induced anesthesia in a tadpole behavioral assay. Second, we show that two previously described `intoxication reversers' raise T_c in vesicles and counter ethanol's effects in vesicles, mimicking the findings of previous electrophysiological measurements. Third, we find that hydrostatic pressure, long known to reverse anesthesia, also raises T_c in vesicles with a magnitude that counters the effect of an anesthetic at relevant concentrations and pressures. Taken togetner , these results demonstrate that \Delta T_c predicts anesthetic potency for n-alcohols better than hydrophobicity in a range of contexts, supporting a mechanistic role for membrane heterogeneity in general anesthesia.
Diverse molecules induce general anesthesia with potency strongly correlated both with their hydrophobicity and their effects on certain ion channels. We recently observed that several n-alcohol anesthetics inhibit heterogeneity in plasma membrane derived vesicles by lowering the critical temperature (T_c) for phase separation. Here we exploit conditions that stabilize membrane heterogeneity to further test the correlation between the anesthetic potency of n-alcohols and effects on T_c. First we show that hexadecanol acts oppositely to n-alcohol anesthetics on membrane mixing and antagonizes ethanol induced anesthesia in a tadpole behavioral assay. Second, we show that two previously described `intoxication reversers' raise T_c and counter ethanol's effects in vesicles, mimicking the findings of previous electrophysiological and behavioral measurements. Third, we find that hydrostatic pressure, long known to reverse anesthesia, also raises T_c in vesicles with a magnitude that counters the effect of butanol at relevant concentrations and pressures. Taken together , these results demonstrate that \Delta T_c predicts anesthetic potency for n-alcohols better than hydrophobicity in a range of contexts, supporting a mechanistic role for membrane heterogeneity in general anesthesia.
[ { "type": "A", "before": null, "after": "n-alcohol", "start_char_pos": 185, "end_char_pos": 185 }, { "type": "A", "before": null, "after": "further", "start_char_pos": 389, "end_char_pos": 389 }, { "type": "A", "before": null, "after": "n-alcohol", "start_char_pos": 526, "end_char_pos": 526 }, { "type": "D", "before": "in vesicles", "after": null, "start_char_pos": 713, "end_char_pos": 724 }, { "type": "A", "before": null, "after": "and behavioral", "start_char_pos": 824, "end_char_pos": 824 }, { "type": "R", "before": "an anesthetic", "after": "butanol", "start_char_pos": 988, "end_char_pos": 1001 }, { "type": "R", "before": "togetner", "after": "together", "start_char_pos": 1050, "end_char_pos": 1058 } ]
[ 0, 150, 320, 475, 631, 838, 1043 ]
1604.00596
1
This note proposes a new get-rich-quick scheme that involves trading in a stock with a continuous but not constant price path. The existence of such a scheme, whose practical value is tempered by its use of the Axiom of Choice , shows that imposing regularity conditions (such as measurability) is essential even in the foundations of game-theoretic probability .
This paper proposes new get-rich-quick schemes that involve trading in a stock with a non-degenerate price path. For simplicity the interest rate is assumed zero. If the price path is assumed continuous, the trader can become infinitely rich immediately after it becomes non-constant (if it ever does). If it is assumed positive, he can become infinitely rich immediately after reaching a point in time such that the variation of the log price is infinite in any right neighbourhood of that point (whereas reaching a point in time such that the variation of the log price is infinite in any left neighbourhood of that point is not sufficient). The practical value of these schemes is tempered by their use of the Axiom of Choice .
[ { "type": "R", "before": "note proposes a", "after": "paper proposes", "start_char_pos": 5, "end_char_pos": 20 }, { "type": "R", "before": "scheme that involves", "after": "schemes that involve", "start_char_pos": 40, "end_char_pos": 60 }, { "type": "R", "before": "continuous but not constant", "after": "non-degenerate", "start_char_pos": 87, "end_char_pos": 114 }, { "type": "R", "before": "The existence of such a scheme, whose practical value", "after": "For simplicity the interest rate is assumed zero. If the price path is assumed continuous, the trader can become infinitely rich immediately after it becomes non-constant (if it ever does). If it is assumed positive, he can become infinitely rich immediately after reaching a point in time such that the variation of the log price is infinite in any right neighbourhood of that point (whereas reaching a point in time such that the variation of the log price is infinite in any left neighbourhood of that point is not sufficient). The practical value of these schemes", "start_char_pos": 127, "end_char_pos": 180 }, { "type": "R", "before": "its", "after": "their", "start_char_pos": 196, "end_char_pos": 199 }, { "type": "D", "before": ", shows that imposing regularity conditions (such as measurability) is essential even in the foundations of game-theoretic probability", "after": null, "start_char_pos": 227, "end_char_pos": 361 } ]
[ 0, 126 ]
1604.01210
1
Network enrichment analysis (NEA) is a powerful method, that integrates gene enrichment analysis with information on dependences between genes . Existing tests for NEA rely on normality assumptions, they can deal only with undirected networks and are computationally slow . We propose PNEA, an alternative test based on the hypergeometric distribution . PNEA can be applied also to directed and mixed networks , and our simulations show that it is faster and more powerful than existing NEA tests. The method is implemented in the R package pnea, that can be freely downloaded from CRAN repositories. Application to genetic data shows that PNEA detects most of the enrichments that are found with traditional GEA tests, and unveils some further enrichments that would be overlooked, if dependences between genes were ignored.
Network enrichment analysis is a powerful method, which allows to integrate gene enrichment analysis with the information on relationships between genes that is provided by gene networks . Existing tests for network enrichment analysis deal only with undirected networks , they can be computationally slow and are based on normality assumptions . We propose NEAT, a test for network enrichment analysis. The test is based on the hypergeometric distribution , which naturally arises as the null distribution in this context. NEAT can be applied not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as the one of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by testing for associations between GO Slim categories themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based tests. The method is implemented in the R package neat, which can be freely downloaded from CRAN URL
[ { "type": "D", "before": "(NEA)", "after": null, "start_char_pos": 28, "end_char_pos": 33 }, { "type": "R", "before": "that integrates", "after": "which allows to integrate", "start_char_pos": 56, "end_char_pos": 71 }, { "type": "R", "before": "information on dependences between genes", "after": "the information on relationships between genes that is provided by gene networks", "start_char_pos": 102, "end_char_pos": 142 }, { "type": "R", "before": "NEA rely on normality assumptions, they can", "after": "network enrichment analysis", "start_char_pos": 164, "end_char_pos": 207 }, { "type": "R", "before": "and are computationally slow", "after": ", they can be computationally slow and are based on normality assumptions", "start_char_pos": 243, "end_char_pos": 271 }, { "type": "R", "before": "PNEA, an alternative test", "after": "NEAT, a test for network enrichment analysis. The test is", "start_char_pos": 285, "end_char_pos": 310 }, { "type": "R", "before": ". PNEA", "after": ", which naturally arises as the null distribution in this context. NEAT", "start_char_pos": 352, "end_char_pos": 358 }, { "type": "R", "before": "also to directed and mixed networks , and our simulations show that it is faster and more powerful than existing NEA", "after": "not only to undirected, but to directed and partially directed networks as well. Our simulations indicate that NEAT is considerably faster than alternative resampling-based methods, and that its capacity to detect enrichments is at least as good as the one of alternative tests. We discuss applications of NEAT to network analyses in yeast by testing for enrichment of the Environmental Stress Response target gene set with GO Slim and KEGG functional gene sets, and also by testing for associations between GO Slim categories themselves. NEAT is a flexible and efficient test for network enrichment analysis that aims to overcome some limitations of existing resampling-based", "start_char_pos": 374, "end_char_pos": 490 }, { "type": "R", "before": "pnea, that", "after": "neat, which", "start_char_pos": 541, "end_char_pos": 551 }, { "type": "R", "before": "repositories. Application to genetic data shows that PNEA detects most of the enrichments that are found with traditional GEA tests, and unveils some further enrichments that would be overlooked, if dependences between genes were ignored.", "after": "URL", "start_char_pos": 587, "end_char_pos": 825 } ]
[ 0, 273, 497, 600 ]
1604.01329
1
Till now, in biological sciences, the term, transcription, only refers to DNA to RNA transcription. But our recently published experimental findings obtained from Plasmodium falciparum strongly suggest the existence of DNA to DNA transcription in the genome of eukaryotic cells, which could shed some light on the mystery of large amounts of noncoding DNA in the human and other eukaryotic genomes.
Till now, in biological sciences, the term, transcription, mainly refers to DNA to RNA transcription. But our recently published experimental findings obtained from Plasmodium falciparum strongly suggest the existence of DNA to DNA transcription in the genome of eukaryotic cells, which could shed some light on the functions of large amounts of noncoding DNA in the human and other eukaryotic genomes.
[ { "type": "R", "before": "only", "after": "mainly", "start_char_pos": 59, "end_char_pos": 63 }, { "type": "R", "before": "mystery", "after": "functions", "start_char_pos": 314, "end_char_pos": 321 } ]
[ 0, 99 ]
1604.01359
1
This is a commentary on three recent reports in Nature journals regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report creation of a magnetically-gated ion channel by attaching ferritin to an ion channel and pulling on the ferritin with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims are in conflict with basic laws of physics taught in college . The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. One can conveniently analyze these reports with the same back-of-the-envelope calculations .
This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report creation of a magnetically-gated ion channel by attaching ferritin to an ion channel and pulling on the ferritin with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims are in conflict with basic laws of physics . The discrepancies are large: from 5 to 9 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells .
[ { "type": "R", "before": "a commentary on three recent reports in Nature journals", "after": "an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports", "start_char_pos": 8, "end_char_pos": 63 }, { "type": "D", "before": "taught in college", "after": null, "start_char_pos": 535, "end_char_pos": 552 }, { "type": "R", "before": "10", "after": "9", "start_char_pos": 594, "end_char_pos": 596 }, { "type": "R", "before": "One can conveniently analyze these reports with the same back-of-the-envelope calculations", "after": "The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells", "start_char_pos": 732, "end_char_pos": 822 } ]
[ 0, 106, 253, 437, 460, 554, 607, 731 ]
1604.01359
2
This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report creation of a magnetically-gated ion channel by attaching ferritin to an ion channel and pulling on the ferritin with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims are in conflict with basic laws of physics. The discrepancies are large: from 5 to 9 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells.
This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells.
[ { "type": "R", "before": "creation of a magnetically-gated ion channel", "after": "magnetic control of membrane conductance", "start_char_pos": 347, "end_char_pos": 391 }, { "type": "R", "before": "and pulling on the ferritin", "after": "protein and then tugging the ferritin or heating it", "start_char_pos": 432, "end_char_pos": 459 }, { "type": "D", "before": "are in", "after": null, "start_char_pos": 559, "end_char_pos": 565 }, { "type": "R", "before": "9", "after": "10", "start_char_pos": 642, "end_char_pos": 643 } ]
[ 0, 81, 173, 320, 504, 527, 602, 654, 778 ]
1604.02370
1
The Extended Yard-Sale Model of asset exchange is an agent-based economic model with binary transactions, and simple models of redistribution and Wealth-Attained Advantage. As recently shown, the model exhibits a second-order phase transition to a coexistence regime with partial wealth condensation. The evolution of its wealth distribution is described by a nonlinear, nonlocal Fokker-Planck equation. In this work, we demonstrate that solutions to this equation fit remarkably well to the actual wealth distribution of the U. S. in 2013. The two fit parameters provide evidence that the U.S. wealth distribution is partially wealth condensed.\\%DIF > over this time period. We present the model parameters for the US wealth distribution data as a function of time under the assumption that the distribution responds to their variation adiabatically. We argue that the time series of model parameters thus obtained provides a valuable new diagnostic tool for analyzing wealth inequality.
We present a stochastic, agent-based , binary-transaction Asset-Exchange Model (AEM) for wealth distribution that allows for agents with negative wealth. This model retains certain features of prior AEMs such as redistribution and wealth-attained advantage, but it also allows for shifts as well as scalings of the agent density function. We derive the Fokker-Planck equation describing its time evolution and we describe its numerical solution, including a methodology for solving the inverse problem of finding the model parameters that best match empirical data. Using this methodology, we compare the steady-state solutions of the Fokker-Planck equation with data from the United States Survey of Consumer Finances over a time period of 27 years. In doing so, we demonstrate agreement with empirical data of an average error less than 0.16\\%DIF > over this time period. We present the model parameters for the US wealth distribution data as a function of time under the assumption that the distribution responds to their variation adiabatically. We argue that the time series of model parameters thus obtained provides a valuable new diagnostic tool for analyzing wealth inequality.
[ { "type": "R", "before": "The Extended Yard-Sale Model of asset exchange is an", "after": "We present a stochastic,", "start_char_pos": 0, "end_char_pos": 52 }, { "type": "R", "before": "economic model with binary transactions, and simple models of redistribution and Wealth-Attained Advantage. As recently shown, the model exhibits a second-order phase transition to a coexistence regime with partial wealth condensation. The evolution of its wealth distribution is described by a nonlinear, nonlocal", "after": ", binary-transaction Asset-Exchange Model (AEM) for wealth distribution that allows for agents with negative wealth. This model retains certain features of prior AEMs such as redistribution and wealth-attained advantage, but it also allows for shifts as well as scalings of the agent density function. We derive the", "start_char_pos": 65, "end_char_pos": 379 }, { "type": "R", "before": "equation. In this work, we demonstrate that solutions to this equation fit remarkably well to the actual wealth distribution of the U. S. in 2013. The two fit parameters provide evidence that the U.S. wealth distribution is partially wealth condensed.", "after": "equation describing its time evolution and we describe its numerical solution, including a methodology for solving the inverse problem of finding the model parameters that best match empirical data. Using this methodology, we compare the steady-state solutions of the Fokker-Planck equation with data from the United States Survey of Consumer Finances over a time period of 27 years. In doing so, we demonstrate agreement with empirical data of an average error less than 0.16", "start_char_pos": 394, "end_char_pos": 645 } ]
[ 0, 172, 300, 403, 540, 645, 676, 852 ]
1604.02708
1
Adhesion molecules play an integral role in diverse biological functions ranging from cellular growth to transport . Estimation of their binding affinity , therefore, becomes important to quantify their biophysical impact on these phenomena. In this paper , we use curvature elasticity to present non-intuitive, yet remarkably simple, universal relationships to tease out adhesion energy from vesicle-substrate experiments . Our study reveals that the inverse of the height, exponential of the contact area, and the force required to detach the vesicle from the substrate vary linearly with the square root of the adhesion energy . We validate the modeling predictions with experimental data from two previous studies.
Adhesion plays an integral role in diverse biological functions ranging from cellular transport to tissue development . Estimation of adhesion strength , therefore, becomes important to gain biophysical insight into these phenomena. In this Letter , we use curvature elasticity to present non-intuitive, yet remarkably simple, universal relationships that capture vesicle-substrate interactions . Our study reveals that the inverse of the height, exponential of the contact area, and the force required to detach the vesicle from the substrate vary linearly with the square root of the adhesion energy . These relationships not only provide efficient strategies to tease out adhesion energy of biological molecules but can also be used to characterize the physical properties of elastic biomimetic nanoparticles . We validate the modeling predictions with experimental data from two previous studies.
[ { "type": "R", "before": "molecules play", "after": "plays", "start_char_pos": 9, "end_char_pos": 23 }, { "type": "R", "before": "growth to transport", "after": "transport to tissue development", "start_char_pos": 95, "end_char_pos": 114 }, { "type": "R", "before": "their binding affinity", "after": "adhesion strength", "start_char_pos": 131, "end_char_pos": 153 }, { "type": "R", "before": "quantify their biophysical impact on", "after": "gain biophysical insight into", "start_char_pos": 188, "end_char_pos": 224 }, { "type": "R", "before": "paper", "after": "Letter", "start_char_pos": 250, "end_char_pos": 255 }, { "type": "R", "before": "to tease out adhesion energy from", "after": "that capture", "start_char_pos": 359, "end_char_pos": 392 }, { "type": "R", "before": "experiments", "after": "interactions", "start_char_pos": 411, "end_char_pos": 422 }, { "type": "A", "before": null, "after": ". These relationships not only provide efficient strategies to tease out adhesion energy of biological molecules but can also be used to characterize the physical properties of elastic biomimetic nanoparticles", "start_char_pos": 630, "end_char_pos": 630 } ]
[ 0, 241, 424, 632 ]
1604.03305
1
We perform a statistical-mechanical study of the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation. We propose a model based on a self-replicating Langevin system, where cells choose their fate (i.e. , proliferation or differentiation) , depending on the local cell density. We show that our modelensures tissue homeostasis , which is regarded as URLanized criticality. Furthermore, we numerically demonstrate that the asymptotic clonal analysis exhibits the dynamical crossover of clone size statistics . Our results provide a unified platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical physics .
We study the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation. We propose a model of a self-replicating Langevin system, where cells choose their fate (i.e. proliferation or differentiation) depending on local cell density. Based on this model, we propose a scenario for URLanisms to maintain the density of cells (i.e., homeostasis) through cell-cell interactions , which is regarded as URLanized criticality. Furthermore, we numerically show that the distribution of the number of descendant cells changes over time, thus unifying the previously proposed two models regarding homeostasis: the critical birth death process and the voter model . Our results provide a general platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical mechanics .
[ { "type": "R", "before": "perform a statistical-mechanical study of", "after": "study", "start_char_pos": 3, "end_char_pos": 44 }, { "type": "R", "before": "based on", "after": "of", "start_char_pos": 165, "end_char_pos": 173 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 246, "end_char_pos": 247 }, { "type": "R", "before": ", depending on the", "after": "depending on", "start_char_pos": 282, "end_char_pos": 300 }, { "type": "R", "before": "We show that our modelensures tissue homeostasis", "after": "Based on this model, we propose a scenario for URLanisms to maintain the density of cells (i.e., homeostasis) through cell-cell interactions", "start_char_pos": 321, "end_char_pos": 369 }, { "type": "R", "before": "demonstrate that the asymptotic clonal analysis exhibits the dynamical crossover of clone size statistics", "after": "show that the distribution of the number of descendant cells changes over time, thus unifying the previously proposed two models regarding homeostasis: the critical birth death process and the voter model", "start_char_pos": 444, "end_char_pos": 549 }, { "type": "R", "before": "unified", "after": "general", "start_char_pos": 574, "end_char_pos": 581 }, { "type": "R", "before": "physics", "after": "mechanics", "start_char_pos": 677, "end_char_pos": 684 } ]
[ 0, 145, 320, 415, 551 ]
1604.03305
2
We study the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation. We propose a model of a self-replicating Langevin system, where cells choose their fate (i.e. proliferation or differentiation) depending on local cell density. Based on this model, we propose a scenario for URLanisms to maintain the density of cells (i.e., homeostasis) through cell-cell interactions , which is regarded as URLanized criticality . Furthermore, we numerically show that the distribution of the number of descendant cells changes over time, thus unifying the previously proposed two models regarding homeostasis: the critical birth death process and the voter model. Our results provide a general platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical mechanics.
We study the asymptotic behaviors of stochastic cell fate decision between proliferation and differentiation. We propose a model of a self-replicating Langevin system, where cells choose their fate (i.e. proliferation or differentiation) depending on local cell density. Based on this model, we propose a scenario for URLanisms to maintain the density of cells (i.e., homeostasis) through finite-ranged cell-cell interactions . Furthermore, we numerically show that the distribution of the number of descendant cells changes over time, thus unifying the previously proposed two models regarding homeostasis: the critical birth death process and the voter model. Our results provide a general platform for the study of stochastic cell fate decision in terms of nonequilibrium statistical mechanics.
[ { "type": "A", "before": null, "after": "finite-ranged", "start_char_pos": 389, "end_char_pos": 389 }, { "type": "D", "before": ", which is regarded as URLanized criticality", "after": null, "start_char_pos": 413, "end_char_pos": 457 } ]
[ 0, 109, 270, 459, 693 ]
1604.03409
1
Pentameric ligand-gated ion channels (pLGICs) of the Cys-loop superfamily are important neuroreceptors that mediate fast synaptic transmission. They are activated by the binding of a neurotransmitter, but the details of this process are still not fully understood. As a prototypical pLGIC, here we choose the insect resistance to dieldrin (RDL) receptor, involved in the resistance to insecticides, and investigate the binding of the neurotransmitter GABA to its extracellular domain at the atomistic level. We achieve this by means of \mu-sec funnel-metadynamics simulations, which efficiently enhance the sampling of bound and unbound states using a funnel-shaped restraining potential to limit the exploration in the solvent. We reveal the sequence of events in the binding process, from the capture of GABA from the solvent to its pinning between the charged residues Arg111 and Glu204 in the binding pocket. We characterize the associated free energy landscapes in the wild-type RDL receptor and in two mutant forms, where the key residues Arg111 and Glu204 are mutated to Ala. Experimentally these mutations produce non-functional channels, which is reflected in the reduced ligand binding affinities, due to the loss of essential interactions. We also analyze the dynamical behaviour of the crucial loop C, whose opening allows the access of GABA to the binding site, while its closure locks the ligand into the protein. The RDL receptor shares structural and functional features with other pLGICs . Hence our work outlines a valuable protocol to study the binding of ligands to pLGICs beyond conventional docking and molecular dynamics techniques.
Pentameric ligand-gated ion channels (pLGICs) of the Cys-loop superfamily are important neuroreceptors that mediate fast synaptic transmission. They are activated by the binding of a neurotransmitter, but the details of this process are still not fully understood. As a prototypical pLGIC, here we choose the insect resistance to dieldrin (RDL) receptor, involved in the resistance to insecticides, and investigate the binding of the neurotransmitter GABA to its extracellular domain at the atomistic level. We achieve this by means of \mu-sec funnel-metadynamics simulations, which efficiently enhance the sampling of bound and unbound states by using a funnel-shaped restraining potential to limit the exploration in the solvent. We reveal the sequence of events in the binding process, from the capture of GABA from the solvent to its pinning between the charged residues Arg111 and Glu204 in the binding pocket. We characterize the associated free energy landscapes in the wild-type RDL receptor and in two mutant forms, where the key residues Arg111 and Glu204 are mutated to Ala. Experimentally these mutations produce non-functional channels, which is reflected in the reduced ligand binding affinities, due to the loss of essential interactions. We also analyze the dynamical behaviour of the crucial loop C, whose opening allows the access of GABA to the binding site, while its closure locks the ligand into the protein. The RDL receptor shares structural and functional features with other pLGICs , hence our work outlines a valuable protocol to study the binding of ligands to pLGICs beyond conventional docking and molecular dynamics techniques.
[ { "type": "A", "before": null, "after": "by", "start_char_pos": 644, "end_char_pos": 644 }, { "type": "R", "before": ". Hence", "after": ", hence", "start_char_pos": 1506, "end_char_pos": 1513 } ]
[ 0, 143, 264, 507, 729, 913, 1083, 1251, 1428, 1507 ]
1604.03687
1
We show that some natural output conventions for error-free computation in chemical reaction networks (CRN) lead to a common level of computational expressivity. Our main results are that the standard definition of error-free CRNs have equivalent computational power to 1) asymmetric and 2) democratic CRNs. The former have only "yes" voters, with the interpretation that the CRN's output is yes if any voters are present and no otherwise. The latter define output by majority vote among "yes" and "no" voters. Both results are proven via a generalized framework that simultaneously captures several definitions, directly inspired by a recent Petri net result of Esparza, Ganty, Leroux, and Majumder [CONCUR 2015]. These results support the thesis that the computational expressivity of error-free CRNs is intrinsic, not sensitive to arbitrary definitional choices.
We show that some natural output conventions for error-free computation in chemical reaction networks (CRN) lead to a common level of computational expressivity. Our main results are that the standard consensus-based output convention have equivalent computational power to ( 1) existence-based and ( 2) democracy-based output conventions. The CRNs using the former output convention have only "yes" voters, with the interpretation that the CRN's output is yes if any voters are present and no otherwise. The CRNs using the latter output convention define output by majority vote among "yes" and "no" voters. Both results are proven via a generalized framework that simultaneously captures several definitions, directly inspired by a Petri net result of Esparza, Ganty, Leroux, and Majumder [CONCUR 2015]. These results support the thesis that the computational expressivity of error-free CRNs is intrinsic, not sensitive to arbitrary definitional choices.
[ { "type": "R", "before": "definition of error-free CRNs", "after": "consensus-based output convention", "start_char_pos": 201, "end_char_pos": 230 }, { "type": "A", "before": null, "after": "(", "start_char_pos": 270, "end_char_pos": 270 }, { "type": "R", "before": "asymmetric and", "after": "existence-based and (", "start_char_pos": 274, "end_char_pos": 288 }, { "type": "R", "before": "democratic CRNs. The former", "after": "democracy-based output conventions. The CRNs using the former output convention", "start_char_pos": 292, "end_char_pos": 319 }, { "type": "R", "before": "latter", "after": "CRNs using the latter output convention", "start_char_pos": 445, "end_char_pos": 451 }, { "type": "D", "before": "recent", "after": null, "start_char_pos": 637, "end_char_pos": 643 } ]
[ 0, 161, 440, 511, 715 ]
1604.03733
1
We generated a new computational approach to analyze the biomechanics of epithelial cell islands that combines both vertex and contact-inhibition-of-locomotion models to include both cell-cell and cell-substrate adhesion. Examination of the distribution of cell protrusions (adhesion to the substrate) in the model predicted high order profiles of URLanization that agree with those previously seen experimentally. Cells acquired an asymmetric distribution of protrusions (and traction forces ) that decreased when moving from the edge to the island center. Our in silico analysis also showed that tension on cell-cell junctions (and monolayer stress ) is not homogeneous across the island. Instead it is higher at the island center and scales up with island size, which we confirmed experimentally using laser ablation assays and immunofluorescence. Moreover , our approach has the minimal elements necessary to reproduce mechanical crosstalk between both cell-cell and cell substrate adhesion systems. We found that an increase in cell motility increased junctional tension and monolayer stress on cells several cell diameters behind the island edge. Conversely, an increase in junctional contractility increased the length scale within the island where traction forces were generated. We conclude that the computational method presented here has the capacity to reproduce emergent properties ( distribution of cellular forces and mechanical crosstalk ) of epithelial cell aggregates and make predictions for experimental validation. This would benefit the mechanical analysis of epithelial tissues, especially when local changes in cell-cell and/or cell-substrate adhesion drive collective cell behavior.
We generated a computational approach to analyze the biomechanics of epithelial cell aggregates, either island or stripes or entire monolayers, that combines both vertex and contact-inhibition-of-locomotion models to include both cell-cell and cell-substrate adhesion. Examination of the distribution of cell protrusions (adhesion to the substrate) in the model predicted high order profiles of URLanization that agree with those previously seen experimentally. Cells acquired an asymmetric distribution of basal protrusions, traction forces and apical aspect ratios that decreased when moving from the edge to the island center. Our in silico analysis also showed that tension on cell-cell junctions and apical stress is not homogeneous across the island. Instead , these parameters are higher at the island center and scales up with island size, which we confirmed experimentally using laser ablation assays and immunofluorescence. Without formally being a 3-dimensional model , our approach has the minimal elements necessary to reproduce the distribution of cellular forces and mechanical crosstalk as well as distribution of principal stress in cells within epithelial cell aggregates . By making experimental testable predictions, our approach would benefit the mechanical analysis of epithelial tissues, especially when local changes in cell-cell and/or cell-substrate adhesion drive collective cell behavior.
[ { "type": "D", "before": "new", "after": null, "start_char_pos": 15, "end_char_pos": 18 }, { "type": "R", "before": "islands", "after": "aggregates, either island or stripes or entire monolayers,", "start_char_pos": 89, "end_char_pos": 96 }, { "type": "R", "before": "protrusions (and traction forces )", "after": "basal protrusions, traction forces and apical aspect ratios", "start_char_pos": 460, "end_char_pos": 494 }, { "type": "R", "before": "(and monolayer stress )", "after": "and apical stress", "start_char_pos": 629, "end_char_pos": 652 }, { "type": "R", "before": "it is", "after": ", these parameters are", "start_char_pos": 699, "end_char_pos": 704 }, { "type": "R", "before": "Moreover", "after": "Without formally being a 3-dimensional model", "start_char_pos": 851, "end_char_pos": 859 }, { "type": "R", "before": "mechanical crosstalk between both cell-cell and cell substrate adhesion systems. We found that an increase in cell motility increased junctional tension and monolayer stress on cells several cell diameters behind the island edge. Conversely, an increase in junctional contractility increased the length scale within the island where traction forces were generated. We conclude that the computational method presented here has the capacity to reproduce emergent properties (", "after": "the", "start_char_pos": 923, "end_char_pos": 1396 }, { "type": "R", "before": ") of", "after": "as well as distribution of principal stress in cells within", "start_char_pos": 1454, "end_char_pos": 1458 }, { "type": "R", "before": "and make predictions for experimental validation. This", "after": ". By making experimental testable predictions, our approach", "start_char_pos": 1486, "end_char_pos": 1540 } ]
[ 0, 221, 414, 557, 690, 850, 1003, 1152, 1287, 1535 ]
1604.03996
1
Financial time series are approached from a systemic perspective, looking for evidence of URLanization. A methodology was developed to identify as units of study, each fall from a given maximum price level. A range, within the space of states, in which price falls could be explained as a process that follows a power law was explored. A critical level in the depth of price falls was found to separate a segment operating under a random walk regime, from a segment operating under a power law. This level was interpreted as a point of phase transition in a URLanized system. Evidence of URLanization was found in all stock market indices studied but in none of the control synthetic random series. Findings partially explain the fractal structure characteristic of financial time series and suggests that price fluctuations adopt two different operating regimes. We propose to identify downward movements larger than the critical level , explainable as subject to the power law, as URLanized states, making allowance to explaining price descends smaller than the critical level, as a random walk with the Markov property.
A methodology is developed to identify , as units of study, each decrease in the value of a stock from a given maximum price level. A critical level in the amount of price declines is found to separate a segment operating under a random walk from a segment operating under a power law. This level is interpreted as a point of phase transition into a URLanized system. Evidence of URLanization was found in all the stock market indices studied but in none of the control synthetic random series. Findings partially explain the fractal structure characteristic of financial time series and suggest that price fluctuations adopt two different operating regimes. We propose to identify downward movements larger than the critical level apparently subject to the power law, as URLanized states, and price decreases smaller than the critical level, as a random walk with the Markov property.
[ { "type": "R", "before": "Financial time series are approached from a systemic perspective, looking for evidence of URLanization. A methodology was", "after": "A methodology is", "start_char_pos": 0, "end_char_pos": 121 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 144, "end_char_pos": 144 }, { "type": "R", "before": "fall", "after": "decrease in the value of a stock", "start_char_pos": 169, "end_char_pos": 173 }, { "type": "D", "before": "range, within the space of states, in which price falls could be explained as a process that follows a power law was explored. A", "after": null, "start_char_pos": 210, "end_char_pos": 338 }, { "type": "R", "before": "depth of price falls was", "after": "amount of price declines is", "start_char_pos": 361, "end_char_pos": 385 }, { "type": "D", "before": "regime,", "after": null, "start_char_pos": 444, "end_char_pos": 451 }, { "type": "R", "before": "was", "after": "is", "start_char_pos": 507, "end_char_pos": 510 }, { "type": "R", "before": "in", "after": "into", "start_char_pos": 554, "end_char_pos": 556 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 619, "end_char_pos": 619 }, { "type": "R", "before": "suggests", "after": "suggest", "start_char_pos": 794, "end_char_pos": 802 }, { "type": "R", "before": ", explainable as", "after": "apparently", "start_char_pos": 939, "end_char_pos": 955 }, { "type": "R", "before": "making allowance to explaining price descends", "after": "and price decreases", "start_char_pos": 1003, "end_char_pos": 1048 } ]
[ 0, 103, 207, 336, 495, 576, 700, 865 ]
1604.04312
1
Macroeconomic theories of growth and wealth distribution have an outsized influence on national and international social and economic policy . Yet, due to a relative lack of reliable, system wide data, many such theories remain, at best, unvalidated and, at worst, misleading. In this paper, we introduce a novel economic observatory and framework for high resolution comparison and assessment of the distributional impact of economic development through remote sensing of the earth's surface. Striking visual and empirical validation is observed for broad macroeconomic sigma-convergence in the period immediately following the end of the Cold war as well as strong global divergence dynamics immediately following the financial crisis and Great Recession, the rise of China, the decline of U.S. manufacturing, the euro crisis, Arab Spring, and Middle East conflicts .
Macroeconomic theories of growth and wealth distribution have an outsized influence on national and international social and economic policies . Yet, due to a relative lack of reliable, system wide data, many such theories remain, at best, unvalidated and, at worst, misleading. In this paper, we introduce a novel economic observatory and framework enabling high resolution comparisons and assessments of the distributional impact of economic development through the remote sensing of planet earth's surface. Striking visual and empirical validation is observed for a broad, global macroeconomic sigma-convergence in the period immediately following the end of the Cold War. What is more, we observe strong empirical evidence that the mechanisms driving sigma-convergence failed immediately after the financial crisis and the start of the Great Recession. Nevertheless, analysis of both cross-country and cross-state samples indicates that, globally, disproportionately high growth levels and excessively high decay levels have become rarer over time. We also see that urban areas, especially concentrated within short distances of major capital cities were more likely than rural or suburban areas to see relatively high growth in the aftermath of the financial crisis. Observed changes in growth polarity can be attributed plausibly to post-crisis government intervention and subsidy policies introduced around the world. Overall, the data and techniques we present here make economic evidence for the rise of China, the decline of U.S. manufacturing, the euro crisis, the Arab Spring, and various, recent, Middle East conflicts visually evident for the first time .
[ { "type": "R", "before": "policy", "after": "policies", "start_char_pos": 134, "end_char_pos": 140 }, { "type": "R", "before": "for high resolution comparison and assessment", "after": "enabling high resolution comparisons and assessments", "start_char_pos": 348, "end_char_pos": 393 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 455, "end_char_pos": 455 }, { "type": "R", "before": "the", "after": "planet", "start_char_pos": 474, "end_char_pos": 477 }, { "type": "R", "before": "broad", "after": "a broad, global", "start_char_pos": 552, "end_char_pos": 557 }, { "type": "R", "before": "war as well as strong global divergence dynamics immediately following", "after": "War. What is more, we observe strong empirical evidence that the mechanisms driving sigma-convergence failed immediately after", "start_char_pos": 646, "end_char_pos": 716 }, { "type": "R", "before": "Great Recession, the", "after": "the start of the Great Recession. Nevertheless, analysis of both cross-country and cross-state samples indicates that, globally, disproportionately high growth levels and excessively high decay levels have become rarer over time. We also see that urban areas, especially concentrated within short distances of major capital cities were more likely than rural or suburban areas to see relatively high growth in the aftermath of the financial crisis. Observed changes in growth polarity can be attributed plausibly to post-crisis government intervention and subsidy policies introduced around the world. Overall, the data and techniques we present here make economic evidence for the", "start_char_pos": 742, "end_char_pos": 762 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 830, "end_char_pos": 830 }, { "type": "A", "before": null, "after": "various, recent,", "start_char_pos": 848, "end_char_pos": 848 }, { "type": "A", "before": null, "after": "visually evident for the first time", "start_char_pos": 871, "end_char_pos": 871 } ]
[ 0, 142, 276, 494 ]
1604.04608
1
We consider the super-hedging price of an American option in a discrete-time market in which stocks are available for dynamic trading and European options are available for static trading. We show that the super-hedging price \pi is given by the supremum over the prices of the American option under randomized models. That is, \pi=(c_i,Q_i)_i\sum_ic_i\phi^{Q_i}, where c_i \in %DIFDELCMD < \R%%% _+ and the martingale measure Q^i are chosen such that \sum_i c_i=1 and \sum_i c_iQ_i prices the European options correctly, and \phi^{Q_i} is the price of the American option under the model Q_i .
We consider the super-hedging price of an American option in a discrete-time market in which stocks are available for dynamic trading and European options are available for static trading. We show that the super-hedging price \pi is given by the supremum over the prices of the American option under randomized models. That is, \pi=(c_i,Q_i)_i\sum_ic_i\phi^{Q_i}, where c_i \in %DIFDELCMD < \R%%% \mathbb{R _+ and the martingale measure Q^i are chosen such that \sum_i c_i=1 and \sum_i c_iQ_i prices the European options correctly, and \phi^{Q_i} is the price of the American option under the model Q_i . Our result generalizes the example given in ArXiv:1604.02274 that the highest model based price can be considered as a randomization over models .
[ { "type": "A", "before": null, "after": "\\mathbb{R", "start_char_pos": 397, "end_char_pos": 397 }, { "type": "A", "before": null, "after": ". Our result generalizes the example given in ArXiv:1604.02274 that the highest model based price can be considered as a randomization over models", "start_char_pos": 594, "end_char_pos": 594 } ]
[ 0, 188, 318 ]
1604.04872
1
The equity risk premium puzzle is that the return on equities has far exceeded the average return on short-term risk-free debt and cannot be explained by conventional representative-agent consumption based equilibrium models. We review a few attempts to explain this anomaly: 1. Inclusion of highly unlikely events with low probability (Ugly state along with Good and Bad), or market crashes , recently also termed as Black Swans. 2. Slow moving habit added to the basic power utility function. 3. Allowing for a separation of the inter-temporal elasticity of substitution and risk aversion, combined with consumption and dividend growth rates modeled as containing a small persistent expected growth rate component and a fluctuating volatility which captures time varying economic uncertainty. We explore whether a fusion of the above approaches supplemented with better methods to handle the below reservations would provide a more realistic and yet tractable framework to tackle the various conundrums in the social sciences: 1. Unlimited ability of individuals to invest as compared to their ability to consume. 2. Lack of an objective measuring stick of value which gives rise to heterogeneous preferences and beliefs. 3. Unintended consequences due to the dynamic nature of social systems , where changes can be observed and decisions effected by participants to influence the system. 4. Relaxation of the transversality condition to avoid the formation of asset price bubbles . 5. How durable is durable? Since nothing lasts forever, accounting for durable goods to create a comprehensive measure of consumption volatility. The world we live in produces fascinating phenomenon despite (or perhaps, due to) being a hotchpotch of varying doses of the above elements. The rationale for a unified theory is that beauty can emerge from chaos since the best test for a stew is its taste .
The equity risk premium puzzle is that the return on equities has far exceeded the average return on short-term risk-free debt and cannot be explained by conventional representative-agent consumption based equilibrium models. We review a few attempts done over the years to explain this anomaly: 1. Inclusion of highly unlikely events with low probability (Ugly state along with Good and Bad), or market crashes / Black Swans. 2. Slow moving habit , or time-varying subsistence level, added to the basic power utility function. 3. A separation of the inter-temporal elasticity of substitution and risk aversion, combined with long run risks which captures time varying economic uncertainty. We explore whether a fusion of the above approaches supplemented with better methods to handle the below reservations would provide a more realistic and yet tractable framework to tackle the various conundrums in the social sciences: 1. Unlimited ability of individuals to invest as compared to their ability to consume. 2. Lack of an objective measuring stick of value 3. Unintended consequences due to the dynamic nature of social systems 4. Relaxation of the transversality condition to avoid the formation of asset price bubbles 5. How durable is durable? Accounting for durable goods since nothing lasts forever The world we live in produces fascinating phenomenon despite (or perhaps, due to) being a hotchpotch of varying doses of the above elements. The rationale for a unified theory is that beauty can emerge from chaos since the best test for a stew is its taste . Many long standing puzzles seem to have been resolved using different techniques. The various explanations need to stand the test of time before acceptance; but then unexpected outcomes set in and new puzzles emerge. As real analysis and limits tell us: We are getting Closer and Closer; Yet it seems we are still Far Far Away.. .
[ { "type": "A", "before": null, "after": "done over the years", "start_char_pos": 251, "end_char_pos": 251 }, { "type": "R", "before": ", recently also termed as", "after": "/", "start_char_pos": 393, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ", or time-varying subsistence level,", "start_char_pos": 453, "end_char_pos": 453 }, { "type": "R", "before": "Allowing for a", "after": "A", "start_char_pos": 500, "end_char_pos": 514 }, { "type": "R", "before": "consumption and dividend growth rates modeled as containing a small persistent expected growth rate component and a fluctuating volatility", "after": "long run risks", "start_char_pos": 608, "end_char_pos": 746 }, { "type": "D", "before": "which gives rise to heterogeneous preferences and beliefs.", "after": null, "start_char_pos": 1167, "end_char_pos": 1225 }, { "type": "D", "before": ", where changes can be observed and decisions effected by participants to influence the system.", "after": null, "start_char_pos": 1297, "end_char_pos": 1392 }, { "type": "D", "before": ".", "after": null, "start_char_pos": 1485, "end_char_pos": 1486 }, { "type": "R", "before": "Since nothing lasts forever, accounting", "after": "Accounting", "start_char_pos": 1514, "end_char_pos": 1553 }, { "type": "R", "before": "to create a comprehensive measure of consumption volatility.", "after": "since nothing lasts forever", "start_char_pos": 1572, "end_char_pos": 1632 }, { "type": "A", "before": null, "after": ". Many long standing puzzles seem to have been resolved using different techniques. The various explanations need to stand the test of time before acceptance; but then unexpected outcomes set in and new puzzles emerge. As real analysis and limits tell us: We are getting Closer and Closer; Yet it seems we are still Far Far Away..", "start_char_pos": 1890, "end_char_pos": 1890 } ]
[ 0, 225, 431, 496, 796, 1117, 1225, 1392, 1513, 1632, 1773 ]
1604.05404
1
This article presents a model of haircuts and economic capital for repo. We propose a credit approach to solve haircuts such that the exposure to market riskmeets a prescribed credit rating scale measured by expected loss. Specifically for securities financing business, a credit risk capital approach is also adopted where the borrower dependent haircut is set to a level such that the resultant credit risk VaR is zero. The repo haircuts model incorporates asset risk, borrower credit risk, wrong way risk, and market liquidity risk. Double exponential jump-diffusion type processes are used to model single asset or portfolio price dynamics. Borrower credit is captured by a log-Ornstein-Uhlenbeck default intensity model. Economic capital defined either as unexpected loss from CVaR or expected shortfall is computed for securities financing transactions with negotiated haircuts, to form the basis to levy a capital charge in pre-trade and to fair value in post-trade. Numerical techniques employing two-sided Laplace transform inversion and maximum likelihood estimation of the jump-diffusion model are applied to compute haircuts of SPX500 index, US corporate bond and CMBS indices. Preliminary findings are that stress period calibrated jump-diffusion models can produce haircuts at the levels of BASEL's supervisory haircuts and that repo economic capital far exceeds expected loss and cost of capital has to be included in repo-style transactions pricing .
This article develops a haircut model by treating repos as debt investments and seeks haircuts to control counterparty contingent exposure to asset price gap risk. It corroborates well with empirically stylized facts, explains tri-party and bilateral repo haircut differences, recasts haircut increases during the financial crisis, and sets a limit on access liquidity dealers can extract while acting as funding intermediaries between money market funds and hedge funds. Once a haircut is set, repo's residual risk becomes a pricing challenge, as is neither hedgeable nor diversifiable. We propose a capital pricing approach of computing repo economic capital and charging the borrower a cost of capital . Capital charge is shown to be countercyclical and a key element of repo pricing and used in explaining the repo pricing puzzle and maturity compression phenomenon .
[ { "type": "R", "before": "presents a model of haircuts and economic capital for repo. We propose a credit approach to solve haircuts such that the exposure to market riskmeets a prescribed credit rating scale measured by expected loss. Specifically for securities financing business, a credit risk capital approach is also adopted where the borrower dependent haircut is set to a level such that the resultant credit risk VaR is zero. The repo haircuts model incorporates asset risk, borrower credit risk, wrong way risk, and market liquidity risk. Double exponential jump-diffusion type processes are used to model single asset or portfolio price dynamics. Borrower credit is captured by a log-Ornstein-Uhlenbeck default intensity model. Economic capital defined either as unexpected loss from CVaR or expected shortfall is computed for securities financing transactions with negotiated haircuts, to form the basis to levy a capital charge in pre-trade and to fair value in post-trade. Numerical techniques employing two-sided Laplace transform inversion and maximum likelihood estimation of the jump-diffusion model are applied to compute haircuts of SPX500 index, US corporate bond and CMBS indices. Preliminary findings are that stress period calibrated jump-diffusion models can produce haircuts at the levels of BASEL's supervisory haircuts and that", "after": "develops a haircut model by treating repos as debt investments and seeks haircuts to control counterparty contingent exposure to asset price gap risk. It corroborates well with empirically stylized facts, explains tri-party and bilateral repo haircut differences, recasts haircut increases during the financial crisis, and sets a limit on access liquidity dealers can extract while acting as funding intermediaries between money market funds and hedge funds. Once a haircut is set, repo's residual risk becomes a pricing challenge, as is neither hedgeable nor diversifiable. We propose a capital pricing approach of computing", "start_char_pos": 13, "end_char_pos": 1342 }, { "type": "R", "before": "far exceeds expected loss and", "after": "and charging the borrower a", "start_char_pos": 1365, "end_char_pos": 1394 }, { "type": "R", "before": "has to be included in repo-style transactions pricing", "after": ". Capital charge is shown to be countercyclical and a key element of repo pricing and used in explaining the repo pricing puzzle and maturity compression phenomenon", "start_char_pos": 1411, "end_char_pos": 1464 } ]
[ 0, 72, 222, 421, 535, 644, 725, 973, 1189 ]
1604.05404
2
This article develops a haircut model by treating repos as debt investments and seeks haircuts to control counterparty contingent exposure to asset price gap risk. It corroborates well with empirically stylized facts, explains tri-party and bilateral repo haircut differences, recasts haircut increases during the financial crisis , and sets a limit on access liquidity dealers can extract while acting as funding intermediaries between money market funds and hedge funds . Once a haircut is set, repo's residual risk becomes a pricing challenge, as is neither hedgeable nor diversifiable. We propose a capital pricing approach of computing repo economic capital and charging the borrower a cost of capital. Capital charge is shown to be countercyclical and a key element of repo pricing and used in explaining the repo pricing puzzle and maturity compression phenomenon .
A repurchase agreement lets investors borrow cash to buy securities. Financier only lends to securities' market value after a haircut and charges interest. Repo pricing is characterized with its puzzling dual pricing measures: repo haircut and repo spread. This article develops a repo haircut model by designing haircuts to achieve high credit criteria, and identifies economic capital for repo's default risk as the main driver of repo pricing. A simple repo spread formula is obtained that relates spread to haircuts negative linearly. An investor wishing to minimize all-in funding cost can settle at an optimal combination of haircut and repo rate. The model empirically reproduces repo haircut hikes concerning asset backed securities during the financial crisis. It explains tri-party and bilateral repo haircut differences, quantifies shortening tenor's risk reduction effect , and sets a limit on excess liquidity intermediating dealers can extract between money market funds and hedge funds .
[ { "type": "A", "before": null, "after": "A repurchase agreement lets investors borrow cash to buy securities. Financier only lends to securities' market value after a haircut and charges interest. Repo pricing is characterized with its puzzling dual pricing measures: repo haircut and repo spread.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "A", "before": null, "after": "repo", "start_char_pos": 25, "end_char_pos": 25 }, { "type": "R", "before": "treating repos as debt investments and seeks haircuts to control counterparty contingent exposure to asset price gap risk. It corroborates well with empirically stylized facts,", "after": "designing haircuts to achieve high credit criteria, and identifies economic capital for repo's default risk as the main driver of repo pricing. A simple repo spread formula is obtained that relates spread to haircuts negative linearly. An investor wishing to minimize all-in funding cost can settle at an optimal combination of haircut and repo rate. The model empirically reproduces repo haircut hikes concerning asset backed securities during the financial crisis. It", "start_char_pos": 43, "end_char_pos": 219 }, { "type": "R", "before": "recasts haircut increases during the financial crisis", "after": "quantifies shortening tenor's risk reduction effect", "start_char_pos": 279, "end_char_pos": 332 }, { "type": "R", "before": "access liquidity", "after": "excess liquidity intermediating", "start_char_pos": 355, "end_char_pos": 371 }, { "type": "D", "before": "while acting as funding intermediaries", "after": null, "start_char_pos": 392, "end_char_pos": 430 }, { "type": "D", "before": ". Once a haircut is set, repo's residual risk becomes a pricing challenge, as is neither hedgeable nor diversifiable. We propose a capital pricing approach of computing repo economic capital and charging the borrower a cost of capital. Capital charge is shown to be countercyclical and a key element of repo pricing and used in explaining the repo pricing puzzle and maturity compression phenomenon", "after": null, "start_char_pos": 474, "end_char_pos": 872 } ]
[ 0, 165, 475, 591, 709 ]
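As a rough illustration of the gap-risk idea behind the 1604.05404 abstract above (and only that idea; the paper's own haircut model and its capital-based pricing are not reproduced here), the following Python sketch sets a haircut equal to a loss quantile of an assumed lognormal collateral price over the repo tenor. The volatility, tenor and breach probability are hypothetical inputs.

import numpy as np
from scipy.stats import norm

def gap_risk_haircut(sigma, tenor_years, breach_prob):
    # Haircut h chosen so that, under an assumed lognormal price model with
    # annualized volatility sigma and zero drift, the probability that the
    # collateral loses more than h of its value over the tenor is at most
    # breach_prob.  Illustrative only, not the model of the paper.
    z = norm.ppf(breach_prob)                      # e.g. about -3.09 for 0.1%
    return 1.0 - np.exp(sigma * np.sqrt(tenor_years) * z)

# Hypothetical inputs: 30% vol, one-week tenor, 0.1% allowed breach probability.
print(f"illustrative haircut: {gap_risk_haircut(0.30, 7 / 365, 0.001):.1%}")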
1604.05516
1
Motivated by recent concerns that queuing delays in the Internet are on the rise, we conduct a performance evaluation of Compound TCP (C-TCP) in two topologies: a single bottleneck and a multi-bottleneck topology. The first topology consists of a single core router, and the second consists of two distinct sets of TCP flows, regulated by two edge routers, feeding into a common core router. For both topologies, we develop fluid models and conduct a detailed local stability analysis in the small buffer regime, and obtain necessary and sufficient conditions for local stability. Further, we show that the underlying nonlinear models undergo a Hopf bifurcation as the stability conditions just get violated. Using a combination of analysis and packet-level simulations, we emphasise that larger buffer thresholds, in addition to increasing latency, are prone to inducing limit cycles. These limit cycles in turn cause synchronisation among the TCP flows, and also result in a loss of link utilisation. For the single bottleneck topology, we empirically analyse some statistical properties of the bottleneck queue. We highlight that in a high bandwidth-delay product regime, and with a large number of long-lived flows, the bottleneck queue may be modelled as an M/M/1/B or an M/D/1/B queue. The combination of the dynamical and the statistical properties explored in this paper could have important implications for quality of service in the Internet.
Motivated by recent concerns that queuing delays in the Internet are on the rise, we conduct a performance evaluation of Compound TCP (C-TCP) in two topologies: a single bottleneck and a multi-bottleneck topology. The first topology consists of a single core router, and the second consists of two distinct sets of TCP flows, regulated by two edge routers, feeding into a common core router. For both topologies, we develop fluid models and conduct a detailed local stability analysis in the small buffer regime, and obtain necessary and sufficient conditions for local stability. Further, we show that the underlying non-linear models undergo a Hopf bifurcation as the stability conditions just get violated. Using a combination of analysis and packet-level simulations, we emphasise that larger buffer thresholds, in addition to increasing latency, are prone to inducing limit cycles. These limit cycles in turn cause synchronisation among the TCP flows, and also result in a loss of link utilisation. For the single bottleneck topology, we empirically analyse some statistical properties of the bottleneck queue. We highlight that in a high bandwidth-delay product and a small buffer regime, and with a large number of long-lived flows, the bottleneck queue may be modelled as an M/M/1/B or an M/D/1/B queue. The combination of the dynamical and the statistical properties explored in this paper could have important implications for quality of service in the Internet.
[ { "type": "R", "before": "nonlinear", "after": "non-linear", "start_char_pos": 618, "end_char_pos": 627 }, { "type": "A", "before": null, "after": "and a small buffer", "start_char_pos": 1167, "end_char_pos": 1167 } ]
[ 0, 213, 391, 580, 708, 885, 1002, 1114, 1292 ]
1604.05516
2
Motivated by recent concerns that queuing delays in the Internet are on the rise, we conduct a performance evaluation of Compound TCP (C-TCP) in two topologies: a single bottleneck and a multi-bottleneck topology . The first topology consists of a single core router, and the second consists of two distinct sets of TCP flows, regulated by two edge routers, feeding into a common core router. For both topologies , we develop fluid models and conduct a detailed local stability analysis in the small buffer regime, and obtain necessary and sufficient conditions for local stability. Further, we show that the underlying non-linear models undergo a Hopf bifurcation as the stability conditions just get violated. Using a combination of analysis and packet-level simulations, we emphasise that larger buffer thresholds , in addition to increasing latency, are prone to inducing limit cycles . These limit cycles in turn cause synchronisation among the TCP flows, and also result in a loss of link utilisation. For the single bottleneck topology, we empirically analyse some statistical properties of the bottleneck queue. We highlight that in a high bandwidth-delay product and a small buffer regime , and with a large number of long-lived flows, the bottleneck queue may be modelled as an M/M/1/B or an M/D/1/B queue. The combination of the dynamical and the statistical properties explored in this paper could have important implications for quality of service in the Internet .
Motivated by recent concerns that queuing delays in the Internet are on the rise, we conduct a performance evaluation of Compound TCP (C-TCP) in two topologies: a single bottleneck and a multi-bottleneck topology , under different traffic scenarios . The first topology consists of a single bottleneck router, and the second consists of two distinct sets of TCP flows, regulated by two edge routers, feeding into a common core router. We focus on some dynamical and statistical properties of the underlying system. From a dynamical perspective , we develop fluid models in a regime wherein the number of flows is large, bandwidth-delay product is high, buffers are dimensioned small (independent of the bandwidth-delay product) and routers deploy a Drop-Tail queue policy. A detailed local stability analysis for these models yields the following key insight: smaller buffers favour stability. Additionally, we highlight that larger buffers , in addition to increasing latency, are prone to inducing limit cycles in the system dynamics, via a Hopf bifurcation . These limit cycles in turn cause synchronisation among the TCP flows, and also result in a loss of link utilisation. For the topologies considered, we also empirically analyse some statistical properties of the bottleneck queues. These statistical analyses serve to validate an important modelling assumption: that in the regime considered, each bottleneck queue may be approximated as either an M/M/1/B or an M/D/1/B queue. This immediately makes the modelling perspective attractive and the analysis tractable. Finally, we show that smaller buffers, in addition to ensuring stability and low latency, would also yield fairly good system performance, in terms of throughput and flow completion times .
[ { "type": "A", "before": null, "after": ", under different traffic scenarios", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": "core", "after": "bottleneck", "start_char_pos": 256, "end_char_pos": 260 }, { "type": "R", "before": "For both topologies", "after": "We focus on some dynamical and statistical properties of the underlying system. From a dynamical perspective", "start_char_pos": 394, "end_char_pos": 413 }, { "type": "R", "before": "and conduct a", "after": "in a regime wherein the number of flows is large, bandwidth-delay product is high, buffers are dimensioned small (independent of the bandwidth-delay product) and routers deploy a Drop-Tail queue policy. A", "start_char_pos": 440, "end_char_pos": 453 }, { "type": "R", "before": "in the small buffer regime, and obtain necessary and sufficient conditions for local stability. Further, we show that the underlying non-linear models undergo a Hopf bifurcation as the stability conditions just get violated. Using a combination of analysis and packet-level simulations, we emphasise that larger buffer thresholds", "after": "for these models yields the following key insight: smaller buffers favour stability. Additionally, we highlight that larger buffers", "start_char_pos": 488, "end_char_pos": 817 }, { "type": "A", "before": null, "after": "in the system dynamics, via a Hopf bifurcation", "start_char_pos": 890, "end_char_pos": 890 }, { "type": "R", "before": "single bottleneck topology, we", "after": "topologies considered, we also", "start_char_pos": 1018, "end_char_pos": 1048 }, { "type": "R", "before": "queue. We highlight that in a high bandwidth-delay product and a small buffer regime , and with a large number of long-lived flows, the", "after": "queues. These statistical analyses serve to validate an important modelling assumption: that in the regime considered, each", "start_char_pos": 1115, "end_char_pos": 1250 }, { "type": "R", "before": "modelled as", "after": "approximated as either", "start_char_pos": 1275, "end_char_pos": 1286 }, { "type": "R", "before": "The combination of the dynamical and the statistical properties explored in this paper could have important implications for quality of service in the Internet", "after": "This immediately makes the modelling perspective attractive and the analysis tractable. Finally, we show that smaller buffers, in addition to ensuring stability and low latency, would also yield fairly good system performance, in terms of throughput and flow completion times", "start_char_pos": 1319, "end_char_pos": 1478 } ]
[ 0, 215, 393, 583, 712, 892, 1009, 1121, 1318 ]
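The statistical claim in the 1604.05516 abstracts above is that, in the regime studied, each bottleneck queue is well approximated by an M/M/1/B or M/D/1/B queue. The sketch below merely evaluates the textbook M/M/1/B stationary distribution that such a validation would be compared against; the load and buffer size are illustrative values, not figures from the paper.

def mm1b_stationary(rho, B):
    # Stationary distribution (pi_0, ..., pi_B) of an M/M/1/B queue with
    # offered load rho = lambda / mu and buffer size B.
    if abs(rho - 1.0) < 1e-12:
        return [1.0 / (B + 1)] * (B + 1)
    c = (1.0 - rho) / (1.0 - rho ** (B + 1))
    return [c * rho ** n for n in range(B + 1)]

pi = mm1b_stationary(rho=0.95, B=15)
print("blocking (drop) probability:", pi[-1])
print("mean queue length:", sum(n * p for n, p in enumerate(pi)))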
1604.05517
1
We aim to generalize the duality results of Bouchard and Nutz (2015) to the case of American options. By introducing an enlarged canonical space, we reformulate the superhedging problem for American options as a problem for European options . Then in a discrete time market with finitely many liquid options, we show that the minimum superhedging cost of an American option equals to the supremum of the expectation of the payoff at all (weak) stopping times and under a suitable family of martingale measures. Moreover, by taking the limit on the number of liquid options, we obtain a new class of martingale optimal transport problems as well as a Kantorovich duality result .
We investigate pricing-hedging duality for American options in discrete time financial models where some assets are traded dynamically and others, e.g. a family of European options, only statically. In the first part of the paper we consider an abstract setting, which includes the classical case with a fixed reference probability measure as well as the robust framework with a non-dominated family of probability measures. Our first insight is that by considering a (universal) enlargement of the space, we can see American options as European options and recover the pricing-hedging duality, which may fail in the original formulation. This may be seen as a weak formulation of the original problem. Our second insight is that lack of duality is caused by the lack of dynamic consistency and hence a different enlargement with dynamic consistency is sufficient to recover duality: it is enough to consider (fictitious) extensions of the market in which all the assets are traded dynamically. In the second part of the paper we study two important examples of robust framework: the setup of Bouchard and Nutz (2015) and the martingale optimal transport setup of Beiglb\"ock et al. (2013), and show that our general results apply in both cases and allow us to obtain pricing-hedging duality for American options .
[ { "type": "R", "before": "aim to generalize the duality results of Bouchard and Nutz (2015) to the case of American options. By introducing an enlarged canonical", "after": "investigate pricing-hedging duality for American options in discrete time financial models where some assets are traded dynamically and others, e.g. a family of European options, only statically. In the first part of the paper we consider an abstract setting, which includes the classical case with a fixed reference probability measure as well as the robust framework with a non-dominated family of probability measures. Our first insight is that by considering a (universal) enlargement of the", "start_char_pos": 3, "end_char_pos": 138 }, { "type": "R", "before": "reformulate the superhedging problem for", "after": "can see", "start_char_pos": 149, "end_char_pos": 189 }, { "type": "R", "before": "a problem for European options . Then in a discrete time market with finitely many liquid options, we show that the minimum superhedging cost of an American option equals to the supremum of", "after": "European options and recover the pricing-hedging duality, which may fail in the original formulation. This may be seen as a weak formulation of", "start_char_pos": 210, "end_char_pos": 399 }, { "type": "R", "before": "expectation of the payoff at all (weak) stopping times and under a suitable family of martingale measures. Moreover, by taking the limit on the number of liquid options, we obtain a new class of", "after": "original problem. Our second insight is that lack of duality is caused by the lack of dynamic consistency and hence a different enlargement with dynamic consistency is sufficient to recover duality: it is enough to consider (fictitious) extensions of the market in which all the assets are traded dynamically. In the second part of the paper we study two important examples of robust framework: the setup of Bouchard and Nutz (2015) and the", "start_char_pos": 404, "end_char_pos": 598 }, { "type": "R", "before": "problems as well as a Kantorovich duality result", "after": "setup of Beiglb\\\"ock et al. (2013), and show that our general results apply in both cases and allow us to obtain pricing-hedging duality for American options", "start_char_pos": 628, "end_char_pos": 676 } ]
[ 0, 101, 242, 510 ]
1604.05896
1
Factor models are commonly used in financial applications to analyze portfolio risk and to decompose it to loadings of risk factors. A linear factor model often depends on a small number of carefully-chosen factors and it has been assumed that an arbitrary selection of factors does not yield a feasible factor model. We develop a statistical factor model, the random factor model, in which factors are chosen at random based on the random projection method. Random selection of factors has the important consequence that the factors are almost orthogonal with respect to each other. The developed random factor model is expected to preserve covariance between time-series. We derive probabilistic bounds for the accuracy of the random factor representation of time-series, their cross-correlations and covariances. As an application of the random factor model , we analyze reproduction of correlation coefficients in the well-diversified Russell 3,000 equity index using the random factor model. Comparison with the principal component analysis (PCA) shows that the random factor model requires significantly fewer factors to provide an equally accurate reproduction of correlation coefficients. This occurs despite the finding that PCA reproduces single equity return time-series more faithfully than the random factor model. Accuracy of a random factor model is not very sensitive to which particular set of randomly-chosen factors is used. A more general kind of universality of random factor models is also present: it does not much matter which particular method is used to construct the random factor model, accuracy of the resulting factor model is almost identical .
In a very high-dimensional vector space, two randomly-chosen vectors are almost orthogonal with high probability. Starting from this observation, we develop a statistical factor model, the random factor model, in which factors are chosen at random based on the random projection method. Randomness of factors has the consequence that the covariance matrix is well preserved in a linear factor representation. It also enables derivation of probabilistic bounds for the accuracy of the random factor representation of time-series, their cross-correlations and covariances. As an application, we analyze reproduction of time-series and their cross-correlation coefficients in the well-diversified Russell 3,000 equity index.
[ { "type": "R", "before": "Factor models are commonly used in financial applications to analyze portfolio risk and to decompose it to loadings of risk factors. A linear factor model often depends on a small number of carefully-chosen factors and it has been assumed that an arbitrary selection of factors does not yield a feasible factor model. We", "after": "In a very high-dimensional vector space, two randomly-chosen vectors are almost orthogonal with high probability. Starting from this observation, we", "start_char_pos": 0, "end_char_pos": 320 }, { "type": "R", "before": "Random selection", "after": "Randomness", "start_char_pos": 459, "end_char_pos": 475 }, { "type": "R", "before": "important consequence that the factors are almost orthogonal with respect to each other. The developed random factor model is expected to preserve covariance between time-series. We derive", "after": "consequence that covariance matrix is well preserved in a linear factor representation. It also enables derivation of", "start_char_pos": 495, "end_char_pos": 683 }, { "type": "D", "before": "of the random factor model", "after": null, "start_char_pos": 834, "end_char_pos": 860 }, { "type": "R", "before": "correlation", "after": "time-series and their cross-correlation", "start_char_pos": 890, "end_char_pos": 901 }, { "type": "D", "before": "using the random factor model. Comparison with the principal component analysis (PCA) shows that the random factor model requires significantly fewer factors to provide an equally accurate reproduction of correlation coefficients. This occurs despite the finding that PCA reproduces single equity return time-series more faithfully than the random factor model. Accuracy of a random factor model is not very sensitive to which particular set of randomly-chosen factors is used. A more general kind of universality of random factor models is also present: it does not much matter which particular method is used to construct the random factor model, accuracy of the resulting factor model is almost identical", "after": null, "start_char_pos": 966, "end_char_pos": 1673 } ]
[ 0, 132, 317, 458, 583, 673, 815, 996, 1196, 1327, 1443 ]
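The opening observation of the 1604.05896 abstract above, that randomly chosen vectors in a very high-dimensional space are almost orthogonal with high probability, is easy to check numerically. The sketch below draws unit-norm Gaussian vectors and reports the largest pairwise cosine similarity; the dimensions are arbitrary and the snippet does not attempt to reproduce the full random factor model.

import numpy as np

rng = np.random.default_rng(0)
d, k = 5000, 50                                   # ambient dimension, number of random directions
F = rng.standard_normal((k, d))
F /= np.linalg.norm(F, axis=1, keepdims=True)     # unit-norm random "factors"

cos = F @ F.T
off_diag = np.abs(cos[~np.eye(k, dtype=bool)])
print("largest |cosine| between distinct random vectors:", off_diag.max())
# Typical pairwise |cosine| values are of order 1/sqrt(d), about 0.014 here,
# so the random directions are nearly orthogonal.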
1604.06111
1
Biological network alignment (NA) aims to find regions of similarities between molecular networks of different species. NA can be either local (LNA) or global (GNA). LNA aims to identify highly conserved common subnetworks, which are typically small, while GNA aims to identify large common subnetworks, which are typically suboptimally conserved. We recently showed that LNA and GNA yield complementary results: LNA has high functional but low topological alignment quality, while GNA has high topological but low functional alignment quality. Thus, we propose IGLOO, a new approach that integrates GNA and LNA in hope to reconcile the two. We evaluate IGLOO against state-of-the-art LNA ( i.e., NetworkBLAST, NetAligner, AlignNemo, and AlignMCL) and GNA ( i.e., GHOST, NETAL, GEDEVO, MAGNA++, WAVE, and L-GRAAL) methods. We show that IGLOO allows for a trade-off between topological and functional alignment quality better than any of the existing LNA and GNA methods considered in our study.
Analogous to genomic sequence alignment, biological network alignment (NA) aims to find regions of similarities between molecular networks (rather than sequences) of different species. NA can be either local (LNA) or global (GNA). LNA aims to identify highly conserved common subnetworks, which are typically small, while GNA aims to identify large common subnetworks, which are typically suboptimally conserved. We recently showed that LNA and GNA yield complementary results: LNA has high functional but low topological alignment quality, while GNA has high topological but low functional alignment quality. Thus, we propose IGLOO, a new approach that integrates GNA and LNA in hope to reconcile the two. We evaluate IGLOO against state-of-the-art LNA ( NetworkBLAST, NetAligner, AlignNemo, and AlignMCL) and GNA ( GHOST, NETAL, GEDEVO, MAGNA++, WAVE, and L-GRAAL) methods. We show that IGLOO allows for a trade-off between topological and functional alignment quality better than the existing LNA and GNA methods considered in our study.
[ { "type": "R", "before": "Biological", "after": "Analogous to genomic sequence alignment, biological", "start_char_pos": 0, "end_char_pos": 10 }, { "type": "A", "before": null, "after": "(rather than sequences)", "start_char_pos": 98, "end_char_pos": 98 }, { "type": "D", "before": "i.e.,", "after": null, "start_char_pos": 692, "end_char_pos": 697 }, { "type": "D", "before": "i.e.,", "after": null, "start_char_pos": 759, "end_char_pos": 764 }, { "type": "D", "before": "any of", "after": null, "start_char_pos": 931, "end_char_pos": 937 } ]
[ 0, 120, 166, 348, 545, 642, 823 ]
1604.06763
1
We consider a network of multi-server queues wherein each job can be processed in parallel by any subset of servers within a pre-defined set that depends on its class . Each server is allocated in FCFS order at each queue. Jobs arrive according to Poisson processes, have independent exponential service requirements and are routed independently at random. We prove that , when stable, the network has a product-form stationary distribution. From a practical perspective, we propose an algorithm on this basis to allocate the resources of a computer cluster .
We represent a computer cluster as a multi-server queue with some arbitrary bipartite graph of compatibilities between jobs and servers. Each server processes its jobs sequentially in FCFS order. The service rate of a job at any given time is the sum of the service rates of all servers processing this job. We show that the corresponding queue is quasi-reversible and use this property to design a scheduling algorithm achieving balanced fair sharing of the service capacity.
[ { "type": "R", "before": "consider a network of", "after": "represent a computer cluster as a", "start_char_pos": 3, "end_char_pos": 24 }, { "type": "R", "before": "queues wherein each job can be processed in parallel by any subset of servers within a pre-defined set that depends on its class", "after": "queue with some arbitrary bipartite graph of compatibilities between jobs and servers", "start_char_pos": 38, "end_char_pos": 166 }, { "type": "R", "before": "is allocated", "after": "processes its jobs sequentially", "start_char_pos": 181, "end_char_pos": 193 }, { "type": "R", "before": "at each queue. Jobs arrive according to Poisson processes, have independent exponential service requirements and are routed independently at random. We prove that , when stable, the network has a product-form stationary distribution. From a practical perspective, we propose an algorithm on this basis to allocate the resources of a computer cluster", "after": ". The service rate of a job at any given time is the sum of the service rates of all servers processing this job. We show that the corresponding queue is quasi-reversible and use this property to design a scheduling algorithm achieving balanced fair sharing of the service capacity", "start_char_pos": 208, "end_char_pos": 557 } ]
[ 0, 222, 356, 441 ]
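The 1604.06763 abstract above describes a multi-server queue in which each server works through its compatible jobs in FCFS order and a job's service rate is the sum of the rates of all servers currently processing it. Under the natural reading of that rule (each server serves the oldest job in the queue it is compatible with), the instantaneous rate allocation for a given queue state can be computed as in the sketch below; the compatibility sets and rates in the example are hypothetical.

def fcfs_rates(jobs, server_classes, mu):
    # jobs: list of job classes, oldest first (head of the queue at index 0).
    # server_classes[s]: set of job classes server s can process.
    # mu[s]: service rate of server s.
    # Each server works on the oldest compatible job in the queue; a job's
    # total rate is the sum of the rates of the servers serving it.
    rates = [0.0] * len(jobs)
    for s, compatible in enumerate(server_classes):
        for i, c in enumerate(jobs):
            if c in compatible:
                rates[i] += mu[s]
                break
    return rates

# Two job classes {0, 1}; server 0 serves class 0, server 1 serves class 1,
# server 2 serves both, all with unit rate. Queue, oldest first: [1, 0, 0].
print(fcfs_rates([1, 0, 0], [{0}, {1}, {0, 1}], [1.0, 1.0, 1.0]))
# -> [2.0, 1.0, 0.0]: the class-1 job gets servers 1 and 2, the first class-0 job gets server 0.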
1604.06815
1
The TREX is a recently introduced method for performing sparse high-dimensional regression. Despite its statistical promise as an alternative to the lasso, square-root lasso, and scaled lasso, the TREX is computationally challenging in that it requires solving a non-convex optimization problem. This paper shows a remarkable result: despite the non-convexity of the TREX problem, there exists a polynomial-time algorithm that is guaranteed to find the global minimum. This result adds the TREX to a very short list of non-convex optimization problems that can be globally optimized (principal components analysis being a famous example). After deriving and developing this new approach, we demonstrate that (i) the ability of the TREX heuristic to reach the global minimum is strongly dependent on the difficulty of the underlying statistical problem, (ii) the polynomial-time algorithm for TREX permits a novel variable ranking and selection scheme, (iii) this scheme can be incorporated into a rule that controls the false discovery rate (FDR) of included features in the model. To achieve this last aim, we provide an extension of the results of Barber & Candes (2015) to establish that the knockoff filter framework can be applied to the TREX. This investigation thus provides both a rare case study of a heuristic for non-convex optimization and a novel way of exploiting non-convexity for statistical inference.
The TREX is a recently introduced method for performing sparse high-dimensional regression. Despite its statistical promise as an alternative to the lasso, square-root lasso, and scaled lasso, the TREX is computationally challenging in that it requires solving a non-convex optimization problem. This paper shows a remarkable result: despite the non-convexity of the TREX problem, there exists a polynomial-time algorithm that is guaranteed to find the global minimum. This result adds the TREX to a very short list of non-convex optimization problems that can be globally optimized (principal components analysis being a famous example). After deriving and developing this new approach, we demonstrate that (i) the ability of the preexisting TREX heuristic to reach the global minimum is strongly dependent on the difficulty of the underlying statistical problem, (ii) the new polynomial-time algorithm for TREX permits a novel variable ranking and selection scheme, (iii) this scheme can be incorporated into a rule that controls the false discovery rate (FDR) of included features in the model. To achieve this last aim, we provide an extension of the results of Barber & Candes (2015) to establish that the knockoff filter framework can be applied to the TREX. This investigation thus provides both a rare case study of a heuristic for non-convex optimization and a novel way of exploiting non-convexity for statistical inference.
[ { "type": "A", "before": null, "after": "preexisting", "start_char_pos": 731, "end_char_pos": 731 }, { "type": "A", "before": null, "after": "new", "start_char_pos": 863, "end_char_pos": 863 } ]
[ 0, 91, 295, 468, 638, 1083, 1250 ]
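For the 1604.06815 abstract above, the non-convexity stems from the TREX objective itself. The snippet below only evaluates that objective in the form it is usually stated (squared residual norm divided by a constant times the sup-norm of X^T applied to the residual, plus an l1 penalty); the exact constant c and the polynomial-time global algorithm, which roughly speaking splits the problem according to which coordinate attains that maximum, should be taken from the paper.

import numpy as np

def trex_objective(beta, X, y, c=0.5):
    # TREX-type objective (constant c as commonly stated; check the paper):
    #   ||y - X beta||_2^2 / (c * ||X^T (y - X beta)||_inf) + ||beta||_1
    # Not defined for a beta with exactly zero residual.
    r = y - X @ beta
    denom = c * np.max(np.abs(X.T @ r))
    return float(r @ r / denom + np.sum(np.abs(beta)))

# Tiny synthetic check with hypothetical data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta_true = np.zeros(10); beta_true[:2] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(50)
print(trex_objective(np.zeros(10), X, y), trex_objective(beta_true, X, y))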
1604.06917
1
Within the framework of the Merton model, we consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure , we estimate the pairwise copulasof such portfolio losses . Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable loss correlations. Anticipated idiosyncratic effects turn out to be negligible in almost every realistic setting . These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector.
We consider the problem of concurrent portfolio losses in two non-overlapping credit portfolios. In order to explore the full statistical dependence structure of such portfolio losses, we estimate their empirical pairwise copulas. Instead of a Gaussian dependence, we typically find a strong asymmetry in the copulas. Concurrent large portfolio losses are much more likely than small ones. Studying the dependences of these losses as a function of portfolio size, we moreover reveal that not only large portfolios of thousands of contracts, but also medium-sized and small ones with only a few dozens of contracts exhibit notable portfolio loss correlations. Anticipated idiosyncratic effects turn out to be negligible. These are troublesome insights not only for investors in structured fixed-income products, but particularly for the stability of the financial sector.
[ { "type": "R", "before": "Within the framework of the Merton model, we", "after": "We", "start_char_pos": 0, "end_char_pos": 44 }, { "type": "A", "before": null, "after": "of such portfolio losses", "start_char_pos": 201, "end_char_pos": 201 }, { "type": "R", "before": "the pairwise copulasof such portfolio losses", "after": "their empirical pairwise copulas", "start_char_pos": 216, "end_char_pos": 260 }, { "type": "A", "before": null, "after": "portfolio", "start_char_pos": 662, "end_char_pos": 662 }, { "type": "D", "before": "in almost every realistic setting", "after": null, "start_char_pos": 742, "end_char_pos": 775 } ]
[ 0, 138, 262, 349, 421, 681, 777 ]
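The central empirical object in the 1604.06917 abstract above is the pairwise copula of losses from two credit portfolios. A minimal sketch of such an empirical copula estimate is given below: rank-transform each loss series to pseudo-observations and count joint tail events. The simulated losses driven by a common factor are purely illustrative and are not the data of the study.

import numpy as np

def pseudo_observations(x):
    # Rank-transform a sample to (0, 1); these are the margins of the empirical copula.
    n = len(x)
    return (np.argsort(np.argsort(x)) + 1) / (n + 1)

# Illustrative losses of two hypothetical portfolios driven by a common factor.
rng = np.random.default_rng(1)
n = 10_000
z = rng.standard_normal(n)                     # common (market) factor
loss1 = np.exp(0.6 * z + 0.8 * rng.standard_normal(n))
loss2 = np.exp(0.6 * z + 0.8 * rng.standard_normal(n))

u, v = pseudo_observations(loss1), pseudo_observations(loss2)
p_joint = np.mean((u > 0.9) & (v > 0.9))       # empirical copula mass in the joint upper tail
print("P(both losses in their worst decile):", p_joint, "(0.01 under independence)")

Comparing such joint tail probabilities with the product of the marginal tail probabilities is one simple way to make the kind of asymmetric dependence described in the abstract visible.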