doc_id: string (length 2 to 10)
revision_depth: string (5 distinct values)
before_revision: string (length 3 to 309k)
after_revision: string (length 5 to 309k)
edit_actions: list
sents_char_pos: list
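The records below follow one another without commentary, one field per line in the order listed above. For orientation, here is a minimal, hypothetical loading sketch; the dump does not state how the records are stored on disk, so the file name "revisions.jsonl" and the one-JSON-object-per-line layout are assumptions, not the documented format.

```python
import json
from collections import Counter

def iter_records(path="revisions.jsonl"):
    """Yield one revision record per line of an assumed JSON-lines dump."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            # doc_id          : arXiv-style identifier, e.g. "1404.1181"
            # revision_depth  : revision number, one of 5 observed values
            # before_revision : abstract text before the revision
            # after_revision  : abstract text after the revision
            # edit_actions    : list of {"type": "R"/"A"/"D", "before", "after",
            #                   "start_char_pos", "end_char_pos"} edits
            # sents_char_pos  : apparent sentence-boundary offsets into before_revision
            yield rec

if __name__ == "__main__":
    # Example: summarize the edit types in the first record.
    first = next(iter_records())
    print(first["doc_id"], Counter(a["type"] for a in first["edit_actions"]))
```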
1404.1181
1
In this and a companion paper we outline a general framework for the thermodynamic description of open chemical reaction networks, with special regard to metabolic networks regulating cellular physiology and biochemical functions. We first introduce closed networks `` in a box '' , whose thermodynamics is subjected to strict physical constraints: the mass-action law, elementarity of processes, and detailed balance. We further digress on the role of solvents and on the seemingly unacknowledged property of network independence of free energy landscapes. We then open the system by assuming that the concentrations of certain substrate species (the chemostats) are fixed, whether because promptly regulated by the environment via contact with reservoirs, or because nearly constant in a time window. As a result, the system is driven out of equilibrium. A rich algebraic and topological structure ensues in the network of internal species: Emergent irreversible cycles are associated to nonvanishing affinities, whose symmetries are dictated by the breakage of conservation laws . We decompose the steady state entropy production rate in terms of fundamental fluxes and affinities in the spirit of Schnakenberg's theory of network thermodynamics, paving the way for the forthcoming treatment of the linear regime, of efficiency and tight coupling, of free energy transduction and of thermodynamic constraints for network reconstruction.
In this and a companion paper we outline a general framework for the thermodynamic description of open chemical reaction networks, with special regard to metabolic networks regulating cellular physiology and biochemical functions. We first introduce closed networks " in a box " , whose thermodynamics is subjected to strict physical constraints: the mass-action law, elementarity of processes, and detailed balance. We further digress on the role of solvents and on the seemingly unacknowledged property of network independence of free energy landscapes. We then open the system by assuming that the concentrations of certain substrate species (the chemostats) are fixed, whether because promptly regulated by the environment via contact with reservoirs, or because nearly constant in a time window. As a result, the system is driven out of equilibrium. A rich algebraic and topological structure ensues in the network of internal species: Emergent irreversible cycles are associated to nonvanishing affinities, whose symmetries are dictated by the breakage of conservation laws . These central results are resumed in the relation a + b = s^Y between the number of fundamental affinities a, that of broken conservation laws b and the number of chemostats s^Y . We decompose the steady state entropy production rate in terms of fundamental fluxes and affinities in the spirit of Schnakenberg's theory of network thermodynamics, paving the way for the forthcoming treatment of the linear regime, of efficiency and tight coupling, of free energy transduction and of thermodynamic constraints for network reconstruction.
[ { "type": "R", "before": "``", "after": "\"", "start_char_pos": 266, "end_char_pos": 268 }, { "type": "R", "before": "''", "after": "\"", "start_char_pos": 278, "end_char_pos": 280 }, { "type": "A", "before": null, "after": ". These central results are resumed in the relation a + b = s^Y between the number of fundamental affinities a, that of broken conservation laws b and the number of chemostats s^Y", "start_char_pos": 1082, "end_char_pos": 1082 } ]
[ 0, 230, 418, 557, 802, 856, 1084 ]
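Each edit_actions entry references character offsets into before_revision: type "R" replaces the span, "A" inserts at an empty span (its "before" is null), and "D" deletes the span (its "after" is null). The sketch below reconstructs the revised text from these actions; it assumes the spans do not overlap, and since the offsets appear to be computed on token-joined text, the result may differ from the stored after_revision by spacing around punctuation only.

```python
def apply_edit_actions(before, actions):
    """Rebuild the revised text from before_revision and edit_actions.

    All offsets refer to the original text, so the edits are applied from
    right to left; that way earlier edits never shift the offsets of the
    spans still waiting to be processed.
    """
    text = before
    for act in sorted(actions, key=lambda a: a["start_char_pos"], reverse=True):
        replacement = act["after"] or ""  # "D" (delete) actions carry after == null
        text = text[:act["start_char_pos"]] + replacement + text[act["end_char_pos"]:]
    return text

# Usage with the first record above (doc_id 1404.1181):
# reconstructed = apply_edit_actions(before_revision, edit_actions)
# reconstructed should match after_revision up to whitespace normalization.
```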
1404.1351
1
Extensive research illustrates the jump and discretisation errors that affect the valuation of standard swap contracts. We introduce a vector space of price and return characteristics that allow to define swapswhich can be valued exactly, assuming only that the market is free of arbitrage. Although fair-value swap rates are independent of monitoring frequency, the associated risk premiums are not. A historical analysis based on 16 years of S& P500 data demonstrates the diversity of the risk exposures attainable through trading these swaps, as well as floating-floating swaps that trade differential risk premiums and maturities .
We derive a general multivariate theory for realised characteristics of `model-free discretisation-invariant swaps', so-called because the standard no-arbitrage assumption of martingale forward prices is sufficient to derive fair-value swap rates for such characteristics which have no jump or discretisation errors. This theory underpins specific examples for swaps based on higher moments of a single log return distribution where exact replication is possible via option-implied `fundamental contracts' like the log contact. The common factors determining the S& P 500 risk premia associated with these higher-moment characteristics are investigated empirically at the daily, weekly and monthly frequencies .
[ { "type": "R", "before": "Extensive research illustrates the jump and discretisation errors that affect the valuation of standard swap contracts. We introduce a vector space of price and return characteristics that allow to define swapswhich can be valued exactly, assuming only that the market is free of arbitrage. Although", "after": "We derive a general multivariate theory for realised characteristics of `model-free discretisation-invariant swaps', so-called because the standard no-arbitrage assumption of martingale forward prices is sufficient to derive", "start_char_pos": 0, "end_char_pos": 299 }, { "type": "R", "before": "are independent of monitoring frequency, the associated risk premiums are not. A historical analysis based on 16 years of", "after": "for such characteristics which have no jump or discretisation errors. This theory underpins specific examples for swaps based on higher moments of a single log return distribution where exact replication is possible via option-implied `fundamental contracts' like the log contact. The common factors determining the", "start_char_pos": 322, "end_char_pos": 443 }, { "type": "R", "before": "P500 data demonstrates the diversity of the risk exposures attainable through trading these swaps, as well as floating-floating swaps that trade differential risk premiums and maturities", "after": "P 500 risk premia associated with these higher-moment characteristics are investigated empirically at the daily, weekly and monthly frequencies", "start_char_pos": 447, "end_char_pos": 633 } ]
[ 0, 119, 290, 400 ]
1404.1516
1
The dual representation of the martingale optimal transport problem in the Skorokhod space of multi dimensional c?adl?ag processes is proved. The dual is a minimization problem with constraints involving stochastic integrals and is similar to the Kantorovich dual of the standard optimal transport problem. The constraints are required to hold for very path in the Skorokhod space. This problem has the ?nancial interpretation as the robust hedging of path dependent European options .
The dual representation of the martingale optimal transport problem in the Skorokhod space of multi dimensional cadlag processes is proved. The dual is a minimization problem with constraints involving stochastic integrals and is similar to the Kantorovich dual of the standard optimal transport problem. The constraints are required to hold for very path in the Skorokhod space. This problem has the financial interpretation as the robust hedging of path dependent European options . In this second version, we included the multi-marginal case .
[ { "type": "R", "before": "c?adl?ag", "after": "cadlag", "start_char_pos": 112, "end_char_pos": 120 }, { "type": "R", "before": "?nancial", "after": "financial", "start_char_pos": 403, "end_char_pos": 411 }, { "type": "A", "before": null, "after": ". In this second version, we included the multi-marginal case", "start_char_pos": 484, "end_char_pos": 484 } ]
[ 0, 141, 306, 381 ]
1404.1587
1
Tissue cells are in a state of permanent mechanical tension that is maintained mainly by myosin II minifilaments, which are bipolar assemblies of tens of myosin II molecular motors contracting actin networks and bundles. Here we introduce a stochastic dynamics model for myosin II minifilaments as two small myosin II motor ensembles engaging in a stochastic tug-of-war. Each of the two ensembles is described by the parallel cluster model that allows us to use exact stochastic simulations and at the same time to keep important molecular details of the myosin II crossbridge cycle. Our simulation and analytical results reveal a strong dependance of myosin II minifilament dynamics on environmental stiffness that is reminiscent of the cellular response to substrate stiffness. For small stiffness, minifilaments form transient crosslinks exerting short spikes of force with negligible mean. For large stiffness, minifilaments form near permanent crosslinks exerting a mean force which hardly depends on environmental elasticity. This functional switch arises because dissociation after the powerstroke is suppressed by force (catch bonding) and because the ensemble shifts to the pre-powerstroke state in a soft environment. We also find that in rigid environments, symmetric myosin II minifilaments perform a random walk with an effective diffusion constant which decreases with increasing ensemble size, in marked contrast to the behavior of ensembles of processive motors that function in cargo transport .
Tissue cells are in a state of permanent mechanical tension that is maintained mainly by myosin II minifilaments, which are bipolar assemblies of tens of myosin II molecular motors contracting actin networks and bundles. Here we introduce a stochastic model for myosin II minifilaments as two small myosin II motor ensembles engaging in a stochastic tug-of-war. Each of the two ensembles is described by the parallel cluster model that allows us to use exact stochastic simulations and at the same time to keep important molecular details of the myosin II cross-bridge cycle. Our simulation and analytical results reveal a strong dependence of myosin II minifilament dynamics on environmental stiffness that is reminiscent of the cellular response to substrate stiffness. For small stiffness, minifilaments form transient crosslinks exerting short spikes of force with negligible mean. For large stiffness, minifilaments form near permanent crosslinks exerting a mean force which hardly depends on environmental elasticity. This functional switch arises because dissociation after the power stroke is suppressed by force (catch bonding) and because ensembles can no longer perform the power stroke at large forces. Symmetric myosin II minifilaments perform a random walk with an effective diffusion constant which decreases with increasing ensemble size, as demonstrated for rigid substrates with an analytical treatment .
[ { "type": "D", "before": "dynamics", "after": null, "start_char_pos": 252, "end_char_pos": 260 }, { "type": "R", "before": "crossbridge", "after": "cross-bridge", "start_char_pos": 565, "end_char_pos": 576 }, { "type": "R", "before": "dependance", "after": "dependence", "start_char_pos": 638, "end_char_pos": 648 }, { "type": "R", "before": "powerstroke", "after": "power stroke", "start_char_pos": 1093, "end_char_pos": 1104 }, { "type": "R", "before": "the ensemble shifts to the pre-powerstroke state in a soft environment. We also find that in rigid environments, symmetric", "after": "ensembles can no longer perform the power stroke at large forces. Symmetric", "start_char_pos": 1156, "end_char_pos": 1278 }, { "type": "R", "before": "in marked contrast to the behavior of ensembles of processive motors that function in cargo transport", "after": "as demonstrated for rigid substrates with an analytical treatment", "start_char_pos": 1409, "end_char_pos": 1510 } ]
[ 0, 220, 370, 583, 779, 893, 1031, 1227 ]
1404.2228
1
Cloud computing is a new paradigm where a company makes money by selling computer resources including both software and hardware. The core part of cloud computing is data center where a huge number of servers are available. These servers consume a large amount of energy to run and to keep cool. Therefore, a reduction of a few percent of the power consumption means saving a large amount of money and the environment. In the current technology, an idle server still consumes about 60\\%DIF < of its peak. Thus, the only way to save energy is to turn off servers which are not processing a job. However, when there are some waiting jobs, we have to turn on the OFF servers. A server needs some setup time to be active during which it consumes energy but cannot process a job. Therefore, there exists a trade-off between power consumption and delay performance. In Gandhi10a,Gandhi10, the authors analyze this tradeoff using an M/M/c queue with setup time for which they present a decomposition property by solving difference equations. In this paper, using an alternative simple approach, we obtain explicit expressions for partial generating functions, factorial moments and the joint stationary distribution of the number of active servers and that in the system.abstract %DIF > of its peak processing a job. Thus, the only way to save energy is to turn off servers which are not processing a job. However, when there are some waiting jobs, we have to turn on the OFF servers. A server needs some setup time to be active during which it consumes energy but cannot process a job. Therefore, there exists a trade-off between power consumption and delay performance. Gandhi et al. Gandhi10a,Gandhi10 analyze this trade-off using an M/M/c queue with staggered setup (one server in setup at a time). In this paper, using an alternative approach, we obtain generating functions for the joint stationary distribution of the number of active servers and that of jobs in the system for a more general model with batch arrivals and state-dependent setup time. We further obtain moments for the queue size. Numerical results reveal that keeping the same traffic intensity, the mean power consumption decreases with the mean batch size for the case of fixed batch size. One of the main theoretical contribution is a new conditional decomposition formula showing that the number of waiting customers under the condition that all servers are busy can be decomposed to the sum of two independent random variables where the first is the same quantity in the corresponding model without setup time while the second is the number of waiting customers before an arbitrary customer.
Queues with setup time are extensively studied because they have application in performance evaluation of power-saving data centers. In a data center, there are a huge number of servers which consume a large amount of energy . In the current technology, an idle server still consumes about 60\\%DIF < of its peak. Thus, the only way to save energy is to turn off servers which are not processing a job. However, when there are some waiting jobs, we have to turn on the OFF servers. A server needs some setup time to be active during which it consumes energy but cannot process a job. Therefore, there exists a trade-off between power consumption and delay performance. In Gandhi10a,Gandhi10, the authors analyze this tradeoff using an M/M/c queue with setup time for which they present a decomposition property by solving difference equations. In this paper, using an alternative simple approach, we obtain explicit expressions for partial generating functions, factorial moments and the joint stationary distribution of the number of active servers and that in the system.abstract %DIF > of its peak processing a job. Thus, the only way to save energy is to turn off servers which are not processing a job. However, when there are some waiting jobs, we have to turn on the OFF servers. A server needs some setup time to be active during which it consumes energy but cannot process a job. Therefore, there exists a trade-off between power consumption and delay performance. Gandhi et al. Gandhi10a,Gandhi10 analyze this trade-off using an M/M/c queue with staggered setup (one server in setup at a time). In this paper, using an alternative approach, we obtain generating functions for the joint stationary distribution of the number of active servers and that of jobs in the system for a more general model with batch arrivals and state-dependent setup time. We further obtain moments for the queue size. Numerical results reveal that keeping the same traffic intensity, the mean power consumption decreases with the mean batch size for the case of fixed batch size. One of the main theoretical contribution is a new conditional decomposition formula showing that the number of waiting customers under the condition that all servers are busy can be decomposed to the sum of two independent random variables where the first is the same quantity in the corresponding model without setup time while the second is the number of waiting customers before an arbitrary customer.
[ { "type": "R", "before": "Cloud computing is a new paradigm where a company makes money by selling computer resources including both software and hardware. The core part of cloud computing is data center where", "after": "Queues with setup time are extensively studied because they have application in performance evaluation of power-saving data centers. In a data center, there are", "start_char_pos": 0, "end_char_pos": 183 }, { "type": "R", "before": "are available. These servers", "after": "which", "start_char_pos": 209, "end_char_pos": 237 }, { "type": "R", "before": "to run and to keep cool. Therefore, a reduction of a few percent of the power consumption means saving a large amount of money and the environment.", "after": ".", "start_char_pos": 271, "end_char_pos": 418 } ]
[ 0, 129, 223, 295, 418, 505, 594, 673, 775, 860, 1035, 1310, 1399, 1478, 1580, 1665, 1679, 1796, 2051, 2097, 2259 ]
1404.2558
1
Fractal globule state is widely believed to be the best known model to describe the chromatin packing in the eucaryotic nuclei. Here we provide a scaling theory and dissipative particle dynamics (DPD) computer simulation for the thermal motion of monomers in the fractal globule state. We show this motion to be subdiffusive described by \langle X^2 (t)\rangle \sim t^{\alpha_F} with \alpha_F close to 0.4. We also suggest a novel way to construct a fractal globule state in computer simulation, and provide simulation evidence supporting the conjecture that different initial entanglement-free states of a polymer chain converge as they are thermally annealed .
The fractal globule state is a popular model for describing chromatin packing in eukaryotic nuclei. Here we provide a scaling theory and dissipative particle dynamics (DPD) computer simulation for the thermal motion of monomers in the fractal globule state. Simulations starting from different entanglement-free initial states show good convergence which provides evidence supporting the existence of unique metastable fractal globule state. We show monomer motion in this state to be sub-diffusive described by \langle X^2 (t)\rangle \sim t^{\alpha_F} with \alpha_F close to 0.4. This result is in good agreement with existing experimental data on the chromatin dynamics which makes an additional argument in support of the fractal globule model of chromatin packing .
[ { "type": "R", "before": "Fractal", "after": "The fractal", "start_char_pos": 0, "end_char_pos": 7 }, { "type": "R", "before": "widely believed to be the best known model to describe the", "after": "a popular model for describing", "start_char_pos": 25, "end_char_pos": 83 }, { "type": "R", "before": "the eucaryotic", "after": "eukaryotic", "start_char_pos": 105, "end_char_pos": 119 }, { "type": "R", "before": "We show this motion to be subdiffusive", "after": "Simulations starting from different entanglement-free initial states show good convergence which provides evidence supporting the existence of unique metastable fractal globule state. We show monomer motion in this state to be sub-diffusive", "start_char_pos": 286, "end_char_pos": 324 }, { "type": "R", "before": "We also suggest a novel way to construct a fractal globule state in computer simulation, and provide simulation evidence supporting the conjecture that different initial entanglement-free states of a polymer chain converge as they are thermally annealed", "after": "This result is in good agreement with existing experimental data on the chromatin dynamics which makes an additional argument in support of the fractal globule model of chromatin packing", "start_char_pos": 407, "end_char_pos": 660 } ]
[ 0, 127, 285, 406 ]
1404.3258
1
It is important for portfolio manager to estimate and analyze the recent portfolio volatility to keep portfolio's risk within limit. Though number of financial instruments in the portfolio are very large, some times more than thousands, however daily returns considered for analysis is only for a month or even less. In this case rank of portfolio covariance matrix is less than full, hence solution is not unique. It is typically known as "large p - small n" or " ill-posed" problem. In this paper we discuss a Bayesian approach to regularize the problem. One of the additional advantages of this approach is to analyze the source of risk by estimating the probability of positive `conditional contribution to total risk' (CCTR). Each source's CCTR sum upto total volatility risk. Existing method only estimates CCTR of a source, but it does not estimate the probability of CCTR to be significantly greater (or less) than zero. This paper presents Bayesian methodology to do so. We use parallalizable and easy to use Monte Carlo (MC) approach to achieve our objective .
It is important for a portfolio manager to estimate and analyze recent portfolio volatility to keep the portfolio's risk within limit. Though the number of financial instruments in the portfolio can be very large, sometimes more than thousands, daily returns considered for analysis are only for a month or even less. In this case rank of portfolio covariance matrix is less than full, hence solution is not unique. It is typically known as the `` ill-posed" problem. In this paper we discuss a Bayesian approach to regularize the problem. One of the additional advantages of this approach is to analyze the source of risk by estimating the probability of positive `conditional contribution to total risk' (CCTR). Each source's CCTR would sum up to the portfolio's total volatility risk. Existing methods only estimate CCTR of a source, and does not estimate the probability of CCTR to be significantly greater (or less) than zero. This paper presents Bayesian methodology to do so. We use a parallelizable and easy to use Monte Carlo (MC) approach to achieve our objective . Estimation of various risk measures, such as Value at Risk and Expected Shortfall, becomes a by-product of this Monte-Carlo approach .
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 20, "end_char_pos": 20 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 63, "end_char_pos": 66 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 103, "end_char_pos": 103 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 142, "end_char_pos": 142 }, { "type": "R", "before": "are", "after": "can be", "start_char_pos": 192, "end_char_pos": 195 }, { "type": "R", "before": "some times", "after": "sometimes", "start_char_pos": 208, "end_char_pos": 218 }, { "type": "D", "before": "however", "after": null, "start_char_pos": 240, "end_char_pos": 247 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 286, "end_char_pos": 288 }, { "type": "R", "before": "\"large p - small n\" or \"", "after": "the ``", "start_char_pos": 443, "end_char_pos": 467 }, { "type": "R", "before": "sum upto", "after": "would sum up to the portfolio's", "start_char_pos": 753, "end_char_pos": 761 }, { "type": "R", "before": "method only estimates", "after": "methods only estimate", "start_char_pos": 794, "end_char_pos": 815 }, { "type": "R", "before": "but it", "after": "and", "start_char_pos": 834, "end_char_pos": 840 }, { "type": "R", "before": "parallalizable", "after": "a parallelizable", "start_char_pos": 990, "end_char_pos": 1004 }, { "type": "A", "before": null, "after": ". Estimation of various risk measures, such as Value at Risk and Expected Shortfall, becomes a by-product of this Monte-Carlo approach", "start_char_pos": 1072, "end_char_pos": 1072 } ]
[ 0, 134, 319, 417, 487, 559, 733, 784, 931, 982 ]
1404.3262
1
Bipolar myosin II filaments engage actin filament arrays to generate contractile forces in both muscle and non-muscle cells. Key determinants of actomyosin force generation include the mechanochemistry of individual motors, motor filament size, and the compliance and turnover of actin filament arrays. How these properties interact to control rate, magnitude, and mechanosensitivity of force production remains poorly understood. Here, we extend a simple myosin II cross-bridge model to consider an ensemble of myosin motors engaging a single actin filament with an elastic tether. Consistent with previous work, we find that the duration of actin engagement and average force are highly sensitive to changes in ensemble size(Nheads) , motor duty ratio (dr), and environmental stiffness(k). Catch-bond kineticsshared by myosin II isoforms sharpen this sensitivity through positive feedback such that increases in Nheads, dr, or k above threshold values drive a rapid transition from non-processive to processive engagement. Tuning motor parameters to match different myosinII isoforms suggest that myosin filaments are poised to respond sharply to environmental stiffness or externally applied force. Thus, force production by myosin filaments is subject to switch-like control via tunable internal properties or mechanical context. We show further that for processive motors, the time required to build to stall (tbuild) scales with Fmax/(k*Vmax), where Fmax is the ensemble stall force and Vmax is the unloaded gliding velocity of the motor ensemble. Thus, increased environmental stiffnesspromotes faster force-buildup even without the myosin catch-bond, and force production will be limited when tbuild exceeds time scales of force relaxation such as actin turnover. Together, these results reveal how motor filament properties and environmental mechanics shape force production by actomyosin networks .
Myosin II isoforms with varying mechanochemistry and filament size interact with filamentous actin (F-actin) networks to generate contractile forces in cells. How their properties control force production in environments with varying stiffness is poorly understood. Here, we incorporated literature values for properties of myosin II isoforms into a cross-bridge model . Similar actin gliding speeds and force-velocity curves expected from previous experiments were observed. Motor force output on an elastic load was regulated by two timescales--that of their attachment to F-actin, which varied sharply with the ensemble size , motor duty ratio , and external load, and that of force build up, which scaled with ensemble stall force, gliding speed, and load stiffness. While such regulation did not require force-dependent kinetics, the myosin catch bond produced positive feedback between attachment time and force to trigger switch-like transitions from short attachments and small forces to high force-generating runs at threshold parameter values. Parameters representing skeletal muscle myosin, non-muscle myosin IIB, and non-muscle myosin IIA revealed distinct regimes of behavior respectively: (1) large assemblies of fast, low-duty ratio motors rapidly build stable forces over a large range of environmental stiffness, (2) ensembles of slow, high-duty ratio motors serve as high-affinity cross-links with force build-up times that exceed physiological timescales, and (3) small assemblies of low-duty ratio motors operating at intermediate speeds may respond sharply to changes in mechanical context--at low forces or stiffness, they serve as low affinity cross-links but they can transition to effective force production via the positive feedback mechanism described above. These results reveal how myosin isoform properties may be tuned to produce force and respond to mechanical cues in their environment .
[ { "type": "R", "before": "Bipolar myosin II filaments engage actin filament arrays", "after": "Myosin II isoforms with varying mechanochemistry and filament size interact with filamentous actin (F-actin) networks", "start_char_pos": 0, "end_char_pos": 56 }, { "type": "R", "before": "both muscle and non-muscle cells. Key determinants of actomyosin force generation include the mechanochemistry of individual motors, motor filament size, and the compliance and turnover of actin filament arrays. How these properties interact to control rate, magnitude, and mechanosensitivity of force production remains", "after": "cells. How their properties control force production in environments with varying stiffness is", "start_char_pos": 91, "end_char_pos": 411 }, { "type": "R", "before": "extend a simple myosin II", "after": "incorporated literature values for properties of myosin II isoforms into a", "start_char_pos": 440, "end_char_pos": 465 }, { "type": "R", "before": "to consider an ensemble of myosin motors engaging a single actin filament with an elastic tether. Consistent with previous work, we find that the duration of actin engagement and average force are highly sensitive to changes in ensemble size(Nheads)", "after": ". Similar actin gliding speeds and force-velocity curves expected from previous experiments were observed. Motor force output on an elastic load was regulated by two timescales--that of their attachment to F-actin, which varied sharply with the ensemble size", "start_char_pos": 485, "end_char_pos": 734 }, { "type": "R", "before": "(dr), and environmental stiffness(k). Catch-bond kineticsshared by myosin II isoforms sharpen this sensitivity through positive feedback such that increases in Nheads, dr, or k above threshold values drive a rapid transition from non-processive to processive engagement. Tuning motor parameters to match different myosinII isoforms suggest that myosin filaments are poised to respond sharply to environmental stiffness or externally applied force. Thus, force production by myosin filaments is subject to", "after": ", and external load, and that of force build up, which scaled with ensemble stall force, gliding speed, and load stiffness. While such regulation did not require force-dependent kinetics, the myosin catch bond produced positive feedback between attachment time and force to trigger", "start_char_pos": 754, "end_char_pos": 1258 }, { "type": "R", "before": "control via tunable internal properties or mechanical context. We show further that for processive motors, the time required to build to stall (tbuild) scales with Fmax/(k*Vmax), where Fmax is the ensemble stall force", "after": "transitions from short attachments and small forces to high force-generating runs at threshold parameter values. Parameters representing skeletal muscle myosin, non-muscle myosin IIB,", "start_char_pos": 1271, "end_char_pos": 1488 }, { "type": "R", "before": "Vmax is the unloaded gliding velocity of the motor ensemble. Thus, increased environmental stiffnesspromotes faster force-buildup even without the myosin catch-bond, and force production will be limited when tbuild exceeds time scales of force relaxation such as actin turnover. 
Together, these", "after": "non-muscle myosin IIA revealed distinct regimes of behavior respectively: (1) large assemblies of fast, low-duty ratio motors rapidly build stable forces over a large range of environmental stiffness, (2) ensembles of slow, high-duty ratio motors serve as high-affinity cross-links with force build-up times that exceed physiological timescales, and (3) small assemblies of low-duty ratio motors operating at intermediate speeds may respond sharply to changes in mechanical context--at low forces or stiffness, they serve as low affinity cross-links but they can transition to effective force production via the positive feedback mechanism described above. These", "start_char_pos": 1493, "end_char_pos": 1787 }, { "type": "R", "before": "motor filament properties and environmental mechanics shape force production by actomyosin networks", "after": "myosin isoform properties may be tuned to produce force and respond to mechanical cues in their environment", "start_char_pos": 1807, "end_char_pos": 1906 } ]
[ 0, 124, 302, 430, 582, 791, 1024, 1201, 1333, 1553, 1771 ]
1404.3891
1
In this paper, we consider the joint opportunistic routing and channel assignment problem in multi-channel multiradio (MCMR) cognitive radio networks (CRNs) for improving aggregate throughput of the secondary users. To the best of our knowledge, we first present a linear programming optimization model for this joint problem taking into account the feature of CRNs channel uncertainty. Considering the queue state of a node, we propose a new scheme to select proper forwarding candidates for opportunistic routing. Furthermore, a new algorithm for calculating the forwarding probability of any packet at a node is proposed, which is used to calculate how many packets a forwarder should send, so that the redundant packets can be reduced compared with MAC independent opportunistic routing & encoding (MORE) [11]. The numerical results show that the proposed scheme performs significantly better than traditional routing and opportunistic routing in which channel assignment strategy is employed.
In this paper, we consider the joint opportunistic routing and channel assignment problem in multi-channel multi-radio (MCMR) cognitive radio networks (CRNs) for improving aggregate throughput of the secondary users. We first present the linear programming optimization model for this joint problem , taking into account the feature of CRNs-channel uncertainty. Then considering the queue state of a node, we propose a new scheme to select proper forwarding candidates for opportunistic routing. Furthermore, a new algorithm for calculating the forwarding probability of any packet at a node is proposed, which is used to calculate how many packets a forwarder should send, so that the duplicate transmission can be reduced compared with MAC-independent opportunistic routing & encoding (MORE) [11]. Our numerical results show that the proposed scheme performs significantly better that traditional routing and opportunistic routing in which channel assignment strategy is employed.
[ { "type": "R", "before": "multiradio", "after": "multi-radio", "start_char_pos": 107, "end_char_pos": 117 }, { "type": "R", "before": "To the best of our knowledge, we first present a", "after": "We first present the", "start_char_pos": 216, "end_char_pos": 264 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "R", "before": "CRNs channel uncertainty. Considering", "after": "CRNs-channel uncertainty. Then considering", "start_char_pos": 362, "end_char_pos": 399 }, { "type": "R", "before": "redundant packets", "after": "duplicate transmission", "start_char_pos": 707, "end_char_pos": 724 }, { "type": "R", "before": "MAC independent", "after": "MAC-independent", "start_char_pos": 754, "end_char_pos": 769 }, { "type": "R", "before": "The", "after": "Our", "start_char_pos": 816, "end_char_pos": 819 }, { "type": "R", "before": "than", "after": "that", "start_char_pos": 898, "end_char_pos": 902 } ]
[ 0, 215, 387, 516, 815 ]
1404.4005
1
The fitness contribution of an allele at one genetic site may depend on the states of other sites, a phenomenon known as epistasis. Epistasis can profoundly influence the process of evolution in populations under selection, and shape the course of protein evolution across divergent species. Whereas epistasis among adaptive substitutions has been the subject of extensive study, relatively little is known about epistasis under purifying selection. Here we use mechanistic models of thermodynamic stability in a ligand-binding protein to explore computationally the structure of epistatic interactions among substitutions that fix in protein sequences under purifying selection. We find that the selection coefficients of mutations that are nearly neutral when they fix are highly conditional on the presence of preceding mutations. In addition, substitutions which are initially neutral become increasingly entrenched over time due to antagonistic epistasis with subsequent substitutions. Our evolutionary model includes insertions and deletions, as well as point mutations, which allows us to quantify epistasis between these classes of mutations, and also to study the evolution of protein length. We find that protein length remains largely constant over time, because indels are more deleterious than point mutations. Our results imply that, even under purifying selection, protein sequence evolution is highly contingent on history and it cannot be predicted by the phenotypic effects of mutations introduced into the wildtype sequencealone .
The fitness contribution of an allele at one genetic site may depend on alleles at other sites, a phenomenon known as epistasis. Epistasis can profoundly influence the process of evolution in populations under selection, and can shape the course of protein evolution across divergent species. Whereas epistasis between adaptive substitutions has been the subject of extensive study, relatively little is known about epistasis under purifying selection. Here we use mechanistic models of thermodynamic stability in a ligand-binding protein to explore the structure of epistatic interactions between substitutions that fix in protein sequences under purifying selection. We find that the selection coefficients of mutations that are nearly-neutral when they fix are highly contingent on the presence of preceding mutations. Conversely, mutations that are nearly-neutral when they fix are subsequently entrenched due to epistasis with later substitutions. Our evolutionary model includes insertions and deletions, as well as point mutations, and so it allows us to quantify epistasis within each of these classes of mutations, and also to study the evolution of protein length. We find that protein length remains largely constant over time, because indels are more deleterious than point mutations. Our results imply that, even under purifying selection, protein sequence evolution is highly contingent on history and so it cannot be predicted by the phenotypic effects of mutations assayed in the wild-type sequence .
[ { "type": "R", "before": "the states of", "after": "alleles at", "start_char_pos": 72, "end_char_pos": 85 }, { "type": "A", "before": null, "after": "can", "start_char_pos": 228, "end_char_pos": 228 }, { "type": "R", "before": "among", "after": "between", "start_char_pos": 311, "end_char_pos": 316 }, { "type": "D", "before": "computationally", "after": null, "start_char_pos": 548, "end_char_pos": 563 }, { "type": "R", "before": "among", "after": "between", "start_char_pos": 604, "end_char_pos": 609 }, { "type": "R", "before": "nearly neutral", "after": "nearly-neutral", "start_char_pos": 743, "end_char_pos": 757 }, { "type": "R", "before": "conditional", "after": "contingent", "start_char_pos": 783, "end_char_pos": 794 }, { "type": "R", "before": "In addition, substitutions which are initially neutral become increasingly entrenched over time due to antagonistic epistasis with subsequent", "after": "Conversely, mutations that are nearly-neutral when they fix are subsequently entrenched due to epistasis with later", "start_char_pos": 835, "end_char_pos": 976 }, { "type": "R", "before": "which", "after": "and so it", "start_char_pos": 1078, "end_char_pos": 1083 }, { "type": "R", "before": "between", "after": "within each of", "start_char_pos": 1116, "end_char_pos": 1123 }, { "type": "A", "before": null, "after": "so", "start_char_pos": 1444, "end_char_pos": 1444 }, { "type": "R", "before": "introduced into the wildtype sequencealone", "after": "assayed in the wild-type sequence", "start_char_pos": 1507, "end_char_pos": 1549 } ]
[ 0, 131, 292, 450, 680, 834, 991, 1202, 1324 ]
1404.4275
1
This article changes a lot of the original Bitcoin system, including, fast currency distribution within 1 year by utilizing buyer's different characters, removing bloated history transactions from data synchronization, no mining, no blockchain, it's environmentally friendly, no checkpoint, it's purely decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\% stakes talk. In aspect of security, we propose TILP & SSS strategies to secure our system . We utilize high credit individual as initial source of credit, taking Google Company as an example .
In this paper we propose a new framework of cryptocurrency system. The major parts what we have changed include removing the bloated history transactions from data synchronization, no mining, no blockchain, it's environmentally friendly, no checkpoint, no exchange hub needed, it's purely decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\% stakes talk. A new data synchronization mechanism named "Converged Consensus" is proposed to ensure the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is nolonger an essential element of a cryptocurrency system. In aspect of security, we propose TILP & SSS strategies to secure our system .
[ { "type": "R", "before": "This article changes a lot of the original Bitcoin system, including, fast currency distribution within 1 year by utilizing buyer's different characters, removing", "after": "In this paper we propose a new framework of cryptocurrency system. The major parts what we have changed include removing the", "start_char_pos": 0, "end_char_pos": 162 }, { "type": "A", "before": null, "after": "no exchange hub needed,", "start_char_pos": 291, "end_char_pos": 291 }, { "type": "A", "before": null, "after": "A new data synchronization mechanism named \"Converged Consensus\" is proposed to ensure the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is nolonger an essential element of a cryptocurrency system.", "start_char_pos": 412, "end_char_pos": 412 }, { "type": "D", "before": ". We utilize high credit individual as initial source of credit, taking Google Company as an example", "after": null, "start_char_pos": 490, "end_char_pos": 590 } ]
[ 0, 353, 411, 491 ]
1404.4275
2
There are some alternative Cryptocurrency systems which claim that they are based on PoS are actually based on PoSTW which denotes the Proof of Stake(coin), Time(day) and Work(hashing), while the other pure PoS Cryptocurrency systems are actually centralized. In this paper we propose a new framework of Cryptocurrency system. The major parts what we have changed include removing the bloated history transactions from data synchronization, no mining, no blockchain, it's environmentally friendly, no checkpoint, no exchange hub needed, it's purely decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\% of stakes talk. The highlight of this paper is a proposal of a new concise data synchronization mechanism named "Converged Consensus" which ensures the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is no longer an essential element of a Cryptocurrency system. In aspect of security, we propose TILP & SSS strategies to secure our system .
There are some alternative Cryptocurrency systems which claim that they are based on PoS are actually based on PoSTW which denotes the Proof of Stake(coin), Time(day) and Work(hashing), while the other pure PoS Cryptocurrency systems are actually centralized. In this paper we propose a new framework of Cryptocurrency system. The major parts what we have changed include , a fast transparent distribution solution which can avoid deceptions between the sponsor and the audience, removing the bloated history transactions from data synchronization, no mining, no blockchain, it's environmentally friendly, no checkpoint, no exchange hub needed, it's truly decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\% of stakes talk. The highlight of this paper is a proposal of a new concise data synchronization mechanism named "Converged Consensus" which ensures the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is no longer an essential element of a Cryptocurrency system. In aspect of security, we propose TILP & SSS strategies to secure our system . At the end, we try to give an explicit definition of decentralization .
[ { "type": "A", "before": null, "after": ", a fast transparent distribution solution which can avoid deceptions between the sponsor and the audience,", "start_char_pos": 372, "end_char_pos": 372 }, { "type": "R", "before": "purely", "after": "truly", "start_char_pos": 543, "end_char_pos": 549 }, { "type": "A", "before": null, "after": ". At the end, we try to give an explicit definition of decentralization", "start_char_pos": 1041, "end_char_pos": 1041 } ]
[ 0, 259, 326, 599, 660, 847, 963 ]
1404.4275
3
There are some alternative Cryptocurrency systems which claim that they are based on PoS are actually based on PoSTW which denotes the Proof of Stake(coin), Time(day) and Work(hashing), while the other pure PoS Cryptocurrency systems are actually centralized. In this paper we propose a new framework of Cryptocurrency system . The major parts what we have changed include, a fast transparent distribution solution which can avoid deceptions between the sponsor and the audience, removing the bloated history transactions from data synchronization , no mining, no blockchain, it's environmentally friendly, no checkpoint, no exchange hub needed, it's truly decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\% of stakes talk. The highlight of this paper is a proposal of a new concise data synchronization mechanism named "Converged Consensus" which ensures the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is no longer an essential element of a Cryptocurrency system. In aspect of security, we propose TILP SSS strategies to secure our system. At the end, we try to give an explicit definition of decentralization .
We give an explicit definition of decentralization and show you that decentralization is almost impossible for the current stage. We propose a new framework of noncentralized cryptocurrency system with an assumption of the existence of a weak adversary for a bank alliance. It abandons the mining process and blockchain, and removes history transactions from data synchronization . We propose a consensus algorithm named "Converged Consensus" for a noncentralized cryptocurrency system .
[ { "type": "R", "before": "There are some alternative Cryptocurrency systems which claim that they are based on PoS are actually based on PoSTW which denotes the Proof of Stake(coin), Time(day) and Work(hashing), while the other pure PoS Cryptocurrency systems are actually centralized. In this paper we", "after": "We give an explicit definition of decentralization and show you that decentralization is almost impossible for the current stage. We", "start_char_pos": 0, "end_char_pos": 276 }, { "type": "R", "before": "Cryptocurrency system . The major parts what we have changed include, a fast transparent distribution solution which can avoid deceptions between the sponsor and the audience, removing the bloated", "after": "noncentralized cryptocurrency system with an assumption of the existence of a weak adversary for a bank alliance. It abandons the mining process and blockchain, and removes", "start_char_pos": 304, "end_char_pos": 500 }, { "type": "R", "before": ", no mining, no blockchain, it's environmentally friendly, no checkpoint, no exchange hub needed, it's truly decentralized and purely based on proof of stake. The logic is very simple and intuitive, 51\\% of stakes talk. The highlight of this paper is a proposal of a new concise data synchronization mechanism", "after": ". We propose a consensus algorithm", "start_char_pos": 548, "end_char_pos": 857 }, { "type": "D", "before": "which ensures the system reaches a consistent distributed consensus. We think the famous blockchain mechanism based on PoW is no longer an essential element of a Cryptocurrency system. In aspect of security, we propose TILP", "after": null, "start_char_pos": 886, "end_char_pos": 1109 }, { "type": "R", "before": "SSS strategies to secure our system. At the end, we try to give an explicit definition of decentralization", "after": "for a noncentralized cryptocurrency system", "start_char_pos": 1110, "end_char_pos": 1216 } ]
[ 0, 259, 327, 706, 767, 954, 1070, 1146 ]
1404.4275
4
We give an explicit definition of decentralization and show you that decentralization is almost impossible for the current stage . We propose a new framework of noncentralized cryptocurrency system with an assumption of the existence of a weak adversary for a bank alliance. It abandons the mining process and blockchain, and removes history transactions from data synchronization. We propose a consensus algorithm named "Converged Consensus " for a noncentralized cryptocurrency system.
We give an explicit definition of decentralization and show you that decentralization is almost impossible for the current stage and Bitcoin is the first truly noncentralized currency in the currency history . We propose a new framework of noncentralized cryptocurrency system with an assumption of the existence of a weak adversary for a bank alliance. It abandons the mining process and blockchain, and removes history transactions from data synchronization. We propose a consensus algorithm named Converged Consensus for a noncentralized cryptocurrency system.
[ { "type": "A", "before": null, "after": "and Bitcoin is the first truly noncentralized currency in the currency history", "start_char_pos": 129, "end_char_pos": 129 }, { "type": "R", "before": "\"Converged Consensus \"", "after": "Converged Consensus", "start_char_pos": 422, "end_char_pos": 444 } ]
[ 0, 131, 275, 382 ]
1404.4464
1
The one-dimensional SDE with non Lipschitz diffusion coefficient dX_{t} = b(X_{t})dt + \sigma X_{t}^{\gamma} dB_{t}, \ X_{0}=x, \ \gamma<1 is widely studied in mathematical finance. Several works have proposed asymptotic analysis of densities and implied volatilities in models involving specific instances of this equation, based on a careful implementation of saddle-point methods and (essentially) the explicit knowledge of Fourier transforms. Recent research on tail asymptotics for heat kernels [J-D. Deuschel, P.~Friz, A.~Jacquier, and S.~Violante. Marginal density expansions for diffusions and stochastic volatility, part II: Applications. 2013, arxiv:1305.6765 suggests to work with the rescaled variable X^{\varepsilon}:=\varepsilon^{1/(1-\gamma)} X: while allowing to turn a spatial asymptotic problem into a fixed terminal point, small-\varepsilon problem, the process X^{\varepsilon} satisfies a SDE in Wentzell--Freidlin form (i.e. with driving noise \varepsilon dB). We prove a pathwise large deviation principle for the process X^{\varepsilon} as \varepsilon \to 0. As it will become clear, the limiting ODE governing the large deviations admits infinitely many solutions, a non-standard situation in the Wentzell--Freidlin theory. The \varepsilon-scaling allows to derive exact log-asymptotics for path functionals of the process ; while on the one hand the resulting formulae are confirmed by the CIR-CEV benchmarks, on the other hand the large deviation approach (i) applies to equations with a more general drift term (ii) potentially opens the way to heat kernel analysis for higher-dimensional diffusions involving such an SDE as a component.
The one-dimensional SDE with non Lipschitz diffusion coefficient dX_{t} = b(X_{t})dt + \sigma X_{t}^{\gamma} dB_{t}, \ X_{0}=x, \ \gamma<1 is widely studied in mathematical finance. Several works have proposed asymptotic analysis of densities and implied volatilities in models involving instances of this equation, based on a careful implementation of saddle-point methods and (essentially) the explicit knowledge of Fourier transforms. Recent research on tail asymptotics for heat kernels [J-D. Deuschel, P.~Friz, A.~Jacquier, and S.~Violante. Marginal density expansions for diffusions and stochastic volatility, part II: Applications. 2013, arxiv:1305.6765 suggests to work with the rescaled variable X^{\varepsilon}:=\varepsilon^{1/(1-\gamma)} X: while allowing to turn a space asymptotic problem into a small-\varepsilon problem with fixed terminal point, the process X^{\varepsilon} satisfies a SDE in Wentzell--Freidlin form (i.e. with driving noise \varepsilon dB). We prove a pathwise large deviation principle for the process X^{\varepsilon} as \varepsilon \to 0. As it will become clear, the limiting ODE governing the large deviations admits infinitely many solutions, a non-standard situation in the Wentzell--Freidlin theory. As for applications, the \varepsilon-scaling allows to derive exact log-asymptotics for path functionals of the process : while on the one hand the resulting formulae are confirmed by the CIR-CEV benchmarks, on the other hand the large deviation approach (i) applies to equations with a more general drift term and (ii) potentially opens the way to heat kernel analysis for higher-dimensional diffusions involving such an SDE as a component.
[ { "type": "D", "before": "specific", "after": null, "start_char_pos": 288, "end_char_pos": 296 }, { "type": "R", "before": "spatial", "after": "space", "start_char_pos": 786, "end_char_pos": 793 }, { "type": "A", "before": null, "after": "small-\\varepsilon problem with", "start_char_pos": 820, "end_char_pos": 820 }, { "type": "D", "before": "small-\\varepsilon problem,", "after": null, "start_char_pos": 843, "end_char_pos": 869 }, { "type": "R", "before": "The", "after": "As for applications, the", "start_char_pos": 1249, "end_char_pos": 1252 }, { "type": "R", "before": ";", "after": ":", "start_char_pos": 1348, "end_char_pos": 1349 }, { "type": "A", "before": null, "after": "and", "start_char_pos": 1539, "end_char_pos": 1539 } ]
[ 0, 181, 446, 554, 647, 982, 1082, 1248, 1349 ]
1404.4547
1
We consider a queueing system composed of a dispatcher that routes deterministically jobs to a set of non-observable queues working in parallel. In this setting, the fundamental problem is which policy should the dispatcher implement to minimize the stationary mean waiting time of the incoming jobs. We present a structural property that holds in the classic scaling of the system where the network demand (arrival rate of jobs) grows proportionally with the number of queues. Assume that each queue of type r is replicated k times and consider the set of policies that are periodic with period k \sum_r p_r and such that exactly p_r jobs are sent in a period to each queue of type r. When k\to\infty, our main result shows that all the policies in this set are equivalent, in the sense that they yield the same mean stationary waiting time, and optimal, in the sense that no other policy having the same aggregate arrival rate to all queues of a given type can do better in minimizing the stationary mean waiting time. This property holds in a strong probabilistic sense. Furthermore, the limiting mean waiting time achieved by our policies is a convex function of the arrival rate in each queue, which facilitates the development of a further optimization aimed at solving the fundamental problem above for large systems.
We consider a queueing system composed of a dispatcher that routes deterministically jobs to a set of non-observable queues working in parallel. In this setting, the fundamental problem is which policy should the dispatcher implement to minimize the stationary mean waiting time of the incoming jobs. We present a structural property that holds in the classic scaling of the system where the network demand (arrival rate of jobs) grows proportionally with the number of queues. Assuming that each queue of type ~ r is replicated ~k times, we consider a set of policies that are periodic with period k \sum_r p_r and such that exactly p_r jobs are sent in a period to each queue of type ~ r. When k\to\infty, our main result shows that all the policies in this set are equivalent, in the sense that they yield the same mean stationary waiting time, and optimal, in the sense that no other policy having the same aggregate arrival rate to all queues of a given type can do better in minimizing the stationary mean waiting time. This property holds in a strong probabilistic sense. Furthermore, the limiting mean waiting time achieved by our policies is a convex function of the arrival rate in each queue, which facilitates the development of a further optimization aimed at solving the fundamental problem above for large systems.
[ { "type": "R", "before": "Assume", "after": "Assuming", "start_char_pos": 478, "end_char_pos": 484 }, { "type": "A", "before": null, "after": "~", "start_char_pos": 509, "end_char_pos": 509 }, { "type": "R", "before": "k times and consider the", "after": "~k times, we consider a", "start_char_pos": 526, "end_char_pos": 550 }, { "type": "A", "before": null, "after": "~", "start_char_pos": 684, "end_char_pos": 684 }, { "type": "R", "before": "all", "after": "all", "start_char_pos": 934, "end_char_pos": 937 } ]
[ 0, 144, 300, 477, 687, 1022, 1075 ]
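To make the policy class described above concrete, the sketch below builds one periodic routing sequence of period k*sum_r p_r in which every replica of a type-r queue receives exactly p_r jobs per period, and verifies the counts. The queue types and p_r values are invented for illustration; the paper's result concerns the whole class of such policies, not this particular ordering.

```python
# Construct one valid periodic dispatching sequence of period k * sum(p_r) and check that
# every replica of each queue type receives exactly p_r jobs per period. The types and
# p_r values below are hypothetical.
from collections import Counter
from itertools import cycle

def periodic_sequence(p, k):
    """p: dict mapping queue type -> p_r, k: number of replicas per type."""
    # Any ordering with these multiplicities defines a policy in the class considered above.
    return [(r, j) for r, p_r in p.items() for j in range(k) for _ in range(p_r)]

p = {"fast": 3, "slow": 1}          # hypothetical queue types and per-period quotas
k = 4
period = periodic_sequence(p, k)
assert len(period) == k * sum(p.values())
counts = Counter(period)
assert all(counts[(r, j)] == p[r] for r in p for j in range(k))

router = cycle(period)              # repeat the period to route an infinite arrival stream
routed = [next(router) for _ in range(20)]
print(routed[:8])
```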
1404.4554
1
Though the problem of pattern formation in biological systems has been studied more than sixty years, it is from the beginning of this cen- tury that the synergy between theoretical and experimental researchers are depicting the ins and outs of this problem. In the present work we are deciphering the bases of the pattern for the ocellar complex formation in Drosophila melanogaster fly. We have modeled the Gene Regulatory Network (GRN) that drives the development of this visual system pruning this network to obtain the minimum pathway able to satisfy this pattern. We found that the mechanism underlying the patterning obeys to the dy- namics of a 3-nodes network motif with a double negative feedback loop fed by a morphogenetic gradient that triggers the inhibition in a French flag problem fashion. A Boolean modeling of the GRN reveals robust- ness in the patterning mechanism showing the same result for different network complexity levels. Interestingly, the network provides a steady state solution in the interocellar part of the patterning and an oscillatory regime in the ocelli . Though the dynamical models can achieve steady state solutions for the full pattern, it is possible that the transcriptional regulation in part of this network is actually done in an oscillatory way .
Though the problem of pattern formation in biological systems has been studied for more than sixty years, it is from the beginning of this century that the synergy between theoretical and experimental researchers are depicting the ins and outs of this problem. In the present work we are deciphering the bases of the pattern for the ocellar complex formation in Drosophila melanogaster fly. We have modeled the Gene Regulatory Network (GRN) that drives the development of this visual system pruning this network to obtain the minimum pathway able to satisfy this pattern. We found that the mechanism underlying the patterning obeys to the dynamics of an activator-repressor feedback loop fed by a morphogenetic gradient in a French flag problem fashion. We determine that this pattern is robust agains perturbations in the structure of the equations used to describe the system. Moreover, a Boolean modeling of the GRN confirms the robustness in the patterning mechanism for different network complexity levels. Interestingly, the Boolean networks analysis reveals a steady state solution in the interocellar part of the patterning and an oscillatory regime in the ocelli .
[ { "type": "A", "before": null, "after": "for", "start_char_pos": 79, "end_char_pos": 79 }, { "type": "R", "before": "cen- tury", "after": "century", "start_char_pos": 136, "end_char_pos": 145 }, { "type": "R", "before": "dy- namics of a 3-nodes network motif with a double negative", "after": "dynamics of an activator-repressor", "start_char_pos": 638, "end_char_pos": 698 }, { "type": "D", "before": "that triggers the inhibition", "after": null, "start_char_pos": 745, "end_char_pos": 773 }, { "type": "R", "before": "A", "after": "We determine that this pattern is robust agains perturbations in the structure of the equations used to describe the system. Moreover, a", "start_char_pos": 808, "end_char_pos": 809 }, { "type": "R", "before": "reveals robust- ness", "after": "confirms the robustness", "start_char_pos": 838, "end_char_pos": 858 }, { "type": "D", "before": "showing the same result", "after": null, "start_char_pos": 887, "end_char_pos": 910 }, { "type": "R", "before": "network provides", "after": "Boolean networks analysis reveals", "start_char_pos": 971, "end_char_pos": 987 }, { "type": "D", "before": ". Though the dynamical models can achieve steady state solutions for the full pattern, it is possible that the transcriptional regulation in part of this network is actually done in an oscillatory way", "after": null, "start_char_pos": 1095, "end_char_pos": 1295 } ]
[ 0, 259, 389, 570, 807, 951, 1096 ]
1404.4554
2
Though the problem of pattern formation in biological systems has been studied for more than sixty years, it is from the beginning of this century that the synergy between theoretical and experimental researchers are depicting the ins and outs of this problem . In the present work we are deciphering the bases of the pattern for the ocellar complex formation in Drosophila melanogaster fly. We have modeled the Gene Regulatory Network (GRN ) that drives the development of this visual system pruning this network to obtain the minimum pathway able to satisfy this pattern. We found that the mechanism underlying the patterning obeys to the dynamics of an activator-repressor feedback loop fed by a morphogenetic gradient in a French flag problem fashion. We determine that this pattern is robust agains perturbations in the structure of the equations used to describe the system. Moreover, a Boolean modeling of the GRN confirms the robustness in the patterning mechanism for different network complexity levels. Interestingly, the Boolean networks analysis reveals a steady state solution in the interocellar part of the patterning and an oscillatory regime in the ocelli .
During organogenesis, developmental programs governed by Gene Regulatory Networks (GRN) define the functionality, size and shape of the different constituents of organisms. Robustness, thus, is an essential characteristic that GRNs need to fulfill in order to maintain viability and reproducibility in a species . In the present work we analyze the robustness of the patterning for the ocellar complex formation in Drosophila melanogaster fly. We have systematically pruned the GRN that drives the development of this visual system to obtain the minimum pathway able to satisfy this pattern. We found that the mechanism underlying the patterning obeys to the dynamics of a 3-nodes network motif with a double negative feedback loop fed by a morphogenetic gradient that triggers the inhibition in a French flag problem fashion. A Boolean modeling of the GRN confirms robustness in the patterning mechanism showing the same result for different network complexity levels. Interestingly, the network provides a steady state solution in the interocellar part of the patterning and an oscillatory regime in the ocelli . This theoretical result predicts that the ocellar pattern may underlie oscillatory dynamics in its genetic regulation .
[ { "type": "R", "before": "Though the problem of pattern formation in biological systems has been studied for more than sixty years, it is from the beginning of this century that the synergy between theoretical and experimental researchers are depicting the ins and outs of this problem", "after": "URLanogenesis, developmental programs governed by Gene Regulatory Networks (GRN) define the functionality, size and shape of the different constituents of URLanisms. Robustness, thus, is an essential characteristic that GRNs need to fulfill in order to maintain viability and reproducibility in a species", "start_char_pos": 0, "end_char_pos": 259 }, { "type": "R", "before": "are deciphering the bases of the pattern", "after": "analyze the robustness of the patterning", "start_char_pos": 285, "end_char_pos": 325 }, { "type": "R", "before": "modeled the Gene Regulatory Network (GRN )", "after": "systematically pruned the GRN", "start_char_pos": 400, "end_char_pos": 442 }, { "type": "D", "before": "pruning this network", "after": null, "start_char_pos": 493, "end_char_pos": 513 }, { "type": "R", "before": "an activator-repressor", "after": "a 3-nodes network motif with a double negative", "start_char_pos": 653, "end_char_pos": 675 }, { "type": "A", "before": null, "after": "that triggers the inhibition", "start_char_pos": 722, "end_char_pos": 722 }, { "type": "R", "before": "We determine that this pattern is robust agains perturbations in the structure of the equations used to describe the system. Moreover, a", "after": "A", "start_char_pos": 757, "end_char_pos": 893 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 931, "end_char_pos": 934 }, { "type": "A", "before": null, "after": "showing the same result", "start_char_pos": 974, "end_char_pos": 974 }, { "type": "R", "before": "Boolean networks analysis reveals", "after": "network provides", "start_char_pos": 1035, "end_char_pos": 1068 }, { "type": "A", "before": null, "after": ". This theoretical result predicts that the ocellar pattern may underlie oscillatory dynamics in its genetic regulation", "start_char_pos": 1176, "end_char_pos": 1176 } ]
[ 0, 261, 391, 573, 756, 881, 1015 ]
1404.5050
1
When trades are crossed between multiple alpha streams, portfolio turnover decreases. Turnover reduction needs to be taken into account for optimizing asset allocation to these alphas. We propose a spectral model of turnover reduction for a general alpha correlation matrix in the limit where the number of alphas is large .
We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and trades are crossed internally. We model turnover reduction via alpha correlations. Then, for a large number of alphas , turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix .
[ { "type": "R", "before": "When", "after": "We give a simple explicit formula for turnover reduction when a large number of alphas are traded on the same execution platform and", "start_char_pos": 0, "end_char_pos": 4 }, { "type": "R", "before": "between multiple alpha streams, portfolio turnover decreases. Turnover reduction needs to be taken into account for optimizing asset allocation to these alphas. We propose a spectral model of turnover reduction for a general alpha correlation matrix in the limit where the", "after": "internally. We model turnover reduction via alpha correlations. Then, for a large", "start_char_pos": 24, "end_char_pos": 296 }, { "type": "R", "before": "is large", "after": ", turnover reduction is related to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix", "start_char_pos": 314, "end_char_pos": 322 } ]
[ 0, 85, 184 ]
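The abstract above relates turnover reduction to the largest eigenvalue and the corresponding eigenvector of the alpha correlation matrix. The toy sketch below only extracts that eigenpair from a sample correlation matrix; the simulated alpha histories are placeholders and the snippet does not reproduce the paper's explicit turnover formula.

```python
# Extract the largest eigenvalue and eigenvector of an alpha correlation matrix.
# The random "alpha histories" are placeholders; only the linear algebra step is illustrated.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # number of alphas
histories = rng.standard_normal((n, 5 * n))
C = np.corrcoef(histories)                # n x n sample correlation matrix of the alphas

eigvals, eigvecs = np.linalg.eigh(C)      # symmetric matrix -> eigh; eigenvalues ascending
lam_max = eigvals[-1]
v_max = eigvecs[:, -1]
print(f"largest eigenvalue: {lam_max:.2f}")
print(f"leading eigenvector, first 5 components: {v_max[:5]}")
```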
1404.5584
1
In the last years the vertex enumeration problem of polyhedra has seen a revival in the study of metabolic networks, which increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by inequalities. In this paper we apply the concept of branch-decomposition to the vertex enumeration problem of polyhedra P = \{x : Sx = b, x \geq 0\}. Therefore , we introduce the concept of k-module and show how it relates to the separators of the linear matroid generated by the columns of S. This then translates structural properties of the matroidal branch-decomposition to the context of polyhedra. We then use this to present a total polynomial time algorithm for polytopes P for which the branch-width of the linear matroid generated by S is bounded by a constant k.
Over the last years the vertex enumeration problem of polyhedra has seen a revival in the study of metabolic networks, which increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by inequalities. It is a famous and long standing open question in polyhedral theory and computational geometry whether the vertices of a polytope (bounded polyhedron), described by a set of linear constraints, can be enumerated in total polynomial time. In this paper we apply the concept of branch-decomposition to the vertex enumeration problem of polyhedra P = \{x : Ax = b, x \geq 0\}. For this purpose , we introduce the concept of k-module and show how it relates to the separators of the linear matroid generated by the columns of A. We then use this to present a total polynomial time algorithm for polytopes P for which the branch-width of the linear matroid generated by A is bounded by a constant k.
[ { "type": "R", "before": "In", "after": "Over", "start_char_pos": 0, "end_char_pos": 2 }, { "type": "A", "before": null, "after": "It is a famous and long standing open question in polyhedral theory and computational geometry whether the vertices of a polytope (bounded polyhedron), described by a set of linear constraints, can be enumerated in total polynomial time.", "start_char_pos": 242, "end_char_pos": 242 }, { "type": "R", "before": "Sx", "after": "Ax", "start_char_pos": 359, "end_char_pos": 361 }, { "type": "R", "before": "Therefore", "after": "For this purpose", "start_char_pos": 379, "end_char_pos": 388 }, { "type": "R", "before": "S. This then translates structural properties of the matroidal branch-decomposition to the context of polyhedra.", "after": "A.", "start_char_pos": 520, "end_char_pos": 632 }, { "type": "R", "before": "S", "after": "A", "start_char_pos": 773, "end_char_pos": 774 } ]
[ 0, 241, 378, 522, 632 ]
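For readers unfamiliar with the problem in the abstract above, the brute-force sketch below enumerates the vertices of a tiny polytope P = {x : Ax = b, x >= 0} by scanning basic feasible solutions. It is exponential in general and is not the branch-decomposition algorithm of the paper; it only illustrates what is being enumerated.

```python
# Brute-force vertex enumeration for P = {x : Ax = b, x >= 0}: every vertex is a basic
# feasible solution. Exponential in the number of columns -- for tiny examples only, and
# not the branch-width-based method of the paper.
import numpy as np
from itertools import combinations

def enumerate_vertices(A, b, tol=1e-9):
    m, n = A.shape
    vertices = []
    for cols in combinations(range(n), m):           # candidate basis
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                                  # singular, not a basis
        x_B = np.linalg.solve(B, b)
        if (x_B < -tol).any():
            continue                                  # basic solution is infeasible
        x = np.zeros(n)
        x[list(cols)] = x_B
        if not any(np.allclose(x, v, atol=1e-7) for v in vertices):
            vertices.append(x)
    return vertices

# The unit simplex {x >= 0, x1 + x2 + x3 = 1} has the three unit vectors as its vertices.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
for v in enumerate_vertices(A, b):
    print(v)
```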
1404.7227
1
Buildings across the world contribute significantly to the over-all energy consumption and are thus stakeholders in grid operations. Towards the development of a smart grid, utilities and governments across the world are encouraging smart meter deployments. High resolution (often at every 15 min-utes ) data from these smart meters can be used to understand and optimize energy consumptions in buildings. In addition to smart meters, buildings are also increasingly managed with Building Management Systems (BMS) which control different sub-systems such as lighting and heating, ventila-tion , and air conditioning (HVAC). With the advent of these smart meters, increased usage of BMS and easy availability and widespread installation of ambient sensors, there is a deluge of building energy data. This data has been lever-aged for a variety of applications such as demand response, appliance fault detection and optimizing HVAC schedules. Beyond the traditional use of such data sets, they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency. Effective use of this data entails several critical areas from sensing to de-cision making and participatory involvement of occupants. Picking from wide literature in building energy efficiency, we identify five crust areas (also referred to as 5 Is) for realizing data driven energy efficiency in buildings : i) instrument optimally; ii) interconnect sub-systems; iii) inferred decision making; iv) involve occupants and v) intelligent operations. We classify prior work as per these 5 Is and dis-cuss challenges, opportunities and applications across them. Building upon these 5 Is we discuss a well studied problem in building energy efficiency -non-intrusive load monitoring (NILM) and how research in this area spans across the 5 Is.
Buildings across the world contribute significantly to the overall energy consumption and are thus stakeholders in grid operations. Towards the development of a smart grid, utilities and governments across the world are encouraging smart meter deployments. High resolution (often at every 15 minutes ) data from these smart meters can be used to understand and optimize energy consumptions in buildings. In addition to smart meters, buildings are also increasingly managed with Building Management Systems (BMS) which control different sub-systems such as lighting and heating, ventilation , and air conditioning (HVAC). With the advent of these smart meters, increased usage of BMS and easy availability and widespread installation of ambient sensors, there is a deluge of building energy data. This data has been leveraged for a variety of applications such as demand response, appliance fault detection and optimizing HVAC schedules. Beyond the traditional use of such data sets, they can be put to effective use towards making buildings smarter and hence driving every possible bit of energy efficiency. Effective use of this data entails several critical areas from sensing to decision making and participatory involvement of occupants. Picking from wide literature in building energy efficiency, we identify five crust areas (also referred to as 5 Is) for realizing data driven energy efficiency in buildings : i) instrument optimally; ii) interconnect sub-systems; iii) inferred decision making; iv) involve occupants and v) intelligent operations. We classify prior work as per these 5 Is and dis-cuss challenges, opportunities and applications across them. Building upon these 5 Is we discuss a well studied problem in building energy efficiency -non-intrusive load monitoring (NILM) and how research in this area spans across the 5 Is.
[ { "type": "R", "before": "over-all", "after": "overall", "start_char_pos": 59, "end_char_pos": 67 }, { "type": "R", "before": "min-utes", "after": "minutes", "start_char_pos": 293, "end_char_pos": 301 }, { "type": "R", "before": "ventila-tion", "after": "ventilation", "start_char_pos": 580, "end_char_pos": 592 }, { "type": "R", "before": "lever-aged", "after": "leveraged", "start_char_pos": 818, "end_char_pos": 828 }, { "type": "R", "before": "de-cision", "after": "decision", "start_char_pos": 1186, "end_char_pos": 1195 } ]
[ 0, 132, 257, 405, 623, 798, 940, 1111, 1246, 1446, 1476, 1507, 1560, 1670 ]
1404.7364
1
We attempt to explain stock market dynamics in terms of the interaction among three variables: market price, investor opinion and information flow. We propose a framework for such interaction upon which are based two models of stock market dynamics : the model for empirical study and its extended version for theoretical study . We demonstrate that these models replicate observed stock market behavior on all relevant timescales (from days to years) reasonably well. Using the models , we obtain and discuss a number of results that pose implications for current market theory and offer potential practical applications.
We attempt to explain stock market dynamics in terms of the interaction among three variables: market price, investor opinion and information flow. We propose a framework for such interaction and apply it to build a model of stock market dynamics which we study both empirically and theoretically . We demonstrate that this model replicates observed market behavior on all relevant timescales (from days to years) reasonably well. Using the model , we obtain and discuss a number of results that pose implications for current market theory and offer potential practical applications.
[ { "type": "R", "before": "upon which are based two models", "after": "and apply it to build a model", "start_char_pos": 192, "end_char_pos": 223 }, { "type": "R", "before": ": the model for empirical study and its extended version for theoretical study", "after": "which we study both empirically and theoretically", "start_char_pos": 249, "end_char_pos": 327 }, { "type": "R", "before": "these models replicate observed stock", "after": "this model replicates observed", "start_char_pos": 350, "end_char_pos": 387 }, { "type": "R", "before": "models", "after": "model", "start_char_pos": 479, "end_char_pos": 485 } ]
[ 0, 147, 329, 468 ]
1404.7493
1
Maximum drawdown, the largest cumulative loss from peak to trough, is one of the most widely used indicators of risk in the fund management industry, but one of the least developed in the context of probabilistic risk metrics. We formalize drawdown risk as Conditional Expected Drawdown (CED), which is the tail mean of maximum drawdown distributions. We show that CED is a degree one positive homogenous risk measure, so that it can be attributed to factors; and convex, so that it can be used in quantitative optimization. We develop an efficient linear program for minimum CED optimization and empirically explore the differences in risk attributions based on CED, Expected Shortfall (ES) and volatility. An important feature of CED is its sensitivity to serial correlation. In an empirical study that fits AR(1) models to US Equity and US Bonds, we find substantially higher correlation between the autoregressive parameter and CED than with ES or with volatility.
Maximum drawdown, the largest cumulative loss from peak to trough, is one of the most widely used indicators of risk in the fund management industry, but one of the least developed in the context of probabilistic risk metrics. We formalize drawdown risk as Conditional Expected Drawdown (CED), which is the tail mean of maximum drawdown distributions. We show that CED is a degree one positive homogenous risk measure, so that it can be attributed to factors; and convex, so that it can be used in quantitative optimization. We provide an efficient linear program for minimum CED optimization and empirically explore the differences in risk attributions based on CED, Expected Shortfall (ES) and volatility. An important feature of CED is its sensitivity to serial correlation. In an empirical study that fits AR(1) models to US Equity and US Bonds, we find substantially higher correlation between the autoregressive parameter and CED than with ES or with volatility.
[ { "type": "R", "before": "develop", "after": "provide", "start_char_pos": 528, "end_char_pos": 535 } ]
[ 0, 226, 351, 459, 524, 707, 777 ]
1404.7493
2
Maximum drawdown, the largest cumulative loss from peak to trough, is one of the most widely used indicators of risk in the fund management industry, but one of the least developed in the context of probabilistic riskmetrics . We formalize drawdown risk as Conditional Expected Drawdown (CED), which is the tail mean of maximum drawdown distributions. We show that CED is a degree one positive homogenous risk measure, so that it can be attributed to factors; and convex, so that it can be used in quantitative optimization. We provide an efficient linear program for minimum CED optimization and empirically explore the differences in risk attributions based on CED, Expected Shortfall (ES) and volatility. An important feature of CED is its sensitivity to serial correlation. In an empirical study that fits AR(1) models to US Equity and US Bonds, we find substantially higher correlation between the autoregressive parameter and CED than with ES or with volatility.
Maximum drawdown, the largest cumulative loss from peak to trough, is one of the most widely used indicators of risk in the fund management industry, but one of the least developed in the context of measures of risk . We formalize drawdown risk as Conditional Expected Drawdown (CED), which is the tail mean of maximum drawdown distributions. We show that CED is a degree one positive homogenous risk measure, so that it can be linearly attributed to factors; and convex, so that it can be used in quantitative optimization. We empirically explore the differences in risk attributions based on CED, Expected Shortfall (ES) and volatility. An important feature of CED is its sensitivity to serial correlation. In an empirical study that fits AR(1) models to US Equity and US Bonds, we find substantially higher correlation between the autoregressive parameter and CED than with ES or with volatility.
[ { "type": "R", "before": "probabilistic riskmetrics", "after": "measures of risk", "start_char_pos": 199, "end_char_pos": 224 }, { "type": "A", "before": null, "after": "linearly", "start_char_pos": 437, "end_char_pos": 437 }, { "type": "D", "before": "provide an efficient linear program for minimum CED optimization and", "after": null, "start_char_pos": 529, "end_char_pos": 597 } ]
[ 0, 226, 351, 460, 525, 708, 778 ]
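The sketch below spells out the Conditional Expected Drawdown described above: compute the maximum drawdown of each path, then take the tail mean of the resulting maximum-drawdown distribution. The i.i.d. Gaussian daily returns and the 90% confidence level are stand-in assumptions, not the AR(1) fits or market data used in the paper.

```python
# Conditional Expected Drawdown (CED): tail mean of the maximum-drawdown distribution.
# Simulated i.i.d. Gaussian returns stand in for the empirical data used in the paper.
import numpy as np

def max_drawdown(returns):
    wealth = np.cumprod(1.0 + returns)
    running_peak = np.maximum.accumulate(wealth)
    drawdowns = 1.0 - wealth / running_peak        # peak-to-trough loss, as a fraction
    return drawdowns.max()

def conditional_expected_drawdown(return_paths, alpha=0.9):
    mdds = np.array([max_drawdown(r) for r in return_paths])
    threshold = np.quantile(mdds, alpha)           # drawdown-at-risk at level alpha
    return mdds[mdds >= threshold].mean()          # tail mean = CED

rng = np.random.default_rng(0)
paths = rng.normal(loc=0.0003, scale=0.01, size=(2000, 252))   # 2000 one-year daily paths
print(f"CED(90%): {conditional_expected_drawdown(paths, 0.9):.3f}")
```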
1404.7511
1
Several recent works have shown that protein structure can predict site-specific evolutionary sequence variation. In particular, sites that are buried and/or have many contacts with other sites in a structure have been shown to evolve more slowly, on average, than surface sites with few contacts. Here, we present a comprehensive study of the extent to which numerous structural properties can predict sequence variation. The structural properties we considered include buriedness ( relative solvent accessibility and contact number), structural flexibility ( B factors, root-mean-square fluctuations, and variation in dihedral angles), and variability in designed structures. We obtained structural flexibility measures both from molecular dynamics simulations performed on 9 non-homologous viral protein structures and from variation in homologous variants of those proteins, where available. We obtained measures of variability in designed structures from flexible-backbone design in the Rosetta software. We found that most of the structural properties correlate with site variation in the majority of structures, though the correlations are generally weak (correlation coefficients of 0.1 to 0.4). Moreover, we found that measures of buriedness were better predictors of evolutionary variation than were measures of structural flexibility. Finally, variability in designed structures was a weaker predictor of evolutionary variability than was buriedness , but was comparable in its predictive power to the best structural flexibility measures. We conclude that simple measures of buriedness are better predictors of evolutionary variation than are more complicated predictors obtained from dynamic simulations, ensembles of homologous structures, or computational protein design.
Several recent works have shown that protein structure can predict site-specific evolutionary sequence variation. In particular, sites that are buried and/or have many contacts with other sites in a structure have been shown to evolve more slowly, on average, than surface sites with few contacts. Here, we present a comprehensive study of the extent to which numerous structural properties can predict sequence variation. The quantities we considered include buriedness ( as measured by relative solvent accessibility ), packing density (as measured by contact number), structural flexibility ( as measured by B factors, root-mean-square fluctuations, and variation in dihedral angles), and variability in designed structures. We obtained structural flexibility measures both from molecular dynamics simulations performed on 9 non-homologous viral protein structures and from variation in homologous variants of those proteins, where available. We obtained measures of variability in designed structures from flexible-backbone design in the Rosetta software. We found that most of the structural properties correlate with site variation in the majority of structures, though the correlations are generally weak (correlation coefficients of 0.1 to 0.4). Moreover, we found that buriedness and packing density were better predictors of evolutionary variation than was structural flexibility. Finally, variability in designed structures was a weaker predictor of evolutionary variability than was buriedness or packing density, but it was comparable in its predictive power to the best structural flexibility measures. We conclude that simple measures of buriedness and packing density are better predictors of evolutionary variation than are more complicated predictors obtained from dynamic simulations, ensembles of homologous structures, or computational protein design.
[ { "type": "R", "before": "structural properties", "after": "quantities", "start_char_pos": 427, "end_char_pos": 448 }, { "type": "A", "before": null, "after": "as measured by", "start_char_pos": 484, "end_char_pos": 484 }, { "type": "R", "before": "and", "after": "), packing density (as measured by", "start_char_pos": 516, "end_char_pos": 519 }, { "type": "A", "before": null, "after": "as measured by", "start_char_pos": 562, "end_char_pos": 562 }, { "type": "R", "before": "measures of buriedness", "after": "buriedness and packing density", "start_char_pos": 1230, "end_char_pos": 1252 }, { "type": "R", "before": "were measures of", "after": "was", "start_char_pos": 1307, "end_char_pos": 1323 }, { "type": "R", "before": ", but", "after": "or packing density, but it", "start_char_pos": 1463, "end_char_pos": 1468 }, { "type": "A", "before": null, "after": "and packing density", "start_char_pos": 1600, "end_char_pos": 1600 } ]
[ 0, 113, 297, 422, 679, 897, 1011, 1205, 1347, 1552 ]
1404.7632
1
We study a bivariate mean reverting stochastic volatility model , finding an explicit expression for the decay of cross-asset correlations over time. We compare our result with the empirical time series of the Dow Jones Industrial Average and the Financial Times Stock Exchange 100 in the period 1984-2013 , finding an excellent agreement. The main features of the model consist in the jumps in the volatilities and a nonlinear mean reversion. Based on these features , we propose an algorithm for the detection of jumps in the volatility .
We consider a mean-reverting stochastic volatility model which satisfies some relevant stylized facts of financial markets. We introduce an algorithm for the detection of peaks in the volatility profile, that we apply to the time series of Dow Jones Industrial Average and Financial Times Stock Exchange 100 in the period 1984-2013 . Based on empirical results , we propose a bivariate version of the model, for which we find an explicit expression for the decay over time of cross-asset correlations between absolute returns. We compare our theoretical predictions with empirical estimates on the same financial time series, finding an excellent agreement .
[ { "type": "R", "before": "study a bivariate mean reverting", "after": "consider a mean-reverting", "start_char_pos": 3, "end_char_pos": 35 }, { "type": "R", "before": ", finding an explicit expression for the decay of cross-asset correlations over time. We compare our result with the empirical", "after": "which satisfies some relevant stylized facts of financial markets. We introduce an algorithm for the detection of peaks in the volatility profile, that we apply to the", "start_char_pos": 64, "end_char_pos": 190 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 206, "end_char_pos": 209 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 243, "end_char_pos": 246 }, { "type": "R", "before": ", finding an excellent agreement. The main features of the model consist in the jumps in the volatilities and a nonlinear mean reversion. Based on these features", "after": ". Based on empirical results", "start_char_pos": 306, "end_char_pos": 467 }, { "type": "R", "before": "an algorithm for the detection of jumps in the volatility", "after": "a bivariate version of the model, for which we find an explicit expression for the decay over time of cross-asset correlations between absolute returns. We compare our theoretical predictions with empirical estimates on the same financial time series, finding an excellent agreement", "start_char_pos": 481, "end_char_pos": 538 } ]
[ 0, 149, 339, 443 ]
1405.0508
1
Central counterparties (CCPs) require initial margin (IM) to be posted for derivative portfolios cleared through them. Additionally, the Basel Committee on Banking Supervision has proposed in BCBS-261 that all significant OTC derivatives trading must also post IM by 2019. IM is typically calculated using Value-at-Risk (VAR)or Conditional Value-at-Risk (CVAR, aka Expected Shortfall) , based on historical simulation. As previously noted (Green2013a), (Green2013b)IM requirements give rise to a need for unsecured funding similar to FVA on unsecured derivatives. The IM cost to the derivatives originator requires an integral of the funding cost over the funding profile which depends on VAR- or CVAR-based calculation. VAR, here, involves running a historical simulation Monte Carlo inside a risk-neutral Monte Carlo simulation. Brute force calculation is computationally unfeasible. This paper presents a computationally efficient method of calculating IM costs for any derivative portfolio: Longstaff-Schwartz Augmented Compression , (LSAC) . Essentially, Longstaff-Schwartz is used with an augmented state space to retain accuracy for VAR-relevant changes to the state variables. This method allows rapid calculation of IM costs both for portfolios, and on an incremental basis. LSAC can be applied wherever historic simulation VAR is required such as lifetime cost of market risk regulatory capital using internal models. We present example costs for IM under BCBS-261 for interest rate swap portfolios of up to 10000 swaps and 30 year maturity showing significant IM FVA costs and two orders of magnitude speedup compared to direct calculation .
Initial margin requirements are becoming an increasingly common feature of derivative markets. However, while the valuation of derivatives under collateralisation (Piterbarg 2010 , Piterbarg2012), under counterparty risk with unsecured funding costs (FVA) (Burgard2011, Burgard2011, Burgard2013) and in the presence of regulatory capital (KVA) (Green2014) are established through valuation adjustments, hitherto initial margin has not been considered. This paper further extends the semi-replication framework of (Burgard2013a), itself later extended by (Green2014), to cover the cost of initial margin, leading to Margin Valuation Adjustment (MVA). Initial margin requirements are typically generated through the use of VAR or CVAR models. Given the form of MVA as an integral over the expected initial margin profile this would lead to excessive computational costs if a brute force calculation were to be used. Hence we also propose a computationally efficient approach to the calculation of MVA through the use of regression techniques, Longstaff-Schwartz Augmented Compression (LSAC) .
[ { "type": "R", "before": "Central counterparties (CCPs) require initial margin (IM) to be posted for derivative portfolios cleared through them. Additionally, the Basel Committee on Banking Supervision has proposed in BCBS-261 that all significant OTC derivatives trading must also post IM by 2019. IM is typically calculated using Value-at-Risk (VAR)or Conditional Value-at-Risk (CVAR, aka Expected Shortfall)", "after": "Initial margin requirements are becoming an increasingly common feature of derivative markets. However, while the valuation of derivatives under collateralisation (Piterbarg 2010", "start_char_pos": 0, "end_char_pos": 384 }, { "type": "R", "before": "based on historical simulation. As previously noted (Green2013a), (Green2013b)IM requirements give rise to a need for unsecured funding similar to FVA on unsecured derivatives. The IM cost to the derivatives originator requires an integral of the funding cost over the funding profile which depends on VAR- or CVAR-based calculation. VAR, here, involves running a historical simulation Monte Carlo inside a risk-neutral Monte Carlo simulation. Brute force calculation is computationally unfeasible. This paper presents", "after": "Piterbarg2012), under counterparty risk with unsecured funding costs (FVA) (Burgard2011, Burgard2011, Burgard2013) and in the presence of regulatory capital (KVA) (Green2014) are established through valuation adjustments, hitherto initial margin has not been considered. This paper further extends the semi-replication framework of (Burgard2013a), itself later extended by (Green2014), to cover the cost of initial margin, leading to Margin Valuation Adjustment (MVA). Initial margin requirements are typically generated through the use of VAR or CVAR models. Given the form of MVA as an integral over the expected initial margin profile this would lead to excessive computational costs if a brute force calculation were to be used. Hence we also propose", "start_char_pos": 387, "end_char_pos": 905 }, { "type": "R", "before": "method of calculating IM costs for any derivative portfolio:", "after": "approach to the calculation of MVA through the use of regression techniques,", "start_char_pos": 934, "end_char_pos": 994 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 1036, "end_char_pos": 1037 }, { "type": "D", "before": ". Essentially, Longstaff-Schwartz is used with an augmented state space to retain accuracy for VAR-relevant changes to the state variables. This method allows rapid calculation of IM costs both for portfolios, and on an incremental basis. LSAC can be applied wherever historic simulation VAR is required such as lifetime cost of market risk regulatory capital using internal models. We present example costs for IM under BCBS-261 for interest rate swap portfolios of up to 10000 swaps and 30 year maturity showing significant IM FVA costs and two orders of magnitude speedup compared to direct calculation", "after": null, "start_char_pos": 1045, "end_char_pos": 1650 } ]
[ 0, 118, 272, 418, 563, 720, 830, 885, 1184, 1283, 1427 ]
1405.0515
1
Credit (CVA), Debit (DVA) and Funding Valuation Adjustments (FVA) are now familiar valuation adjustments made to the value of a portfolio of derivatives to account for credit risks and funding costs. However, recent changes in the regulatory regime and the increases in regulatory capital requirements has led many banks to include the cost of capital in derivative pricing. This paper formalises the addition of cost of capital by extending the Burgard-Kjaer semi-replication approach to CVA and FVA to include an addition capital term, Capital Valuation Adjustment (KVA, i.e. Kapital Valuation Adjustment to distinguish from CVA . Two approaches are considered, one where the (regulatory) capital is released back to shareholders upon counterparty default and one where the capital can be used to offset losses in the event of counterparty default . The use of the semi-replication approach means that the flexibility around the treatment of self-default is carried over into this analysis. The paper further considers the practical calculation of KVA with reference to the Basel II (BCBS-128) and Basel III (BCBS-189) Capital regimes and its implementation via CRD IV (CRD-IV-Regulation,CRD-IV-Directive) . The paper assesses how KVA may be hedged, given that any hedging transactions themselves would lead to regulatory capital requirements and hence KVA. To conclude, a number of numerical examples are presented to gauge the cost impact of KVA on vanilla derivative products.
Credit (CVA), Debit (DVA) and Funding Valuation Adjustments (FVA) are now familiar valuation adjustments made to the value of a portfolio of derivatives to account for credit risks and funding costs. However, recent changes in the regulatory regime and the increases in regulatory capital requirements has led many banks to include the cost of capital in derivative pricing. This paper formalises the addition of cost of capital by extending the Burgard-Kjaer (2013) semi-replication approach to CVA and FVA to include an addition capital term, Capital Valuation Adjustment (KVA, i.e. Kapital Valuation Adjustment to distinguish from CVA ). The utilization of the capital for funding purposes and to offset losses in the event of counterparty default are considered . The use of the semi-replication approach means that the flexibility around the treatment of self-default is carried over into this analysis. The paper further considers the practical calculation of KVA with reference to the Basel II and Basel III capital regimes and their implementation via CRD IV . The paper assesses how KVA may be hedged, given that any hedging transactions themselves lead to regulatory capital requirements and hence KVA. Finally a number of numerical examples are presented to gauge the cost impact of KVA on vanilla derivative products.
[ { "type": "A", "before": null, "after": "(2013)", "start_char_pos": 460, "end_char_pos": 460 }, { "type": "R", "before": ". Two approaches are considered, one where the (regulatory) capital is released back to shareholders upon counterparty default and one where the capital can be used", "after": "). The utilization of the capital for funding purposes and", "start_char_pos": 632, "end_char_pos": 796 }, { "type": "A", "before": null, "after": "are considered", "start_char_pos": 851, "end_char_pos": 851 }, { "type": "D", "before": "(BCBS-128)", "after": null, "start_char_pos": 1087, "end_char_pos": 1097 }, { "type": "R", "before": "(BCBS-189) Capital regimes and its", "after": "capital regimes and their", "start_char_pos": 1112, "end_char_pos": 1146 }, { "type": "D", "before": "(CRD-IV-Regulation,CRD-IV-Directive)", "after": null, "start_char_pos": 1173, "end_char_pos": 1209 }, { "type": "D", "before": "would", "after": null, "start_char_pos": 1301, "end_char_pos": 1306 }, { "type": "R", "before": "To conclude,", "after": "Finally", "start_char_pos": 1362, "end_char_pos": 1374 } ]
[ 0, 199, 374, 633, 853, 994, 1211, 1361 ]
1405.0515
2
Credit (CVA), Debit (DVA) and Funding Valuation Adjustments (FVA) are now familiar valuation adjustments made to the value of a portfolio of derivatives to account for credit risks and funding costs. However, recent changes in the regulatory regime and the increases in regulatory capital requirements has led many banks to include the cost of capital in derivative pricing. This paper formalises the addition of cost of capital by extending the Burgard-Kjaer (2013) semi-replication approach to CVA and FVA to include an addition capital term, Capital Valuation Adjustment (KVA, i.e. Kapital Valuation Adjustment to distinguish from CVA ). The utilization of the capital for funding purposes and to offset losses in the event of counterparty default are considered. The use of the semi-replication approach means that the flexibility around the treatment of self-default is carried over into this analysis. The paper further considers the practical calculation of KVA with reference to the Basel II and Basel III capital regimes and their implementation via CRD IV. The paper assesses how KVA may be hedged, given that any hedging transactions themselves lead to regulatory capital requirements and hence KVA . Finally a number of numerical examples are presented to gauge the cost impact of KVA on vanilla derivative products.
Credit (CVA), Debit (DVA) and Funding Valuation Adjustments (FVA) are now familiar valuation adjustments made to the value of a portfolio of derivatives to account for credit risks and funding costs. However, recent changes in the regulatory regime and the increases in regulatory capital requirements has led many banks to include the cost of capital in derivative pricing. This paper formalises the addition of cost of capital by extending the Burgard-Kjaer (2013) semi-replication approach to CVA and FVA to include an addition capital term, Capital Valuation Adjustment (KVA, i.e. Kapital Valuation Adjustment to distinguish from CVA .) The utilization of the capital for funding purposes is also considered. The use of the semi-replication approach means that the flexibility around the treatment of self-default is carried over into this analysis. The paper further considers the practical calculation of KVA with reference to the Basel II (BCBS-128) and Basel III (BCBS-189) capital regimes and their implementation via CRD IV. The paper also assesses how KVA may be hedged, given that any hedging transactions themselves lead to regulatory capital requirements and hence capital costs . Finally a number of numerical examples are presented to gauge the cost impact of KVA on vanilla derivative products.
[ { "type": "R", "before": ").", "after": ".)", "start_char_pos": 638, "end_char_pos": 640 }, { "type": "R", "before": "and to offset losses in the event of counterparty default are", "after": "is also", "start_char_pos": 693, "end_char_pos": 754 }, { "type": "A", "before": null, "after": "(BCBS-128)", "start_char_pos": 1000, "end_char_pos": 1000 }, { "type": "A", "before": null, "after": "(BCBS-189)", "start_char_pos": 1015, "end_char_pos": 1015 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 1079, "end_char_pos": 1079 }, { "type": "R", "before": "KVA", "after": "capital costs", "start_char_pos": 1209, "end_char_pos": 1212 } ]
[ 0, 199, 374, 640, 766, 907, 1068, 1214 ]
1405.0585
1
The classic decision-theory problem of evaluating a gamble is treated from a modern perspective using dynamics . Linear and logarithmic utility functions appear not as expressions for the value of money but as mappings that result in ergodic observables for purely additive and purely multiplicative dynamics, the most natural stochastic processes to model wealth. This perspective is at odds with the boundedness requirement for utility functions in the dominant formalism of decision theory. We highlight conceptual and mathematical inconsistencies throughout the development of decision theory, whose correction clarifies that the modern perspective is legitimate and that boundedness of utility functionsis not required .
Gambles are random variables that model possible changes in monetary wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages . Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory, whose correction clarifies that our perspective is legitimate . These invalidate a commonly cited argument for bounded utility functions .
[ { "type": "R", "before": "The classic decision-theory problem of evaluating a gamble is treated from a modern perspective using dynamics", "after": "Gambles are random variables that model possible changes in monetary wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages", "start_char_pos": 0, "end_char_pos": 110 }, { "type": "R", "before": "utility functions appear not as expressions for the value of money but as mappings that result in", "after": "\"utility functions\" appear as transformations that generate", "start_char_pos": 136, "end_char_pos": 233 }, { "type": "R", "before": "the most natural stochastic processes to model wealth. This perspective is at odds with the boundedness requirement for utility functions in the dominant formalism of decision theory. We highlight conceptual and mathematical", "after": "respectively. We highlight", "start_char_pos": 310, "end_char_pos": 534 }, { "type": "R", "before": "the modern", "after": "our", "start_char_pos": 630, "end_char_pos": 640 }, { "type": "R", "before": "and that boundedness of utility functionsis not required", "after": ". These invalidate a commonly cited argument for bounded utility functions", "start_char_pos": 667, "end_char_pos": 723 } ]
[ 0, 364, 493 ]
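A minimal numerical companion to the abstract above: for a multiplicative coin-flip gamble the expectation value of the growth factor and the time-average growth rate disagree, and the logarithm is exactly the mapping that turns multiplicative wealth changes into an ergodic observable. The specific gamble (wealth multiplied by 1.5 on heads, by 0.6 on tails) is an illustrative choice, not taken from the paper.

```python
# Ensemble average vs time average for a multiplicative gamble (illustrative parameters).
import numpy as np

up, down, p = 1.5, 0.6, 0.5

# Expectation value of the per-round growth factor (ensemble perspective)
ensemble_growth = p * up + (1 - p) * down                    # 1.05 > 1: looks attractive

# Time-average growth rate: exponential of the expected log growth factor, i.e. the
# ergodic observable produced by the logarithmic mapping for multiplicative dynamics
time_avg_growth = np.exp(p * np.log(up) + (1 - p) * np.log(down))   # ~0.95 < 1: wealth decays

# One long simulated trajectory confirms the time-average figure
rng = np.random.default_rng(0)
flips = rng.random(100_000) < p
log_wealth = np.log(np.where(flips, up, down)).sum()         # sum of logs avoids underflow
print(ensemble_growth, time_avg_growth, np.exp(log_wealth / flips.size))
```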
1405.0902
1
While the characteristics of the driven translocation for asymptotically long polymers are well understood, this is not the case for finite-sized polymers, which are relevant for real-world experiments and simulation studies. Most notably, the behavior of the exponent \alpha, which describes the scaling of the translocation time with polymer length, when the driving force f_p in the pore is changed, is under debate. By Langevin dynamics simulations of regular and modified translocation models we find that an incomplete model, where the trans side is {\it excluded, gives rise to characteristics that are in stark contradiction with those of the complete model, for which \alpha increases with f_p. Our results suggest that contribution due to fluctuations is important. We construct a minimal model where dynamics is completely excluded to show that close alignment with a full translocation model can be achieved. Our findings set very stringent requirements for a minimal model that is supposed to describe the driven polymer translocation correctly.
While the characteristics of the driven translocation for asymptotically long polymers are well understood, this is not the case for finite-sized polymers, which are relevant for real-world experiments and simulation studies. Most notably, the behavior of the exponent \alpha, which describes the scaling of the translocation time with polymer length, when the driving force f_p in the pore is changed, is under debate. By Langevin dynamics simulations of regular and modified translocation models using the freely-jointed-chain polymer model we find that a previously reported incomplete model, where the {\it trans side and fluctuations were excluded, gives rise to characteristics that are in stark contradiction with those of the complete model, for which \alpha increases with f_p. Our results suggest that contribution due to fluctuations is important. We construct a minimal model where dynamics is completely excluded to show that close alignment with a full translocation model can be achieved. Our findings set very stringent requirements for a minimal model that is supposed to describe the driven polymer translocation correctly.
[ { "type": "A", "before": null, "after": "using the freely-jointed-chain polymer model", "start_char_pos": 498, "end_char_pos": 498 }, { "type": "R", "before": "an", "after": "a previously reported", "start_char_pos": 512, "end_char_pos": 514 }, { "type": "D", "before": "trans side is", "after": null, "start_char_pos": 543, "end_char_pos": 556 }, { "type": "A", "before": null, "after": "trans", "start_char_pos": 562, "end_char_pos": 562 }, { "type": "A", "before": null, "after": "side and fluctuations were", "start_char_pos": 563, "end_char_pos": 563 } ]
[ 0, 225, 419, 706, 778, 923 ]
1405.0929
1
High-throughput experiments in bacteria and eukaryotic cells have identified tens of thousands of possible interactions between proteins. This genome-wide view of the protein interaction universe is coarse-grained, whilst fine-grained detail of macro-molecular interactions critically depends on lower throughput, labor-intensive experiments. Computational approaches using measures of residue co-evolution across proteins show promise, but have been limited to specific interactions. Here we present a new generalized method showing that patterns of evolutionary sequence changes across proteins reflect residues that are close in space, and with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We demonstrate that the inferred evolutionary coupling scores distinguish between interacting and non-interacting proteins and the accurate prediction of residue interactions . To illustrate the utility of the method, we predict unknown 3D interactions between subunits of ATP synthase and find results consistent with detailed experimental data. We expect that the method can be generalized to genome-wide interaction predictions at residue resolution.
High-throughput experiments in bacteria and eukaryotic cells have identified tens of thousands of interactions between proteins. This genome-wide view of the protein interaction universe is coarse-grained, whilst fine-grained detail of macro-molecular interactions critically depends on lower throughput, labor-intensive experiments. Computational approaches using measures of residue co-evolution across proteins show promise, but have been limited to specific interactions. Here we present a new generalized method showing that patterns of evolutionary sequence changes across proteins reflect residues that are close in space, with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We demonstrate that the inferred evolutionary coupling scores accurately predict inter-protein residue interactions and can distinguish between interacting and non-interacting proteins . To illustrate the utility of the method, we predict co-evolved contacts between 50 E. coli complexes (of unknown structure), including the unknown 3D interactions between subunits of ATP synthase and find results consistent with detailed experimental data. We expect that the method can be generalized to genome-wide interaction predictions at residue resolution.
[ { "type": "D", "before": "possible", "after": null, "start_char_pos": 98, "end_char_pos": 106 }, { "type": "D", "before": "and", "after": null, "start_char_pos": 639, "end_char_pos": 642 }, { "type": "A", "before": null, "after": "accurately predict inter-protein residue interactions and can", "start_char_pos": 801, "end_char_pos": 801 }, { "type": "D", "before": "and the accurate prediction of residue interactions", "after": null, "start_char_pos": 863, "end_char_pos": 914 }, { "type": "A", "before": null, "after": "co-evolved contacts between 50 E. coli complexes (of unknown structure), including the", "start_char_pos": 969, "end_char_pos": 969 } ]
[ 0, 137, 342, 484, 738, 916, 1087 ]
1405.0929
2
High-throughput experiments in bacteria and eukaryotic cells have identified tens of thousands of interactions between proteins. This genome-wide view of the protein interaction universe is coarse-grained, whilst fine-grained detail of macro-molecular interactions critically depends on lower throughput, labor-intensive experiments. Computational approaches using measures of residue co-evolution across proteins show promise, but have been limited to specific interactions. Here we present a new generalized method showing that patterns of evolutionary sequence changes across proteins reflect residues that are close in space , with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We demonstrate that the inferred evolutionary coupling scores accurately predict inter-protein residue interactions and can distinguish between interacting and non-interacting proteins. To illustrate the utility of the method, we predict co-evolved contacts between 50 E. coli complexes (of unknown structure), including the unknown 3D interactions between subunits of ATP synthase and find results consistent with detailed experimental data. We expect that the method can be generalized to genome-wide interaction predictions at residue resolution.
Protein-protein interactions are fundamental to many biological processes. Experimental screens have identified tens of thousands of interactions and structural biology has provided detailed functional insight for select 3D protein complexes. An alternative rich source of information about protein interactions is the evolutionary sequence record. Building on earlier work, we show that analysis of correlated evolutionary sequence changes across proteins identifies residues that are close in space with sufficient accuracy to determine the three-dimensional structure of the protein complexes. We evaluate prediction performance in blinded tests on 76 complexes of known 3D structure, predict protein-protein contacts in 32 complexes of unknown structure, and demonstrate how evolutionary couplings can be used to distinguish between interacting and non-interacting protein pairs in a large complex. With the current growth of sequence databases, we expect that the method can be generalized to genome-wide elucidation of protein-protein interaction networks and used for interaction predictions at residue resolution.
[ { "type": "R", "before": "High-throughput experiments in bacteria and eukaryotic cells", "after": "Protein-protein interactions are fundamental to many biological processes. Experimental screens", "start_char_pos": 0, "end_char_pos": 60 }, { "type": "R", "before": "between proteins. This genome-wide view of the protein interaction universe is coarse-grained, whilst fine-grained detail of macro-molecular interactions critically depends on lower throughput, labor-intensive experiments. Computational approaches using measures of residue co-evolution across proteins show promise, but have been limited to specific interactions. Here we present a new generalized method showing that patterns of", "after": "and structural biology has provided detailed functional insight for select 3D protein complexes. An alternative rich source of information about protein interactions is the evolutionary sequence record. Building on earlier work, we show that analysis of correlated", "start_char_pos": 111, "end_char_pos": 541 }, { "type": "R", "before": "reflect", "after": "identifies", "start_char_pos": 588, "end_char_pos": 595 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 629, "end_char_pos": 630 }, { "type": "R", "before": "demonstrate that the inferred evolutionary coupling scores accurately predict inter-protein residue interactions and can", "after": "evaluate prediction performance in blinded tests on 76 complexes of known 3D structure, predict protein-protein contacts in 32 complexes of unknown structure, and demonstrate how evolutionary couplings can be used to", "start_char_pos": 730, "end_char_pos": 850 }, { "type": "R", "before": "proteins. To illustrate the utility of the method, we predict co-evolved contacts between 50 E. coli complexes (of unknown structure), including the unknown 3D interactions between subunits of ATP synthase and find results consistent with detailed experimental data. We", "after": "protein pairs in a large complex. With the current growth of sequence databases, we", "start_char_pos": 903, "end_char_pos": 1172 }, { "type": "A", "before": null, "after": "elucidation of protein-protein interaction networks and used for", "start_char_pos": 1230, "end_char_pos": 1230 } ]
[ 0, 128, 333, 475, 726, 912, 1169 ]
1405.1206
1
The different cell types in an organism are an outcome of the process of cell differentiation in which a progenitor cell state differentiates into two distinct cell types. Experimental evidence and analysis of large-scale microarray data establish the key role of a two-gene motif in the process of cell differentiation . The two genes express transcription factors which repress each other's expression and autoactivate their own production. A number of theoretical models have recently been proposed based on the two-gene motif to provide a physical understanding of how cell differentiation works . In this paper, we study a simple model of cell differentiation which assumes no cooperativity in the regulation of gene expression by the transcription factors. The latter repress each other's activity directly through DNA binding and indirectly through the formation of heterodimers. We specifically investigate how deterministic processes combined with stochasticity contribute in bringing about cell differentiation. The deterministic dynamics of our model give rise to a supercritical pichfork bifurcation from an undifferentiated stable steady state to two differentiated stable steady states. The stochastic dynamics of our model are studied using the approach based on the Langevin equations . The simulation results are consistent with the recent experimental observations. We further propose experimental measurements of quantities like the variance and the lag-1 autocorrelation function in protein fluctuations as early signatures of an approaching bifurcation point in the cell differentiation process.
The different cell types in an organism acquire their identity through the process of cell differentiation in which the multipotent progenitor cells differentiate into distinct cell types. Experimental evidence and analysis of large-scale microarray data establish the key role played by a two-gene motif in cell differentiation in a number of cell systems . The two genes express transcription factors which repress each other's expression and autoactivate their own production. A number of theoretical models have recently been proposed based on the two-gene motif to provide a physical understanding of how cell differentiation occurs . In this paper, we study a simple model of cell differentiation which assumes no cooperativity in the regulation of gene expression by the transcription factors. The later repress each other's activity directly through DNA binding and indirectly through the formation of heterodimers. We specifically investigate how deterministic processes combined with stochasticity contribute in bringing about cell differentiation. The deterministic dynamics of our model give rise to a supercritical pitchfork bifurcation from an undifferentiated stable steady state to two differentiated stable steady states. The stochastic dynamics of our model are studied using the approaches based on the Langevin equations and the linear noise approximation . The simulation results provide a new physical understanding of recent experimental observations. We further propose experimental measurements of quantities like the variance and the lag-1 autocorrelation function in protein fluctuations as the early signatures of an approaching bifurcation point in the cell differentiation process.
[ { "type": "R", "before": "are an outcome of", "after": "acquire their identity through", "start_char_pos": 39, "end_char_pos": 56 }, { "type": "R", "before": "a progenitor cell state differentiates into two", "after": "the multipotent progenitor cells differentiate into", "start_char_pos": 102, "end_char_pos": 149 }, { "type": "R", "before": "of", "after": "played by", "start_char_pos": 260, "end_char_pos": 262 }, { "type": "R", "before": "the process of cell differentiation", "after": "cell differentiation in a number of cell systems", "start_char_pos": 283, "end_char_pos": 318 }, { "type": "R", "before": "works", "after": "occurs", "start_char_pos": 593, "end_char_pos": 598 }, { "type": "R", "before": "latter", "after": "later", "start_char_pos": 766, "end_char_pos": 772 }, { "type": "R", "before": "pichfork", "after": "pitchfork", "start_char_pos": 1090, "end_char_pos": 1098 }, { "type": "R", "before": "approach", "after": "approaches", "start_char_pos": 1259, "end_char_pos": 1267 }, { "type": "A", "before": null, "after": "and the linear noise approximation", "start_char_pos": 1300, "end_char_pos": 1300 }, { "type": "R", "before": "are consistent with the", "after": "provide a new physical understanding of", "start_char_pos": 1326, "end_char_pos": 1349 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1527, "end_char_pos": 1527 } ]
[ 0, 170, 320, 441, 600, 761, 885, 1020, 1199, 1302, 1383 ]
1405.1326
1
We argue that the classical indices of tail dependence quite often underestimate the tail dependence in copulas and thus may not always convey useful information. We illustrate this phenomenon using a number of bivariate copulas and suggest an alternative way for assessing taildependence .
We demonstrate both analytically and numerically that the existing methods for measuring tail dependence in copulas may sometimes underestimate the extent of extreme co-movements of dependent risks and, therefore, may not always comply with the new paradigm of prudent risk management. This phenomenon holds in the context of both symmetric and asymmetric copulas with and without singularities. As a remedy, we introduce a notion of paths of maximal (tail) dependence and utilize it to propose several new indices of tail dependence. The suggested new indices are conservative, conform with the basic concepts of modern quantitative risk management, and are able to distinguish between distinct risky positions in situations when the existing indices fail to do so .
[ { "type": "R", "before": "argue that the classical indices of tail dependence quite often underestimate the tail dependence in copulas and thus", "after": "demonstrate both analytically and numerically that the existing methods for measuring tail dependence in copulas may sometimes underestimate the extent of extreme co-movements of dependent risks and, therefore,", "start_char_pos": 3, "end_char_pos": 120 }, { "type": "R", "before": "convey useful information. We illustrate this phenomenon using a number of bivariate copulas and suggest an alternative way for assessing taildependence", "after": "comply with the new paradigm of prudent risk management. This phenomenon holds in the context of both symmetric and asymmetric copulas with and without singularities. As a remedy, we introduce a notion of paths of maximal (tail) dependence and utilize it to propose several new indices of tail dependence. The suggested new indices are conservative, conform with the basic concepts of modern quantitative risk management, and are able to distinguish between distinct risky positions in situations when the existing indices fail to do so", "start_char_pos": 136, "end_char_pos": 288 } ]
[ 0, 162 ]
1405.1791
1
In fat-tailed domains , sample measures of top centile contributions to the total (concentration) are biased, unstable estimators extremely sensitive to sample size and concave in accounting for large deviations. They can vary over time merely from the increase of sample space , thus providing the illusion of structural changes in concentration. They are also inconsistent under aggregation and mixing distributions, as weighted concen- tration measures for A and B will tend to be lower than that from A + B. In addition, it can be shown that under fat tails, increases in the total sum need to be accompanied by increased measurement of concentration. We examine the bias and error under straight and mixed distributions.
In domains with power law tails , sample measures of top centile contributions to the total (concentration) are biased, unstable estimators extremely sensitive to both unit and sample size and concave in accounting for large deviations. They can vary over time merely from the increase of unit size , thus providing the illusion of structural changes in concentration. They are also inconsistent under aggregation and mixing distributions, as weighted concentration measures for A and B will tend to be lower than that from A + B. In addition, it can be shown that under fat tails, increases in the total sum need to be accompanied by increased measurement of concentration. We examine the bias and error under straight and mixed distributions.
[ { "type": "R", "before": "fat-tailed domains", "after": "domains with power law tails", "start_char_pos": 3, "end_char_pos": 21 }, { "type": "A", "before": null, "after": "both unit and", "start_char_pos": 153, "end_char_pos": 153 }, { "type": "R", "before": "sample space", "after": "unit size", "start_char_pos": 266, "end_char_pos": 278 }, { "type": "R", "before": "concen- tration", "after": "concentration", "start_char_pos": 432, "end_char_pos": 447 } ]
[ 0, 213, 348, 656 ]
1405.1791
2
In domains with power law tails, sample measures of top centile contributions to the total (concentration) are biased, unstable estimators extremely sensitive to both unit and sample size and concave in accounting for large deviations. They can vary over time merely from the increase of unit size, thus providing the illusion of structural changes in concentration. They are also inconsistent under aggregation and mixing distributions, as weighted concentration measures for A and B will tend to be lower than that from A + B. In addition, it can be shown that under fat tails, increases in the total sum need to be accompanied by increased measurement of concentration . We examine the bias and error under straight and mixed distributions.
Sample measures of top centile contributions to the total (concentration) are downward biased, unstable estimators , extremely sensitive to sample size and concave in accounting for large deviations. It makes them particularly unfit in domains with power law tails, especially for low values of the exponent. These estimators can vary over time and increase with the population size, as shown in this article, thus providing the illusion of structural changes in concentration. They are also inconsistent under aggregation and mixing distributions, as the weighted average of concentration measures for A and B will tend to be lower than that from A U B. In addition, it can be shown that under such fat tails, increases in the total sum need to be accompanied by increased sample size of the concentration measurement . We examine the estimation superadditivity and bias under homogeneous and mixed distributions.
[ { "type": "R", "before": "In domains with power law tails, sample", "after": "Sample", "start_char_pos": 0, "end_char_pos": 39 }, { "type": "A", "before": null, "after": "downward", "start_char_pos": 111, "end_char_pos": 111 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 140, "end_char_pos": 140 }, { "type": "D", "before": "both unit and", "after": null, "start_char_pos": 164, "end_char_pos": 177 }, { "type": "R", "before": "They", "after": "It makes them particularly unfit in domains with power law tails, especially for low values of the exponent. These estimators", "start_char_pos": 238, "end_char_pos": 242 }, { "type": "R", "before": "merely from the increase of unit size,", "after": "and increase with the population size, as shown in this article,", "start_char_pos": 262, "end_char_pos": 300 }, { "type": "R", "before": "weighted", "after": "the weighted average of", "start_char_pos": 443, "end_char_pos": 451 }, { "type": "R", "before": "+", "after": "U", "start_char_pos": 526, "end_char_pos": 527 }, { "type": "A", "before": null, "after": "such", "start_char_pos": 571, "end_char_pos": 571 }, { "type": "R", "before": "measurement of concentration", "after": "sample size of the concentration measurement", "start_char_pos": 646, "end_char_pos": 674 }, { "type": "R", "before": "bias and error under straight", "after": "estimation superadditivity and bias under homogeneous", "start_char_pos": 692, "end_char_pos": 721 } ]
[ 0, 237, 368, 676 ]
1405.2023
1
We consider an optimal execution problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid and best ask prices , both modelled by arbitrary functions of incoming market and limit orders . We consider a random drift so to include the impact of market orders, and we describe the arrival of limit orders and of order cancellations by means of Poisson random measures. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the resilience of the limit order book. We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal order-size placement in both venues. Since the mid-price is taken from the exchange, the resilience of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated system of partial integro differential equations. We present a numerical example of which model parameters are analysed in detail.
We consider an optimal execution problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid price and the market spread , both modelled by functions of the market activity . We consider a random drift so to include the impact of market orders, and we describe the arrival of limit orders and of order cancellations by means of Poisson random measures. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the resilience of the limit order book. We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal order-size placement in both venues. Since the mid-price is taken from the exchange, the resilience of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated system of partial integro differential equations. We present a numerical example of which model parameters are analysed in detail.
[ { "type": "R", "before": "and best ask prices", "after": "price and the market spread", "start_char_pos": 254, "end_char_pos": 273 }, { "type": "R", "before": "arbitrary functions of incoming market and limit orders", "after": "functions of the market activity", "start_char_pos": 293, "end_char_pos": 348 } ]
[ 0, 150, 350, 528, 704, 816, 933, 1084, 1336 ]
1405.2023
2
We consider an optimal execution problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid price and the market spread, both modelled by functions of the market activity. We consider a random drift so to include the impact of market orders, and we describe the arrival of limit orders and of order cancellations by means of Poisson random measures. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the resilience of the limit order book. We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal order-size placement in both venues. Since the mid-price is taken from the exchange, the resilience of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated system of partial integro differential equations . We present a numerical example of which model parameters are analysed in detail .
We consider an optimal execution problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid price and the market spread, both modelled by Levy processes. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the dynamics of the limit order book. We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal trading strategy in both venues. Since the mid-price is taken from the exchange, the dynamics of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated partial integro differential equation . We present two explicit examples of the price and the spread models, and derive the associated optimal trading strategy numerically .
[ { "type": "R", "before": "functions of the market activity. We consider a random drift so to include the impact of market orders, and we describe the arrival of limit orders and of order cancellations by means of Poisson random measures.", "after": "Levy processes.", "start_char_pos": 300, "end_char_pos": 511 }, { "type": "R", "before": "resilience", "after": "dynamics", "start_char_pos": 652, "end_char_pos": 662 }, { "type": "R", "before": "order-size placement", "after": "trading strategy", "start_char_pos": 880, "end_char_pos": 900 }, { "type": "R", "before": "resilience", "after": "dynamics", "start_char_pos": 969, "end_char_pos": 979 }, { "type": "D", "before": "system of", "after": null, "start_char_pos": 1270, "end_char_pos": 1279 }, { "type": "R", "before": "equations", "after": "equation", "start_char_pos": 1309, "end_char_pos": 1318 }, { "type": "R", "before": "a numerical example of which model parameters are analysed in detail", "after": "two explicit examples of the price and the spread models, and derive the associated optimal trading strategy numerically", "start_char_pos": 1332, "end_char_pos": 1400 } ]
[ 0, 150, 333, 511, 687, 799, 916, 1067, 1320 ]
1405.2023
3
We consider an optimal execution problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid price and the market spread, both modelled by Levy processes. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the dynamics of the limit order book . We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal trading strategy in both venues. Since the mid-price is taken from the exchange, the dynamics of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated partial integro differential equation. We present two explicit examples of the price and the spread models, and derive the associated optimal trading strategy numerically .
We consider an optimal trading problem over a finite period of time during which an investor has access to both a standard exchange and a dark pool. We take the exchange to be an order-driven market and propose a continuous-time setup for the best bid price and the market spread, both modelled by L\'evy processes. Effects on the best bid price arising from the arrival of limit buy orders at more favourable prices, the incoming market sell orders potentially walking the book, and deriving from the cancellations of limit sell orders at the best ask price are incorporated in the proposed price dynamics. A permanent impact that occurs when 'lit' pool trades cannot be avoided is built in, and an instantaneous impact that models the slippage, to which all 'lit' exchange trades are subject, is also considered . We assume that the trading price in the dark pool is the mid-price and that no fees are due for posting orders. We allow for partial trade executions in the dark pool, and we find the optimal trading strategy in both venues. Since the mid-price is taken from the exchange, the dynamics of the limit order book also affects the optimal allocation of shares in the dark pool. We propose a general objective function and we show that, subject to suitable technical conditions, the value function can be characterised by the unique continuous viscosity solution to the associated partial integro differential equation. We present two explicit examples of the price and the spread models, and derive the associated optimal trading strategy numerically . We discuss the various degrees of the agent's risk aversion and further show that roundtrips, i.e. posting the remaining inventory in the dark pool at every point in time, are not necessarily beneficial .
[ { "type": "R", "before": "execution", "after": "trading", "start_char_pos": 23, "end_char_pos": 32 }, { "type": "R", "before": "Levy processes. In the situation where the trades take place only in the exchange, we find that the optimal execution strategy depends significantly on the dynamics of the limit order book", "after": "L\\'evy processes. Effects on the best bid price arising from the arrival of limit buy orders at more favourable prices, the incoming market sell orders potentially walking the book, and deriving from the cancellations of limit sell orders at the best ask price are incorporated in the proposed price dynamics. A permanent impact that occurs when 'lit' pool trades cannot be avoided is built in, and an instantaneous impact that models the slippage, to which all 'lit' exchange trades are subject, is also considered", "start_char_pos": 300, "end_char_pos": 488 }, { "type": "A", "before": null, "after": ". We discuss the various degrees of the agent's risk aversion and further show that roundtrips, i.e. posting the remaining inventory in the dark pool at every point in time, are not necessarily beneficial", "start_char_pos": 1238, "end_char_pos": 1238 } ]
[ 0, 150, 315, 490, 602, 715, 864, 1105 ]
1405.2373
1
An organism is embedded in an environment (specifying for example the concentrations of basic nutrients) that changes over time. The timescale for and statistics of environmental change, the precision with which an organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies (such as constitutive expression or graded response) for regulating protein levels in response to environmental inputs. We propose a general framework (here specifically applied to the enzymatic regulation of metabolism ) to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing Bayesian inference ; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for sophisticated Bayesian inference ; and (iii) in dynamical contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
An organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which an organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies-such as constitutive expression or graded response-for regulating protein levels in response to environmental inputs. We propose a general framework-here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient-to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule ; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule ; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
[ { "type": "D", "before": "(specifying for example the concentrations of basic nutrients)", "after": null, "start_char_pos": 39, "end_char_pos": 101 }, { "type": "R", "before": "strategies (such", "after": "strategies-such", "start_char_pos": 358, "end_char_pos": 374 }, { "type": "R", "before": "response) for", "after": "response-for", "start_char_pos": 412, "end_char_pos": 425 }, { "type": "R", "before": "framework (here", "after": "framework-here", "start_char_pos": 510, "end_char_pos": 525 }, { "type": "R", "before": ") to", "after": "in response to changing concentrations of a basic nutrient-to", "start_char_pos": 589, "end_char_pos": 593 }, { "type": "R", "before": "Bayesian inference", "after": "a Bayesian decision rule", "start_char_pos": 962, "end_char_pos": 980 }, { "type": "R", "before": "sophisticated Bayesian inference", "after": "a sophisticated Bayesian decision rule", "start_char_pos": 1268, "end_char_pos": 1300 }, { "type": "R", "before": "dynamical", "after": "dynamic", "start_char_pos": 1316, "end_char_pos": 1325 } ]
[ 0, 125, 488, 779, 906, 982, 1058, 1206, 1302, 1409, 1589 ]
1405.2442
1
We show that the equivalence between certain problems of singular stochastic control (SSC) and related questions of optimal stopping known for convex performance criteria (see, for example , Karatzas and Shreve (1984) ) continues to hold in a non convex problem provided a related discretionary stopping time is introduced. Our problem is one of storage and consumption for electricity, a partially storable commodity with both positive and negative prices in some markets, and has similarities to the finite fuel monotone follower problem. In particular we consider a non convex infinite time horizon SSC problem whose state consists of an uncontrolled diffusion representing a real-valued commodity price, and a controlled increasing bounded process representing an inventory. We analyse the geometry of the action and inaction regions by characterising the related optimal stopping boundaries .
Equivalences are known between problems of singular stochastic control (SSC) with convex performance criteria and related questions of optimal stopping , see for example Karatzas and Shreve SIAM J. Control Optim. 22 (1984) . The aim of this paper is to investigate how far connections of this type generalise to a non convex problem of purchasing electricity. Where the classical equivalence breaks down we provide alternative connections to optimal stopping problems. We consider a non convex infinite time horizon SSC problem whose state consists of an uncontrolled diffusion representing a real-valued commodity price, and a controlled increasing bounded process representing an inventory. We analyse the geometry of the action and inaction regions by characterising their (optimal) boundaries. Unlike the case of convex SSC problems we find that the optimal boundaries may be both reflecting and repelling and it is natural to interpret the problem as one of SSC with discretionary stopping .
[ { "type": "R", "before": "We show that the equivalence between certain", "after": "Equivalences are known between", "start_char_pos": 0, "end_char_pos": 44 }, { "type": "A", "before": null, "after": "with convex performance criteria", "start_char_pos": 91, "end_char_pos": 91 }, { "type": "D", "before": "known for convex performance criteria (see, for example", "after": null, "start_char_pos": 134, "end_char_pos": 189 }, { "type": "A", "before": null, "after": "see for example", "start_char_pos": 192, "end_char_pos": 192 }, { "type": "A", "before": null, "after": "SIAM J. Control Optim. 22", "start_char_pos": 213, "end_char_pos": 213 }, { "type": "R", "before": ") continues to hold in", "after": ". The aim of this paper is to investigate how far connections of this type generalise to", "start_char_pos": 221, "end_char_pos": 243 }, { "type": "R", "before": "provided a related discretionary stopping time is introduced. Our problem is one of storage and consumption for electricity, a partially storable commodity with both positive and negative prices in some markets, and has similarities to the finite fuel monotone follower problem. In particular we", "after": "of purchasing electricity. Where the classical equivalence breaks down we provide alternative connections to optimal stopping problems. We", "start_char_pos": 265, "end_char_pos": 560 }, { "type": "R", "before": "the related optimal stopping boundaries", "after": "their (optimal) boundaries. Unlike the case of convex SSC problems we find that the optimal boundaries may be both reflecting and repelling and it is natural to interpret the problem as one of SSC with discretionary stopping", "start_char_pos": 859, "end_char_pos": 898 } ]
[ 0, 326, 543, 781 ]
1405.2450
1
We introduce a multiple curve LIBOR framework that combines tractable dynamics and semi-analytic pricing formulas with positive interest rates and basis spreads. The dynamics of OIS and LIBOR rates are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows to derive Fourier pricing formulas for caps, swaptions and basis swaptions. A model specification with dependent LIBOR rates is developed, that allows for an efficient and accurate calibration to a system of caplet prices.
We introduce a multiple curve framework that combines tractable dynamics and semi-analytic pricing formulas with positive interest rates and basis spreads. Negatives rates and positive spreads can also be accommodated in this framework. The dynamics of OIS and LIBOR rates are specified following the methodology of the affine LIBOR models and are driven by the wide and flexible class of affine processes. The affine property is preserved under forward measures, which allows us to derive Fourier pricing formulas for caps, swaptions and basis swaptions. A model specification with dependent LIBOR rates is developed, that allows for an efficient and accurate calibration to a system of caplet prices.
[ { "type": "D", "before": "LIBOR", "after": null, "start_char_pos": 30, "end_char_pos": 35 }, { "type": "A", "before": null, "after": "Negatives rates and positive spreads can also be accommodated in this framework.", "start_char_pos": 162, "end_char_pos": 162 }, { "type": "A", "before": null, "after": "us", "start_char_pos": 403, "end_char_pos": 403 } ]
[ 0, 161, 332, 479 ]
1405.2512
1
Consider high-dimensional data set such that for every data-point there exist an informationat only part of its dimensions, and the rest is unknown . We assume that the data emerge from real-world data, i. e. , the true (unknown) points lie close to each other such that they may be group together .
Consider a high-dimensional data set , such that for every data-point there is incomplete information. Each object in the data set represents a real entity, which models as a point in high-dimensional space . We assume that all real entities are embedded in the same space, which means they have the same dimension. We model the lack of information for a given object as affine subspace in \mathbb{R Data clustering using flats minimum distances, using the following assumptions: 1) There are m clusters. 2) Each cluster is modeled as a ball in \mathbb{R separable data. Our suggested algorithm calculates pair-wise projection of the data. We use probabilistic considerations to prove the algorithm correctness. These probabilistic results are of independent interest, as can serve to better understand the geometry of high dimensional objects .
[ { "type": "A", "before": null, "after": "a", "start_char_pos": 9, "end_char_pos": 9 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 36, "end_char_pos": 36 }, { "type": "R", "before": "exist an informationat only part of its dimensions, and the rest is unknown", "after": "is incomplete information. Each object in the data set represents a real entity, which models as a point in high-dimensional space", "start_char_pos": 74, "end_char_pos": 149 }, { "type": "R", "before": "the data emerge from real-world data, i. e. , the true (unknown) points lie close to each other such that they may be group together", "after": "all real entities are embedded in the same space, which means they have the same dimension. We model the lack of information for a given object as affine subspace in \\mathbb{R", "start_char_pos": 167, "end_char_pos": 299 }, { "type": "A", "before": null, "after": "Data clustering using flats minimum distances", "start_char_pos": 300, "end_char_pos": 300 }, { "type": "A", "before": null, "after": ", using the following assumptions: 1) There are m clusters. 2) Each cluster is modeled as a ball in \\mathbb{R", "start_char_pos": 300, "end_char_pos": 300 }, { "type": "A", "before": null, "after": "separable data", "start_char_pos": 301, "end_char_pos": 301 }, { "type": "A", "before": null, "after": ". Our suggested algorithm calculates pair-wise projection of the data. We use probabilistic considerations to prove the algorithm correctness. These probabilistic results are of independent interest, as can serve to better understand the geometry of high dimensional objects", "start_char_pos": 301, "end_char_pos": 301 } ]
[ 0, 151 ]
1405.2609
1
Proof that under simple assumptions, such as con- straints of Put-Call Parity, the probability measure for the valuation of a European option has the mean of the risk-neutral one, under any general probability distribution, bypassing the Black-Scholes-Merton dynamic hedging argument, and without the requirement of complete markets . We confirm that the heuristics used by traders for centuries are both more robust and more rigorous than held in the economics literature .
Proof that under simple assumptions, such as constraints of Put-Call Parity, the probability measure for the valuation of a European option has the mean derived from the forward price which can, but does not have to be the risk-neutral one, under any general probability distribution, bypassing the Black-Scholes-Merton dynamic hedging argument, and without the requirement of complete markets and other strong assumptions . We confirm that the heuristics used by traders for centuries are both more robust , more consistent, and more rigorous than held in the economics literature . We also show that options can be priced using infinite variance (finite mean) distributions .
[ { "type": "R", "before": "con- straints", "after": "constraints", "start_char_pos": 45, "end_char_pos": 58 }, { "type": "R", "before": "of the", "after": "derived from the forward price which can, but does not have to be the", "start_char_pos": 155, "end_char_pos": 161 }, { "type": "A", "before": null, "after": "and other strong assumptions", "start_char_pos": 333, "end_char_pos": 333 }, { "type": "A", "before": null, "after": ", more consistent,", "start_char_pos": 418, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ". We also show that options can be priced using infinite variance (finite mean) distributions", "start_char_pos": 475, "end_char_pos": 475 } ]
[ 0, 335 ]
1405.2888
1
Self-replicating systems based on information-coding polymers are of crucial importance in biology. They also recently emerged as a paradigm in design on nano- and micro-scales. We present a general theoretical and numerical analysis of the problem of spontaneous emergence of autocatalysis for heteropolymers capable of template-assisted ligation driven by cyclic changes in the environment. Our central result is the existence of the first order transition between the regime dominated by free monomers and that with a self-sustaining population of sufficiently long oligomers. We provide a simple mathematically tractable model that predicts the parameters for the onset of autocatalysis and the distribution of chain lengths, in terms of monomer concentration, and two fundamental rate constants. Another key result is the emergence of the kinetically-limited optimal overlap length between a template and its two substrates. Template-assisted ligation allows for heritable transmission of information encoded in oligomer sequences thus opening up the possibility of spontaneous emergence of Darwinian evolution in such systems.
Self-replicating systems based on information-coding polymers are of crucial importance in biology. They also recently emerged as a paradigm in design on nano- and micro-scales. We present a general theoretical and numerical analysis of the problem of spontaneous emergence of autocatalysis for heteropolymers capable of template-assisted ligation driven by cyclic changes in the environment. Our central result is the existence of the first order transition between the regime dominated by free monomers and that with a self-sustaining population of sufficiently long oligomers. We provide a simple mathematically tractable model that predicts the parameters for the onset of autocatalysis and the distribution of chain lengths, in terms of monomer concentration, and two fundamental rate constants. Another key result is the emergence of the kinetically-limited optimal overlap length between a template and its two substrates. Template-assisted ligation allows for heritable transmission of information encoded in oligomer sequences thus opening up the possibility of long-term memory and evolvability of such systems.
[ { "type": "R", "before": "spontaneous emergence of Darwinian evolution in", "after": "long-term memory and evolvability of", "start_char_pos": 1071, "end_char_pos": 1118 } ]
[ 0, 99, 177, 392, 579, 800, 929 ]
1405.2888
2
Self-replicating systems based on information-coding polymers are of crucial importance in biology. They also recently emerged as a paradigm in design on nano- and micro-scales. We present a general theoretical and numerical analysis of the problem of spontaneous emergence of autocatalysis for heteropolymers capable of template-assisted ligation driven by cyclic changes in the environment. Our central result is the existence of the first order transition between the regime dominated by free monomers and that with a self-sustaining population of sufficiently long oligomers . We provide a simple mathematically tractable model that predicts the parameters for the onset of autocatalysis and the distribution of chain lengths , in terms of monomer concentration, and two fundamental rate constants. Another key result is the emergence of the kinetically-limited optimal overlap length between a template and its two substrates. Template-assisted ligation allows for heritable transmission of information encoded in oligomer sequences thus opening up the possibility of long-term memory and evolvability of such systems.
Self-replicating systems based on information-coding polymers are of crucial importance in biology. They also recently emerged as a paradigm in material design on nano- and micro-scales. We present a general theoretical and numerical analysis of the problem of spontaneous emergence of autocatalysis for heteropolymers capable of template-assisted ligation driven by cyclic changes in the environment. Our central result is the existence of the first order transition between the regime dominated by free monomers and that with a self-sustaining population of sufficiently long chains . We provide a simple , mathematically tractable model supported by numerical simulations, which predicts the distribution of chain lengths and the onset of autocatalysis in terms of the overall monomer concentration and two fundamental rate constants. Another key result of our study is the emergence of the kinetically-limited optimal overlap length between a template and each of its two substrates. The template-assisted ligation allows for heritable transmission of the information encoded in chain sequences thus opening up the possibility of long-term memory and evolvability in such systems.
[ { "type": "A", "before": null, "after": "material", "start_char_pos": 144, "end_char_pos": 144 }, { "type": "R", "before": "oligomers", "after": "chains", "start_char_pos": 570, "end_char_pos": 579 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 602, "end_char_pos": 602 }, { "type": "R", "before": "that predicts the parameters for the onset of autocatalysis and the", "after": "supported by numerical simulations, which predicts the", "start_char_pos": 634, "end_char_pos": 701 }, { "type": "R", "before": ",", "after": "and the onset of autocatalysis", "start_char_pos": 732, "end_char_pos": 733 }, { "type": "R", "before": "monomer concentration,", "after": "the overall monomer concentration", "start_char_pos": 746, "end_char_pos": 768 }, { "type": "A", "before": null, "after": "of our study", "start_char_pos": 824, "end_char_pos": 824 }, { "type": "A", "before": null, "after": "each of", "start_char_pos": 915, "end_char_pos": 915 }, { "type": "R", "before": "Template-assisted", "after": "The template-assisted", "start_char_pos": 936, "end_char_pos": 953 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 1000, "end_char_pos": 1000 }, { "type": "R", "before": "oligomer", "after": "chain", "start_char_pos": 1024, "end_char_pos": 1032 }, { "type": "R", "before": "of", "after": "in", "start_char_pos": 1112, "end_char_pos": 1114 } ]
[ 0, 99, 178, 393, 581, 804, 935 ]
1405.3512
1
We investigate the relevance between quantum open systems and stock markets. A Quantum Brownian motion model is proposed for studying the interaction between the Brownian system and the reservoir, i. e., the stock index and the entire stock market. Based on the model, we investigate the Shanghai Stock Exchange of China from perspective of quantum statistics, and thereby examine the behaviors of the stock index violating the efficient market hypothesis, such as fat-tail phenomena and non-Markovian features. Our interdisciplinary works thus help to discovery the underlying quantum characteristics of stock markets and develop new research fields of econophysics .
It is believed by the majority today that the efficient market hypothesis is imperfect because of market irrationality. Using the physical concepts and mathematical structures of quantum mechanics, we construct an econophysics framework for the stock market, based on which we analogously map massive numbers of single stocks into a reservoir consisting of many quantum harmonic oscillators and their stock index into a typical quantum open system--a quantum Brownian particle. In particular, the irrationality of stock transactions is quantitatively considered as the Planck constant within Heisenberg's uncertainty relationship of quantum mechanics in an analogous manner. We analyze real stock data of Shanghai Stock Exchange of China and investigate fat-tail phenomena and non-Markovian behaviors of the stock index with the assistance of the quantum Brownian motion model, thereby interpreting and studying the limitations of the classical Brownian motion model for the efficient market hypothesis from a new perspective of quantum open system dynamics .
[ { "type": "R", "before": "We investigate the relevance between quantum open systems and stock markets. A Quantum Brownian motion model is proposed for studying the interaction between the Brownian system and the reservoir, i. e., the stock index and the entire stock market. Based on the model, we investigate the", "after": "It is believed by the majority today that the efficient market hypothesis is imperfect because of market irrationality. Using the physical concepts and mathematical structures of quantum mechanics, we construct an econophysics framework for the stock market, based on which we analogously map massive numbers of single stocks into a reservoir consisting of many quantum harmonic oscillators and their stock index into a typical quantum open system--a quantum Brownian particle. In particular, the irrationality of stock transactions is quantitatively considered as the Planck constant within Heisenberg's uncertainty relationship of quantum mechanics in an analogous manner. We analyze real stock data of", "start_char_pos": 0, "end_char_pos": 287 }, { "type": "R", "before": "from perspective of quantum statistics, and thereby examine the", "after": "and investigate fat-tail phenomena and non-Markovian", "start_char_pos": 321, "end_char_pos": 384 }, { "type": "R", "before": "violating the efficient market hypothesis, such as fat-tail phenomena and non-Markovian features. Our interdisciplinary works thus help to discovery the underlying quantum characteristics of stock markets and develop new research fields of econophysics", "after": "with the assistance of the quantum Brownian motion model, thereby interpreting and studying the limitations of the classical Brownian motion model for the efficient market hypothesis from a new perspective of quantum open system dynamics", "start_char_pos": 414, "end_char_pos": 666 } ]
[ 0, 76, 248, 511 ]
1405.3867
1
The issue of the nucleation and slow closure mechanisms of denaturation bubbles in DNA is tackled using coarse-grained MetaDynamics and Brownian simulations. A minimal mesoscopic model is used where the double helix is made of two interacting bead-spring freely rotating strands with a prescribed torsional modulus in the duplex state. We demonstrate that timescales for the nucleation (resp. closure) of an approximately 10 base-pairs bubble, in agreement with experiments, are associated to the crossing of a free-energy barrier of 22~k_{\rm B}T (resp. 13~k_{\rm B}T) at room temperature T. MetaDynamics allows us to reconstruct accurately the free-energy landscape, to show that the free-energy barriers come from the difference in torsional energy between the bubble and duplex states, to highlight the limiting step, a collective twisting, that controls the nucleation/closure mechanism, and to access opening time scales on the millisecond range .
The issue of the nucleation and slow closure mechanisms of non superhelical stress-induced denaturation bubbles in DNA is tackled using coarse-grained MetaDynamics and Brownian simulations. A minimal mesoscopic model is used where the double helix is made of two interacting bead-spring rotating strands with a prescribed torsional modulus in the duplex state. We demonstrate that timescales for the nucleation (resp. closure) of an approximately 10 base-pair bubble, in agreement with experiments, are associated with the crossing of a free-energy barrier of 22~k_{\rm B}T (resp. 13~k_{\rm B}T) at room temperature T. MetaDynamics allows us to reconstruct accurately the free-energy landscape, to show that the free-energy barriers come from the difference in torsional energy between the bubble and duplex states, and thus to highlight the limiting step, a collective twisting, that controls the nucleation/closure mechanism, and to access opening time scales on the millisecond range . Contrary to small breathing bubbles, these more than 4~base-pair bubbles are of biological relevance, for example when a preexisting state of denaturation is required by specific DNA-binding proteins .
[ { "type": "A", "before": null, "after": "non superhelical stress-induced", "start_char_pos": 59, "end_char_pos": 59 }, { "type": "D", "before": "freely", "after": null, "start_char_pos": 256, "end_char_pos": 262 }, { "type": "R", "before": "base-pairs", "after": "base-pair", "start_char_pos": 426, "end_char_pos": 436 }, { "type": "R", "before": "to", "after": "with", "start_char_pos": 491, "end_char_pos": 493 }, { "type": "A", "before": null, "after": "and thus", "start_char_pos": 791, "end_char_pos": 791 }, { "type": "A", "before": null, "after": ". Contrary to small breathing bubbles, these more than 4~base-pair bubbles are of biological relevance, for example when a preexisting state of denaturation is required by specific DNA-binding proteins", "start_char_pos": 954, "end_char_pos": 954 } ]
[ 0, 158, 336, 593 ]
1405.4421
1
Following a hedging based approach to model free financial mathematics, we prove that it is possible to make an arbitrarily large profit by investing in those one-dimensional paths which do not possess local times. The local time is constructed from discrete approximations, and it is shown that it is of finite p-variation for all p> 2. Additionally, we provide various generalizations of F\"ollmer's pathwise It\^o formula.
Following a hedging based approach to model free financial mathematics, we prove that it should be possible to make an arbitrarily large profit by investing in those one-dimensional paths which do not possess local times. The local time is constructed from discrete approximations, and it is shown that it is \alpha-H\"older continuous for all \alpha<1/ 2. Additionally, we provide various generalizations of F\"ollmer's pathwise It\^o formula.
[ { "type": "R", "before": "is", "after": "should be", "start_char_pos": 89, "end_char_pos": 91 }, { "type": "R", "before": "of finite p-variation for all p>", "after": "\\alpha-H\\\"older continuous for all \\alpha<1/", "start_char_pos": 302, "end_char_pos": 334 } ]
[ 0, 214 ]
1405.4474
1
We are concerned with stochastic modeling of financial risk based on a reference filtration \mathbb{F the progressive enlargement of F with t. We prove the fact that , if no-arbitrage of the first kind holds on S in F, the process S ^{\mathfrak{t}-} also has the property of no-arbitrage of the first kind in \mathbb{G}. This result has a natural interpretation in application, when S denotes the gain process of a hedging strategy .
Let \mathbb{F the progressive enlargement of F with t. Let S be a \mathbb{F in F, we find conditions which ensure the \mathtt{NA the processes S ^{\mathfrak{t}-} and S^\mathfrak{t in \mathbb{G}. We also discuss the relevance of the progressive enlargement of filtration technique used to obtain the results .
[ { "type": "R", "before": "We are concerned with stochastic modeling of financial risk based on a reference filtration \\mathbb{F", "after": "Let \\mathbb{F", "start_char_pos": 0, "end_char_pos": 101 }, { "type": "R", "before": "We prove the fact that , if no-arbitrage of the first kind holds on S", "after": "Let S be a \\mathbb{F", "start_char_pos": 143, "end_char_pos": 212 }, { "type": "A", "before": null, "after": "we find conditions which ensure the \\mathtt{NA", "start_char_pos": 219, "end_char_pos": 219 }, { "type": "R", "before": "process S", "after": "processes S", "start_char_pos": 224, "end_char_pos": 233 }, { "type": "R", "before": "also has the property of no-arbitrage of the first kind", "after": "and S^\\mathfrak{t", "start_char_pos": 251, "end_char_pos": 306 }, { "type": "R", "before": "This result has a natural interpretation in application, when S denotes the gain process of a hedging strategy", "after": "We also discuss the relevance of the progressive enlargement of filtration technique used to obtain the results", "start_char_pos": 322, "end_char_pos": 432 } ]
[ 0, 142, 321 ]
1405.4474
2
Let F be a filtration on some probability space and let \mathfrak{t be two filtrations and S be a F semimartingale possessing a F local martingale deflator. Consider \tau a G stopping } time. We denote by \mathbb{G or S^{\tau} can have \mathbb{G} local martingale deflators. A suitable theoretical framework is set up in this paper, within which necessary/sufficient conditions for the problem to be solved have been proved. Under these conditions, we will construct \mathbb{G} local martingale deflators for S^{\tau-} or for S^{\tau}. Among others, it is proved that \mathbb{G} local martingale deflators are multiples of \mathbb{F} local martingale deflators, with a multiplicator coming from the multiplicative decomposition of the Az\'ema supermartingale of \tau. The proofs of the necessary/sufficient conditions require various results to be established about Az\'ema supermartingale, about local martingale deflator, about filtration enlargement, which are interesting in themselves. Our study is based on a filtration enlargement setting. For applications, it is important to have a method to infer the existence of such setting from the knowledge of the market information. This question is discussed at the end of the paper} .
Let F \subset \mathbb{G} be two filtrations and S be a F semimartingale possessing a F local martingale deflator. Consider \tau a G stopping time. We study the problem whether S^{\tau-} or S^{\tau} can have \mathbb{G} local martingale deflators. A suitable theoretical framework is set up in this paper, within which necessary/sufficient conditions for the problem to be solved have been proved. Under these conditions, we will construct \mathbb{G} local martingale deflators for S^{\tau-} or for S^{\tau}. Among others, it is proved that \mathbb{G} local martingale deflators are multiples of \mathbb{F} local martingale deflators, with a multiplicator coming from the multiplicative decomposition of the Az\'ema supermartingale of \tau. The proofs of the necessary/sufficient conditions require various results to be established about Az\'ema supermartingale, about local martingale deflator, about filtration enlargement, which are interesting in themselves. Our study is based on a filtration enlargement setting. For applications, it is important to have a method to infer the existence of such setting from the knowledge of the market information. This question is discussed at the end of the paper .
[ { "type": "R", "before": "be a filtration on some probability space and let \\mathfrak{t", "after": "\\subset \\mathbb{G", "start_char_pos": 6, "end_char_pos": 67 }, { "type": "R", "before": "denote by \\mathbb{G", "after": "study the problem whether S^{\\tau-", "start_char_pos": 195, "end_char_pos": 214 } ]
[ 0, 156, 191, 274, 424, 535, 767, 990, 1046, 1182 ]
1405.4905
1
This paper is concerned with the utility-based risk of a financial position in a multi-asset market with frictions. Risk is quantified by set-valued risk measures, and market frictions are modeled by conical/convex random solvency regions representing proportional transaction costs or illiquidity effects, and convex random sets representing trading constraints. First, with a general set-valued risk measure, the effect of having trading opportunities on the risk measure is considered, and a corresponding dual representation theorem is given. Then, assuming individual utility functions for the assets, utility-based shortfall and divergence risk measures are defined , which form two classes of set-valued convex risk measures . Minimal penalty functions are computed in terms of the vector versions of the well-known divergence functionals (generalized relative entropy). As special cases, set-valued versions of the entropic risk measure and the average value at risk are obtained. The general results on the effect of market frictions are applied to the utility-based framework and conditions concerning applicability are presented .
Risk measures for multivariate financial positions are studied in a utility-based framework. Under a certain incomplete preference relation, shortfall and divergence risk measures are defined as the optimal values of specific set minimization problems. The dual relationship between these two classes of multivariate risk measures is constructed via a recent Lagrange duality for set optimization. In particular, it is shown that a shortfall risk measure can be written as an intersection over a family of divergence risk measures indexed by a scalarization parameter. Examples include set-valued versions of the entropic risk measure and the average value at risk . As a second step, the minimization of these risk measures subject to trading opportunities is studied in a general convex market in discrete time. The optimal value of the minimization problem, called the market risk measure, is also a set-valued risk measure. A dual representation for the market risk measure that decomposes the effects of the original risk measure and the frictions of the market is proved .
[ { "type": "R", "before": "This paper is concerned with the", "after": "Risk measures for multivariate financial positions are studied in a", "start_char_pos": 0, "end_char_pos": 32 }, { "type": "R", "before": "risk of a financial position in a multi-asset market with frictions. Risk is quantified by set-valued risk measures, and market frictions are modeled by conical/convex random solvency regions representing proportional transaction costs or illiquidity effects, and convex random sets representing trading constraints. First, with a general set-valued risk measure, the effect of having trading opportunities on the risk measure is considered, and a corresponding dual representation theorem is given. Then, assuming individual utility functions for the assets, utility-based", "after": "framework. Under a certain incomplete preference relation,", "start_char_pos": 47, "end_char_pos": 620 }, { "type": "R", "before": ", which form", "after": "as the optimal values of specific set minimization problems. The dual relationship between these", "start_char_pos": 672, "end_char_pos": 684 }, { "type": "A", "before": null, "after": "multivariate risk measures is constructed via a recent Lagrange duality for set optimization. In particular, it is shown that a shortfall risk measure can be written as an intersection over a family of divergence risk measures indexed by a scalarization parameter. Examples include", "start_char_pos": 700, "end_char_pos": 700 }, { "type": "D", "before": "convex risk measures . Minimal penalty functions are computed in terms of the vector versions of the well-known divergence functionals (generalized relative entropy). As special cases, set-valued", "after": null, "start_char_pos": 712, "end_char_pos": 907 }, { "type": "R", "before": "are obtained. The general results on the effect of market frictions are applied to the utility-based framework and conditions concerning applicability are presented", "after": ". As a second step, the minimization of these risk measures subject to trading opportunities is studied in a general convex market in discrete time. The optimal value of the minimization problem, called the market risk measure, is also a set-valued risk measure. A dual representation for the market risk measure that decomposes the effects of the original risk measure and the frictions of the market is proved", "start_char_pos": 976, "end_char_pos": 1140 } ]
[ 0, 115, 363, 546, 734, 878, 989 ]
1405.5230
1
We consider a stochastic model for the dynamics of the two-sided limit order book (LOB). For the joint dynamics of best bid and ask prices and the standing buy and sell volume densities, we derive a functional limit theorem, which states that our LOB model converges to a continuous-time limit when the order arrival rates tend to infinity , the impact of an individual order arrival on the book as well as the tick size tend to zero. The limits of the standing buy and sell volume densities are described by two linear stochastic partial differential equations, which are coupled with a two-dimensional reflected Brownian motion that is the limit of the best bid and ask price processes .
We consider a stochastic model for the dynamics of the two-sided limit order book (LOB). Our model is flexible enough to allow for a dependence of the price dynamics on volumes. For the joint dynamics of best bid and ask prices and the standing buy and sell volume densities, we derive a functional limit theorem, which states that our LOB model converges in distribution to a coupled SDE-SPDE system when the order arrival rates tend to infinity and the impact of an individual order arrival on the book as well as the tick size tends to zero. The SDE describes the bid/ask price dynamics while the SPDE describes the volume dynamics .
[ { "type": "A", "before": null, "after": "Our model is flexible enough to allow for a dependence of the price dynamics on volumes.", "start_char_pos": 89, "end_char_pos": 89 }, { "type": "R", "before": "to a continuous-time limit", "after": "in distribution to a coupled SDE-SPDE system", "start_char_pos": 268, "end_char_pos": 294 }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 341, "end_char_pos": 342 }, { "type": "R", "before": "tend", "after": "tends", "start_char_pos": 422, "end_char_pos": 426 }, { "type": "R", "before": "limits of the standing buy and sell volume densities are described by two linear stochastic partial differential equations, which are coupled with a two-dimensional reflected Brownian motion that is the limit of the best bidand ask price processes", "after": "SDE describes the bid/ask price dynamics while the SPDE describes the volume dynamics", "start_char_pos": 440, "end_char_pos": 687 } ]
[ 0, 88, 435 ]
1405.6104
1
We introduce a variational approximation to the microscopic dynamics of rare conformational transitions of macromolecules. We show that within this framework it is possible to simulate on a small computer cluster conformational reactions as complex as protein folding, using state-of-the-art all-atom force fields in explicit solvent. The same approach also yields the potential of mean-force for reaction coordinates, the reaction rate and transition path time. For illustration and validation purposes, we test this method against the results of protein folding MD simulations which were obtained on the Anton supercomputer, using the same all-atom force field . We find that our approach yields consistent results at a computational cost which is many orders of magnitude smaller than that required by standard MD simulations .
We introduce a variational approximation to the microscopic dynamics of rare conformational transitions of macromolecules. Within this framework it is possible to simulate on a small computer cluster reactions as complex as protein folding, using state of the art all-atom force fields in explicit solvent. We test this method against molecular dynamics (MD) simulations of the folding of an alpha- and a beta-protein performed with the same all-atom force field on the Anton supercomputer . We find that our approach yields results consistent with those of MD simulations, at a computational cost orders of magnitude smaller .
[ { "type": "R", "before": "We show that within", "after": "Within", "start_char_pos": 123, "end_char_pos": 142 }, { "type": "D", "before": "conformational", "after": null, "start_char_pos": 213, "end_char_pos": 227 }, { "type": "R", "before": "state-of-the-art", "after": "state of the art", "start_char_pos": 275, "end_char_pos": 291 }, { "type": "R", "before": "The same approach also yields the potential of mean-force for reaction coordinates, the reaction rate and transition path time. For illustration and validation purposes, we", "after": "We", "start_char_pos": 335, "end_char_pos": 507 }, { "type": "R", "before": "the results of protein folding MDsimulations which were obtained on the Anton supercomputer, using", "after": "molecular dynamics (MD) simulations of the folding of an alpha- and a beta-protein performed with", "start_char_pos": 533, "end_char_pos": 631 }, { "type": "A", "before": null, "after": "on the Anton supercomputer", "start_char_pos": 662, "end_char_pos": 662 }, { "type": "R", "before": "consistent results", "after": "results consistent with those of MD simulations,", "start_char_pos": 698, "end_char_pos": 716 }, { "type": "D", "before": "which is many", "after": null, "start_char_pos": 741, "end_char_pos": 754 }, { "type": "D", "before": "than that required by standard MD simulations", "after": null, "start_char_pos": 783, "end_char_pos": 828 } ]
[ 0, 122, 334, 462, 664 ]
1405.6400
1
We investigate the role of networks of military alliances in preventing or encouraging wars between groups of countries. A country is vulnerable to attack if some allied group of countries can defeat the defending country and its (remaining) allies based on their collective military strengths. We show that there do not exist any networks which contain no vulnerable countries and that are stable against the pairwise addition of a new alliance as well as against the unilateral deletion of any existing alliance . We then show that economic benefits from international trade provide incentives to form alliances in ways that restore stability and prevent wars, both by increasing the density of alliances so that countries are less vulnerable and by removing the incentives of countries to attack their allies. In closing, we examine historical data on interstate wars and trade, noting that a dramatic (more than ten-fold) drop in the rate of interstate wars since 1950 is paralleled by the advent of nuclear weapons and an unprecedented growth in trade over the same period, matched with a similar densification and stabilization of alliances, consistent with the model .
We investigate the role of networks of alliances in preventing (multilateral) interstate wars. We first show that, in the absence of international trade, no network of alliances is peaceful and stable . We then show that international trade induces peaceful and stable networks: trade increases the density of alliances so that countries are less vulnerable to attack and also reduces countries' incentives to attack an ally. We present historical data on wars and trade, noting that the dramatic drop in interstate wars since 1950 , and accompanying densification and stabilization of alliances, are consistent with the model but not other prominent theories .
[ { "type": "D", "before": "military", "after": null, "start_char_pos": 39, "end_char_pos": 47 }, { "type": "R", "before": "or encouraging wars between groups of countries. A country is vulnerable to attack if some allied group of countries can defeat the defending country and its (remaining) allies based on their collective military strengths. We show that there do not exist any networks which contain no vulnerable countries and that are stable against the pairwise addition of a new alliance as well as against the unilateral deletion of any existing alliance", "after": "(multilateral) interstate wars. We first show that, in the absence of international trade, no network of alliances is peaceful and stable", "start_char_pos": 72, "end_char_pos": 513 }, { "type": "R", "before": "economic benefits from international trade provide incentives to form alliances in ways that restore stability and prevent wars, both by increasing", "after": "international trade induces peaceful and stable networks: trade increases", "start_char_pos": 534, "end_char_pos": 681 }, { "type": "R", "before": "and by removing the incentives of countriesto attack their allies. In closing, we examine", "after": "to attack and also reduces countries' incentives to attack an ally. We present", "start_char_pos": 745, "end_char_pos": 834 }, { "type": "D", "before": "interstate", "after": null, "start_char_pos": 854, "end_char_pos": 864 }, { "type": "R", "before": "a dramatic (more than ten-fold) drop in the rate of", "after": "the dramatic drop in", "start_char_pos": 893, "end_char_pos": 944 }, { "type": "R", "before": "is paralleled by the advent of nuclear weapons and an unprecedented growth in trade over the same period, matched with a similar", "after": ", and accompanying", "start_char_pos": 972, "end_char_pos": 1100 }, { "type": "A", "before": null, "after": "are", "start_char_pos": 1147, "end_char_pos": 1147 }, { "type": "A", "before": null, "after": "but not other prominent theories", "start_char_pos": 1174, "end_char_pos": 1174 } ]
[ 0, 120, 294, 515, 811 ]
1405.7013
1
We study translocation dynamics of a driven compressible semi-flexible chain consisting of alternate blocks of stiff (S) and flexible (F) segments of size m and n respectively for different chain length N . The free parameters in the model are the bending rigidity \kappa_b which controls the three body interaction term, the elastic constant k_F in the FENE (bond) potential between successive monomers, as well as the block lengths m and n and the repeat unit p (N=m_pn_p) . We demonstrate that the due to change in the entropic barrier and the inhomogeneous friction on the chain a variety of scenario are possible amply manifested in the incremental mean first passage time (IMFPT) or in the waiting time distribution of the translocating chain. These informations can be deconvoluted to extract information about the mechanical properties of the chain at various length scales and thus can be used to nanopore based methods to probe biomolecules , such as DNA, RNA and proteins.
We study translocation dynamics of a driven compressible semi-flexible chain consisting of alternate blocks of stiff (S) and flexible (F) segments of size m and n respectively for different chain length N in two dimension (2D) . The free parameters in the model are the bending rigidity \kappa_b which controls the three body interaction term, the elastic constant k_F in the FENE (bond) potential between successive monomers, as well as the segmental lengths m and n and the repeat unit p (N=m_pn_p) and the solvent viscosity \gamma . We demonstrate that due to the change in entropic barrier and the inhomogeneous viscous drag on the chain backbone a variety of scenarios are possible amply manifested in the waiting time distribution of the translocating chain. These information can be deconvoluted to extract the mechanical properties of the chain at various length scales and thus can be used to nanopore based methods to probe bio-molecules , such as DNA, RNA and proteins.
[ { "type": "A", "before": null, "after": "in two dimension (2D)", "start_char_pos": 205, "end_char_pos": 205 }, { "type": "R", "before": "block", "after": "segmental", "start_char_pos": 421, "end_char_pos": 426 }, { "type": "A", "before": null, "after": "and the solvent viscosity \\gamma", "start_char_pos": 476, "end_char_pos": 476 }, { "type": "R", "before": "the due to change in the", "after": "due to the change in", "start_char_pos": 499, "end_char_pos": 523 }, { "type": "R", "before": "friction", "after": "viscous drag", "start_char_pos": 563, "end_char_pos": 571 }, { "type": "A", "before": null, "after": "backbone", "start_char_pos": 585, "end_char_pos": 585 }, { "type": "R", "before": "scenario", "after": "scenarios", "start_char_pos": 599, "end_char_pos": 607 }, { "type": "D", "before": "incremental mean first passage time (IMFPT) or in the", "after": null, "start_char_pos": 645, "end_char_pos": 698 }, { "type": "R", "before": "informations", "after": "information", "start_char_pos": 759, "end_char_pos": 771 }, { "type": "D", "before": "information about", "after": null, "start_char_pos": 803, "end_char_pos": 820 }, { "type": "R", "before": "biomolecules", "after": "bio-molecules", "start_char_pos": 941, "end_char_pos": 953 } ]
[ 0, 120, 137, 366, 752 ]
1405.7081
1
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that was used to identify groups of coevolving residues termed "sectors". The method applies spectral analysis to a matrix obtained by combining correlation information with single-site statistics . It has been reported in a number of studies that the protein sectors found by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. We analyze the available experimental data and show that for proteins where a single SCA sector is identified, the functionally-significant residues can also be found using single-site statistics such as conservation. We thus point to the need for more data for the cases in which several sectors are predicted by SCA .
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that was used to identify groups of coevolving residues termed "sectors". The method applies spectral analysis to a matrix obtained by combining correlation information with sequence conservation . It has been asserted that the protein sectors identified by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. Here we reconsider the available experimental data and note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is the dominating factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation .
[ { "type": "R", "before": "single-site statistics", "after": "sequence conservation", "start_char_pos": 265, "end_char_pos": 287 }, { "type": "R", "before": "reported in a number of studies", "after": "asserted", "start_char_pos": 302, "end_char_pos": 333 }, { "type": "R", "before": "found", "after": "identified", "start_char_pos": 359, "end_char_pos": 364 }, { "type": "R", "before": "We analyze", "after": "Here we reconsider", "start_char_pos": 486, "end_char_pos": 496 }, { "type": "R", "before": "show that for proteins where a single SCA sectoris identified, the functionally-significant residues can also be found using single-site statistics such as conservation. We thus point to", "after": "note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is", "start_char_pos": 533, "end_char_pos": 719 }, { "type": "R", "before": "need for more data for the cases in which several sectors are predicted by SCA", "after": "dominating factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation", "start_char_pos": 724, "end_char_pos": 802 } ]
[ 0, 164, 289, 485, 702 ]
1406.0389
1
The largest US banks are required by regulatory mandate to estimate the operational risk capital they must hold using an Advanced Measurement Approach (AMA) as defined by the Basel II/III Accords. Most use the Loss Distribution Approach (LDA) which defines the aggregate loss distribution as the convolution of a frequency and a severity distribution representing the number and magnitude of losses, respectively. Estimated capital is a Value-at-Risk (99.9th percentile) estimate of this annual loss distribution. In practice, the severity distribution drives the capital estimate, which is essentially a very high quantile of the estimated severity distribution. Unfortunately, because the relevant severities are heavy-tailed AND the quantiles being estimated are so high, VaR is a convex function of the severity parameters, so all widely-used estimators will generate biased capital estimates due to Jensen's Inequality. This capital inflation is sometimes enormous, even hundreds of millions of dollars at the unit-of-measure (UoM) level . Herein I present an estimator of capital that essentially eliminates this upward bias. The Reduced-bias Capital Estimator (RCE) is more consistent with the regulatory intent of the LDA framework than implementations that fail to mitigate, if not eliminate this bias. RCE also notably increases the precision of the capital estimate and consistently increases its robustness to violations of the i.i.d. data presumption (which are endemic to operational risk loss event data). So with greater capital accuracy, precision, and robustness, RCE lowers capital requirements at both the UoM and enterprise levels, increases capital stability from quarter to quarter, ceteris paribus, and does both while more accurately and precisely reflecting regulatory intent. RCE is straightforward to explain, understand, and implement using any major statistical software package.
The largest US banks are required by regulatory mandate to estimate the operational risk capital they must hold using an Advanced Measurement Approach (AMA) as defined by the Basel II/III Accords. Most use the Loss Distribution Approach (LDA) which defines the aggregate loss distribution as the convolution of a frequency and a severity distribution representing the number and magnitude of losses, respectively. Estimated capital is a Value-at-Risk (99.9th percentile) estimate of this annual loss distribution. In practice, the severity distribution drives the capital estimate, which is essentially a very high quantile of the estimated severity distribution. Unfortunately, because the relevant severities are heavy-tailed AND the quantiles being estimated are so high, VaR always appears to be a convex function of the severity parameters, causing all widely-used estimators to generate biased capital estimates due to Jensen's Inequality. The observed capital inflation is sometimes enormous, even at the unit-of-measure (UoM) level (hundreds of millions USD) . Herein I present an estimator of capital that essentially eliminates this upward bias. The Reduced-bias Capital Estimator (RCE) is more consistent with the regulatory intent of the LDA framework than implementations that fail to mitigate, if not eliminate this bias. RCE also notably increases the precision of the capital estimate and consistently increases its robustness to violations of the i.i.d. data presumption (which are endemic to operational risk loss event data). So with greater capital accuracy, precision, and robustness, RCE lowers capital requirements at both the UoM and enterprise levels, increases capital stability from quarter to quarter, ceteris paribus, and does both while more accurately and precisely reflecting regulatory intent. RCE is straightforward to implement using any major statistical software package.
[ { "type": "R", "before": "is", "after": "always appears to be", "start_char_pos": 779, "end_char_pos": 781 }, { "type": "R", "before": "so", "after": "causing", "start_char_pos": 828, "end_char_pos": 830 }, { "type": "R", "before": "will", "after": "to", "start_char_pos": 858, "end_char_pos": 862 }, { "type": "R", "before": "This", "after": "The observed", "start_char_pos": 925, "end_char_pos": 929 }, { "type": "D", "before": "hundreds of millions of dollars", "after": null, "start_char_pos": 976, "end_char_pos": 1007 }, { "type": "A", "before": null, "after": "(hundreds of millions USD)", "start_char_pos": 1043, "end_char_pos": 1043 }, { "type": "D", "before": "explain, understand, and", "after": null, "start_char_pos": 1830, "end_char_pos": 1854 } ]
[ 0, 196, 413, 513, 663, 924, 1045, 1132, 1312, 1521, 1803 ]
1406.0389
2
The largest US banks are required by regulatory mandate to estimate the operational risk capital they must hold using an Advanced Measurement Approach (AMA) as defined by the Basel II/III Accords. Most use the Loss Distribution Approach (LDA) which defines the aggregate loss distribution as the convolution of a frequency and a severity distribution representing the number and magnitude of losses, respectively. Estimated capital is a Value-at-Risk (99.9th percentile) estimate of this annual loss distribution. In practice, the severity distribution drives the capital estimate, which is essentially a very high quantile of the estimated severity distribution. Unfortunately, because the relevant severities are heavy-tailed AND the quantiles being estimated are so high, VaR always appears to be a convex function of the severity parameters, causing all widely-used estimators to generate biased capital estimates due to Jensen's Inequality. The observed capital inflation is sometimes enormous, even at the unit-of-measure (UoM) level ( hundreds of millions USD). Herein I present an estimator of capital that essentially eliminates this upward bias. The Reduced-bias Capital Estimator (RCE) is more consistent with the regulatory intent of the LDA framework than implementations that fail to mitigate , if not eliminate this bias. RCE also notably increases the precision of the capital estimate and consistently increases its robustness to violations of the i.i.d. data presumption (which are endemic to operational risk loss event data). So with greater capital accuracy, precision, and robustness, RCE lowers capital requirements at both the UoM and enterprise levels, increases capital stability from quarter to quarter, ceteris paribus, and does both while more accurately and precisely reflecting regulatory intent. RCE is straightforward to implement using any major statistical software package.
The largest US banks are required by regulatory mandate to estimate the operational risk capital they must hold using an Advanced Measurement Approach (AMA) as defined by the Basel II/III Accords. Most use the Loss Distribution Approach (LDA) which defines the aggregate loss distribution as the convolution of a frequency and a severity distribution representing the number and magnitude of losses, respectively. Estimated capital is a Value-at-Risk (99.9th percentile) estimate of this annual loss distribution. In practice, the severity distribution drives the capital estimate, which is essentially a very high quantile of the estimated severity distribution. Unfortunately, because the relevant severities are heavy-tailed AND the quantiles being estimated are so high, VaR always appears to be a convex function of the severity parameters, causing all widely-used estimators to generate biased capital estimates (apparently) due to Jensen's Inequality. The observed capital inflation is sometimes enormous, even at the unit-of-measure (UoM) level ( even billions USD). Herein I present an estimator of capital that essentially eliminates this upward bias. The Reduced-bias Capital Estimator (RCE) is more consistent with the regulatory intent of the LDA framework than implementations that fail to mitigate this bias. RCE also notably increases the precision of the capital estimate and consistently increases its robustness to violations of the i.i.d. data presumption (which are endemic to operational risk loss event data). So with greater capital accuracy, precision, and robustness, RCE lowers capital requirements at both the UoM and enterprise levels, increases capital stability from quarter to quarter, ceteris paribus, and does both while more accurately and precisely reflecting regulatory intent. RCE is straightforward to implement using any major statistical software package.
[ { "type": "A", "before": null, "after": "(apparently)", "start_char_pos": 918, "end_char_pos": 918 }, { "type": "R", "before": "hundreds of millions", "after": "even billions", "start_char_pos": 1043, "end_char_pos": 1063 }, { "type": "D", "before": ", if not eliminate", "after": null, "start_char_pos": 1308, "end_char_pos": 1326 } ]
[ 0, 196, 413, 513, 663, 946, 1069, 1156, 1337, 1546, 1828 ]
1406.0399
1
Understanding the regulation and structure of the eukaryotic ribosome is essential to understanding protein synthesis and its deregulation in disease. Traditionally ribosomes are believed to have a fixed stoichiometry among their core ribosomal proteins (RPs), but recent experiments suggest a more variable composition. Reconciling these views requires direct and precise quantification of RPs. We used mass-spectrometry to directly quantify RPs across monosomes and polysomes of budding yeast and mouse embryonic stem cells (ESC) . Our data show that the stoichiometry among core RPs in wild-type yeast cells and ESC depends both on the growth conditions and on the number of ribosomes bound per mRNA. Furthermore, we find that the fitness of cells with a deleted RP-gene is inversely proportional to the enrichment of the corresponding RP in ribosomes bound to multiple mRNAs . Together, our findings support the existence of ribosomes with distinct protein composition and physiological function.
Understanding the regulation and structure of ribosomes is essential to understanding protein synthesis and its deregulation in disease. While ribosomes are believed to have a fixed stoichiometry among their core ribosomal proteins (RPs), some experiments suggest a more variable composition. Testing such variability requires direct and precise quantification of RPs. We used mass-spectrometry to directly quantify RPs across monosomes and polysomes of mouse embryonic stem cells (ESC) and budding yeast . Our data show that the stoichiometry among core RPs in wild-type yeast cells and ESC depends both on the growth conditions and on the number of ribosomes bound per mRNA. Furthermore, we find that the fitness of cells with a deleted RP-gene is inversely proportional to the enrichment of the corresponding RP in polysomes . Together, our findings support the existence of ribosomes with distinct protein composition and physiological function.
[ { "type": "R", "before": "the eukaryotic ribosome", "after": "ribosomes", "start_char_pos": 46, "end_char_pos": 69 }, { "type": "R", "before": "Traditionally", "after": "While", "start_char_pos": 151, "end_char_pos": 164 }, { "type": "R", "before": "but recent", "after": "some", "start_char_pos": 261, "end_char_pos": 271 }, { "type": "R", "before": "Reconciling these views", "after": "Testing such variability", "start_char_pos": 321, "end_char_pos": 344 }, { "type": "D", "before": "budding yeast and", "after": null, "start_char_pos": 481, "end_char_pos": 498 }, { "type": "A", "before": null, "after": "and budding yeast", "start_char_pos": 532, "end_char_pos": 532 }, { "type": "R", "before": "ribosomes bound to multiple mRNAs", "after": "polysomes", "start_char_pos": 846, "end_char_pos": 879 } ]
[ 0, 150, 320, 395, 531 ]
1406.0496
1
We present a set of analyses aiming at quantifying the amount of information filtered by different hierarchical clustering methods on correlations between stock returns . In particular we apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree (DBHT), and we compare it with other methods including the Linkage and k-medoids. In particular by taking the industrial sector classification of stocks as a benchmark partition we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree outperforms the other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis also reveals that the different methods show different degrees of sensitivity to financial events , like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns comparing it with the underlying industrial activity structure. Specifically, we apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree and we compare it with other methods including the Linkage and k-medoids. In particular , by taking the industrial sector classification of stocks as a benchmark partition , we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets , like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
[ { "type": "R", "before": "present a set of analyses aiming at quantifying", "after": "quantify", "start_char_pos": 3, "end_char_pos": 50 }, { "type": "R", "before": ". In particular", "after": "comparing it with the underlying industrial activity structure. Specifically,", "start_char_pos": 169, "end_char_pos": 184 }, { "type": "D", "before": "(DBHT),", "after": null, "start_char_pos": 313, "end_char_pos": 320 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 409, "end_char_pos": 409 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 492, "end_char_pos": 492 }, { "type": "R", "before": "outperforms the", "after": "can outperform", "start_char_pos": 621, "end_char_pos": 636 }, { "type": "A", "before": null, "after": "on a rolling window", "start_char_pos": 881, "end_char_pos": 881 }, { "type": "R", "before": "financial events", "after": "events affecting financial markets", "start_char_pos": 963, "end_char_pos": 979 } ]
[ 0, 170, 394, 560, 712, 857, 994 ]
1406.1352
1
In this paper we consider large state space continuous time Markov chains arising in the field of systems biology. For a class of such models, namely, for density dependent families , Kurtz has proposed two kinds of approximations. One is based on ordinary differential equations and provides a deterministic approximation while the other uses a diffusion process with which the resulting approximation is stochastic . The computational cost of the deterministic approximation is significantly lower but the diffusion approximation retains stochasticity and is able to reproduce relevant random features like variance, bimodality, and tail behavior that cannot be captured by a single deterministic quantity . In a recent paper, for particular stochastic Petri net models, we proposed a jump diffusion approximation that extends Kurtz's diffusion approximation to the case when the process reaches the boundary with non-negligible probability. In this paper we generalize the method further. Other limitations of the diffusion approximation are that it can provide inaccurate results when the number of objects in some groups is often or constantly low and that it can be applied only to pure density dependent Markov chains. In this paper we propose to apply the jump-diffusion approximation only to the density dependent components associated with high population levels. The remaining components are treated as discrete quantities. The resulting process is a hybrid switching jump diffusion. We show that the stochastic differential equations that characterize this process can be derived automatically both from the description of the original Markov chain or starting from the correspondent Petri net . The proposed approach is illustrated on two models .
In this paper we consider large state space continuous time Markov chains (MCs) arising in the field of systems biology. For density dependent families of MCs that represent the interaction of large groups of identical objects , Kurtz has proposed two kinds of approximations. One is based on ordinary differential equations , while the other uses a diffusion process . The computational cost of the deterministic approximation is significantly lower , but the diffusion approximation retains stochasticity and is able to reproduce relevant random features like variance, bimodality, and tail behavior . In a recent paper, for particular stochastic Petri net models, we proposed a jump diffusion approximation that aims at being applicable beyond the limits of Kurtz's diffusion approximation , namely when the process reaches the boundary with non-negligible probability. Other limitations of the diffusion approximation in its original form are that it can provide inaccurate results when the number of objects in some groups is often or constantly low and that it can be applied only to pure density dependent Markov chains. In order to overcome these drawbacks, in this paper we propose to apply the jump-diffusion approximation only to those components of the model that are in density dependent form and are associated with high population levels. The remaining components are treated as discrete quantities. The resulting process is a hybrid switching jump diffusion. We show that the stochastic differential equations that characterize this process can be derived automatically both from the description of the original Markov chains or starting from a higher level description language, like stochastic Petri nets . The proposed approach is illustrated on three models: one modeling the so called crazy clock reaction, one describing viral infection kinetics and the last considering transcription regulation .
[ { "type": "A", "before": null, "after": "(MCs)", "start_char_pos": 74, "end_char_pos": 74 }, { "type": "D", "before": "a class of such models, namely, for", "after": null, "start_char_pos": 120, "end_char_pos": 155 }, { "type": "A", "before": null, "after": "of MCs that represent the interaction of large groups of identical objects", "start_char_pos": 183, "end_char_pos": 183 }, { "type": "R", "before": "and provides a deterministic approximation", "after": ",", "start_char_pos": 282, "end_char_pos": 324 }, { "type": "D", "before": "with which the resulting approximation is stochastic", "after": null, "start_char_pos": 366, "end_char_pos": 418 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 502, "end_char_pos": 502 }, { "type": "D", "before": "that cannot be captured by a single deterministic quantity", "after": null, "start_char_pos": 652, "end_char_pos": 710 }, { "type": "R", "before": "extends", "after": "aims at being applicable beyond the limits of", "start_char_pos": 824, "end_char_pos": 831 }, { "type": "R", "before": "to the case", "after": ", namely", "start_char_pos": 864, "end_char_pos": 875 }, { "type": "D", "before": "In this paper we generalize the method further.", "after": null, "start_char_pos": 947, "end_char_pos": 994 }, { "type": "A", "before": null, "after": "in its original form", "start_char_pos": 1044, "end_char_pos": 1044 }, { "type": "A", "before": null, "after": "order to overcome these drawbacks, in", "start_char_pos": 1233, "end_char_pos": 1233 }, { "type": "R", "before": "the density dependent components", "after": "those components of the model that are in density dependent form and are", "start_char_pos": 1306, "end_char_pos": 1338 }, { "type": "R", "before": "chain", "after": "chains", "start_char_pos": 1660, "end_char_pos": 1665 }, { "type": "R", "before": "the correspondent Petri net", "after": "a higher level description language, like stochastic Petri nets", "start_char_pos": 1683, "end_char_pos": 1710 }, { "type": "R", "before": "two models", "after": "three models: one modeling the so called crazy clock reaction, one describing viral infection kinetics and the last considering transcription regulation", "start_char_pos": 1753, "end_char_pos": 1763 } ]
[ 0, 115, 233, 420, 712, 946, 994, 1229, 1378, 1439, 1499, 1712 ]
1406.1811
1
It is well known that traded foreign exchange forwards and cross currency swaps (CCS) cannot be priced applying cash and carry arguments . This paper proposes a generalized multi-currency pricing and hedging framework that allows the flexibility of choosing the perspective from which funding is managed for each currency . When cross currency basis spreads collapse to zero , this method converges to the well established single currency setting in which each leg is funded in its own currency. A worked example tests the quality of the method .
It is well known that traded foreign exchange forwards and cross currency swaps (CCS) cannot be priced applying overnight cash and carry arguments as they imply absence of funding advantage of one currency to the other . This paper proposes a heuristic present value concept for multi-currency pricing and hedging which allows taking into account the funding and therefore the collateral currency and its pricing impact. For uncollateralized operations, it provides more funding optionality to achieve either cheaper or more connected funding to the hedging instruments. When foreign exchange forwards get aligned with overnight cash and carry arguments , this method naturally converges to the well established OIS discounting where each leg is funded in its own currency. A worked example compares this approach with a benchmark .
[ { "type": "A", "before": null, "after": "overnight", "start_char_pos": 112, "end_char_pos": 112 }, { "type": "A", "before": null, "after": "as they imply absence of funding advantage of one currency to the other", "start_char_pos": 138, "end_char_pos": 138 }, { "type": "R", "before": "generalized", "after": "heuristic present value concept for", "start_char_pos": 163, "end_char_pos": 174 }, { "type": "R", "before": "framework that allows the flexibility of choosing the perspective from which funding is managed for each currency . When cross currency basis spreads collapse to zero", "after": "which allows taking into account the funding and therefore the collateral currency and its pricing impact. For uncollateralized operations, it provides more funding optionality to achieve either cheaper or more connected funding to the hedging instruments. When foreign exchange forwards get aligned with overnight cash and carry arguments", "start_char_pos": 210, "end_char_pos": 376 }, { "type": "A", "before": null, "after": "naturally", "start_char_pos": 391, "end_char_pos": 391 }, { "type": "R", "before": "single currency setting in which", "after": "OIS discounting where", "start_char_pos": 426, "end_char_pos": 458 }, { "type": "R", "before": "tests the quality of the method", "after": "compares this approach with a benchmark", "start_char_pos": 516, "end_char_pos": 547 } ]
[ 0, 140, 325, 498 ]
1406.3967
1
Most fits of Hawkes processes in financial literature are not statistically significant. Focusing on FX data (EBS limit order book) with 0.1s time resolution, we find that significance is impossible if the event log is incomplete (e. g. best quote changes). Transactions on the other hand can be tracked by analysing the respective volumes of transactions on both sides in each time slice. Assuming a constant exogenous activity rate and using parametric kernels , fits of trade activity satisfy Kolmogorov-Smirnov tests for about an hour , provided that the kernel consists of at least two exponentials; the endogeneity factor does not depend on the time of the day and is about 0.7. Significant fits of a full day can be achieved if one accounts for intra-day variability of exogenous activity, which yields a larger endogeneity factor ( 0.8 ). We could not obtain significant fits beyond a single day. Variable seasonalities are major obstacles to fitting FX activity with Hawkes processes accurately and result in larger apparent endogeneity factors and lower statistical significance .
Many fits of Hawkes processes to financial data look rather good but most of them are not statistically significant. This raises the question of what part of market dynamics this model is able to account for exactly. We document the accuracy of such processes as one varies the time interval of calibration and compare the performance of various types of kernels made up of sums of exponentials. Because of their around-the-clock opening times, FX markets are ideally suited to our aim as they allow us to avoid the complications of the long daily overnight closures of equity markets. One can achieve statistical significance according to three simultaneous tests provided that one uses kernels with two exponentials for fitting an hour at a time, and two or three exponentials for full days, while longer periods could not be fitted within statistical satisfaction because of the non-stationarity of the endogenous process. Fitted timescales are relatively short and endogeneity factor is high but sub-critical at about 0.8 .
[ { "type": "R", "before": "Most", "after": "Many", "start_char_pos": 0, "end_char_pos": 4 }, { "type": "R", "before": "in financial literature", "after": "to financial data look rather good but most of them", "start_char_pos": 30, "end_char_pos": 53 }, { "type": "R", "before": "Focusing on FX data (EBS limit order book) with 0.1s time resolution, we find that significance is impossible if the event log is incomplete (e. g. best quote changes). Transactions on the other hand can be tracked by analysing the respective volumes of transactions on both sides in each time slice. Assuming a constant exogenous activity rate and using parametric kernels , fits of trade activity satisfy Kolmogorov-Smirnov tests for about an hour , provided that the kernel consists of at least two exponentials; the endogeneity factor does not depend on the time of the day and is about 0.7. Significant fits of a full day can be achieved if one accounts for intra-day variability of exogenous activity, which yields a larger endogeneity factor (", "after": "This raises the question of what part of market dynamics this model is able to account for exactly. We document the accuracy of such processes as one varies the time interval of calibration and compare the performance of various types of kernels made up of sums of exponentials. Because of their around-the-clock opening times, FX markets are ideally suited to our aim as they allow us to avoid the complications of the long daily overnight closures of equity markets. One can achieve statistical significance according to three simultaneous tests provided that one uses kernels with two exponentials for fitting an hour at a time, and two or three exponentials for full days, while longer periods could not be fitted within statistical satisfaction because of the non-stationarity of the endogenous process. Fitted timescales are relatively short and endogeneity factor is high but sub-critical at about", "start_char_pos": 89, "end_char_pos": 839 }, { "type": "D", "before": "). We could not obtain significant fits beyond a single day. Variable seasonalities are major obstacles to fitting FX activity with Hawkes processes accurately and result in larger apparent endogeneity factors and lower statistical significance", "after": null, "start_char_pos": 844, "end_char_pos": 1088 } ]
[ 0, 88, 257, 389, 604, 846, 904 ]
1406.4297
1
This paper examines a Markovian model for the optimal irreversible investment problem of a firm aiming at minimizing total expected costs of production. We model market uncertainty and the cost of investment per unit of production capacity as two independent one-dimensional regular diffusions, and we consider a general convex running cost function. The optimization problem is set as a three-dimensional degenerate singular stochastic control problem. We provide the optimal control as the solution of a Skorohod reflection problem at a suitable free-boundary surface. Such boundary arises from the analysis of a family of two-dimensional parameter-dependent optimal stopping problems and it is characterized in terms of the family of unique continuous solutions to parameter-dependent nonlinear integral equations of Fredholm type.
This paper examines a Markovian model for the optimal irreversible investment problem of a firm aiming at minimizing total expected costs of production. We model market uncertainty and the cost of investment per unit of production capacity as two independent one-dimensional regular diffusions, and we consider a general convex running cost function. The optimization problem is set as a three-dimensional degenerate singular stochastic control problem. We provide the optimal control as the solution of a Skorohod reflection problem at a suitable boundary surface. Such boundary arises from the analysis of a family of two-dimensional parameter-dependent optimal stopping problems and it is characterized in terms of the family of unique continuous solutions to parameter-dependent nonlinear integral equations of Fredholm type.
[ { "type": "R", "before": "free-boundary", "after": "boundary", "start_char_pos": 548, "end_char_pos": 561 } ]
[ 0, 152, 350, 453, 570 ]
1406.4301
1
We propose a general framework for modeling multiple yield curves which have emerged after the last financial crisis. In a general semimartingale setting, we provide an HJM approach to model the term structure of multiplicative spreads between (normalized) FRA rates and simply compounded OIS risk-free forward rates. We derive an HJM drift and consistency condition ensuring absence of arbitrage and, in addition, we show how to construct models such that multiplicative spreads are greater than one and ordered with respect to the tenor's length. When the driving semimartingale is specified as an affine process, we obtain a flexible Markovian structure which allows for tractable valuation formulas for most interest rate derivatives . Finally, we show that the proposed framework allows to unify and extend several recent approaches to multiple yield curve modeling.
We propose a general framework for modeling multiple yield curves which have emerged after the last financial crisis. In a general semimartingale setting, we provide an HJM approach to model the term structure of multiplicative spreads between FRA rates and simply compounded OIS risk-free forward rates. We derive an HJM drift and consistency condition ensuring absence of arbitrage and, in addition, we show how to construct models such that multiplicative spreads are greater than one and ordered with respect to the tenor's length. When the driving semimartingale is specified as an affine process, we obtain a flexible Markovian structure . Finally, we show that the proposed framework allows to unify and extend several recent approaches to multiple yield curve modeling.
[ { "type": "D", "before": "(normalized)", "after": null, "start_char_pos": 244, "end_char_pos": 256 }, { "type": "D", "before": "which allows for tractable valuation formulas for most interest rate derivatives", "after": null, "start_char_pos": 657, "end_char_pos": 737 } ]
[ 0, 117, 317, 548, 739 ]
1406.5641
1
We show that a mesoscale model, with a minimal number of parameters, can describe well the thermomechanical and mechanochemical behavior of homogeneous DNA at thermal equilibrium under tension and torque. We predict critical temperatures for denaturation under torque and stretch, phase diagrams for stable DNA, probe/response profiles under mechanical loads, and the density of dsDNA as a function of stretch and twist. We find strong agreement with available single molecule manipulation experiments .
We show that a mesoscale model, with a minimal number of parameters, can well describe the thermomechanical and mechanochemical behavior of homogeneous DNA at thermal equilibrium under tension and torque. We predict critical temperatures for denaturation under torque and stretch, phase diagrams for stable DNA, probe/response profiles under mechanical loads, and the density of dsDNA as a function of stretch and twist. We compare our predictions with available single molecule manipulation experiments and find strong agreement. In particular we elucidate the difference between angularly constrained and unconstrained overstretching. We propose that the smoothness of the angularly constrained overstreching transition is a consequence of the molecule being in the vicinity of criticality for a broad range of values of applied tension .
[ { "type": "R", "before": "describe well", "after": "well describe", "start_char_pos": 73, "end_char_pos": 86 }, { "type": "R", "before": "find strong agreement", "after": "compare our predictions", "start_char_pos": 424, "end_char_pos": 445 }, { "type": "A", "before": null, "after": "and find strong agreement. In particular we elucidate the difference between angularly constrained and unconstrained overstretching. We propose that the smoothness of the angularly constrained overstreching transition is a consequence of the molecule being in the vicinity of criticality for a broad range of values of applied tension", "start_char_pos": 502, "end_char_pos": 502 } ]
[ 0, 204, 420 ]
1406.5852
1
We consider a contracting problem in which a principal hires an agent to manage a risky project. When the agent chooses volatility components of the output process and the principal observes the output continuously, the principal can compute the quadratic variation of the output, but not the individual components. This leads to moral hazard with respect to the risk choices of the agent. Using a recent theory of singular changes of measures for Ito processes, we formulate a principal-agent problem in this context, and solve it in the case of CARA preferences . In that case, the optimal contract is linear in these factors: the contractible sources of risk, including the output, the quadratic variation of the output and the cross-variations between the output and the contractible risk sources. Thus, like sample Sharpe ratios used in practice, path-dependent contracts naturally arise when there is moral hazard with respect to risk management. We also provide comparative statistics via numerical examples, showing that the optimal contract is sensitive to the values of risk premia and the initial values of the risk exposures .
We consider a contracting problem in which a principal hires an agent to manage a risky project. When the agent chooses volatility components of the output process and the principal observes the output continuously, the principal can compute the quadratic variation of the output, but not the individual components. This leads to moral hazard with respect to the risk choices of the agent. We identify a family of admissible contracts for which the optimal agent's action is explicitly characterized, and, using the recent theory of singular changes of measures for It\^o processes, we study how restrictive this family is. In particular, in the special case of the standard Homlstr\"om-Milgrom model with fixed volatility, the family includes all possible contracts. We solve the principal-agent problem in the case of CARA preferences , and show that the optimal contract is linear in these factors: the contractible sources of risk, including the output, the quadratic variation of the output and the cross-variations between the output and the contractible risk sources. Thus, like sample Sharpe ratios used in practice, path-dependent contracts naturally arise when there is moral hazard with respect to risk management. In a numerical example, we show that the loss of efficiency can be significant if the principal does not use the quadratic variation component of the optimal contract .
[ { "type": "R", "before": "Using a", "after": "We identify a family of admissible contracts for which the optimal agent's action is explicitly characterized, and, using the", "start_char_pos": 390, "end_char_pos": 397 }, { "type": "R", "before": "Ito", "after": "It\\^o", "start_char_pos": 448, "end_char_pos": 451 }, { "type": "R", "before": "formulate a", "after": "study how restrictive this family is. In particular, in the special case of the standard Homlstr\\\"om-Milgrom model with fixed volatility, the family includes all possible contracts. We solve the", "start_char_pos": 466, "end_char_pos": 477 }, { "type": "R", "before": "problem in this context, and solve it", "after": "problem", "start_char_pos": 494, "end_char_pos": 531 }, { "type": "R", "before": ". In that case,", "after": ", and show that", "start_char_pos": 564, "end_char_pos": 579 }, { "type": "R", "before": "We also provide comparative statistics via numerical examples, showing that the optimal contract is sensitive to the values of risk premia and the initial values of the risk exposures", "after": "In a numerical example, we show that the loss of efficiency can be significant if the principal does not use the quadratic variation component of the optimal contract", "start_char_pos": 953, "end_char_pos": 1136 } ]
[ 0, 96, 315, 389, 565, 801, 952 ]
1406.6557
1
Synthetic lethal reaction/gene-sets are sets of reactions/genes where only the simultaneous removal of all reactions/genes in the set abolishes growth of URLanism. In silico, synthetic lethal sets can be identified by simulating the effect of removal of gene sets from the reconstructed genome-scale metabolic network of URLanism. Flux balance analysis (FBA), based on linear programming, has emerged as a powerful tool for the in silico analyses of metabolic networks. To identify all possible synthetic lethal reactions combinations, an exhaustive sampling of all possible combinations is computationally expensive. We surmount the computational complexity of exhaustive search by iteratively restricting the sample space of reaction combinations for search, resulting in a substantial reduction in the running time. We here propose an algorithm, Fast-SL, which provides an efficient way to analyse metabolic networks for higher order lethal reaction sets. We have implemented the algorithm in MATLAB, building upon COBRA toolbox v2.0 . Fast-SL also compares favourably with SL Finder, an algorithm for identifying synthetic lethal sets, by Suthers et al (2009), which involves the solution of bi-level Mixed Integer Linear Programming problem .
Synthetic lethal reaction/gene-sets are sets of reactions/genes where only the simultaneous removal of all reactions/genes in the set abolishes growth of URLanism. In silico, synthetic lethal sets can be identified by simulating the effect of removal of gene sets from the reconstructed genome-scale metabolic network of URLanism. Flux balance analysis (FBA), based on linear programming, has emerged as a powerful tool for the in silico analyses of metabolic networks. To identify all possible synthetic lethal reactions combinations, an exhaustive sampling of all possible combinations is computationally expensive. We surmount the computational complexity of exhaustive search by iteratively restricting the sample space of reaction combinations for search, resulting in a substantial reduction in the running time. We here propose an algorithm, Fast-SL, which provides an efficient way to analyse metabolic networks for higher order lethal reaction sets. Fast-SL offers a substantial speed-up through a massive reduction in the search space for synthetic lethals; in the case of E. coli, Fast-SL reduces the search space for synthetic lethal triplets by over 4000-fold . Fast-SL also compares favourably with SL Finder, an algorithm for identifying synthetic lethal sets, by Suthers et al (2009), which involves the solution of a bi-level Mixed Integer Linear Programming problem . We have implemented the Fast-SL algorithm in MATLAB, building upon COBRA toolbox v2.0 .
[ { "type": "R", "before": "We have implemented the algorithm in MATLAB, building upon COBRA toolbox v2.0", "after": "Fast-SL offers a substantial speed-up through a massive reduction in the search space for synthetic lethals; in the case of E. coli, Fast-SL reduces the search space for synthetic lethal triplets by over 4000-fold", "start_char_pos": 959, "end_char_pos": 1036 }, { "type": "A", "before": null, "after": "a", "start_char_pos": 1196, "end_char_pos": 1196 }, { "type": "A", "before": null, "after": ". We have implemented the Fast-SL algorithm in MATLAB, building upon COBRA toolbox v2.0", "start_char_pos": 1247, "end_char_pos": 1247 } ]
[ 0, 163, 330, 469, 617, 818, 958 ]
1406.6612
1
Cluster-based descriptions of biological networks have received much attention in recent years , fostered by accumulated evidence of the existence of meaningful correlations between topological network clusters and biological functional modules. Several well-performing clustering algorithms exist to infer topological network partitions. However, due to respective technical idiosyncrasies , they might produce modular descriptions that provide networkpictures at different resolution levels. We aimed to analyze how these alternative modular descriptions could condition the outcome of follow-up network biology analysis. We considered a human protein interaction network and two paradigmatic cluster recognition algorithms, namely: the Clauset-Newman-Moore and the infomap procedures. We analyzed at what extent both procedures yielded different results in terms of cluster sizes, their biological congruencyand displayed meso-scale connectivity patterns. We specifically studied the case of aging related proteins, and showed that only the high-resolution modular description , achieved by infomap, could unveil statistically significant associations between them and inter/intra modular connectivity schemes. In particular, we found that aging related proteins were more likely to reside in the interface of network modules, possibly linking distinct biological processes. Besides reporting novel biological insights that could be gained from the discovered associations, our results warns against possible technical concerns that might affect the tools used to mine for interaction patterns in network biology studies .
Cluster-based descriptions of biological networks have received much attention in recent years fostered by accumulated evidence of the existence of meaningful correlations between topological network clusters and biological functional modules. Several well-performing clustering algorithms exist to infer topological network partitions. However, due to respective technical idiosyncrasies they might produce dissimilar modular decompositions of a given network. In this contribution, we aimed to analyze how alternative modular descriptions could condition the outcome of follow-up network biology analysis. We considered a human protein interaction network and two paradigmatic cluster recognition algorithms, namely: the Clauset-Newman-Moore and the infomap procedures. We analyzed at what extent both methodologies yielded different results in terms of granularity and biological congruency. In addition, taking into account Guimera cartographic role characterization of network nodes, we explored how the adoption of a given clustering methodology impinged on the ability to highlight relevant network meso-scale connectivity patterns. As a case study we considered a set of aging related proteins, and showed that only the high-resolution modular description provided by infomap, could unveil statistically significant associations between them and inter-intra modular cartographic features. Besides reporting novel biological insights that could be gained from the discovered associations, our contribution warns against possible technical concerns that might affect the tools used to mine for interaction patterns in network biology studies . In particular our results suggested that sub-optimal partitions from the strict point of view of their modularity levels might still be worth being analyzed when meso-scale features were to be explored in connection with external source of biological knowledge .
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 95, "end_char_pos": 96 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 391, "end_char_pos": 392 }, { "type": "R", "before": "modular descriptions that provide networkpictures at different resolution levels. We", "after": "dissimilar modular decompositions of a given network. In this contribution, we", "start_char_pos": 412, "end_char_pos": 496 }, { "type": "D", "before": "these", "after": null, "start_char_pos": 518, "end_char_pos": 523 }, { "type": "R", "before": "procedures", "after": "methodologies", "start_char_pos": 820, "end_char_pos": 830 }, { "type": "R", "before": "cluster sizes, their biological congruencyand displayed", "after": "granularity and biological congruency. In addition, taking into account Guimera cartographic role characterization of network nodes, we explored how the adoption of a given clustering methodology impinged on the ability to highlight relevant network", "start_char_pos": 869, "end_char_pos": 924 }, { "type": "R", "before": "We specifically studied the case", "after": "As a case study we considered a set", "start_char_pos": 959, "end_char_pos": 991 }, { "type": "R", "before": ", achieved", "after": "provided", "start_char_pos": 1080, "end_char_pos": 1090 }, { "type": "R", "before": "inter/intra modular connectivity schemes. In particular, we found that aging related proteins were more likely to reside in the interface of network modules, possibly linking distinct biological processes.", "after": "inter-intra modular cartographic features.", "start_char_pos": 1172, "end_char_pos": 1377 }, { "type": "R", "before": "results", "after": "contribution", "start_char_pos": 1481, "end_char_pos": 1488 }, { "type": "A", "before": null, "after": ". In particular our results suggested that sub-optimal partitions from the strict point of view of their modularity levels might still be worth being analyzed when meso-scale features were to be explored in connection with external source of biological knowledge", "start_char_pos": 1624, "end_char_pos": 1624 } ]
[ 0, 245, 338, 493, 623, 787, 958, 1213, 1377 ]
1406.6620
1
The widening inequality in income distribution in recent years, and the associated excessive pay packages of CEOs in the U.S. and elsewhere, is of growing concern among policy makers as well as the common person. However, there seems to be no satisfactory answer, in conventional economic theories and models, to the fundamental question of what kind of pay distribution we ought to see in a free market environment , at least under ideal conditions . We propose a novel game theoretic framework that addresses this question and shows that the lognormal distribution is the fairest inequality of pay in URLanization , achieved at equilibrium, under ideal free market conditions . Our theory also shows the deep and direct connection between potential game theory and statistical mechanics through entropy, which is a measure of fairness in a distribution. This leads us to propose the fair market hypothesis, that the URLanizing dynamics of the ideal free market, i.e., Adam Smith's "invisible hand", not only promotes efficiency but also maximizes fairness under the given constraints.
The widening inequality in income distribution in recent years, and the associated excessive pay packages of CEOs in the U.S. and elsewhere, is of growing concern among policy makers as well as the common person. However, there seems to be no satisfactory answer, in conventional economic theories and models, to the fundamental question of what kind of pay distribution we ought to see , at least under ideal conditions , in a free market environment and whether this distribution is fair . We propose a game theoretic framework that addresses these questions and show that the lognormal distribution is the fairest inequality of pay in URLanization comprising of homogenous agents , achieved at equilibrium, under ideal free market conditions . We also show that for a population of two different classes of agents, the final distribution is a combination of two different lognormal distributions where one of them, corresponding to the top 3-5\% of the population, can be misidentified as a Pareto distribution . Our theory also shows the deep and direct connection between potential game theory and statistical mechanics through entropy, which is a measure of fairness in a distribution. This leads us to propose the fair market hypothesis, that the URLanizing dynamics of the ideal free market, i.e., Adam Smith's "invisible hand", not only promotes efficiency but also maximizes fairness under the given constraints.
[ { "type": "D", "before": "in a free market environment", "after": null, "start_char_pos": 387, "end_char_pos": 415 }, { "type": "A", "before": null, "after": ", in a free market environment and whether this distribution is fair", "start_char_pos": 450, "end_char_pos": 450 }, { "type": "D", "before": "novel", "after": null, "start_char_pos": 466, "end_char_pos": 471 }, { "type": "R", "before": "this question and shows", "after": "these questions and show", "start_char_pos": 512, "end_char_pos": 535 }, { "type": "A", "before": null, "after": "comprising of homogenous agents", "start_char_pos": 617, "end_char_pos": 617 }, { "type": "A", "before": null, "after": ". We also show that for a population of two different classes of agents, the final distribution is a combination of two different lognormal distributions where one of them, corresponding to the top 3-5\\% of the population, can be misidentified as a Pareto distribution", "start_char_pos": 680, "end_char_pos": 680 } ]
[ 0, 212, 452, 682, 858 ]
1406.6805
1
Geometric Arbitrage Theory , where a generic market is modelled with a principal fibre bundle and arbitrage corresponds to its curvature, is applied to credit marketsto model default risk and recovery, leading to closed form no arbitrage characterizations for corporate bonds .
We apply Geometric Arbitrage Theory to obtain results in mathematical finance for credit markets, which do not need stochastic differential geometry in their formulation. We obtain closed form equations involving default intensities and loss given defaults characterizing the no-free-lunch-with-vanishing-risk condition for corporate bonds , as well as the generic dynamics for credit market allowing for arbitrage possibilities. Moreover, arbitrage credit bubbles for both base credit assets and credit derivatives are explicitly computed for the market dynamics minimizing the arbitrage .
[ { "type": "A", "before": null, "after": "We apply", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": ", where a generic market is modelled with a principal fibre bundle and arbitrage corresponds to its curvature, is applied to credit marketsto model default risk and recovery, leading to closed form no arbitrage characterizations", "after": "to obtain results in mathematical finance for credit markets, which do not need stochastic differential geometry in their formulation. We obtain closed form equations involving default intensities and loss given defaults characterizing the no-free-lunch-with-vanishing-risk condition", "start_char_pos": 28, "end_char_pos": 256 }, { "type": "A", "before": null, "after": ", as well as the generic dynamics for credit market allowing for arbitrage possibilities. Moreover, arbitrage credit bubbles for both base credit assets and credit derivatives are explicitly computed for the market dynamics minimizing the arbitrage", "start_char_pos": 277, "end_char_pos": 277 } ]
[ 0 ]
1406.6951
1
In this paper we consider the optimal transport approach for computing the model-free prices of a given path-dependent contingent claim in a two periods model. More precisely, we revisit the optimal transport plans constructed in \mbox{%DIFAUXCMD BrenierMartingale0pt%DIFAUXCMD , following the construction of \mbox{%DIFAUXCMD BrenierMartingale}0pt%DIFAUXCMD , as well as the one in } HobsonKlimmek2013 in the case of positive martingales and a single maximizer for the difference between the c.d.f.'s of the two marginals. These characterizations allow us to study the effect of the change of numeraire on the corresponding superhedging and subhedging model-free prices. It turns out that, for BrenierMartingale's optimal transport plan , the change of numeraire can be viewed as a mirror coupling for positive martingales, while for HobsonKlimmek2013 it exchanges forward start straddles of type I and type II giving also that the optimal transport plan in the subhedging problems is the same for both types of options. Some numerical applications are provided.
In this paper we consider the optimal transport approach for computing the model-free prices of a given path-dependent contingent claim in a two periods model. More precisely, we first revisit the optimal transport plan introduced in \mbox{%DIFAUXCMD BeiglJuil0pt%DIFAUXCMD , following the construction of \mbox{%DIFAUXCMD BrenierMartingale}0pt%DIFAUXCMD , as well as the one in } HobsonKlimmek2013 in the case of positive martingales and a single maximizer for the difference between the c.d.f.'s of the two marginals. These characterizations allow us to study the effect of the change of numeraire on the corresponding superhedging and subhedging model-free prices. It turns out that, for BrenierMartingale's construction , the change of numeraire can be viewed as a mirror coupling for positive martingales, while for HobsonKlimmek2013 it exchanges forward start straddles of type I and type II giving also that the optimal transport plan in the subhedging problems is the same for both types of options. Some numerical applications are provided.
[ { "type": "A", "before": null, "after": "first", "start_char_pos": 179, "end_char_pos": 179 }, { "type": "R", "before": "plans constructed in \\mbox{%DIFAUXCMD BrenierMartingale", "after": "plan introduced in \\mbox{%DIFAUXCMD BeiglJuil", "start_char_pos": 210, "end_char_pos": 265 }, { "type": "R", "before": "optimal transport plan", "after": "construction", "start_char_pos": 716, "end_char_pos": 738 } ]
[ 0, 159, 524, 672, 1022 ]
1406.6951
2
In this paper we consider the optimal transport approach for computing the model-free prices of a given path-dependent contingent claim in a two periods model. More precisely, we first revisit the optimal transport plan introduced in BeiglJuil, following the construction of BrenierMartingale, as well as the one in HobsonKlimmek2013 in the case of positive martingales and a single maximizer for the difference between the c.d.f.'s of the two marginals. These characterizations allow us to study the effect of the change of numeraire on the corresponding superhedging and subhedging model-free prices. It turns out that, for BrenierMartingale's construction, the change of numeraire can be viewed as a mirror coupling for positive martingales, while for HobsonKlimmek2013 it exchanges forward start straddles of type I and type II giving also that the optimal transport plan in the subhedging problems is the same for both types of options. Some numerical applications are provided.
In this paper we consider the optimal transport approach for computing the model-free prices of a given path-dependent contingent claim in a two periods model. More precisely, we first specialize the optimal transport plan introduced in BeiglJuil, following the construction of BrenierMartingale, as well as the one in HobsonKlimmek2013 , to the case of positive martingales and a single maximizer for the difference between the c.d.f.'s of the two marginals. These characterizations allow us to study the effect of the change of numeraire on the corresponding super and subhedging model-free prices. It turns out that, for BrenierMartingale's construction, the change of numeraire can be viewed as a mirror coupling for positive martingales, while for HobsonKlimmek2013 it exchanges forward start straddles of type I and type II giving also that the optimal transport plan in the subhedging problems is the same for both types of options. Some numerical applications are provided.
[ { "type": "R", "before": "revisit", "after": "specialize", "start_char_pos": 185, "end_char_pos": 192 }, { "type": "R", "before": "in", "after": ", to", "start_char_pos": 334, "end_char_pos": 336 }, { "type": "R", "before": "superhedging", "after": "super", "start_char_pos": 556, "end_char_pos": 568 } ]
[ 0, 159, 454, 602, 941 ]
1406.7441
1
We define a measure of cooperativity for gene regulatory networks which we propose should be maximized under a demand for energy efficiency . We investigate its dependence on network size, connectivity and the fraction of repressory/activatory interactions. Next, we consider the cell-cycle regulatory network of the yeast, Saccharomyces cerevisiae, as a case study and calculate its degree of cooperativity . A comparison with random networks of similar size and composition reveals that the yeast's cell-cycle regulation is exceptionally cooperative .
We define a measure of coherent activity for gene regulatory networks , a property that reflects the unity of purpose between the regulatory agents with a common target. We propose that such harmonious regulatory action is desirable under a demand for energy efficiency and may be selected for under evolutionary pressures. We consider two recent models of the cell-cycle regulatory network of the budding yeast, Saccharomyces cerevisiae, as a case study and calculate their degree of coherence . A comparison with random networks of similar size and composition reveals that the yeast's cell-cycle regulation is wired to yield and exceptionally high level of coherent regulatory activity. We also investigate the mean degree of coherence as a function of the network size, connectivity and the fraction of repressory/activatory interactions .
[ { "type": "R", "before": "cooperativity", "after": "coherent activity", "start_char_pos": 23, "end_char_pos": 36 }, { "type": "R", "before": "which we propose should be maximized", "after": ", a property that reflects the unity of purpose between the regulatory agents with a common target. We propose that such harmonious regulatory action is desirable", "start_char_pos": 66, "end_char_pos": 102 }, { "type": "R", "before": ". We investigate its dependence on network size, connectivity and the fraction of repressory/activatory interactions. Next, we consider", "after": "and may be selected for under evolutionary pressures. We consider two recent models of", "start_char_pos": 140, "end_char_pos": 275 }, { "type": "A", "before": null, "after": "budding", "start_char_pos": 317, "end_char_pos": 317 }, { "type": "R", "before": "its degree of cooperativity", "after": "their degree of coherence", "start_char_pos": 381, "end_char_pos": 408 }, { "type": "R", "before": "exceptionally cooperative", "after": "wired to yield and exceptionally high level of coherent regulatory activity. We also investigate the mean degree of coherence as a function of the network size, connectivity and the fraction of repressory/activatory interactions", "start_char_pos": 527, "end_char_pos": 552 } ]
[ 0, 141, 257, 410 ]
1406.7752
1
In the wake of the still ongoing global financial crisis, interdependencies among banks have come into focus in trying to assess systemic risk. To date, such analysis has largely been based on numerical data. By contrast, this study attempts to gain further insight into bank interconnections by tapping into financial discourse. We present a text-to-network process, which has its basis in co-occurrences of bank names and can be analyzed quantitatively and visualized. To quantify bank importance, we propose an information centrality measure to rank and assess trends of bank centrality in discussion. For qualitative assessment of bank networks, we put forward a visual, interactive interface for better illustrating network structures. We illustrate the text-based approach on European Large and Complex Banking Groups (LCBGs) during the ongoing financial crisis by quantifying bank interrelations from discussion in 1.3M news articles, spanning the years 2007 to 2013.
In the wake of the still ongoing global financial crisis, bank interdependencies have come into focus in trying to assess linkages among banks and systemic risk. To date, such analysis has largely been based on numerical data. By contrast, this study attempts to gain further insight into bank interconnections by tapping into financial discourse. We present a text-to-network process, which has its basis in co-occurrences of bank names and can be analyzed quantitatively and visualized. To quantify bank importance, we propose an information centrality measure to rank and assess trends of bank centrality in discussion. For qualitative assessment of bank networks, we put forward a visual, interactive interface for better illustrating network structures. We illustrate the text-based approach on European Large and Complex Banking Groups (LCBGs) during the ongoing financial crisis by quantifying bank interrelations and centrality from discussion in 3M news articles, spanning 2007Q1 to 2014Q3.
[ { "type": "R", "before": "interdependencies among banks", "after": "bank interdependencies", "start_char_pos": 58, "end_char_pos": 87 }, { "type": "A", "before": null, "after": "linkages among banks and", "start_char_pos": 129, "end_char_pos": 129 }, { "type": "A", "before": null, "after": "and centrality", "start_char_pos": 904, "end_char_pos": 904 }, { "type": "R", "before": "1.3M", "after": "3M", "start_char_pos": 924, "end_char_pos": 928 }, { "type": "R", "before": "the years 2007 to 2013.", "after": "2007Q1 to 2014Q3.", "start_char_pos": 953, "end_char_pos": 976 } ]
[ 0, 144, 209, 330, 471, 605, 741 ]
1407.0108
1
We study a constrained optimal control problem allowing for degenerate coefficients . The coefficients can be random and then the value function is described by a degenerate backward stochastic partial differential equation (BSPDE) with singular terminal condition. For this degenerate BSPDE, we prove the existence and uniqueness of the nonnegative solution .
We study a constrained optimal control problem with possibly degenerate coefficients arising in models of optimal portfolio liquidation under market impact . The coefficients can be random in which case the value function is described by a degenerate backward stochastic partial differential equation (BSPDE) with singular terminal condition. For this degenerate BSPDE, we prove existence and uniqueness of a nonnegative solution. Our existence result requires a novel gradient estimate for degenerate BSPDEs .
[ { "type": "R", "before": "allowing for degenerate coefficients", "after": "with possibly degenerate coefficients arising in models of optimal portfolio liquidation under market impact", "start_char_pos": 47, "end_char_pos": 83 }, { "type": "R", "before": "and then", "after": "in which case", "start_char_pos": 117, "end_char_pos": 125 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 302, "end_char_pos": 305 }, { "type": "R", "before": "the nonnegative solution", "after": "a nonnegative solution. Our existence result requires a novel gradient estimate for degenerate BSPDEs", "start_char_pos": 334, "end_char_pos": 358 } ]
[ 0, 85, 265 ]
1407.0256
1
During last 15 years various parameterizations of the implied volatility (IV) surface were proposed in the literature to address few goals: a) given market data on some options build a no-arbitrage local volatility (Dupire's) surface to further exploit it for calibration of a local stochastic volatility model; b) obtain volatilities for pricing OTC options and other derivatives with strikes and maturities other than that offered by option exchanges; c) produce a volatility forecast over future periods of time, which is helpful in Value-at-Risk models, computing forward IVs and exposures, etc. , d) assess an adequacy of an option pricing model based on the shape of the IV surface. Among various existing parameterizations SVI model of Gatheral is the most elaborated one which takes into account a correct asymptotic behavior at wings, and respects no-arbitrage conditions as well as no-arbitrage interpolation and extrapolation. In this paper we propose another class of parameterizations that deliver same functionality but sometimes with a better quality of fit .
We propose a new static parameterization of the implied volatility surface which is constructed by using polynomials of sigmoid functions combined with some other terms. This parameterization is flexible enough to fit market implied volatilities which demonstrate smile or skew. An arbitrage-free calibration algorithm is considered that constructs the implied volatility surface as a grid in the strike-expiration space and guarantees a lack of arbitrage at every node of this grid. We also demonstrate how to construct an arbitrage-free interpolation and extrapolation in time, as well as build a local volatility and implied pdf surfaces. Asymptotic behavior of this parameterization is discussed, as well as results on stability of the calibrated parameters are presented. Numerical examples show robustness of the proposed approach in building all these surfaces as well as demonstrate a better quality of the fit as compared with some known models .
[ { "type": "R", "before": "During last 15 years various parameterizations", "after": "We propose a new static parameterization", "start_char_pos": 0, "end_char_pos": 46 }, { "type": "R", "before": "(IV) surface were proposed in the literature to address few goals: a) given market data on some options build a no-arbitrage local volatility (Dupire's) surface to further exploit it for calibration of a local stochastic volatility model; b) obtain volatilities for pricing OTC options and other derivatives with strikes and maturities other than that offered by option exchanges; c) produce a volatility forecast over future periods of time, which is helpful in Value-at-Risk models, computing forward IVs and exposures, etc. , d) assess an adequacy of an option pricing model based on the shape of the IV surface. Among various existing parameterizations SVI model of Gatheral is the most elaborated one which takes into account a correct asymptotic behavior at wings, and respects no-arbitrage conditions", "after": "surface which is constructed by using polynomials of sigmoid functions combined with some other terms. This parameterization is flexible enough to fit market implied volatilities which demonstrate smile or skew. An arbitrage-free calibration algorithm is considered that constructs the implied volatility surface as a grid in the strike-expiration space and guarantees a lack of arbitrage at every node of this grid. We also demonstrate how to construct an arbitrage-free interpolation and extrapolation in time, as well as build a local volatility and implied pdf surfaces. Asymptotic behavior of this parameterization is discussed, as well as results on stability of the calibrated parameters are presented. Numerical examples show robustness of the proposed approach in building all these surfaces", "start_char_pos": 73, "end_char_pos": 880 }, { "type": "R", "before": "no-arbitrage interpolation and extrapolation. In this paper we propose another class of parameterizations that deliver same functionality but sometimes with", "after": "demonstrate", "start_char_pos": 892, "end_char_pos": 1048 }, { "type": "R", "before": "fit", "after": "the fit as compared with some known models", "start_char_pos": 1069, "end_char_pos": 1072 } ]
[ 0, 311, 453, 688, 937 ]
1407.0433
1
Distributed, controllable energy storage devices offer several significant benefits to electric power system operation. Three such benefits include reducing peak load, providing standby power, and enhancing power quality. These benefits, however, are only realized during peak load or during an outage, events that are infrequent. This paper presents a means of realizing additional benefits by taking advantage of the fluctuating costs of energy in competitive energy markets. An algorithm for optimal charge/discharge scheduling of community energy storage (CES) devices as well as an analysis of several of the key drivers of such optimization are discussed.
Distributed, controllable energy storage devices offer several benefits to electric power system operation. Three such benefits include reducing peak load, providing standby power, and enhancing power quality. These benefits, however, are only realized during peak load or during an outage, events that are infrequent. This paper presents a means of realizing additional benefits by taking advantage of the fluctuating costs of energy in competitive energy markets. An algorithm for optimal charge/discharge scheduling of community energy storage (CES) devices as well as an analysis of several of the key drivers of the optimization are discussed.
[ { "type": "D", "before": "significant", "after": null, "start_char_pos": 63, "end_char_pos": 74 }, { "type": "R", "before": "such", "after": "the", "start_char_pos": 629, "end_char_pos": 633 } ]
[ 0, 119, 221, 330, 477 ]
1407.0948
1
In a model independent discrete time financial market, we discuss the richness of the family of martingale measures in relation to different notions of Arbitrage, generated by a class of non-negligible sets\mathcal{S , which we call Arbitrage de la classe S. The choice of S reflects into the intrinsic properties of the class of polar sets of martingale measures. In particular for } S being the open sets we show that the absence of arbitrage opportunities, with respect to an opportune filtration enlargement, guarantees the existence of full support martingale measures. Finally we provide a dual representation in terms of weakly open sets of probability measures, which highlights the robust nature of our approach .
In a model independent discrete time financial market, we discuss the richness of the family of martingale measures in relation to different notions of Arbitrage, generated by a class \mathcal{S , which we call Arbitrage de la classe S. The choice of S reflects into the intrinsic properties of the class of polar sets of martingale measures. In particular : for S= \Omega} absence of Model Independent Arbitrage is equivalent to the existence of a martingale measure; for S being the open sets , absence of Open Arbitrage is equivalent to the existence of full support martingale measures. These results are obtained by adopting a technical filtration enlargement and by constructing a universal aggregator of all arbitrage opportunities. We further introduce the notion of market feasibility and provide its characterization via arbitrage conditions. We conclude providing a dual representation of Open Arbitrage in terms of weakly open sets of probability measures, which highlights the robust nature of this concept .
[ { "type": "R", "before": "of non-negligible sets\\mathcal{S", "after": "\\mathcal{S", "start_char_pos": 184, "end_char_pos": 216 }, { "type": "R", "before": "de la classe", "after": "de la classe", "start_char_pos": 243, "end_char_pos": 255 }, { "type": "R", "before": "for", "after": ": for S=", "start_char_pos": 379, "end_char_pos": 382 }, { "type": "A", "before": null, "after": "\\Omega", "start_char_pos": 383, "end_char_pos": 383 }, { "type": "A", "before": null, "after": "absence of Model Independent Arbitrage is equivalent to the existence of a martingale measure; for", "start_char_pos": 385, "end_char_pos": 385 }, { "type": "R", "before": "we show that the absence of arbitrage opportunities, with respect to an opportune filtration enlargement, guarantees", "after": ", absence of Open Arbitrage is equivalent to", "start_char_pos": 408, "end_char_pos": 524 }, { "type": "R", "before": "Finally we provide", "after": "These results are obtained by adopting a technical filtration enlargement and by constructing a universal aggregator of all arbitrage opportunities. We further introduce the notion of market feasibility and provide its characterization via arbitrage conditions. We conclude providing", "start_char_pos": 576, "end_char_pos": 594 }, { "type": "A", "before": null, "after": "of Open Arbitrage", "start_char_pos": 617, "end_char_pos": 617 }, { "type": "R", "before": "our approach", "after": "this concept", "start_char_pos": 710, "end_char_pos": 722 } ]
[ 0, 258, 364, 575 ]
1407.1135
1
Quantitative modeling in biology can be difficult due to the scarcity of parameter values . An alternative is qualitative modeling since it requires few to no parameters. This article presents a qualitative modeling derived from boolean networks where fuzzy logic is used and where edges can be tuned. Fuzzy logic being continuous, its variables can be finely valued while remaining qualitative. To consider that some interactions are slower or weaker than other ones, edge states are computed to modulate in speed and strength the signal they convey. The proposed formalism is illustrated through its implementation on an example network. The simulations show that continuous results are produced, thus allowing a fine analysis, and that modulating the signal conveyed by the edges allows their tuning according to knowledge about the interaction they model. The present work is expected to bring enhancements in the ability of qualitative models to simulate biological networks.
Quantitative modeling in biology can be difficult due to parameter value scarcity . An alternative is qualitative modeling since it requires few to no parameters. This article presents a qualitative modeling derived from boolean networks where fuzzy logic is used and where edges can be tuned. Fuzzy logic being continuous, its variables can be finely valued while remaining qualitative. To consider that some interactions are slower or weaker than other ones, edge states are computed to modulate in speed and strength the signal they convey. The proposed formalism is illustrated through its implementation on an example network. Simulations show that continuous results are produced, thus allowing fine analysis, and that modulating the signal conveyed by the edges allows their tuning according to knowledge about the interaction they model. The present work is expected to bring enhancements in the ability of qualitative models to simulate biological networks.
[ { "type": "R", "before": "the scarcity of parameter values", "after": "parameter value scarcity", "start_char_pos": 57, "end_char_pos": 89 }, { "type": "R", "before": "The simulations", "after": "Simulations", "start_char_pos": 640, "end_char_pos": 655 }, { "type": "D", "before": "a", "after": null, "start_char_pos": 713, "end_char_pos": 714 } ]
[ 0, 91, 170, 301, 395, 551, 639, 859 ]
1407.1135
3
Quantitative modeling in systems biology can be difficult due to the scarcity of quantitative details about biological phenomenons, especially at the subcellular scale. An alternative to escape this difficulty is qualitative modeling since it requires few to no quantitative information. Among the qualitative modeling approaches , the Boolean network formalism is one of the most popular . However, Boolean models allow variables to be valued at only true or false , which can appear too simplistic when modeling biological processes. Consequently, this work proposes a modeling approach derived from Boolean networks where fuzzy operators are used and where edges are tuned. Fuzzy operators allow variables to be continuous and then to be more finely valued than with discrete modeling approaches, such as Boolean networks, while remaining qualitative. Moreover, to consider that in a given biological network some interactions are slower and/or weaker relative to other ones, edge states are computed in order to modulate in speed and strength the signal they convey. The proposed formalism is illustrated through its implementation on a tiny sample of the epidermal growth factor receptor signaling pathway. The obtained simulations show that continuous results are produced, thus allowing finer analysis , and that modulating the signal conveyed by the edges allows their tuning according to knowledge about the modeled interactions , thus incorporating more knowledge. The proposed modeling approach is expected to bring enhancements in the ability of qualitative models to simulate the dynamics of biological networks while not requiring quantitative information.
Due to the scarcity of quantitative details about biological phenomena, quantitative modeling in systems biology can be compromised, especially at the subcellular scale. One way to get around this is qualitative modeling because it requires few to no quantitative information. One of the most popular qualitative modeling approaches is the Boolean network formalism . However, Boolean models allow variables to take only two values , which can be too simplistic in some cases. The present work proposes a modeling approach derived from Boolean networks where continuous logical operators are used and where edges can be tuned. Using continuous logical operators allows variables to be more finely valued while remaining qualitative. To consider that some biological interactions can be slower or weaker than other ones, edge states are also computed in order to modulate in speed and strength the signal they convey. The proposed formalism is illustrated on a toy network coming from the epidermal growth factor receptor signaling pathway. The obtained simulations show that continuous results are produced, thus allowing finer analysis . The simulations also show that modulating the signal conveyed by the edges allows to incorporate knowledge about the interactions they model. The goal is to provide enhancements in the ability of qualitative models to simulate the dynamics of biological networks while limiting the need of quantitative information.
[ { "type": "R", "before": "Quantitative modeling in systems biology can be difficult due", "after": "Due", "start_char_pos": 0, "end_char_pos": 61 }, { "type": "R", "before": "phenomenons,", "after": "phenomena, quantitative modeling in systems biology can be compromised,", "start_char_pos": 119, "end_char_pos": 131 }, { "type": "R", "before": "An alternative to escape this difficulty", "after": "One way to get around this", "start_char_pos": 169, "end_char_pos": 209 }, { "type": "R", "before": "since", "after": "because", "start_char_pos": 234, "end_char_pos": 239 }, { "type": "R", "before": "Among the", "after": "One of the most popular", "start_char_pos": 288, "end_char_pos": 297 }, { "type": "R", "before": ",", "after": "is", "start_char_pos": 330, "end_char_pos": 331 }, { "type": "D", "before": "is one of the most popular", "after": null, "start_char_pos": 362, "end_char_pos": 388 }, { "type": "R", "before": "be valued at only true or false", "after": "take only two values", "start_char_pos": 434, "end_char_pos": 465 }, { "type": "R", "before": "appear too simplistic when modeling biological processes. Consequently, this", "after": "be too simplistic in some cases. The present", "start_char_pos": 478, "end_char_pos": 554 }, { "type": "R", "before": "fuzzy", "after": "continuous logical", "start_char_pos": 625, "end_char_pos": 630 }, { "type": "R", "before": "are tuned. Fuzzy operators allow", "after": "can be tuned. Using continuous logical operators allows", "start_char_pos": 666, "end_char_pos": 698 }, { "type": "D", "before": "continuous and then to be", "after": null, "start_char_pos": 715, "end_char_pos": 740 }, { "type": "D", "before": "than with discrete modeling approaches, such as Boolean networks,", "after": null, "start_char_pos": 760, "end_char_pos": 825 }, { "type": "R", "before": "Moreover, to consider that in a given biological network some interactions are slower and/or weaker relative to", "after": "To consider that some biological interactions can be slower or weaker than", "start_char_pos": 855, "end_char_pos": 966 }, { "type": "A", "before": null, "after": "also", "start_char_pos": 995, "end_char_pos": 995 }, { "type": "R", "before": "through its implementation on a tiny sample of", "after": "on a toy network coming from", "start_char_pos": 1110, "end_char_pos": 1156 }, { "type": "R", "before": ", and", "after": ". The simulations also show", "start_char_pos": 1310, "end_char_pos": 1315 }, { "type": "R", "before": "their tuning according to", "after": "to incorporate", "start_char_pos": 1372, "end_char_pos": 1397 }, { "type": "R", "before": "modeled interactions , thus incorporating more knowledge. The proposed modeling approach is expected to bring", "after": "interactions they model. The goal is to provide", "start_char_pos": 1418, "end_char_pos": 1527 }, { "type": "R", "before": "not requiring", "after": "limiting the need of", "start_char_pos": 1632, "end_char_pos": 1645 } ]
[ 0, 168, 287, 390, 535, 676, 854, 1071, 1212, 1475 ]
1407.1499
1
Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site provides an opportunity for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be `pseudo-processive' in the crowded cellular environment, as the tendency to rebind after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding probabilities are increased and phosphorylation becomes pseudo-processive . However, this decrease in specificity with increased rebinding probability is generally also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. We conclude that the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif.
Double phosphorylation of protein kinases is a common feature of signalling cascades. This motif may reduce cross-talk between signalling pathways, as the second phosphorylation site allows for proofreading, especially when phosphorylation is distributive rather than processive. Recent studies suggest that phosphorylation can be `pseudo-processive' in the crowded cellular environment, as rebinding after the first phosphorylation is enhanced by slow diffusion. Here, we use a simple model with unsaturated reactants to show that specificity for one substrate over another drops as rebinding increases and pseudo-processive behavior becomes possible . However, this loss of specificity with increased rebinding is typically also observed if two distinct enzyme species are required for phosphorylation, i.e. when the system is necessarily distributive. Thus the loss of specificity is due to an intrinsic reduction in selectivity with increased rebinding, which benefits inefficient reactions, rather than pseudo-processivity itself. We also show that proofreading can remain effective when the intended signalling pathway exhibits high levels of rebinding-induced pseudo-processivity, unlike other proposed advantages of the dual phosphorylation motif.
[ { "type": "R", "before": "provides an opportunity", "after": "allows", "start_char_pos": 183, "end_char_pos": 206 }, { "type": "R", "before": "the tendency to rebind", "after": "rebinding", "start_char_pos": 408, "end_char_pos": 430 }, { "type": "R", "before": "probabilities are increased and phosphorylation becomes", "after": "increases and", "start_char_pos": 624, "end_char_pos": 679 }, { "type": "A", "before": null, "after": "behavior becomes possible", "start_char_pos": 698, "end_char_pos": 698 }, { "type": "R", "before": "decrease in", "after": "loss of", "start_char_pos": 715, "end_char_pos": 726 }, { "type": "R", "before": "probability is generally", "after": "is typically", "start_char_pos": 764, "end_char_pos": 788 }, { "type": "R", "before": "We conclude that", "after": "Thus", "start_char_pos": 918, "end_char_pos": 934 } ]
[ 0, 85, 296, 493, 700, 917, 1110 ]
1407.1595
1
This paper studies the question of filtering and maximizing terminal wealth from expected utility in a stochastic volatility models. The special feature is that the only information available to the investor is the one generated by the asset prices and, in particular, the return processes cannot be observed directly and assumed to be modelled by a stochastic differential equation. Using stochastic non-linear filtering and change of measure techniques, the partial observation context can be transformed into a full information context such that coefficients depend only on past history of observed prices (filters processes). The main difficulty is that these filters are valued in infinite-dimensional space: it satisfy a stochastic partial differential equations named "Kushner-Stratonovich equations". We also show that we need to introduce an a priori models for the trend and the stochastic volatility in order to evaluate the filters processes. The dynamic programming or maximum principle are still applicable and the associated Bellman equationor Hamiltonian system are now in infinite dimension .
This paper studies the question of filtering and maximizing terminal wealth from expected utility in a partially information stochastic volatility models. The special features is that the only information available to the investor is the one generated by the asset prices , and the unobservable processes will be modeled by a stochastic differential equations. Using the change of measure techniques, the partial observation context can be transformed into a full information context such that coefficients depend only on past history of observed prices (filters processes). Adapting the stochastic non-linear filtering, we show that under some assumptions on the model coefficients, the estimation of the filters depend on a priorimodels for the trend and the stochastic volatility. Moreover, these filters satisfy a stochastic partial differential equations named "Kushner-Stratonovich equations". Using the martingale duality approach in this partially observed incomplete model, we can characterize the value function and the optimal portfolio. The main result here is that the dual value function associated to the martingale approach can be expressed, via the dynamic programmingapproach, in terms of the solution to a semilinear partial differential equation. We illustrate our results with some examples of stochastic volatility models popular in the financial literature .
[ { "type": "A", "before": null, "after": "partially information", "start_char_pos": 103, "end_char_pos": 103 }, { "type": "R", "before": "feature", "after": "features", "start_char_pos": 146, "end_char_pos": 153 }, { "type": "R", "before": "and, in particular, the return processes cannot be observed directly and assumed to be modelled", "after": ", and the unobservable processes will be modeled", "start_char_pos": 250, "end_char_pos": 345 }, { "type": "R", "before": "equation. Using stochastic non-linear filtering and", "after": "equations. Using the", "start_char_pos": 375, "end_char_pos": 426 }, { "type": "R", "before": "The main difficulty is that these filters are valued in infinite-dimensional space: it", "after": "Adapting the stochastic non-linear filtering, we show that under some assumptions on the model coefficients, the estimation of the filters depend on a priorimodels for the trend and the stochastic volatility. Moreover, these filters", "start_char_pos": 631, "end_char_pos": 717 }, { "type": "R", "before": "We also show that we need to introduce an a priori models for the trend and the stochastic volatility in order to evaluate the filters processes. The dynamic programming or maximum principle are still applicable and the associated Bellman equationor Hamiltonian system are now in infinite dimension", "after": "Using the martingale duality approach in this partially observed incomplete model, we can characterize the value function and the optimal portfolio. The main result here is that the dual value function associated to the martingale approach can be expressed, via the dynamic programmingapproach, in terms of the solution to a semilinear partial differential equation. We illustrate our results with some examples of stochastic volatility models popular in the financial literature", "start_char_pos": 810, "end_char_pos": 1108 } ]
[ 0, 133, 384, 630, 809, 955 ]
1407.1769
1
The paper develops general, discrete, non-probabilistic market models and minmax price bounds leading to a price interval . The approach provides the trajectory based analogue of martingale-like properties as well as a generalization that allows a limited notion of arbitrage in the market while still providing coherent option prices. Several properties of the price bounds are obtained, in particular a connection with risk neutral pricing is established for trajectory markets associated to a martingale model. A result is stated for the evaluation of the price bounds by a recursive procedure .
The paper develops general, discrete, non-probabilistic market models and minmax price bounds leading to price intervals for European options . The approach provides the trajectory based analogue of martingale-like properties as well as a generalization that allows a limited notion of arbitrage in the market while still providing coherent option prices. Several properties of the price bounds are obtained, in particular a connection with risk neutral pricing is established for trajectory markets associated to a continuous-time martingale model .
[ { "type": "R", "before": "a price interval", "after": "price intervals for European options", "start_char_pos": 105, "end_char_pos": 121 }, { "type": "R", "before": "martingale model. A result is stated for the evaluation of the price bounds by a recursive procedure", "after": "continuous-time martingale model", "start_char_pos": 496, "end_char_pos": 596 } ]
[ 0, 123, 335, 513 ]
1407.2031
1
We used a set of coupled Brownian motions to sample cross-correlation matrices . The spectral properties of this ensemble of random matrices are shown to be in agreement with some stylized facts of financial markets. Through the presented model formulas are given for the analysis of heterogeneous time-series. Furthermore evidence for a localization transition in eigenvectors related to small eigenvalues in cross-correlations analysis of this model is found and a simple explanation of localization phenomena in financial time-series is provided .
We define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures coupled by a Gaussian symmetric matrix . The spectral properties of this ensemble are shown to be in qualitative agreement with some stylized facts of financial markets. Through the presented model formulas are given for the analysis of heterogeneous time-series. Furthermore evidence for a localization transition in eigenvectors related to small and large eigenvalues in cross-correlations analysis of this model is found and a simple explanation of localization phenomena in financial time-series is provided . Finally we identify both in our model and in real financial data an inverted-bell effect in correlation between localized components and their local temperature: high and low temperature/volatility components are the most localized ones .
[ { "type": "R", "before": "used a set of coupled Brownian motions to sample cross-correlation matrices", "after": "define a random-matrix ensemble given by the infinite-time covariance matrices of Ornstein-Uhlenbeck processes at different temperatures coupled by a Gaussian symmetric matrix", "start_char_pos": 3, "end_char_pos": 78 }, { "type": "D", "before": "of random matrices", "after": null, "start_char_pos": 122, "end_char_pos": 140 }, { "type": "A", "before": null, "after": "qualitative", "start_char_pos": 160, "end_char_pos": 160 }, { "type": "A", "before": null, "after": "and large", "start_char_pos": 396, "end_char_pos": 396 }, { "type": "A", "before": null, "after": ". Finally we identify both in our model and in real financial data an inverted-bell effect in correlation between localized components and their local temperature: high and low temperature/volatility components are the most localized ones", "start_char_pos": 551, "end_char_pos": 551 } ]
[ 0, 80, 217, 311 ]
1407.2088
1
Fractal structure of shortest paths depends strongly on interresidue interaction cutoff distance. Taking the cutoff distance as variable, the paths are self similar above 6.8 \AA with a fractal dimension of 1.12 , remarkably close to Euclidean dimension. Below 6.8 {\AA} , paths are multifractal . The number of steps to traverse a shortest path is a discontinuous function of cutoff size at short wavelengths. An algorithm is introduced to determine the residues on a given shortest path. Shannon entropy of information transport between two residues along a shortest path is lower than the entropies along longer paths between the same two points leading to the conclusion that communication over shortest paths results in highest lossless encoding .
Fractal structure of shortest paths depends strongly on interresidue interaction cutoff distance. The dimensionality of shortest paths is calculated as a function of interaction cutoff distance. Shortest paths are self similar with a fractal dimension of 1.12 when calculated with step lengths larger than 6.8 {\AA} . Paths are multifractal below 6.8 \AA . The number of steps to traverse a shortest path is a discontinuous function of cutoff size at short cutoff values, showing abrupt decreases to smaller values as cutoff distance increases. As information progresses along the direction of a shortest path a large set of residues are affected because they are interacting neighbors to the residues of the shortest path. Thus, several residues are involved diffusively in information transport which may be identified with the present model. An algorithm is introduced to determine the residues of a given shortest path. The shortest path residues are the highly visited residues during information transport. These paths are shown to lie on the high entropy landscape of the protein where entropy is taken to increase with abundance of visits to nodes during signal transport .
[ { "type": "R", "before": "Taking the cutoff distanceas variable, the", "after": "The dimensionality of shortest paths is calculated as a function of interaction cutoff distance. Shortest", "start_char_pos": 98, "end_char_pos": 140 }, { "type": "D", "before": "above 6.8", "after": null, "start_char_pos": 164, "end_char_pos": 173 }, { "type": "D", "before": "\\AA", "after": null, "start_char_pos": 174, "end_char_pos": 177 }, { "type": "R", "before": ", remarkably close to Euclidean dimension. Below", "after": "when calculated with step lengths larger than", "start_char_pos": 211, "end_char_pos": 259 }, { "type": "R", "before": ", paths are multifractal", "after": ". Paths are multifractal below 6.8", "start_char_pos": 270, "end_char_pos": 294 }, { "type": "A", "before": null, "after": "\\AA", "start_char_pos": 295, "end_char_pos": 295 }, { "type": "R", "before": "wavelengths.", "after": "cutoff values, showing abrupt decreases to smaller values as cutoff distance increases. As information progresses along the direction of a shortest path a large set of residues are affected because they are interacting neighbors to the residues of the shortest path. Thus, several residues are involved diffusively in information transport which may be identified with the present model.", "start_char_pos": 398, "end_char_pos": 410 }, { "type": "R", "before": "on", "after": "of", "start_char_pos": 464, "end_char_pos": 466 }, { "type": "R", "before": "Shannon entropy of information transport between two residues along a shortest path is lower than the entropies along longer paths between the same two points leading to the conclusion that communication over shortest paths results in highest lossless encoding", "after": "The shortest path residues are the highly visited residues during information transport. These paths are shown to lie on the high entropy landscape of the protein where entropy is taken to increase with abundance of visits to nodes during signal transport", "start_char_pos": 490, "end_char_pos": 750 } ]
[ 0, 97, 253, 297, 410, 489 ]
1407.2420
1
We show the existence of a continuous-time Nash equilibrium in a financial market with risk averse market makers and an informed trader with a private information. The unwillingness of market makers to bear risk causes the informed trader to absorb large shocks in their inventories. The informed trader's optimal strategy is to drive the market price to its fundamental value while participating in the risk sharing with the market makers. The optimal strategies of the agents turn out to be solutions of a forward-backward system of partial and stochastic differential equations. In particular, the price set by the market makers is the solution to a non-standard `quadratic' backward stochastic differential equation .
This paper develops a new methodology for studying continuous-time Nash equilibrium in a financial market with asymmetrically informed agents. This approach allows us to lift the restriction of risk neutrality imposed on market makers by the current literature. It turns out that, when the market makers are risk averse, the optimal strategies of the agents are solutions of a forward-backward system of partial and stochastic differential equations. In particular, the price set by the market makers solves a nonstandard "quadratic" backward stochastic differential equation . The main result of the paper is the existence of a Markovian solution to this forward-backward system on an arbitrary time interval, which is obtained via a fixed-point argument on the space of absolutely continuous distribution functions. Moreover, the equilibrium obtained in this paper is able to explain several stylized facts which are not captured by the current asymmetric information models .
[ { "type": "R", "before": "We show the existence of a", "after": "This paper develops a new methodology for studying", "start_char_pos": 0, "end_char_pos": 26 }, { "type": "R", "before": "risk averse market makers and an informed trader with a private information. The unwillingness of market makers to bear risk causes the informed trader to absorb large shocks in their inventories. The informed trader's optimal strategy is to drive the market price to its fundamental value while participating in the risk sharing with the market makers. The", "after": "asymmetrically informed agents. This approach allows us to lift the restriction of risk neutrality imposed on market makers by the current literature. It turns out that, when the market makers are risk averse, the", "start_char_pos": 87, "end_char_pos": 444 }, { "type": "R", "before": "turn out to be", "after": "are", "start_char_pos": 478, "end_char_pos": 492 }, { "type": "R", "before": "is the solution to a non-standard `quadratic'", "after": "solves a nonstandard \"quadratic\"", "start_char_pos": 632, "end_char_pos": 677 }, { "type": "A", "before": null, "after": ". The main result of the paper is the existence of a Markovian solution to this forward-backward system on an arbitrary time interval, which is obtained via a fixed-point argument on the space of absolutely continuous distribution functions. Moreover, the equilibrium obtained in this paper is able to explain several stylized facts which are not captured by the current asymmetric information models", "start_char_pos": 720, "end_char_pos": 720 } ]
[ 0, 163, 283, 440, 581 ]
1407.3201
1
Given the limited CDS market it is inevitable that CVA desks will partially warehouse credit risk . Thus realistic CVA pricing must include both warehoused and hedged risks. Furthermore, warehoused risks may produce profits and losses which will be taxable. Paying for capital use, will also generate potentially taxable profits with which to pay shareholder dividends. Here we extend the semi-replication approach in ( Burgard and Kjaer 2013) to include partial risk warehousing and tax consequences. In doing so we introduce double-semi-replication, i.e. partial hedging of value jump on counterparty default, and TVA : Tax Valuation Adjustment . We take an expectation approach to hedging open risk and so introduce a market price of counterparty default value jump risk. We show that both risk warehousing and tax are material in a set of interest rate swap examples .
Credit risk may be warehoused by choice, or because of limited hedging possibilities. Credit risk warehousing increases capital requirements and leaves open risk. Open risk must be priced in the physical measure, rather than the risk neutral measure, and implies profits and losses . Furthermore the rate of return on capital that shareholders require must be paid from profits. Profits are taxable and losses provide tax credits. Here we extend the semi-replication approach of Burgard and Kjaer ( 2013) and the capital formalism (KVA) of Green, Kenyon, and Dennis (2014) to cover credit risk warehousing and tax , formalized as double-semi-replication and TVA ( Tax Valuation Adjustment ) to enable quantification .
[ { "type": "R", "before": "Given the limited CDS market it is inevitable that CVA desks will partially warehouse credit risk . Thus realistic CVA pricing must include both warehoused and hedged risks. Furthermore, warehoused risks may produce", "after": "Credit risk may be warehoused by choice, or because of limited hedging possibilities. Credit risk warehousing increases capital requirements and leaves open risk. Open risk must be priced in the physical measure, rather than the risk neutral measure, and implies", "start_char_pos": 0, "end_char_pos": 215 }, { "type": "R", "before": "which will be taxable. Paying for capital use, will also generate potentially taxable profitswith which to pay shareholder dividends.", "after": ". Furthermore the rate of return on capital that shareholders require must be paid from profits. Profits are taxable and losses provide tax credits.", "start_char_pos": 235, "end_char_pos": 368 }, { "type": "R", "before": "in (", "after": "of", "start_char_pos": 414, "end_char_pos": 418 }, { "type": "A", "before": null, "after": "(", "start_char_pos": 437, "end_char_pos": 437 }, { "type": "R", "before": "to include partial", "after": "and the capital formalism (KVA) of Green, Kenyon, and Dennis (2014) to cover credit", "start_char_pos": 444, "end_char_pos": 462 }, { "type": "R", "before": "consequences. In doing so we introduce double-semi-replication, i.e. partial hedging of value jump on counterparty default, and TVA :", "after": ", formalized as double-semi-replication and TVA (", "start_char_pos": 488, "end_char_pos": 621 }, { "type": "R", "before": ". We take an expectation approach to hedging open risk and so introduce a market price of counterparty default value jump risk. We show that both risk warehousing and tax are material in a set of interest rate swap examples", "after": ") to enable quantification", "start_char_pos": 647, "end_char_pos": 870 } ]
[ 0, 99, 173, 257, 368, 501, 648, 774 ]
1407.4017
1
We examine the reconstruction of the angular-domain periodogram from spatial-domain signals received at different time indices and that of the frequency-domain periodogram from time-domain signals received at different wireless sensors , two problems that show great similarities . We split the entire angular or frequency band into equal-size bins and set the bin size such that the received spectra at two frequencies or angles, whose distance is equal to or larger than the size of a bin, are uncorrelated. These problems in the two different domains lead to a similar circulant structure in the so-called coset correlation matrix , which allows for a strong compression and a simple least-squares reconstruction method. The latter is possible under the full column rank condition of the system matrix, which can be achieved by designing the spatial or temporal sampling patterns based on a circular sparse ruler. We evaluate the coset correlation matrix estimation and analyze the statistical performance of the compressively reconstructed periodogram , which includes a bias and variance analysis. We then consider the case when the size of the bin is decreased such that the received spectra at two frequencies or angles, with a spacing between them larger than the size of the bin, can still be correlated. In this case, the resulting coset correlation matrix is generally not circulant and thus a special approach is required.
In this paper, two problems that show great similarities are examined. The first problem is the reconstruction of the angular-domain periodogram from spatial-domain signals received at different time indices . The second one is the reconstruction of the frequency-domain periodogram from time-domain signals received at different wireless sensors . We split the entire angular or frequency band into uniform bins. The bin size is set such that the received spectra at two frequencies or angles, whose distance is equal to or larger than the size of a bin, are uncorrelated. These problems in the two different domains lead to a similar circulant structure in the so-called coset correlation matrix . This circulant structure allows for a strong compression and a simple least-squares reconstruction method. The latter is possible under the full column rank condition of the system matrix, which can be achieved by designing the spatial or temporal sampling patterns based on a circular sparse ruler. We analyze the statistical performance of the compressively reconstructed periodogram including bias and variance . We further consider the case when the bins are so small that the received spectra at two frequencies or angles, with a spacing between them larger than the size of the bin, can still be correlated. In this case, the resulting coset correlation matrix is generally not circulant and thus a special approach is required.
[ { "type": "R", "before": "We examine", "after": "In this paper, two problems that show great similarities are examined. The first problem is", "start_char_pos": 0, "end_char_pos": 10 }, { "type": "R", "before": "and that", "after": ". The second one is the reconstruction", "start_char_pos": 127, "end_char_pos": 135 }, { "type": "D", "before": ", two problems that show great similarities", "after": null, "start_char_pos": 236, "end_char_pos": 279 }, { "type": "R", "before": "equal-size binsand set the bin size", "after": "uniform bins. The bin size is set", "start_char_pos": 333, "end_char_pos": 368 }, { "type": "R", "before": ", which", "after": ". This circulant structure", "start_char_pos": 633, "end_char_pos": 640 }, { "type": "D", "before": "evaluate the coset correlation matrix estimation and", "after": null, "start_char_pos": 919, "end_char_pos": 971 }, { "type": "R", "before": ", which includes a", "after": "including", "start_char_pos": 1055, "end_char_pos": 1073 }, { "type": "R", "before": "analysis. We then", "after": ". We further", "start_char_pos": 1092, "end_char_pos": 1109 }, { "type": "R", "before": "size of the bin is decreased such", "after": "bins are so small", "start_char_pos": 1137, "end_char_pos": 1170 } ]
[ 0, 281, 508, 722, 915, 1101, 1312 ]
1407.4374
1
Target identification aims at identifying biomolecules whose the function should be therapeutically modified to cure the considered pathology. An algorithm for in silico target identification using boolean network attractors is proposed. It assumes that the attractors of a boolean network correspond to the phenotypes produced by the modeled biological network. It identifies target combinations which allow disturbed biological networks to avoid attractors responsible for pathological phenotypes. The algorithm is tested on a boolean model of the mammalian cell cycle where the retinoblastoma protein is inactivated, as seen in diseases such as cancer. It returns target combinations able to remove attractors responsible for pathological phenotypes. The results show that the algorithm succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice. Nevertheless, it is expected that the algorithm is of interest for target identification.
Target identification aims at identifying biomolecules whose function should be therapeutically modified to cure the considered pathology. An algorithm for in silico target identification using boolean network attractors is proposed. It assumes that boolean network attractors correspond to the phenotypes produced by the modeled biological network. It identifies target combinations which allow disturbed biological networks to avoid attractors responsible for pathological phenotypes. The algorithm is tested on a boolean model of the mammalian cell cycle where retinoblastoma protein is inactivated, as seen in diseases such as cancer. It returns target combinations able to remove attractors responsible for pathological phenotypes. Results show that the algorithm succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice. Nevertheless, it is expected that the algorithm is of interest for target identification.
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 61, "end_char_pos": 64 }, { "type": "R", "before": "the attractors of a boolean network", "after": "boolean network attractors", "start_char_pos": 254, "end_char_pos": 289 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 577, "end_char_pos": 580 }, { "type": "R", "before": "The results", "after": "Results", "start_char_pos": 754, "end_char_pos": 765 } ]
[ 0, 142, 237, 362, 499, 655, 753, 858, 955 ]
1407.4374
2
Target identification aims at identifying biomolecules whose function should be therapeutically modified to cure the considered pathology. An algorithm for in silico target identification using boolean network attractors is proposed. It assumes that boolean network attractors correspond to the phenotypes produced by the modeled biological network. It identifies target combinations which allow disturbed biological networks to avoid attractors responsible for pathological phenotypes. The algorithm is tested on a boolean model of the mammalian cell cycle where retinoblastoma protein is inactivated, as seen in diseases such as cancer. It returns target combinations able to remove attractors responsible for pathological phenotypes . Results show that the algorithm succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice. Nevertheless, it is expected that the algorithm is of interest for target identification.
Target identification aims at identifying biomolecules whose function should be therapeutically altered to cure the considered pathology. An algorithm for in silico target identification using boolean network attractors is proposed. It assumes that attractors correspond to phenotypes produced by the modeled biological network. It identifies target combinations which allow disturbed networks to avoid attractors associated with pathological phenotypes. The algorithm is tested on a boolean model of the mammalian cell cycle and its applications are illustrated on a boolean model of Fanconi anemia. Results show that the algorithm returns target combinations able to remove attractors associated with pathological phenotypes and then succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice. Nevertheless, it is expected that the algorithm is of interest for target identification.
[ { "type": "R", "before": "modified", "after": "altered", "start_char_pos": 96, "end_char_pos": 104 }, { "type": "D", "before": "boolean network", "after": null, "start_char_pos": 250, "end_char_pos": 265 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 291, "end_char_pos": 294 }, { "type": "D", "before": "biological", "after": null, "start_char_pos": 406, "end_char_pos": 416 }, { "type": "R", "before": "responsible for", "after": "associated with", "start_char_pos": 446, "end_char_pos": 461 }, { "type": "R", "before": "where retinoblastoma protein is inactivated, as seen in diseases such as cancer. It", "after": "and its applications are illustrated on a boolean model of Fanconi anemia. Results show that the algorithm", "start_char_pos": 558, "end_char_pos": 641 }, { "type": "R", "before": "responsible for pathological phenotypes . Results show that the algorithm", "after": "associated with pathological phenotypes and then", "start_char_pos": 696, "end_char_pos": 769 } ]
[ 0, 138, 233, 349, 486, 638, 838, 935 ]
1407.4452
1
A unified analytical pricing framework with involvement of the shot noise random process has been introduced and elaborated. Two exactly solvable new models have been developed. The first model has been designed to value options. It is assumed that stock price stochastic dynamics follows a Geometric Shot Noise motion. A new arbitrage-free integro-differential option pricing equation has been found and solved. The put-call parity has been proved and the Greeks have been calculated. Three new Greeks associated with model market parameters have been introduced and evaluated. It has been shown that in diffusion approximation the developed option pricing model incorporates the well-known Black-Scholes equation and its solution. The stochastic dynamic origin of the Black-Scholes volatility has been discussed. To model stochastic dynamics of a short term interest rate, the second model has been introduced and developed based on Langevin type equation with shot noise. It has been found that the model provides affine term structure. A new bond pricing formula has been obtained. It has been shown that in diffusion approximation the developed bond pricing formula goes into the well-known Vasicek solution. The stochastic dynamic origin of the long-term mean and instantaneous volatility of the Vasicek model have been explained. Despite the lack of normality of probability distributions involved, newly elaborated models have the same degree of analytical tractability as the Black--Scholes model and the Vasicek model . It allows to obtain new exact simple formulas to value options and bonds .
A unified analytical pricing framework with involvement of the shot noise random process has been introduced and elaborated. Two exactly solvable new models have been developed. The first model has been designed to value options. It is assumed that asset price stochastic dynamics follows a Geometric Shot Noise motion. A new arbitrage-free integro-differential option pricing equation has been found and solved. The put-call parity has been proved and the Greeks have been calculated. Three additional new Greeks associated with market model parameters have been introduced and evaluated. It has been shown that in diffusion approximation the developed option pricing model incorporates the well-known Black-Scholes equation and its solution. The stochastic dynamic origin of the Black-Scholes volatility has been uncovered. The new option pricing model has been generalized based on asset price dynamics modeled by the superposition of Geometric Brownian motion and Geometric Shot Noise. To model stochastic dynamics of a short term interest rate, the second model has been introduced and developed based on Langevin type equation with shot noise. A new bond pricing formula has been obtained. It has been shown that in diffusion approximation the developed bond pricing formula goes into the well-known Vasicek solution. The stochastic dynamic origin of the long-term mean and instantaneous volatility of the Vasicek model has been uncovered. A generalized bond pricing model has been introduced and developed based on short term interest rate stochastic dynamics modeled by superposition of a standard Wiener process and shot noise. Despite the non-Gaussianity of probability distributions involved, all newly elaborated models have the same degree of analytical tractability as the Black-Scholes model and the Vasicek model .
[ { "type": "R", "before": "stock", "after": "asset", "start_char_pos": 249, "end_char_pos": 254 }, { "type": "A", "before": null, "after": "additional", "start_char_pos": 492, "end_char_pos": 492 }, { "type": "R", "before": "model market", "after": "market model", "start_char_pos": 520, "end_char_pos": 532 }, { "type": "R", "before": "discussed.", "after": "uncovered. The new option pricing model has been generalized based on asset price dynamics modeled by the superposition of Geometric Brownian motion and Geometric Shot Noise.", "start_char_pos": 805, "end_char_pos": 815 }, { "type": "D", "before": "It has been found that the model provides affine term structure.", "after": null, "start_char_pos": 976, "end_char_pos": 1040 }, { "type": "R", "before": "have been explained. Despite the lack of normality of", "after": "has been uncovered. A generalized bond pricing model has been introduced and developed based on short term interest rate stochastic dynamics modeled by superposition of a standard Wiener process and shot noise. Despite the non-Gaussianity of", "start_char_pos": 1317, "end_char_pos": 1370 }, { "type": "A", "before": null, "after": "all", "start_char_pos": 1407, "end_char_pos": 1407 }, { "type": "R", "before": "Black--Scholes", "after": "Black-Scholes", "start_char_pos": 1487, "end_char_pos": 1501 }, { "type": "D", "before": ". It allows to obtain new exact simple formulas to value options and bonds", "after": null, "start_char_pos": 1530, "end_char_pos": 1604 } ]
[ 0, 124, 177, 229, 319, 412, 485, 579, 733, 815, 975, 1040, 1086, 1214, 1337, 1531 ]
1407.4777
1
We consider parametric version of fixed-delay continuous-time Markov chains (or equivalently deterministic and stochastic Petri nets, DSPN) where fixed-delay transitions are specified by parameters, rather than concrete values. Our goal is to synthesize values of these parameters that minimise expected cost of reaching a given set of target states for a given cost function . We show that under mild assumptions, optimal values of parameters can be efficiently approximated using translation to a Markov decision process (MDP) whose actions correspond to discretized values of these parameters . Even though the translation is theoretically efficient, we also provide heuristics that further decreases the size of the MDP depending on properties of the continuous-time model .
We consider parametric version of fixed-delay continuous-time Markov chains (or equivalently deterministic and stochastic Petri nets, DSPN) where fixed-delay transitions are specified by parameters, rather than concrete values. Our goal is to synthesize values of these parameters that , for a given cost function, minimise expected total cost incurred before reaching a given set of target states . We show that under mild assumptions, optimal values of parameters can be effectively approximated using translation to a Markov decision process (MDP) whose actions correspond to discretized values of these parameters .
[ { "type": "R", "before": "minimise expected cost of", "after": ", for a given cost function, minimise expected total cost incurred before", "start_char_pos": 286, "end_char_pos": 311 }, { "type": "D", "before": "for a given cost function", "after": null, "start_char_pos": 350, "end_char_pos": 375 }, { "type": "R", "before": "efficiently", "after": "effectively", "start_char_pos": 451, "end_char_pos": 462 }, { "type": "D", "before": ". Even though the translation is theoretically efficient, we also provide heuristics that further decreases the size of the MDP depending on properties of the continuous-time model", "after": null, "start_char_pos": 596, "end_char_pos": 776 } ]
[ 0, 227, 377 ]
1407.5040
1
Wireless communication is the prerequisite for the highly desired in-situ and real-time monitoring capability in underground environments, including oil reservoirs, groundwater aquifers, volcanos, among others . However, existing wireless communication techniques do not work in such environments due to the harsh transmission medium with very high material absorption and the inaccessible nature of underground environment that requires extremely small device size. Although Magnetic Induction ( MI) communication has been shown to be a promising technique in underground environments, the existing MI system utilizes very large coil antennas, which are not suitable for deployment in underground. In this paper, we propose a metamaterial enhanced magnetic induction communication mechanism that can achieve over meter scale communication range by using millimeter scale coil antennas in the harsh underground environment. An analytical channel model for the new mechanism is developed to explore the fundamentals of metamaterial enhanced MI communication in various underground environments. The effects of important system and environmental factors are quantitatively captured, including the operating frequency, bandwidth, and parameters of metamaterial antennas, as well as permittivity, permeability, and conductivity of underground medium . The theoretical model is validated through the finite element simulation software, COMSOL Multiphysics .
Magnetic Induction (MI) communication technique has shown great potentials in complex and RF-challenging environments, such as underground and underwater, due to its advantage over EM wave-based techniques in penetrating lossy medium . However, the transmission distance of MI techniques is limited since magnetic field attenuates very fast in the near field. To this end, this paper proposes Metamaterial-enhanced Magnetic Induction ( M^2I) communication mechanism, where a MI coil antenna is enclosed by a metamaterial shell that can enhance the magnetic fields around the MI transceivers. As a result, the M^2I communication system can achieve tens of meters communication range by using pocket-sized antennas. In this paper, an analytical channel model is developed to explore the fundamentals of the M^2I mechanism, in the aspects of communication range and channel capacity, and the susceptibility to various hostile and complex environments . The theoretical model is validated through the finite element simulation software, Comsol Multiphysics. Proof-of-concept experiments are also conducted to validate the feasibility of M^2I .
[ { "type": "R", "before": "Wireless communication is the prerequisite for the highly desired in-situ and real-time monitoring capability in underground environments, including oil reservoirs, groundwater aquifers, volcanos, among others", "after": "Magnetic Induction (MI) communication technique has shown great potentials in complex and RF-challenging environments, such as underground and underwater, due to its advantage over EM wave-based techniques in penetrating lossy medium", "start_char_pos": 0, "end_char_pos": 209 }, { "type": "R", "before": "existing wireless communication techniques do not work in such environments due to the harsh transmission medium with very high material absorption and the inaccessible nature of underground environment that requires extremely small device size. Although", "after": "the transmission distance of MI techniques is limited since magnetic field attenuates very fast in the near field. To this end, this paper proposes Metamaterial-enhanced", "start_char_pos": 221, "end_char_pos": 475 }, { "type": "R", "before": "MI) communication has been shown to be a promising technique in underground environments, the existing MI system utilizes very large coil antennas, which are not suitable for deployment in underground. In this paper, we propose a metamaterial enhanced magnetic induction communication mechanism that can achieve over meter scale", "after": "M^2I) communication mechanism, where a MI coil antenna is enclosed by a metamaterial shell that can enhance the magnetic fields around the MI transceivers. As a result, the M^2I communication system can achieve tens of meters", "start_char_pos": 497, "end_char_pos": 825 }, { "type": "R", "before": "millimeter scale coil antennasin the harsh underground environment. An", "after": "pocket-sized antennas. In this paper, an", "start_char_pos": 855, "end_char_pos": 925 }, { "type": "D", "before": "for the new mechanism", "after": null, "start_char_pos": 951, "end_char_pos": 972 }, { "type": "R", "before": "metamaterial enhanced MI communication in various underground environments. The effects of important system and environmental factors are quantitatively captured, including the operating frequency, bandwidth, and parameters of metamaterial antennas, as well as permittivity, permeability, and conductivity of underground medium", "after": "the M^2I mechanism, in the aspects of communication range and channel capacity, and the susceptibility to various hostile and complex environments", "start_char_pos": 1017, "end_char_pos": 1344 }, { "type": "R", "before": "COMSOL Multiphysics", "after": "Comsol Multiphysics. Proof-of-concept experiments are also conducted to validate the feasibility of M^2I", "start_char_pos": 1430, "end_char_pos": 1449 } ]
[ 0, 211, 466, 698, 922, 1092, 1346 ]
1407.6117
1
Progress in cell reprogramming recently has revived Waddington's concept of an epigenetic landscape in the term of a quasi-potential function and cell attractors in a complex dynamical system embodied by the cell's gene regulatory network (GRN) . The quasi-potential of network states have biological significance because the relative stability in a multi-stable dynamical system offers a measure of the effort for the transition between attractors . However, quasi-potential landscapes that have been developed for continuous systems by multiple groups are not suitable for discrete networks , which are generally used to study large networks or ensembles thereof. Here we introduce the relative stability of network states to Boolean networks by using the noise-perturbed Markov matrices. With an ensemble approach of a minimal gene network for pancreas development , we show that Boolean networks with canalized / sign-compatible Boolean functions can capture essential features of cell fate dynamics and allow the estimation of relative stabilities of network states and, hence, of transition barriers. Our Boolean network framework for calculating the relative stabilities and transition rates of network states can be used to quantify the influence of different genes on cell transitions, estimate the time sequence of cell differentiation and facilitate the rational design of cell reprogramming protocols.
Progress in cell type reprogramming has revived the interest in Waddington's concept of the epigenetic landscape. Recently researchers developed the quasi-potential theory to represent the Waddington's landscape. The Quasi-potential U(x), derived from interactions in the gene regulatory network (GRN) of a cell, quantifies the relative stability of network states, which determine the effort required for state transitions in a multi-stable dynamical system . However, quasi-potential landscapes , originally developed for continuous systems , are not suitable for discrete-valued networks which are important tools to study complex systems. In this paper, we provide a framework to quantify the landscape for discrete Boolean networks (BNs). We apply our framework to study pancreas cell differentiation where an ensemble of BN models is considered based on the structure of a minimal GRN for pancreas development . We impose biologically motivated structural constraints (corresponding to specific type of Boolean functions) and dynamical constraints (corresponding to stable attractor states) to limit the space of BN models for pancreas development. In addition, we enforce a novel functional constraint corresponding to the relative ordering of attractor states in BN models to restrict the space of BN models to the biological relevant class. We find that BNs with canalyzing / sign-compatible Boolean functions best capture the dynamics of pancreas cell differentiation. This framework can also determine the genes' influence on cell state transitions, and thus can facilitate the rational design of cell reprogramming protocols.
[ { "type": "R", "before": "reprogramming recently has revived", "after": "type reprogramming has revived the interest in", "start_char_pos": 17, "end_char_pos": 51 }, { "type": "R", "before": "an epigenetic landscapein the term of a", "after": "the epigenetic landscape. Recently researchers developed the", "start_char_pos": 76, "end_char_pos": 115 }, { "type": "R", "before": "function and cell attractors in a complex dynamical system embodied by the cell's", "after": "theory to represent the Waddington's landscape. The Quasi-potential U(x), derived from interactions in the", "start_char_pos": 132, "end_char_pos": 213 }, { "type": "R", "before": ". The quasi-potential of network states have biological significance because", "after": "of a cell, quantifies", "start_char_pos": 244, "end_char_pos": 320 }, { "type": "A", "before": null, "after": "of network states, which determine the effort required for state transitions", "start_char_pos": 344, "end_char_pos": 344 }, { "type": "D", "before": "offers a measure of the effort for the transition between attractors", "after": null, "start_char_pos": 380, "end_char_pos": 448 }, { "type": "R", "before": "that have been", "after": ", originally", "start_char_pos": 487, "end_char_pos": 501 }, { "type": "R", "before": "by multiple groups", "after": ",", "start_char_pos": 535, "end_char_pos": 553 }, { "type": "R", "before": "discrete networks , which are generally used to study large networks or ensembles thereof. Here we introduce the relative stability of network states to Boolean networks by using the noise-perturbed Markov matrices. With an ensemble approach of a minimal gene network", "after": "discrete-valued networks which are important tools to study complex systems. In this paper, we provide a framework to quantify the landscape for discrete Boolean networks (BNs). We apply our framework to study pancreas cell differentiation where an ensemble of BN models is considered based on the structure of a minimal GRN", "start_char_pos": 575, "end_char_pos": 842 }, { "type": "R", "before": ", we show that Boolean networks with canalized", "after": ". We impose biologically motivated structural constraints (corresponding to specific type of Boolean functions) and dynamical constraints (corresponding to stable attractor states) to limit the space of BN models for pancreas development. In addition, we enforce a novel functional constraint corresponding to the relative ordering of attractor states in BN models to restrict the space of BN models to the biological relevant class. We find that BNs with canalyzing", "start_char_pos": 868, "end_char_pos": 914 }, { "type": "R", "before": "sign-compatible Boolean functionscan capture essential features of cell fate dynamics and allow the estimation of relative stabilities of network states and, hence, of transition barriers. Our Boolean network framework for calculating the relative stabilities and transition rates of network states can be used to quantify the influence of different geneson cell transitions, estimate the time sequence of cell differentiation and", "after": "sign-compatible Boolean functions best capture the dynamics of pancreas cell differentiation. This framework can also determine the genes' influence on cell state transitions, and thus can", "start_char_pos": 917, "end_char_pos": 1347 } ]
[ 0, 450, 665, 790, 1105 ]
1407.6860
1
We use probabilistic methods to characterise the optimal exercise region of a swing option with put payoff, n \ge exercise rights and finite maturity . The underlying asset's dynamics is given by a geometric Brownian motion according to the Black & Scholes model. The optimal exercise region of each right (except the last) of the swing option that we consider is characterised in terms of two boundaries which are continuous functions of time and uniquely solve a system of coupled integral equations of Volterra-type. The swing option's price is then provided as the sum of a European part and an early exercise premium depending on the optimal exercise boundaries.
We use probabilistic methods to characterise the optimal exercise region of a swing option with put payoff, n \ge 2 exercise rights and finite maturity , when the underlying asset's dynamics is specified according to the Black & Scholes model. The optimal exercise region of each right (except the last) is described in terms of two boundaries which are continuous functions of time and uniquely solve a system of coupled integral equations of Volterra-type. The swing option's price is then obtained as the sum of a European part and an early exercise premium which depends on the optimal boundaries.
[ { "type": "A", "before": null, "after": "2", "start_char_pos": 114, "end_char_pos": 114 }, { "type": "R", "before": ". The", "after": ", when the", "start_char_pos": 151, "end_char_pos": 156 }, { "type": "R", "before": "given by a geometric Brownian motion", "after": "specified", "start_char_pos": 188, "end_char_pos": 224 }, { "type": "R", "before": "of the swing option that we consider is characterised", "after": "is described", "start_char_pos": 325, "end_char_pos": 378 }, { "type": "R", "before": "provided", "after": "obtained", "start_char_pos": 554, "end_char_pos": 562 }, { "type": "R", "before": "depending", "after": "which depends", "start_char_pos": 623, "end_char_pos": 632 }, { "type": "D", "before": "exercise", "after": null, "start_char_pos": 648, "end_char_pos": 656 } ]
[ 0, 152, 264, 520 ]
1407.6860
2
We use probabilistic methods to characterise the optimal exercise region of a swing option with put payoff , n \ge 2 exercise rights and finite maturity, when the underlying asset's dynamics is specified according to the Black & Scholes model . The optimal exercise region of each right (except the last) is described in terms of two boundaries which are continuous functions of time and uniquely solve a system of coupled integral equations of Volterra-type. The swing option's price is then obtained as the sum of a European part and an early exercise premium which depends on the optimal boundaries .
We use probabilistic methods to characterise time dependent optimal stopping boundaries in a problem of multiple optimal stopping on a finite time horizon. Motivated by financial applications we consider a payoff of immediate stopping of "put" type and the underlying dynamics follows a geometric Brownian motion . The optimal stopping region relative to each optimal stopping time is described in terms of two boundaries which are continuous , monotonic functions of time and uniquely solve a system of coupled integral equations of Volterra-type. Finally we provide a formula for the value function of the problem .
[ { "type": "D", "before": "the optimal exercise region of a swing option with put payoff , n", "after": null, "start_char_pos": 45, "end_char_pos": 110 }, { "type": "D", "before": "2 exercise rights and finite maturity, when the underlying asset's dynamics is specified according to the Black", "after": null, "start_char_pos": 131, "end_char_pos": 242 }, { "type": "R", "before": "Scholes model", "after": "time dependent optimal stopping boundaries in a problem of multiple optimal stopping on a finite time horizon. Motivated by financial applications we consider a payoff of immediate stopping of \"put\" type and the underlying dynamics follows a geometric Brownian motion", "start_char_pos": 243, "end_char_pos": 256 }, { "type": "R", "before": "exercise region of each right (except the last)", "after": "stopping region relative to each optimal stopping time", "start_char_pos": 271, "end_char_pos": 318 }, { "type": "A", "before": null, "after": ", monotonic", "start_char_pos": 380, "end_char_pos": 380 }, { "type": "R", "before": "The swing option's price is then obtained as the sum of a European part and an early exercise premium which depends on the optimal boundaries", "after": "Finally we provide a formula for the value function of the problem", "start_char_pos": 475, "end_char_pos": 616 } ]
[ 0, 258, 474 ]
1407.7198
1
The mechanism of glucose transport across membrane is studied from the point of quantum conformational transition. The structural variations among four kinds of conformations of the transporter GLUT1 (ligand free occluded, outward open, ligand bound occluded , and inward open) are looked as the quantum transition. The comparative studies between mechanisms of uniporter (GLUT1) and symporter (XylE ) are given. The transitional rates are calculated from the fundamental theory. The glucose transport dynamics is proposed. The steady state of the transporter is found and its stability is demonstrated . The glucose translocation rates in two directions and in different steps of the transition are compared. The mean transport time in a cycle is calculated . The non-Arrhenius temperature dependence of the transition rate and the temperature relation for the mean transport time are predicted. It is suggested that the direct measurement of temperature dependence is a useful tool for deeply understanding the transmembrane transport mechanism.
After a brief review of the protein folding quantum theory and a short discussion on its experimental evidences the mechanism of glucose transport across membrane is studied from the point of quantum conformational transition. The structural variations among four kinds of conformations of the human glucose transporter GLUT1 (ligand free occluded, outward open, ligand bound occluded and inward open) are looked as the quantum transition. The comparative studies between mechanisms of uniporter (GLUT1) and symporter (XylE and GlcP ) are given. The transitional rates are calculated from the fundamental theory. The monosaccharide transport kinetics is proposed. The steady state of the transporter is found and its stability is studied . The glucose (xylose) translocation rates in two directions and in different steps are compared. The mean transport time in a cycle is calculated and based on it the comparison of the transport times between GLUT1,GlcP and XylE can be drawn . The non-Arrhenius temperature dependence of the transition rate and the mean transport time is predicted. It is suggested that the direct measurement of temperature dependence is a useful tool for deeply understanding the transmembrane transport mechanism.
[ { "type": "R", "before": "The", "after": "After a brief review of the protein folding quantum theory and a short discussion on its experimental evidences the", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "A", "before": null, "after": "human glucose", "start_char_pos": 182, "end_char_pos": 182 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 260, "end_char_pos": 261 }, { "type": "A", "before": null, "after": "and GlcP", "start_char_pos": 401, "end_char_pos": 401 }, { "type": "R", "before": "glucose transport dynamics", "after": "monosaccharide transport kinetics", "start_char_pos": 486, "end_char_pos": 512 }, { "type": "R", "before": "demonstrated", "after": "studied", "start_char_pos": 592, "end_char_pos": 604 }, { "type": "A", "before": null, "after": "(xylose)", "start_char_pos": 619, "end_char_pos": 619 }, { "type": "D", "before": "of the transition", "after": null, "start_char_pos": 681, "end_char_pos": 698 }, { "type": "A", "before": null, "after": "and based on it the comparison of the transport times between GLUT1,GlcP and XylE can be drawn", "start_char_pos": 762, "end_char_pos": 762 }, { "type": "D", "before": "temperature relation for the", "after": null, "start_char_pos": 837, "end_char_pos": 865 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 886, "end_char_pos": 889 } ]
[ 0, 114, 316, 414, 481, 525, 606, 712, 764, 900 ]