Dataset fields (each record below gives these six fields in this order, one value per line):

doc_id           string, 2 to 10 characters
revision_depth   string, 5 distinct values
before_revision  string, 3 to 309k characters
after_revision   string, 5 to 309k characters
edit_actions     list
sents_char_pos   list
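For orientation, here is a minimal sketch of how one record could be represented in Python. The field names and the example values in the comments are taken from the listing above and the rows below; the concrete types (plain lists and dicts, string-valued revision depth) are assumptions about the serialization, not a documented API.

```python
from dataclasses import dataclass, field

@dataclass
class RevisionRecord:
    """One row of the dataset, following the field listing above."""
    doc_id: str           # e.g. "1304.4525"
    revision_depth: str   # small integer stored as a string; 5 distinct values occur
    before_revision: str  # abstract text before the revision
    after_revision: str   # abstract text after the revision
    edit_actions: list = field(default_factory=list)    # span edits, see the sketch after the first record
    sents_char_pos: list = field(default_factory=list)  # sentence-boundary character offsets (apparently into the before text)
```

The same shape applies however the rows are loaded, whether from JSON lines, a dataframe, or any other container.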
1304.4525
1
Implementing large-scale information and communication technology (IT) projects carries large risks and easily might disrupt operations, waste taxpayers' money, and create negative publicity. Because of the high risks it is important that government leaders manage the attendant risks. We analysed the based on a sample of 1,355 public0sector IT projects. The sample included large-scale projects, on average the actual expenditure was 130 million and the average duration was 35 months. Our findings showed that the typical project had no cost overruns and took on average 24\% longer than initially expected. However, comparing the risk distribution with the normative model of a thin-tailed distribution, projects' actual costs should fall within -30\% and +25\% of the budget in nearly 99 out of 100 projects. The data showed, however, that a staggering 18\% of all projects are outliers with cost overruns >25\%. Tests showed that the risk of outliers is even higher for standard software (24\%) as well as in certain project types, e.g., data management (41\%), office management (23\%), eGovernment (21\%) and management information systems (20\%). Analysis showed also that projects duration adds risk: every additional year of project duration increases the average cost risk by 4.2 percentage points. Lastly, we suggest four solutions that URLanization can take: (1) benchmark URLanization to know where you are, (2) de-bias your IT project decision-making, (3) reduce the complexities of your IT projects, and (4) develop Masterbuilders to learn from the best in the field.
Implementing large-scale information and communication technology (IT) projects carries large risks and easily might disrupt operations, waste taxpayers' money, and create negative publicity. Because of the high risks it is important that government leaders manage the attendant risks. We analysed a sample of 1,355 public sector IT projects. The sample included large-scale projects, on average the actual expenditure was 130 million and the average duration was 35 months. Our findings showed that the typical project had no cost overruns and took on average 24\% longer than initially expected. However, comparing the risk distribution with the normative model of a thin-tailed distribution, projects' actual costs should fall within -30\% and +25\% of the budget in nearly 99 out of 100 projects. The data showed, however, that a staggering 18\% of all projects are outliers with cost overruns >25\%. Tests showed that the risk of outliers is even higher for standard software (24\%) as well as in certain project types, e.g., data management (41\%), office management (23\%), eGovernment (21\%) and management information systems (20\%). Analysis showed also that projects duration adds risk: every additional year of project duration increases the average cost risk by 4.2 percentage points. Lastly, we suggest four solutions that public URLanization can take: (1) benchmark URLanization to know where you are, (2) de-bias your IT project decision-making, (3) reduce the complexities of your IT projects, and (4) develop Masterbuilders to learn from the best in the field.
[ { "type": "D", "before": "the based on", "after": null, "start_char_pos": 298, "end_char_pos": 310 }, { "type": "R", "before": "public0sector", "after": "public sector", "start_char_pos": 329, "end_char_pos": 342 }, { "type": "A", "before": null, "after": "public", "start_char_pos": 1350, "end_char_pos": 1350 } ]
[ 0, 191, 285, 355, 487, 610, 813, 917, 1155, 1310 ]
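The edit_actions value just above shows the span-edit encoding: each action has a type ("D" delete, "R" replace, "A" add), the affected text, and character offsets into before_revision. As an illustration only, the sketch below replays such actions to approximate after_revision; applying them from the highest offset downwards keeps the earlier offsets valid while the string changes. How the dataset treats whitespace around deleted or inserted spans is not specified here, so the reconstruction is assumed to match the stored after_revision only up to spacing.

```python
def apply_edit_actions(before: str, edit_actions: list) -> str:
    """Replay span edits of the form shown above on a before_revision text.

    Each action is a dict with "type" ("D" = delete, "R" = replace,
    "A" = add), optional "before"/"after" strings, and the character
    span [start_char_pos, end_char_pos) it refers to in `before`.
    """
    text = before
    # Work from the last span backwards so earlier offsets stay valid
    # while the string is being rewritten.
    for action in sorted(edit_actions, key=lambda a: a["start_char_pos"], reverse=True):
        start, end = action["start_char_pos"], action["end_char_pos"]
        replacement = action["after"] or ""  # "after" is None for deletions
        text = text[:start] + replacement + text[end:]
    return text
```

For the first record above (doc_id 1304.4525) this deletes "the based on", rewrites "public0sector" as "public sector", and inserts "public" at offset 1350, which, up to spacing, is exactly the difference between the two texts shown.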
1304.4676
1
Neurons are thought of as the building blocks of excitable brain tissue. However, at the single neuron level, the neuronal membrane, the dendritic arbor and the axonal projections can also be considered an extended active medium. Active dendritic branchlets enable the propagation of dendritic spikes, whose computational functions, despite several proposals, remain an open question. Here we propose a concrete function to the active channels in large dendritic trees. By using a probabilistic cellular automaton approach, we model the input-output response of large active dendritic arbors subjected to complex spatio-temporal inputs , and exhibiting non-stereotyped dendritic spikes. We find that, if dendritic spikes have a non-deterministic duration, the dendritic arbor can undergo a continuous phase transition from a quiescent to an active state, thereby exhibiting spontaneous and self-sustained localized activity as suggested by experiments. Analogously to the critical brain hypothesis, which states that neuronal networks URLanize near a phase transition to take advantage of specific properties of the critical state, here we propose that neurons with large dendritic arbors optimize their capacity to distinguish incoming stimuli at the critical state. We suggest that "computation at the edge of a phase transition" is more compatible with the view that dendritic arbors perform an analog and dynamical rather than a symbolic and digital dendritic computation.
Neurons are thought of as the building blocks of excitable brain tissue. However, at the single neuron level, the neuronal membrane, the dendritic arbor and the axonal projections can also be considered an extended active medium. Active dendritic branchlets enable the propagation of dendritic spikes, whose computational functions, despite several proposals, remain an open question. Here we propose a concrete function to the active channels in large dendritic trees. By using a probabilistic cellular automaton approach, we model the input-output response of large active dendritic arbors subjected to complex spatio-temporal inputs and exhibiting non-stereotyped dendritic spikes. We find that, if dendritic spikes have a non-deterministic duration, the dendritic arbor can undergo a continuous phase transition from a quiescent to an active state, thereby exhibiting spontaneous and self-sustained localized activity as suggested by experiments. Analogously to the critical brain hypothesis, which states that neuronal networks URLanize near a phase transition to take advantage of specific properties of the critical state, here we propose that neurons with large dendritic arbors optimize their capacity to distinguish incoming stimuli at the critical state. We suggest that "computation at the edge of a phase transition" is more compatible with the view that dendritic arbors perform an analog rather than a digital dendritic computation.
[ { "type": "D", "before": ",", "after": null, "start_char_pos": 636, "end_char_pos": 637 }, { "type": "D", "before": "and dynamical", "after": null, "start_char_pos": 1405, "end_char_pos": 1418 }, { "type": "D", "before": "symbolic and", "after": null, "start_char_pos": 1433, "end_char_pos": 1445 } ]
[ 0, 72, 229, 384, 469, 686, 952, 1267 ]
1304.5040
1
A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: %DIFDELCMD < myenumerate \item %%% The optimal terminal wealth X^*(T) : = X_{\varphi^*}(T) of the classical problem to maximize the expected U-utility of the terminal wealth X_{\varphi}(T) generated by admissible portfolios \varphi(t) ; 0 \leq t \leq T in a market with the risky asset price process modeled as a semimartingale %DIFDELCMD < \item %%% The optimal scenario dQ^*{dP} of the dual problem to minimize the expected V-value of dQ{dP} over a family of equivalent local martingale measures Q . Here V is the convex dual function of the concave function U. In this paper we consider markets modeled by It\^o-L\'evy processes , and in the first part we give a new proof of the above result in this setting, based on the maximum principle in stochastic control theory . An advantage with our approach is that it also gives\emph{ [ an explicit relation between the optimal portfolio \varphi^* and the optimal measure Q^* , in terms of backward stochastic differential equations . In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a relation between them. In particular, we show explicitly how to get from the solution of one of the problems to the solution of the other. We illustrate the results with explicit examples.
A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: %DIFDELCMD < myenumerate \item %%% (i) The optimal terminal wealth X^*(T) : = X_{\varphi^*}(T) of the problem to maximize the expected U-utility of the terminal wealth X_{\varphi}(T) generated by admissible portfolios \varphi(t) , 0 \leq t \leq T in a market with the risky asset price process modeled as a semimartingale %DIFDELCMD < \item %%% ; (ii) The optimal scenario dQ^*{dP} of the dual problem to minimize the expected V-value of dQ{dP} over a family of equivalent local martingale measures Q , where V is the convex conjugate function of the concave function U. In this paper we consider markets modeled by It\^o-L\'evy processes . In the first part we use the maximum principle in stochastic control theory to extend the above relation to a\emph{dynamic relation, valid for all t \in[0,T . We prove in particular that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, 0 \leq t \leq T. In the terminal time case t=T we recover the classical duality connection above. We get moreover an explicit relation between the optimal portfolio \varphi^* and the optimal measure Q^* . We also obtain that the existence of an optimal scenario is equivalent to the replicability of a related T-claim . In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to get from the solution of one of the problems to the other. We illustrate the results with explicit examples.
[ { "type": "A", "before": null, "after": "(i)", "start_char_pos": 160, "end_char_pos": 160 }, { "type": "D", "before": "classical", "after": null, "start_char_pos": 224, "end_char_pos": 233 }, { "type": "R", "before": ";", "after": ",", "start_char_pos": 361, "end_char_pos": 362 }, { "type": "A", "before": null, "after": "; (ii)", "start_char_pos": 477, "end_char_pos": 477 }, { "type": "R", "before": ". Here", "after": ", where", "start_char_pos": 627, "end_char_pos": 633 }, { "type": "R", "before": "dual", "after": "conjugate", "start_char_pos": 650, "end_char_pos": 654 }, { "type": "R", "before": ", and in", "after": ". In", "start_char_pos": 759, "end_char_pos": 767 }, { "type": "R", "before": "give a new proof of the above result in this setting, based on the", "after": "use the", "start_char_pos": 786, "end_char_pos": 852 }, { "type": "R", "before": ". An advantage with our approach is that it also gives", "after": "to extend the above relation to a", "start_char_pos": 900, "end_char_pos": 954 }, { "type": "A", "before": null, "after": "dynamic", "start_char_pos": 960, "end_char_pos": 960 }, { "type": "A", "before": null, "after": "relation, valid for all t \\in", "start_char_pos": 961, "end_char_pos": 961 }, { "type": "A", "before": null, "after": "0,T", "start_char_pos": 962, "end_char_pos": 962 }, { "type": "A", "before": null, "after": ". We prove in particular that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, 0 \\leq t \\leq T. In the terminal time case t=T we recover the classical duality connection above. We get moreover", "start_char_pos": 963, "end_char_pos": 963 }, { "type": "R", "before": ", in terms of backward stochastic differential equations", "after": ". We also obtain that the existence of an optimal scenario is equivalent to the replicability of a related T-claim", "start_char_pos": 1053, "end_char_pos": 1109 }, { "type": "A", "before": null, "after": "similar dynamic", "start_char_pos": 1239, "end_char_pos": 1239 }, { "type": "D", "before": "explicitly", "after": null, "start_char_pos": 1286, "end_char_pos": 1296 }, { "type": "D", "before": "the solution of", "after": null, "start_char_pos": 1352, "end_char_pos": 1367 } ]
[ 0, 362, 690, 901, 1111, 1262, 1378 ]
1304.5337
1
In this paper, we study a parabolic free boundary problem which shows that the solutions of this free boundary problem are increasing functions. Furthermore, we provide a rigorous veri?cation for that the free boundary for this problem is concave. As an application to the American option pricing problem, our results imply that the early exercise boundary of an American call is a strictly decreasing concave function. This result provides a useful information to obtain an asymptotic formula for the early exercise boundary .
In this paper, we study a parabolic free boundary problem which shows that the solutions of this free boundary problem are increasing functions. Furthermore, we provide a rigorous verification for that the free boundary for this problem is convex .
[ { "type": "R", "before": "veri?cation", "after": "verification", "start_char_pos": 180, "end_char_pos": 191 }, { "type": "R", "before": "concave. As an application to the American option pricing problem, our results imply that the early exercise boundary of an American call is a strictly decreasing concave function. This result provides a useful information to obtain an asymptotic formula for the early exercise boundary", "after": "convex", "start_char_pos": 239, "end_char_pos": 525 } ]
[ 0, 144, 247, 419 ]
1304.5337
2
In this paper , we study a parabolic free boundary problem which shows that the solutions of this free boundary problem are increasing functions. Furthermore, we provide a rigorous verification for that the free boundary for this problem is convex .
This paper studies the parabolic free boundary problem arising from pricing American-style put options on an asset whose index follows a time-homogeneous diffusion process. The time-homogeneous diffusion process in this paper includes the geometric Brownian motion process, the CEV process, the mean-reverting Gaussian process or the Vasicek model, and the mean-reverting square root process or the CIR model. The contributions are to provide rigorous proofs of following facts. The value of an American-style put option increases with an increase in the time-to-maturity and decreases with an increase in the underlying asset index. The early exercise boundary is a strictly decreasing convex function of the time-to-maturity under the given conditions .
[ { "type": "R", "before": "In this paper , we study a", "after": "This paper studies the", "start_char_pos": 0, "end_char_pos": 26 }, { "type": "R", "before": "which shows that the solutions of this free boundary problem are increasing functions. Furthermore, we provide a rigorous verification for that the free boundary for this problem is convex", "after": "arising from pricing American-style put options on an asset whose index follows a time-homogeneous diffusion process. The time-homogeneous diffusion process in this paper includes the geometric Brownian motion process, the CEV process, the mean-reverting Gaussian process or the Vasicek model, and the mean-reverting square root process or the CIR model. The contributions are to provide rigorous proofs of following facts. The value of an American-style put option increases with an increase in the time-to-maturity and decreases with an increase in the underlying asset index. The early exercise boundary is a strictly decreasing convex function of the time-to-maturity under the given conditions", "start_char_pos": 59, "end_char_pos": 247 } ]
[ 0, 145 ]
1304.5337
3
This paper studies the parabolic free boundary problem arising from pricing American-style put options on an asset whose index follows a time-homogeneous diffusion process. The time-homogeneous diffusion process in this paper includes the geometric Brownian motion process , the CEV process, the mean-reverting Gaussian process or the Vasicek model, and the mean-reverting square root process or the CIR model. The contributions are to provide rigorous proofs of following facts. The value of an American-style put option increases with an increase in the time-to-maturity and decreases with an increase in the underlying asset index. The early exercise boundary is a strictly decreasing convex functionof the time-to-maturity under the given conditions .
This paper studies the parabolic free boundary problem arising from pricing American-style put options on an asset whose index follows a geometric Brownian motion process . The contribution is to propose a condition for that the early exercise boundary is a convex function .
[ { "type": "D", "before": "time-homogeneous diffusion process. The time-homogeneous diffusion process in this paper includes the", "after": null, "start_char_pos": 137, "end_char_pos": 238 }, { "type": "R", "before": ", the CEV process, the mean-reverting Gaussian process or the Vasicek model, and the mean-reverting square root process or the CIR model. The contributions are to provide rigorous proofs of following facts. The value of an American-style put option increases with an increase in the time-to-maturity and decreases with an increase in the underlying asset index. The", "after": ". The contribution is to propose a condition for that the", "start_char_pos": 273, "end_char_pos": 638 }, { "type": "R", "before": "strictly decreasing convex functionof the time-to-maturity under the given conditions", "after": "convex function", "start_char_pos": 668, "end_char_pos": 753 } ]
[ 0, 172, 410, 479, 634 ]
1304.5380
1
We present a Bayesian framework for estimating the customer equity and the customer lifetime value (CLV) based on the purchasing behaviour deducible from the market surveys. We analyse a consumer survey on mobile phones carried out in Finland in February 2013. The survey data contains consumer given information on the current and previous brand of the phone and the times of the last two purchases. In contrast to personal purchase histories stored in a customer registry of a company, the survey provides information also on the purchase behaviour of the customers of the competitors. The proposed framework systematically takes into account the prior information and the sampling variance of the survey data and by using Bayesian statistics quantifies the uncertainty of the customer equity and CLV estimates by posterior distributions. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous.
We present a Bayesian framework for estimating the customer equity (CE) and the customer lifetime value (CLV) based on the purchasing behavior deducible from the market surveys. As an example on the use of the framework, we analyze a consumer survey on mobile phones carried out in Finland in February 2013. The survey data contains consumer given information on the current and previous brand of the phone and the times of the last two purchases. In contrast to personal purchase histories stored in a customer registry of a company, the survey provides information also on the purchase behavior of the customers of the competitors. The proposed framework systematically takes into account the prior information and the sampling variance of the survey data and by using Bayesian statistics quantifies the uncertainty of the CE and CLV estimates by posterior distributions. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous.
[ { "type": "A", "before": null, "after": "(CE)", "start_char_pos": 67, "end_char_pos": 67 }, { "type": "R", "before": "behaviour", "after": "behavior", "start_char_pos": 130, "end_char_pos": 139 }, { "type": "R", "before": "We analyse", "after": "As an example on the use of the framework, we analyze", "start_char_pos": 175, "end_char_pos": 185 }, { "type": "R", "before": "behaviour", "after": "behavior", "start_char_pos": 542, "end_char_pos": 551 }, { "type": "R", "before": "customer equity", "after": "CE", "start_char_pos": 780, "end_char_pos": 795 } ]
[ 0, 174, 261, 401, 588, 841 ]
1304.5380
2
We present a Bayesian framework for estimating the customer equity (CE ) and the customer lifetime value (CLV ) based on the purchasing behavior deducible from the market surveys . As an example on the use of the framework, we analyze a consumer survey on mobile phones carried out in Finland in February 2013. The survey data contains consumer given information on the current and previous brand of the phone and the times of the last two purchases . In contrast to personal purchase histories stored in a customer registry of a company, the survey provides information also on the purchase behavior of the customers of the competitors. The proposed framework systematically takes into account the prior information and the sampling variance of the survey data and by using Bayesian statistics quantifies the uncertainty of the CE and CLV estimates by posterior distributions. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous .
We present a Bayesian framework for estimating the customer lifetime value (CLV ) and the customer equity (CE ) based on the purchasing behavior deducible from the market surveys on customer purchasing behavior. The proposed framework systematically addresses the challenges faced when the future value of customers is estimated based on survey data. The scarcity of the survey data and the sampling variance are countered by utilizing the prior information and quantifying the uncertainty of the CE and CLV estimates by posterior distributions. Furthermore, information on the purchase behavior of the customers of competitors available in the survey data is integrated to the framework. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous. As an example on the use of the framework, we analyze a consumer survey on mobile phones carried out in Finland in February 2013. The survey data contains consumer given information on the current and previous brand of the phone and the times of the last two purchases .
[ { "type": "R", "before": "equity (CE", "after": "lifetime value (CLV", "start_char_pos": 60, "end_char_pos": 70 }, { "type": "R", "before": "lifetime value (CLV", "after": "equity (CE", "start_char_pos": 90, "end_char_pos": 109 }, { "type": "R", "before": ".", "after": "on customer purchasing behavior. The proposed framework systematically addresses the challenges faced when the future value of customers is estimated based on survey data. The scarcity of the survey data and the sampling variance are countered by utilizing the prior information and quantifying the uncertainty of the CE and CLV estimates by posterior distributions. Furthermore, information on the purchase behavior of the customers of competitors available in the survey data is integrated to the framework. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous.", "start_char_pos": 179, "end_char_pos": 180 }, { "type": "D", "before": ". In contrast to personal purchase histories stored in a customer registry of a company, the survey provides information also on the purchase behavior of the customers of the competitors. The proposed framework systematically takes into account the prior information and the sampling variance of the survey data and by using Bayesian statistics quantifies the uncertainty of the CE and CLV estimates by posterior distributions. The introduced approach is directly applicable in the domains where a customer relationship can be thought to be monogamous", "after": null, "start_char_pos": 450, "end_char_pos": 1001 } ]
[ 0, 180, 310, 451, 637, 877 ]
1304.5404
1
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many disciplines, spanning biochemistry, epidemiology, pharmacology, ecology and social networks . It is now well-established that, for small population sizes, stochastic models for reaction networks are necessary to capture randomness in the interactions. The tools for analyzing them , however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics , and determining moment bounds for the underlying stochastic process . Theoretical and computational solutions for these problems are obtained by utilizing a blend of ideas and techniques from probability theory, linear algebra , polynomial analysis and optimization theory. We demonstrate that stability properties of a wide class of networks can be assessed from theoretical results that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species , but worst-case quadratic . We illustrate the validity, the efficiency and the universality of our results on several reaction networks arising in fields such as biochemistry, epidemiology and ecology .
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology . It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models , however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics . We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that stability properties of a wide class of biological networks can be assessed from theoretical results that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species . We illustrate the validity, the efficiency and the universality of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology . The biological implications of the results as well as an example of a non-ergodic biological network are also discussed .
[ { "type": "R", "before": "disciplines, spanning biochemistry, epidemiology, pharmacology, ecology and social networks", "after": "biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology", "start_char_pos": 175, "end_char_pos": 266 }, { "type": "A", "before": null, "after": "biochemical", "start_char_pos": 352, "end_char_pos": 352 }, { "type": "R", "before": "them", "after": "such models", "start_char_pos": 452, "end_char_pos": 456 }, { "type": "R", "before": ", and determining moment bounds for the underlying stochastic process . Theoretical and computational solutions for these problems are obtained by utilizing", "after": ". We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on", "start_char_pos": 889, "end_char_pos": 1045 }, { "type": "D", "before": "and techniques", "after": null, "start_char_pos": 1063, "end_char_pos": 1077 }, { "type": "D", "before": ", polynomial analysis", "after": null, "start_char_pos": 1118, "end_char_pos": 1139 }, { "type": "A", "before": null, "after": "biological", "start_char_pos": 1225, "end_char_pos": 1225 }, { "type": "D", "before": ", but worst-case quadratic", "after": null, "start_char_pos": 1468, "end_char_pos": 1494 }, { "type": "R", "before": "fields such as biochemistry,", "after": "biochemistry, systems biology,", "start_char_pos": 1616, "end_char_pos": 1644 }, { "type": "A", "before": null, "after": ". The biological implications of the results as well as an example of a non-ergodic biological network are also discussed", "start_char_pos": 1670, "end_char_pos": 1670 } ]
[ 0, 124, 268, 427, 522, 707, 960, 1164, 1372, 1496 ]
1304.5404
2
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that stability properties of a wide class of biological networks can be assessed from theoretical results that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the universality of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed.
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed.
[ { "type": "A", "before": null, "after": "the", "start_char_pos": 1218, "end_char_pos": 1218 }, { "type": "R", "before": "theoretical results", "after": "our sufficient theoretical conditions", "start_char_pos": 1300, "end_char_pos": 1319 }, { "type": "R", "before": "universality", "after": "wide applicability", "start_char_pos": 1564, "end_char_pos": 1576 } ]
[ 0, 124, 293, 463, 564, 749, 931, 1080, 1197, 1416, 1512, 1688 ]
1304.6957
1
A perturbation framework , called the quasi-stationary analysis (QSA), is developed to analyze metastable behavior in stochastic processes with random internal and external states. The QSA is illustrated with a model of gene expression that displays bistable switching. In this model, the external state represents the number of protein molecules produced by a hypothetical gene. Once produced, a protein is eventually degraded. The internal state represents the activated or unactivated state of the gene; in the activated state the gene produces protein more rapidly than the unactivated state. The gene is activated by a dimer of the protein it produces so that the activation rate depends on the current protein level. This is a well studied model, and several model reductions and diffusion approximation methods are available to analyze its behavior. However, it is unclear if these methods accurately approximate long-time metastable behavior (i.e., mean switching time between metastable states of the bistable system). Diffusion approximations are generally known to fail in this regard . It is shown that a diffusion approximation based on a quasi-steady-state reduction (stochastic averaging), which averages the internal state out and reduces the process to a continuous Markov process for the external state, provides unreliable accuracy. On the other hand, the QSA approximation is consistently accurate .
A perturbation framework is developed to analyze metastable behavior in stochastic processes with random internal and external states. The process is assumed to be under weak noise conditions, and the case where the deterministic limit is bistable is considered. A general analytical approximation is derived for the stationary probability density and the mean switching time between metastable states, which includes the pre exponential factor. The results are illustrated with a model of gene expression that displays bistable switching. In this model, the external state represents the number of protein molecules produced by a hypothetical gene. Once produced, a protein is eventually degraded. The internal state represents the activated or unactivated state of the gene; in the activated state the gene produces protein more rapidly than the unactivated state. The gene is activated by a dimer of the protein it produces so that the activation rate depends on the current protein level. This is a well studied model, and several model reductions and diffusion approximation methods are available to analyze its behavior. However, it is unclear if these methods accurately approximate long-time metastable behavior (i.e., mean switching time between metastable states of the bistable system). Diffusion approximations are generally known to fail in this regard .
[ { "type": "D", "before": ", called the quasi-stationary analysis (QSA),", "after": null, "start_char_pos": 25, "end_char_pos": 70 }, { "type": "R", "before": "QSA is", "after": "process is assumed to be under weak noise conditions, and the case where the deterministic limit is bistable is considered. A general analytical approximation is derived for the stationary probability density and the mean switching time between metastable states, which includes the pre exponential factor. The results are", "start_char_pos": 185, "end_char_pos": 191 }, { "type": "D", "before": ". It is shown that a diffusion approximation based on a quasi-steady-state reduction (stochastic averaging), which averages the internal state out and reduces the process to a continuous Markov process for the external state, provides unreliable accuracy. On the other hand, the QSA approximation is consistently accurate", "after": null, "start_char_pos": 1096, "end_char_pos": 1417 } ]
[ 0, 180, 269, 379, 428, 506, 596, 722, 856, 1027, 1097, 1351 ]
1304.7533
1
We show how Quantitative Structuring incorporates traditional ideas of product design while supporting a more accurate expression of clients' views on the market. We briefly touch upon adjacent topics regarding the safety of financial products and the role of pricing models in product design.
Quantitative Structuring is a rigorous framework for the design of financial products. We show how it incorporates traditional investment ideas while supporting a more accurate expression of clients' views on the market. We briefly touch upon adjacent topics regarding the safety of financial derivatives and the role of pricing models in product design.
[ { "type": "A", "before": null, "after": "Quantitative Structuring is a rigorous framework for the design of financial products.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "Quantitative Structuring incorporates traditional ideas of product design", "after": "it incorporates traditional investment ideas", "start_char_pos": 13, "end_char_pos": 86 }, { "type": "R", "before": "products", "after": "derivatives", "start_char_pos": 236, "end_char_pos": 244 } ]
[ 0, 163 ]
1304.7533
2
Quantitative Structuring is a rigorous framework for the design of financial products. We show how it incorporates traditional investment ideas while supporting a more accurate expression of clients' views on the market. We briefly touch upon adjacent topics regarding the safety of financial derivatives and the role of pricing models in product design.
Quantitative structuring is a rigorous framework for the design of financial products. We show how it incorporates traditional investment ideas while supporting a more accurate expression of clients' views . We touch upon adjacent topics regarding the safety of financial derivatives and the role of pricing models in product design.
[ { "type": "R", "before": "Structuring", "after": "structuring", "start_char_pos": 13, "end_char_pos": 24 }, { "type": "R", "before": "on the market. We briefly", "after": ". We", "start_char_pos": 206, "end_char_pos": 231 } ]
[ 0, 86, 220 ]
1304.7535
1
We reinforce the foundations of our quantitative approach to structuring by considering a large class of rational investors. Again, the Bayesian laws of information processing provide us with simple yet powerful tools . Structuring of investment derivatives is summarized as a manufacturing process . This allows for performance, quality and safety to be built into the product at the level of individual production stages -- just as it is done in the established manufacturing industries .
We present a theory of product design covering a large class of investors. Bayesian laws of information processing provide the logical foundation and lead to a simple structuring tool -- the payoff elasticity equation . Structuring of investment derivatives is summarized as a manufacturing process .
[ { "type": "R", "before": "reinforce the foundations of our quantitative approach to structuring by considering", "after": "present a theory of product design covering", "start_char_pos": 3, "end_char_pos": 87 }, { "type": "R", "before": "rational investors. Again, the", "after": "investors.", "start_char_pos": 105, "end_char_pos": 135 }, { "type": "R", "before": "us with simple yet powerful tools", "after": "the logical foundation and lead to a simple structuring tool -- the payoff elasticity equation", "start_char_pos": 184, "end_char_pos": 217 }, { "type": "D", "before": ". This allows for performance, quality and safety to be built into the product at the level of individual production stages -- just as it is done in the established manufacturing industries", "after": null, "start_char_pos": 299, "end_char_pos": 488 } ]
[ 0, 124, 300 ]
1304.7535
2
We present a theory of product design covering a large class of investors. Bayesian laws of information processing provide the logical foundation and lead to a simple structuring tool -- the payoff elasticity equation. Structuring of investment derivatives is summarized as a manufacturing process .
Financial derivatives have often been compared to instruments of gambling. It turns out that many naive ways of making them do indeed lead to behavior which is mathematically equivalent to gambling. Fortunately, this inadvertent effect can be understood and prevented. We present a theory of product design which allows us to do that .
[ { "type": "A", "before": null, "after": "Financial derivatives have often been compared to instruments of gambling. It turns out that many naive ways of making them do indeed lead to behavior which is mathematically equivalent to gambling. Fortunately, this inadvertent effect can be understood and prevented.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "covering a large class of investors. Bayesian laws of information processing provide the logical foundation and lead to a simple structuring tool -- the payoff elasticity equation. Structuring of investment derivatives is summarized as a manufacturing process", "after": "which allows us to do that", "start_char_pos": 39, "end_char_pos": 298 } ]
[ 0, 75, 219 ]
1304.7535
3
Financial derivatives have often been compared to instrumentsof gambling . It turns out that many naive ways of making them do indeed lead to behavior which is mathematically equivalent to gambling. Fortunately, this inadvertent effect can be understood and prevented. We present a theory of product design which allows us to do that.
Financial derivatives have often been criticized as casino-style betting instruments . It turns out that many naive ways of making them are indeed equivalent to gambling. Fortunately, this inadvertent effect can be understood and prevented. We present a theory of product design which achieves that.
[ { "type": "R", "before": "compared to instrumentsof gambling", "after": "criticized as casino-style betting instruments", "start_char_pos": 38, "end_char_pos": 72 }, { "type": "R", "before": "do indeed lead to behavior which is mathematically", "after": "are indeed", "start_char_pos": 124, "end_char_pos": 174 }, { "type": "R", "before": "allows us to do", "after": "achieves", "start_char_pos": 313, "end_char_pos": 328 } ]
[ 0, 74, 198, 268 ]
1304.7633
1
Prion diseases are invariably fatal and highly infectious neurodegenerative diseases that affect a wide variety of mammalian species such as sheep, goats, mice, humans, chimpanzees, hamsters, cattle, elks, deers , minks, cats, chicken, pigs, turtles, etc. These neurodegenerative diseases are caused by the conversion from a soluble normal cellular protein into insoluble abnormally folded infectious prions and the conversion is believed to involve conformational change from a predominantly alpha-helical protein to one rich in beta-sheet structure. Such conformational changes may be amenable to study by molecular dynamics techniques. For rabbits, classical studies show they have a low susceptibility to be infected, but in 2012 it was reported that rabbit prion can be generated (though not directly) and the rabbit prion is infectious and transmissible (Proceedings of the National Academy of Sciences USA Volume 109 Issue 13 Pages from 5080 to 5085) . This paper studies the molecular structure of rabbit prion protein wild-type and mutants by molecular dynamics techniques, in order to understand the specific mechanism of rabbit prion protein and rabbit prions.
Prion diseases are invariably fatal and highly infectious neurodegenerative diseases that affect a wide variety of mammalian species such as sheep, goats, mice, humans, chimpanzees, hamsters, cattle, elks, deer , minks, cats, chicken, pigs, turtles, etc. These neurodegenerative diseases are caused by the conversion from a soluble normal cellular protein into insoluble abnormally folded infectious prions and the conversion is believed to involve conformational change from a predominantly alpha-helical protein to one rich in beta-sheet structure. Such conformational changes may be amenable to study by molecular dynamics (MD) techniques. For rabbits, classical studies show they have a low susceptibility to be infected, but in 2012 it was reported that rabbit prion can be generated (though not directly) and the rabbit prion is infectious and transmissible (Proceedings of the National Academy of Sciences USA 109 ( 13 ): 5080-5) . This paper studies the NMR and X-ray molecular structures of rabbit prion protein wild-type and mutants by MD techniques, in order to understand the specific mechanism of rabbit prion protein and rabbit prions.
[ { "type": "R", "before": "deers", "after": "deer", "start_char_pos": 206, "end_char_pos": 211 }, { "type": "A", "before": null, "after": "(MD)", "start_char_pos": 627, "end_char_pos": 627 }, { "type": "D", "before": "Volume", "after": null, "start_char_pos": 914, "end_char_pos": 920 }, { "type": "R", "before": "Issue", "after": "(", "start_char_pos": 925, "end_char_pos": 930 }, { "type": "R", "before": "Pages from 5080 to 5085)", "after": "): 5080-5)", "start_char_pos": 934, "end_char_pos": 958 }, { "type": "R", "before": "molecular structure", "after": "NMR and X-ray molecular structures", "start_char_pos": 984, "end_char_pos": 1003 }, { "type": "R", "before": "molecular dynamics", "after": "MD", "start_char_pos": 1053, "end_char_pos": 1071 } ]
[ 0, 255, 551, 639, 960 ]
1304.7664
1
The lattice-Boltzmann method (LBM) is an algorithm for CFD simulations that has gained popularity due to its ease of implementation and suitability for complex geometries. Its scalability on multicore chips is often limited due to its low computational intensity, leading to interesting characteristics regarding optimal performance and energy to solution on the chip and highly parallel levels. In this paper we perform a thorough analysis of a two-relaxation-time (TRT) model in a sparse lattice representation on the Intel Sandy Bridge processor. Starting from a single-core performance model we can describe the intra-chip saturation characteristics of the implementation and its optimal operating point in terms of energy to solution as a function of the propagation method, the clock frequency, and the SIMD vectorization. We then show if and how these findings may be extrapolated to the massively parallel level on a petascale-class machine, and quantify the energy-saving potential of various optimizations .
Algorithms with low computational intensity show interesting performance and power consumption behavior on multicore processors. We choose the lattice-Boltzmann method (LBM) as a prototype for this scenario in order to show if and how single-chip performance and power characteristics can be generalized to the highly parallel case. LBM is an algorithm for CFD simulations that has gained popularity due to its ease of implementation and suitability for complex geometries. In this paper we perform a thorough analysis of a sparse-lattice LBM implementation on the Intel Sandy Bridge processor. Starting from a single-core performance model we can describe the intra-chip saturation characteristics of the code and its optimal operating point in terms of energy to solution as a function of the propagation method, the clock frequency, and the SIMD vectorization. We then show how these findings may be extrapolated to the massively parallel level on a petascale-class machine, and quantify the energy-saving potential of various optimizations . We find that high single-core performance and a correct choice of the number of cores used on the chip are the essential factors for lowest energy to solution with minimal loss of performance. In the highly parallel case, these guidelines are found to be even more important for fixing the optimal performance-energy operating point, especially when taking the system's baseline power consumption and the MPI communication characteristics into account. Simplistic measures often applied by users and computing centers, such as setting a low clock speed for memory-bound applications, have limited impact .
[ { "type": "R", "before": "The", "after": "Algorithms with low computational intensity show interesting performance and power consumption behavior on multicore processors. We choose the", "start_char_pos": 0, "end_char_pos": 3 }, { "type": "A", "before": null, "after": "as a prototype for this scenario in order to show if and how single-chip performance and power characteristics can be generalized to the highly parallel case. LBM", "start_char_pos": 35, "end_char_pos": 35 }, { "type": "D", "before": "Its scalability on multicore chips is often limited due to its low computational intensity, leading to interesting characteristics regarding optimal performance and energy to solution on the chip and highly parallel levels.", "after": null, "start_char_pos": 173, "end_char_pos": 396 }, { "type": "R", "before": "two-relaxation-time (TRT) model in a sparse lattice representation", "after": "sparse-lattice LBM implementation", "start_char_pos": 447, "end_char_pos": 513 }, { "type": "R", "before": "implementation", "after": "code", "start_char_pos": 662, "end_char_pos": 676 }, { "type": "D", "before": "if and", "after": null, "start_char_pos": 843, "end_char_pos": 849 }, { "type": "A", "before": null, "after": ". We find that high single-core performance and a correct choice of the number of cores used on the chip are the essential factors for lowest energy to solution with minimal loss of performance. In the highly parallel case, these guidelines are found to be even more important for fixing the optimal performance-energy operating point, especially when taking the system's baseline power consumption and the MPI communication characteristics into account. Simplistic measures often applied by users and computing centers, such as setting a low clock speed for memory-bound applications, have limited impact", "start_char_pos": 1017, "end_char_pos": 1017 } ]
[ 0, 172, 396, 550, 829 ]
1304.8077
1
The global dynamics of gene regulatory networks are known to show robustness to perturbations of different kinds: intrinsic and extrinsic noise, as well as mutations of individual genes. One molecular mechanism underlying this robustness has been identified as the action of so-called microRNAs that operate via feedforward loops. We present results of a computational study, using the modeling framework of generalized Boolean networks, which explores the role that such network motifs play in stabilizing global dynamics .
The global dynamics of gene regulatory networks are known to show robustness to perturbations in the form of intrinsic and extrinsic noise, as well as mutations of individual genes. One molecular mechanism underlying this robustness has been identified as the action of so-called microRNAs that operate via feedforward loops. We present results of a computational study, using the modeling framework of stochastic Boolean networks, which explores the role that such network motifs play in stabilizing global dynamics . The paper introduces a new measure for the stability of stochastic networks. The results show that certain types of feedforward loops do indeed buffer the network against stochastic effects .
[ { "type": "R", "before": "of different kinds:", "after": "in the form of", "start_char_pos": 94, "end_char_pos": 113 }, { "type": "R", "before": "generalized", "after": "stochastic", "start_char_pos": 408, "end_char_pos": 419 }, { "type": "A", "before": null, "after": ". The paper introduces a new measure for the stability of stochastic networks. The results show that certain types of feedforward loops do indeed buffer the network against stochastic effects", "start_char_pos": 523, "end_char_pos": 523 } ]
[ 0, 186, 330 ]
1305.0623
1
Circadian oscillation provides selection advantages through synchronization to the daylight cycle. However, a reliable clock must be designed through two conflicting properties: entrainability to properly respond to external stimuli such as sunlight, and regularity to oscillate with a precise period. These two aspects do not easily coexist because better entrainability favors higher sensitivity, which may sacrifice the regularity. To investigate conditions for satisfying the two properties, we analytically calculated the optimal phase-response curve with a variational method. Our result indicates an existence of a dead zone, i.e., a time during which external stimuli neither advance nor delay the clock. This result is independent of model details and a dead zone appears only when the input stimuli obey the time course of actual insolation . Our calculation demonstrates that every circadian clock with a dead zone is optimally adapted to the daylight cycle . Our result also explains the lack of a dead zone in oscillators of mammalian somatic cells, and justifies more effective entrainment by dawn and dusk than by on/off lighting protocols in animals including humans .
Circadian oscillation provides selection advantages through synchronization to the daylight cycle. However, a reliable clock must be designed through two conflicting properties: entrainability to synchronize internal time with periodic stimuli such as sunlight, and regularity to oscillate with a precise period. These two aspects do not easily coexist because better entrainability favors higher sensitivity, which may sacrifice the regularity. To investigate conditions for satisfying the two properties, we analytically calculated the optimal phase-response curve with a variational method. Our result indicates an existence of a dead zone, i.e., a time period during which input stimuli neither advance nor delay the clock. A dead zone appears only when input stimuli obey the time course of actual solar radiation but a simple sine curve cannot yield a dead zone . Our calculation demonstrates that every circadian clock with a dead zone is optimally adapted to the daylight cycle .
[ { "type": "R", "before": "properly respond to external", "after": "synchronize internal time with periodic", "start_char_pos": 196, "end_char_pos": 224 }, { "type": "R", "before": "during which external", "after": "period during which input", "start_char_pos": 646, "end_char_pos": 667 }, { "type": "R", "before": "This result is independent of model details and a", "after": "A", "start_char_pos": 713, "end_char_pos": 762 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 791, "end_char_pos": 794 }, { "type": "R", "before": "insolation", "after": "solar radiation but a simple sine curve cannot yield a dead zone", "start_char_pos": 840, "end_char_pos": 850 }, { "type": "D", "before": ". Our result also explains the lack of a dead zone in oscillators of mammalian somatic cells, and justifies more effective entrainment by dawn and dusk than by on/off lighting protocols in animals including humans", "after": null, "start_char_pos": 969, "end_char_pos": 1182 } ]
[ 0, 98, 301, 434, 582, 712, 852, 970 ]
1305.0954
1
We design, implement and test a simple algorithm which computes the approximate entropy of a finite binary string of arbitrary length. The BiEntropy algorithm evaluates the order and disorder within and across an entire binary string of length n in O(n^2)time using O(n) memory. The algorithm uses a weighted average of the Shannon Entropies of the string and the first n-2 binary derivatives of the string. We successfully test the algorithm in the fields of Human Vision, Cryptography, Random Number Generation and Quantitative Finance.
We design, implement and test a simple algorithm which computes the approximate entropy of a finite binary string of arbitrary length. The algorithm uses a weighted average of the Shannon Entropies of the string and all but the last binary derivative of the string. We successfully test the algorithm in the fields of Prime Number Theory (where we prove explicitly that the sequence of prime numbers is not periodic), Human Vision, Cryptography, Random Number Generation and Quantitative Finance.
[ { "type": "R", "before": "BiEntropy algorithm evaluates the order and disorder within and across an entire binary string of length n in O(n^2)time using O(n) memory. The algorithm", "after": "algorithm", "start_char_pos": 139, "end_char_pos": 292 }, { "type": "R", "before": "the first n-2 binary derivatives", "after": "all but the last binary derivative", "start_char_pos": 360, "end_char_pos": 392 }, { "type": "A", "before": null, "after": "Prime Number Theory (where we prove explicitly that the sequence of prime numbers is not periodic),", "start_char_pos": 460, "end_char_pos": 460 } ]
[ 0, 134, 278, 407 ]
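Editorial note, not part of the record above: the BiEntropy abstract only names the ingredients (Shannon entropies of the string and of its binary derivatives, combined in a weighted average), so a minimal Python sketch of one plausible reading may help. The power-of-two weighting and the helper names are assumptions for illustration, not the paper's definitive formula.

```python
import math

def shannon_entropy(bits):
    """Shannon entropy (base 2) of a 0/1 sequence; 0 for constant sequences."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def binary_derivative(bits):
    """XOR of each pair of adjacent bits; the length shrinks by one."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

def bientropy(bits):
    """Weighted average of the entropies of the string and its first n-2
    binary derivatives (power-of-two weights assumed here for illustration)."""
    entropies, weights = [], []
    s = list(bits)
    for k in range(len(bits) - 1):        # the string itself plus n-2 derivatives
        entropies.append(shannon_entropy(s))
        weights.append(2 ** k)            # assumed weighting scheme
        s = binary_derivative(s)
    return sum(w * e for w, e in zip(weights, entropies)) / sum(weights)

print(bientropy([0, 1, 0, 1, 0, 1, 0, 1]))  # periodic -> low BiEntropy
print(bientropy([0, 1, 1, 0, 1, 0, 0, 1]))  # more disordered -> higher value
```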
1305.1847
1
We use Brownian dynamics simulations to study the permeation properties of a generic electrostatic model of a biological ion channel as a function of the fixed charge Q_f at its selectivity filter. We reconcile the recently-discovered discrete calcium conduction bands M0 (Q_f=1e), M1 (3e), M2 (5e) with the set of sodium conduction bands L0 (0.5-0.7e), L1 (1.5-2e) thereby obtaining a completed pattern of conduction and selectivity bands v Q_f for the sodium-calcium channels family. An increase of Q_f leads to an increase of calcium selectivity: L0 (sodium selective, non-blocking channel) -> M0 (non-selective channel) -> L1 (sodium selective channel with divalent block) -> M1 (calcium selective channel exhibiting the anomalous mole fraction effect). We create a consistent identification scheme where the L1 band is identified with the eukaryotic (DEKA) sodium channel, and L0 (speculatively) with the bacterial NaChBac channel. The scheme created is able to account for the experimentally observed mutation-induced transformations between non-selective channels, sodium-selective channels, and calcium-selective channels, which we interpret as transitions between different rows of the identification table. By considering the potential energy changes during permeation, we show explicitly that the multi-ion conduction bands of calcium and sodium channels arise as the result of resonant barrier-less conduction. Our results confirm the crucial influence of electrostatic interactions on conduction and on the Ca/Na valence selectivity of calcium and sodium ion channels. The model and results could be also applicable to biomimetic nanopores with charged walls.
We use Brownian dynamics simulations to study the permeation properties of a generic electrostatic model of a biological ion channel as a function of the fixed charge Q_f at its selectivity filter. We reconcile the recently-discovered discrete calcium conduction bands M0 (Q_f=1e), M1 (3e), M2 (5e) with the set of sodium conduction bands L0 (0.5-0.7e), L1 (1.5-2e) thereby obtaining a completed pattern of conduction and selectivity bands v Q_f for the sodium-calcium channels family. An increase of Q_f leads to an increase of calcium selectivity: L0 (sodium selective, non-blocking channel) -> M0 (non-selective channel) -> L1 (sodium selective channel with divalent block) -> M1 (calcium selective channel exhibiting the anomalous mole fraction effect). We create a consistent identification scheme where the L0 band is identified with the eukaryotic (DEKA) sodium channel, and L1/L2 (speculatively) with the bacterial NaChBac channel. The scheme created is able to account for the experimentally observed mutation-induced transformations between non-selective channels, sodium-selective channels, and calcium-selective channels, which we interpret as transitions between different rows of the identification table. By considering the potential energy changes during permeation, we show explicitly that the multi-ion conduction bands of calcium and sodium channels arise as the result of resonant barrier-less conduction. Our results confirm the crucial influence of electrostatic interactions on conduction and on the Ca/Na valence selectivity of calcium and sodium ion channels. The model and results could be also applicable to biomimetic nanopores with charged walls.
[ { "type": "R", "before": "L1", "after": "L0", "start_char_pos": 813, "end_char_pos": 815 }, { "type": "R", "before": "L0", "after": "L1/L2", "start_char_pos": 882, "end_char_pos": 884 } ]
[ 0, 197, 485, 757, 936, 1216, 1422, 1581 ]
1305.2121
1
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model made up of heterogeneous adaptive agent with non-linear interaction, they provide a prospective unifying paradigm for many scientific disciplines.
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agent with non-linear interaction, they provide a prospective unifying paradigm for many scientific disciplines.
[ { "type": "R", "before": "and", "after": "(El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and", "start_char_pos": 197, "end_char_pos": 200 }, { "type": "A", "before": null, "after": "of competitive resource allocation", "start_char_pos": 543, "end_char_pos": 543 } ]
[ 0, 115, 250, 485 ]
1305.2151
1
This paper contains an overview of results for dynamic risk measures in markets with transaction costs . We provide the main results of four different approaches. We will prove under which assumptions results within these approaches coincide, and how properties like primal and dual representation and time consistency in the different approaches compare to each other.
This paper contains an overview of results for dynamic multivariate risk measures . We provide the main results of four different approaches. We will prove under which assumptions results within these approaches coincide, and how properties like primal and dual representation and time consistency in the different approaches compare to each other.
[ { "type": "R", "before": "risk measures in markets with transaction costs", "after": "multivariate risk measures", "start_char_pos": 55, "end_char_pos": 102 } ]
[ 0, 104, 162 ]
1305.2496
1
Analysis of network dynamics became increasingly important to understand the mechanisms and consequences of changes in biological systemsfrom macromolecules to cells URLanisms. Currently available network dynamics tools are mostly tailored for specific tasks such as calculation of molecular or neural dynamics. Our Turbinesoftware offers a generic framework enabling the simulation of any algorithmically definable dynamics of any network. Turbine is also optimized for handling very large networks in the range of millions of nodes and edges . Using a perturbation transmission model inspired by communicating vessels, here we introduce a novel centrality measure termed as perturbation centrality. Perturbation centrality is the reciprocal of the time needed to dissipate a starting perturbation in the network. Hubs and inter-modular nodes proved to be highly efficient in perturbation propagation. High perturbation centrality nodes of the Met-tRNA synthetase protein structure network were identified as amino acids involved in substrate binding and allosteric communication by earlier studies. Changes in perturbation centralities of yeast protein-protein interaction network nodes upon various stresses well recapitulated the functional changes of stressed yeast cells. The Turbine software and the perturbation centrality measure provide a large variety of novel options for future studies on network robustness, signalingmechanisms and drug design.
Analysis of network dynamics became a focal point to understand and predict changes of complex systems. Here we introduce Turbine, a generic framework enabling fast simulation of any algorithmically definable dynamics on very large networks . Using a perturbation transmission model inspired by communicating vessels, we define a novel centrality measure : perturbation centrality. Hubs and inter-modular nodes proved to be highly efficient in perturbation propagation. High perturbation centrality nodes of the Met-tRNA synthetase protein structure network were identified as amino acids involved in intra-protein communication by earlier studies. Changes in perturbation centralities of yeast interactome nodes upon various stresses well recapitulated the functional changes of stressed yeast cells. The novelty and usefulness of perturbation centrality was validated in several other model, biological and social networks. The Turbine software and the perturbation centrality measure may provide a large variety of novel options to assess signaling, drug action, environmental and social interventions. The Turbine algorithm is available at: URL
[ { "type": "R", "before": "increasingly important to understand the mechanisms and consequences of changes in biological systemsfrom macromolecules to cells URLanisms. Currently available network dynamics tools are mostly tailored for specific tasks such as calculation of molecular or neural dynamics. Our Turbinesoftware offers", "after": "a focal point to understand and predict changes of complex systems. Here we introduce Turbine,", "start_char_pos": 36, "end_char_pos": 338 }, { "type": "R", "before": "the", "after": "fast", "start_char_pos": 368, "end_char_pos": 371 }, { "type": "R", "before": "of any network. Turbine is also optimized for handling", "after": "on", "start_char_pos": 425, "end_char_pos": 479 }, { "type": "D", "before": "in the range of millions of nodes and edges", "after": null, "start_char_pos": 500, "end_char_pos": 543 }, { "type": "R", "before": "here we introduce", "after": "we define", "start_char_pos": 621, "end_char_pos": 638 }, { "type": "R", "before": "termed as", "after": ":", "start_char_pos": 666, "end_char_pos": 675 }, { "type": "D", "before": "Perturbation centrality is the reciprocal of the time needed to dissipate a starting perturbation in the network.", "after": null, "start_char_pos": 701, "end_char_pos": 814 }, { "type": "R", "before": "substrate binding and allosteric", "after": "intra-protein", "start_char_pos": 1034, "end_char_pos": 1066 }, { "type": "R", "before": "protein-protein interaction network", "after": "interactome", "start_char_pos": 1147, "end_char_pos": 1182 }, { "type": "A", "before": null, "after": "novelty and usefulness of perturbation centrality was validated in several other model, biological and social networks. The", "start_char_pos": 1282, "end_char_pos": 1282 }, { "type": "A", "before": null, "after": "may", "start_char_pos": 1340, "end_char_pos": 1340 }, { "type": "R", "before": "for future studies on network robustness, signalingmechanisms and drug design.", "after": "to assess signaling, drug action, environmental and social interventions. The Turbine algorithm is available at: URL", "start_char_pos": 1382, "end_char_pos": 1460 } ]
[ 0, 176, 311, 440, 700, 814, 902, 1100, 1277 ]
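Editorial note, not part of the record above: the Turbine abstract defines perturbation centrality as the reciprocal of the time a starting perturbation needs to dissipate under a communicating-vessels-like spreading rule. A minimal sketch, assuming a simple conservative diffusion step, a small flow rate for numerical stability, a dissipation threshold of 1e-3 and a toy path graph (all illustrative choices, not the paper's model):

```python
import numpy as np

def perturbation_centrality(adj, node, eps=1e-3, rate=0.25, max_steps=10_000):
    """1 / (number of steps a unit perturbation started at `node` needs to even out).
    Each step, every node passes a fraction of its level difference to each
    neighbour (flow-conserving, communicating-vessels-like toy rule)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    x = np.zeros(n)
    x[node] = 1.0
    for step in range(1, max_steps + 1):
        # flow along each edge, proportional to the level difference
        flow = rate * adj * (x[:, None] - x[None, :]) / np.maximum(deg[:, None], 1)
        x = x - flow.sum(axis=1) + flow.sum(axis=0)
        if np.max(np.abs(x - x.mean())) < eps:     # perturbation dissipated
            return 1.0 / step
    return 1.0 / max_steps

# path graph 0-1-2-3-4: the centre node should dissipate the perturbation fastest
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
print([round(perturbation_centrality(A, v), 4) for v in range(5)])
```

The centre of the path should come out with the highest value, in line with the abstract's observation that hubs and inter-modular nodes propagate perturbations efficiently.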
1305.3243
1
We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous auto-regressive component and a random rescaling factor embodying exogenous influences. Mathematical properties like increments' stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance in terms of obtaining closed formulas for derivative pricing. Further important features are: The possibility of making contact, in certain limits, with auto-regressive models widely used in finance; The possibility of partially resolving the endogenous and exogenous components of the volatility, with consistent results when applied to historical series.
We present and discuss a stochastic model of financial assets dynamics based on the idea of an inverse renormalization group strategy. With this strategy we construct the multivariate distributions of elementary returns based on the scaling with time of the probability density of their aggregates. In its simplest version the model is the product of an endogenous auto-regressive component and a random rescaling factor designed to embody also exogenous influences. Mathematical properties like increments' stationarity and ergodicity can be proven. Thanks to the relatively low number of parameters, model calibration can be conveniently based on a method of moments, as exemplified in the case of historical data of the S&P500 index. The calibrated model accounts very well for many stylized facts, like volatility clustering, power law decay of the volatility autocorrelation function, and multiscaling with time of the aggregated return distribution. In agreement with empirical evidence in finance, the dynamics is not invariant under time reversal and, with suitable generalizations, skewness of the return distribution and leverage effects can be included. The analytical tractability of the model opens interesting perspectives for applications, for instance in terms of obtaining closed formulas for derivative pricing. Further important features are: The possibility of making contact, in certain limits, with auto-regressive models widely used in finance; The possibility of partially resolving the long-memory and short-memory components of the volatility, with consistent results when applied to historical series.
[ { "type": "R", "before": "embodying", "after": "designed to embody also", "start_char_pos": 421, "end_char_pos": 430 }, { "type": "R", "before": "endogenous and exogenous", "after": "long-memory and short-memory", "start_char_pos": 1497, "end_char_pos": 1521 } ]
[ 0, 134, 298, 452, 536, 722, 941, 1150, 1315, 1453 ]
1305.3945
1
In this paper we study how coding in distributed storage reduces download time, in addition to providing reliability against disk failures. The download time is reduced because when a content file is encoded to add redundancy and distributed across multiple disks, reading only a subset of the disks is sufficient to reconstruct the content. For the same total storage used, coding exploits the diversity in storage better than simple replication, and hence gives faster download. We use a novel fork-join queuing framework to model multiple users requesting the content simultaneously, and derive bounds on the expected download time. Our results demonstrate the fundamental trade-off between the download time and the amount of storage space. This trade-off can be used for design of the amount of redundancy required to meet the delay constraints on content delivery.
In this paper we study how coding in distributed storage reduces expected download time, in addition to providing reliability against disk failures. The expected download time is reduced because when a content file is encoded to add redundancy and distributed across multiple disks, reading only a subset of the disks is sufficient to reconstruct the content. For the same total storage used, coding exploits the diversity in storage better than simple replication, and hence gives faster download. We use a novel fork-join queuing framework to model multiple users requesting the content simultaneously, and derive bounds on the expected download time. Our system model and results are a novel generalization of the fork-join system that is studied in queueing theory literature. Our results demonstrate the fundamental trade-off between the expected download time and the amount of storage space. This trade-off can be used for design of the amount of redundancy required to meet the delay constraints on content delivery.
[ { "type": "A", "before": null, "after": "expected", "start_char_pos": 65, "end_char_pos": 65 }, { "type": "A", "before": null, "after": "expected", "start_char_pos": 145, "end_char_pos": 145 }, { "type": "R", "before": "results", "after": "system model and results are a novel generalization of the fork-join system that is studied in queueing theory literature. Our results", "start_char_pos": 642, "end_char_pos": 649 }, { "type": "A", "before": null, "after": "expected", "start_char_pos": 700, "end_char_pos": 700 } ]
[ 0, 140, 343, 482, 637, 747 ]
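Editorial note, not part of the record above: the fork-join abstract rests on the fact that with an (n, k) MDS code a read completes as soon as any k of the n disks respond. A quick Monte Carlo sketch under i.i.d. exponential read times (a common simplifying assumption, not necessarily the paper's exact queueing model) shows the latency gain from waiting for only k completions:

```python
import random

def mean_kth_completion(n, k, rate=1.0, trials=200_000):
    """Monte Carlo estimate of E[time until k of n parallel reads finish],
    with i.i.d. exponential read times (illustrative assumption)."""
    total = 0.0
    for _ in range(trials):
        times = sorted(random.expovariate(rate) for _ in range(n))
        total += times[k - 1]              # k-th order statistic
    return total / trials

# With a (4, 2) MDS code any 2 of the 4 coded blocks reconstruct the file,
# so the read finishes at the 2nd completion instead of the 4th.
print("wait for all 4 reads :", round(mean_kth_completion(4, 4), 3))  # ~ 1 + 1/2 + 1/3 + 1/4
print("wait for any 2 of 4  :", round(mean_kth_completion(4, 2), 3))  # ~ 1/4 + 1/3
```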
1305.4337
1
Understanding biomolecular systems is important both for the analysis of naturally occurring systemsas well as for the design of new ones. However, mathematical tools for analysis of such complex systems are generally lacking. Here, we present an application of the method of sinusoidal-input describing function for the analysis of such a system. Using this technique, we approximate the input-output response of a simple biomolecular signaling system both computationally and analytically. We systematically investigate the dependence of this approximation on system parameters. Finally, we estimate the error involved in this approximation. These results can help in establishing a framework for analysis of biomolecular systems through the use of simplified models .
Mathematical methods provide useful framework for the analysis and design of complex systems. In newer contexts such as biology, however, there is a need to both adapt existing methods as well as to develop new ones. Using a combination of analytical and computational approaches, we adapt and develop the method of describing functions to represent the input-output responses of biomolecular signaling systems. We approximate representative systems exhibiting various saturating and hysteretic dynamics in a way that is better than the standard linearization. Further, we develop analytical upper bounds for the computational error estimates. Finally, we use these error estimates to augment the limit cycle analysis with a simple and quick way to bound the predicted oscillation amplitude. These results provide system approximations that can provide more insight into the local behaviour of these systems, compute responses to other periodic inputs, and to analyze limit cycles .
[ { "type": "R", "before": "Understanding biomolecular systems is important both", "after": "Mathematical methods provide useful framework", "start_char_pos": 0, "end_char_pos": 52 }, { "type": "R", "before": "of naturally occurring systemsas well as for the design of", "after": "and design of complex systems. In newer contexts such as biology, however, there is a need to both adapt existing methods as well as to develop", "start_char_pos": 70, "end_char_pos": 128 }, { "type": "R", "before": "However, mathematical tools for analysis of such complex systems are generally lacking. Here, we present an application of", "after": "Using a combination of analytical and computational approaches, we adapt and develop", "start_char_pos": 139, "end_char_pos": 261 }, { "type": "R", "before": "sinusoidal-input describing function for the analysis of such a system. Using this technique, we approximate the", "after": "describing functions to represent the", "start_char_pos": 276, "end_char_pos": 388 }, { "type": "R", "before": "response of a simple biomolecular signaling system both computationally and analytically. We systematically investigate the dependence of this approximation on system parameters. Finally, we estimate the error involved in this approximation. These results can help in establishing a framework for analysis of biomolecular systems through the use of simplified models", "after": "responses of biomolecular signaling systems. We approximate representative systems exhibiting various saturating and hysteretic dynamics in a way that is better than the standard linearization. Further, we develop analytical upper bounds for the computational error estimates. Finally, we use these error estimates to augment the limit cycle analysis with a simple and quick way to bound the predicted oscillation amplitude. These results provide system approximations that can provide more insight into the local behaviour of these systems, compute responses to other periodic inputs, and to analyze limit cycles", "start_char_pos": 402, "end_char_pos": 768 } ]
[ 0, 138, 226, 347, 491, 580, 643 ]
1305.4337
2
Mathematical methods provide useful framework for the analysis and design of complex systems. In newer contexts such as biology, however, there is a need to both adapt existing methods as well as to develop new ones. Using a combination of analytical and computational approaches, we adapt and develop the method of describing functions to represent the input-output responses of biomolecular signaling systems. We approximate representative systems exhibiting various saturating and hysteretic dynamics in a way that is better than the standard linearization. Further, we develop analytical upper bounds for the computational error estimates. Finally, we use these error estimates to augment the limit cycle analysis with a simple and quick way to bound the predicted oscillation amplitude. These results provide system approximations that can provide more insight into the local behaviour of these systems , compute responses to other periodic inputs, and to analyze limit cycles.
Mathematical methods provide useful framework for the analysis and design of complex systems. In newer contexts such as biology, however, there is a need to both adapt existing methods as well as to develop new ones. Using a combination of analytical and computational approaches, we adapt and develop the method of describing functions to represent the input-output responses of biomolecular signalling systems. We approximate representative systems exhibiting various saturating and hysteretic dynamics in a way that is better than the standard linearization. Further, we develop analytical upper bounds for the computational error estimates. Finally, we use these error estimates to augment the limit cycle analysis with a simple and quick way to bound the predicted oscillation amplitude. These results provide system approximations that can add more insight into the local behaviour of these systems than standard linearization , compute responses to other periodic inputs, and to analyze limit cycles.
[ { "type": "R", "before": "signaling", "after": "signalling", "start_char_pos": 393, "end_char_pos": 402 }, { "type": "R", "before": "provide", "after": "add", "start_char_pos": 845, "end_char_pos": 852 }, { "type": "A", "before": null, "after": "than standard linearization", "start_char_pos": 908, "end_char_pos": 908 } ]
[ 0, 93, 216, 411, 560, 643, 791 ]
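Editorial note, not part of the two records above: the describing-function method they adapt replaces a static nonlinearity by the complex gain of the first harmonic of its response to a sinusoid. A minimal sketch using a textbook hard-saturation nonlinearity, chosen only because its describing function has a closed form to check against; it is not the biomolecular model of the paper:

```python
import numpy as np

def describing_function(nonlinearity, A, n_samples=4096):
    """Sinusoidal-input describing function of a static nonlinearity:
    complex gain seen by the fundamental of an input A*sin(theta)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = nonlinearity(A * np.sin(theta))
    a1 = 2.0 * np.mean(y * np.sin(theta))   # in-phase fundamental coefficient
    b1 = 2.0 * np.mean(y * np.cos(theta))   # quadrature fundamental coefficient
    return (a1 + 1j * b1) / A

def saturation(x, limit=1.0):
    return np.clip(x, -limit, limit)

# numerical result vs. the textbook closed form for unit-slope hard saturation
for A in (1.5, 2.0, 5.0):
    num = describing_function(saturation, A).real
    d = 1.0 / A
    exact = (2.0 / np.pi) * (np.arcsin(d) + d * np.sqrt(1.0 - d * d))
    print(A, round(num, 4), round(exact, 4))
```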
1305.4719
1
The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, a novel third-order approximation for ATM option prices under the CGMY L\'{e}vy model is derived, and extended to a model with an additional independent Brownian component. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of ATM option prices near expiration . In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two district asymptotic regimes depending on whether 1<Y<3/2 or 3/2<Y<2.
The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, a novel third-order approximation for close-to-the-money European option prices under the CGMY L\'{e}vy model is derived, and extended to a model with an additional independent Brownian component. The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches 0, is more relevant in applications since the most liquid options have strikes that close to the spot price. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of option prices near expiration when the strike is close to the spot price . In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two district asymptotic regimes depending on whether 1<Y<3/2 or 3/2<Y<2.
[ { "type": "R", "before": "ATM", "after": "close-to-the-money European", "start_char_pos": 191, "end_char_pos": 194 }, { "type": "A", "before": null, "after": "The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches 0, is more relevant in applications since the most liquid options have strikes that close to the spot price.", "start_char_pos": 326, "end_char_pos": 326 }, { "type": "D", "before": "ATM", "after": null, "start_char_pos": 472, "end_char_pos": 475 }, { "type": "A", "before": null, "after": "when the strike is close to the spot price", "start_char_pos": 506, "end_char_pos": 506 } ]
[ 0, 131, 325, 508 ]
1305.4719
2
The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, a novel third-order approximation for close-to-the-money European option prices under the CGMY L\'{e}vy model is derived, and extended to a model with an additional independent Brownian component. The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches 0, is more relevant in applications since the most liquid options have strikes that close to the spot price. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of option prices near expiration when the strike is close to the spot price. In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two district asymptotic regimes depending on whether 1 <Y< 3/2 or 3/2 <Y<2.
The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, novel third-order approximations for close-to-the-money European option prices under an infinite-variation CGMY L\'{e}vy model are derived, and are then extended to a model with an additional independent Brownian component. The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches zero, is relevant in applications since the most liquid options have strikes that are close to the spot price. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of option prices near expiration when the strike is close to the spot price. In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two distinct asymptotic regimes depending on whether Y\in( 1 , 3/2 ) or Y\in( 3/2 ,2).
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 153, "end_char_pos": 154 }, { "type": "R", "before": "approximation", "after": "approximations", "start_char_pos": 173, "end_char_pos": 186 }, { "type": "R", "before": "the", "after": "an infinite-variation", "start_char_pos": 239, "end_char_pos": 242 }, { "type": "R", "before": "is", "after": "are", "start_char_pos": 263, "end_char_pos": 265 }, { "type": "A", "before": null, "after": "are then", "start_char_pos": 279, "end_char_pos": 279 }, { "type": "R", "before": "0, is more", "after": "zero, is", "start_char_pos": 476, "end_char_pos": 486 }, { "type": "A", "before": null, "after": "are", "start_char_pos": 560, "end_char_pos": 560 }, { "type": "R", "before": "district", "after": "distinct", "start_char_pos": 912, "end_char_pos": 920 }, { "type": "A", "before": null, "after": "Y\\in(", "start_char_pos": 961, "end_char_pos": 961 }, { "type": "R", "before": "<Y<", "after": ",", "start_char_pos": 964, "end_char_pos": 967 }, { "type": "R", "before": "or", "after": ") or Y\\in(", "start_char_pos": 972, "end_char_pos": 974 }, { "type": "R", "before": "<Y<2.", "after": ",2).", "start_char_pos": 979, "end_char_pos": 984 } ]
[ 0, 131, 350, 585, 804 ]
1305.4719
3
The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, novel third-order approximations for close-to-the-money European option prices under an infinite-variation CGMY L\'{e}vy model are derived, and are then extended to a model with an additional independent Brownian component. The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches zero, is relevant in applications since the most liquid options have strikes that are close to the spot price. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of option prices near expiration when the strike is close to the spot price. In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two distinct asymptotic regimes depending on whether Y\in(1,3/2) or Y\in(3/2,2) .
A third-order approximation for close-to-the-money European option prices under an infinite-variation CGMY L\'{e}vy model is derived, and is then extended to a model with an additional independent Brownian component. The asymptotic regime considered, in which the strike is made to converge to the spot stock price as the maturity approaches zero, is relevant in applications since the most liquid options have strikes that are close to the spot price. Our results shed new light on the connection between both the volatility of the continuous component and the jump parameters and the behavior of option prices near expiration when the strike is close to the spot price. In particular, a new type of transition phenomenon is uncovered in which the third order term exhibits two distinct asymptotic regimes depending on whether Y\in(1,3/2) or Y\in(3/2,2) . Unlike second order approximations, the expansions herein are shown to be remarkably accurate so that they can actually be used for calibrating some model parameters. For illustration, we calibrate the volatility \sigma of the Brownian component and the jump intensity C of the CGMY model to actual option prices .
[ { "type": "R", "before": "The short-time asymptotic behavior of option prices for a variety of models with jumps has received much attention in recent years. In the present work, novel", "after": "A", "start_char_pos": 0, "end_char_pos": 158 }, { "type": "R", "before": "approximations", "after": "approximation", "start_char_pos": 171, "end_char_pos": 185 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 280, "end_char_pos": 283 }, { "type": "R", "before": "are", "after": "is", "start_char_pos": 297, "end_char_pos": 300 }, { "type": "A", "before": null, "after": ". Unlike second order approximations, the expansions herein are shown to be remarkably accurate so that they can actually be used for calibrating some model parameters. For illustration, we calibrate the volatility \\sigma of the Brownian component and the jump intensity C of the CGMY model to actual option prices", "start_char_pos": 1015, "end_char_pos": 1015 } ]
[ 0, 131, 376, 612, 831 ]
1305.5068
1
Using the Helmholtz decomposition of the vector field of folding fluxes in a reduced space of collective variables, a potential of the driving force for protein folding is determined . The potential has two components and can be written as a complex function . One component is responsible for the source and sink of the folding flows (representing, respectively, the unfolded states and the native state of the protein ) , and the other accounts for the vorticity of the flow that is produced at the boundaries of the main flow by the contact of the moving folding "fluid" with the quiescent surroundings . The theoretical consideration is illustrated by calculations for a model \beta-hairpin protein.
Using the Helmholtz decomposition of the vector field of folding fluxes in a two-dimensional space of collective variables, a potential of the driving force for protein folding is introduced . The potential has two components . One component is responsible for the source and sink of the folding flows , which represent, respectively, the unfolded states and the native state of the protein , and the other , which accounts for the flow vorticity inherently generated at the periphery of the flow field, is responsible for the canalization of the flow between the source and sink . The theoretical consideration is illustrated by calculations for a model \beta-hairpin protein.
[ { "type": "R", "before": "reduced", "after": "two-dimensional", "start_char_pos": 77, "end_char_pos": 84 }, { "type": "R", "before": "determined", "after": "introduced", "start_char_pos": 172, "end_char_pos": 182 }, { "type": "D", "before": "and can be written as a complex function", "after": null, "start_char_pos": 218, "end_char_pos": 258 }, { "type": "R", "before": "(representing,", "after": ", which represent,", "start_char_pos": 335, "end_char_pos": 349 }, { "type": "D", "before": ")", "after": null, "start_char_pos": 420, "end_char_pos": 421 }, { "type": "A", "before": null, "after": ", which", "start_char_pos": 438, "end_char_pos": 438 }, { "type": "R", "before": "vorticity", "after": "flow vorticity inherently generated at the periphery", "start_char_pos": 456, "end_char_pos": 465 }, { "type": "R", "before": "that is produced at the boundaries of the main flow by the contact of the moving folding \"fluid\" with the quiescent surroundings", "after": "field, is responsible for the canalization of the flow between the source and sink", "start_char_pos": 478, "end_char_pos": 606 } ]
[ 0, 184, 260, 608 ]
1305.5784
1
What is aging? Mechanistic answers to this question remain elusive despite decades of research. Here, we propose a mathematical model of cellular aging based on a model gene interaction network. Our model network is made of only non-aging components - the functionality of gene interactions decrease with a constant mortality rate. Death of a cell occurs in the model when an essential gene loses all of its interactions to other genes, equivalent to the deletion of an essential gene. Interactions among genes are modeled to be inherently stochastic due to limited numbers of functional molecules of gene products . We show that characteristics of biological aging, the exponential increase of mortality rate over time, can arise from this gene network model . Hence, we demonstrate that cellular aging is an emergent property of this model network. Our model predicts that the rate of aging, defined by the Gompertz coefficient, is proportional to the average number of active interactions per gene in the network and that stochastic heterogeneity of gene interactions is an important factor in shaping the dynamics of the aging process. This theoretic framework offers a mechanistic foundation for the pleiotropic nature of aging and can provide insights on interpretation of experimental data .
What is aging? Mechanistic answers to this question remain elusive despite decades of research. Here, we propose a mathematical model of cellular aging based on a model gene interaction network. Our network model is made of only non-aging components - the biological functions of gene interactions decrease with a constant mortality rate. Death of a cell occurs in the model when an essential gene loses all of its interactions to other genes, equivalent to the deletion of an essential gene. Gene interactions are stochastic based on a binomial distribution . We show that the defining characteristic of biological aging, the exponential increase of mortality rate over time, can arise from this gene network model during the early stage of aging . Hence, we demonstrate that cellular aging is an emergent property of this model network. Our model predicts that the rate of aging, defined by the Gompertz coefficient, is approximately proportional to the average number of active interactions per gene and that the stochastic heterogeneity of gene interactions is an important factor in the dynamics of the aging process. This theoretic framework offers a mechanistic foundation for the pleiotropic nature of aging and can provide insights on cellular aging .
[ { "type": "R", "before": "model network", "after": "network model", "start_char_pos": 199, "end_char_pos": 212 }, { "type": "R", "before": "functionality", "after": "biological functions", "start_char_pos": 256, "end_char_pos": 269 }, { "type": "R", "before": "Interactions among genes are modeled to be inherently stochastic due to limited numbers of functional molecules of gene products", "after": "Gene interactions are stochastic based on a binomial distribution", "start_char_pos": 486, "end_char_pos": 614 }, { "type": "R", "before": "characteristics", "after": "the defining characteristic", "start_char_pos": 630, "end_char_pos": 645 }, { "type": "A", "before": null, "after": "during the early stage of aging", "start_char_pos": 760, "end_char_pos": 760 }, { "type": "A", "before": null, "after": "approximately", "start_char_pos": 935, "end_char_pos": 935 }, { "type": "R", "before": "in the network and that", "after": "and that the", "start_char_pos": 1003, "end_char_pos": 1026 }, { "type": "D", "before": "shaping", "after": null, "start_char_pos": 1099, "end_char_pos": 1106 }, { "type": "R", "before": "interpretation of experimental data", "after": "cellular aging", "start_char_pos": 1263, "end_char_pos": 1298 } ]
[ 0, 14, 95, 194, 331, 485, 616, 762, 851, 1141 ]
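Editorial note, not part of the record above: the aging abstract describes a network of non-aging interactions that fail at a constant rate, with cell death when an essential gene has lost all of its interactions. A rough Monte Carlo sketch of that mechanism; the network size, links per gene, failure probability and essential fraction are all invented illustrative parameters:

```python
import numpy as np

def simulate_lifespan(n_genes=500, links_per_gene=5, p_fail=0.005,
                      essential_fraction=0.2, max_age=400):
    """Age at which the first essential gene has lost all of its interactions.
    Interactions fail independently with a constant per-step probability
    (all parameter values are illustrative assumptions)."""
    links = np.full(n_genes, links_per_gene)
    essential = np.random.random(n_genes) < essential_fraction
    for age in range(1, max_age + 1):
        links -= np.random.binomial(links, p_fail)
        if np.any(essential & (links == 0)):
            return age
    return max_age

deaths = np.array([simulate_lifespan() for _ in range(2000)])
print("age   mortality rate")
for t in range(40, 161, 20):
    at_risk = np.sum(deaths >= t)
    dying = np.sum((deaths >= t) & (deaths < t + 20))
    if at_risk:
        print(f"{t:4d}   {dying / at_risk / 20:.4f}")
```

The printed mortality rate should rise with age even though every single interaction fails at a constant rate, which is the emergent Gompertz-like behaviour the abstract refers to.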
1305.5963
1
In this article , we consider European options of type h(X^1_T, X^2_T,\ldots, X^n_T) depending on many underlying assets. We study how such options can be valued in terms of simple vanilla options in different market models. We consider different approaches and derive several pricing formulas for a wide class of functions h:%DIFDELCMD < \R%%% _+^n\rightarrow%DIFDELCMD < \R%%% . We also give multidimensional version of the result of Breeden and Litzenberger Breeden on the relation between derivatives of the call price and the risk-neutral density of the underlying asset .
In this note , we consider European options of type h(X^1_T, X^2_T,\ldots, X^n_T) depending on several underlying assets. We %DIFDELCMD < \R%%% %DIFDELCMD < \R%%% give a multidimensional version of the result of Breeden and Litzenberger Breeden on the relation between derivatives of the call price and the risk-neutral density of the underlying asset . The pricing measure is assumed to be absolutely continuous with respect to the Lebesgue measure on the state space .
[ { "type": "R", "before": "article", "after": "note", "start_char_pos": 8, "end_char_pos": 15 }, { "type": "R", "before": "many", "after": "several", "start_char_pos": 98, "end_char_pos": 102 }, { "type": "D", "before": "study how such options can be valued in terms of simple vanilla options in different market models. We consider different approaches and derive several pricing formulas for a wide class of functions h:", "after": null, "start_char_pos": 125, "end_char_pos": 326 }, { "type": "D", "before": "_+^n\\rightarrow", "after": null, "start_char_pos": 345, "end_char_pos": 360 }, { "type": "R", "before": ". We also give", "after": "give a", "start_char_pos": 379, "end_char_pos": 393 }, { "type": "A", "before": null, "after": ". The pricing measure is assumed to be absolutely continuous with respect to the Lebesgue measure on the state space", "start_char_pos": 576, "end_char_pos": 576 } ]
[ 0, 121, 224 ]
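Editorial note, not part of the record above: the record concerns a multidimensional version of the Breeden-Litzenberger result; the classical one-dimensional relation q(K) = e^{rT} d^2C/dK^2 is easy to verify numerically, here under Black-Scholes where the risk-neutral density is lognormal (the parameter values are arbitrary):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_density(K, S0=100.0, r=0.02, sigma=0.25, T=1.0, h=0.01):
    """Breeden-Litzenberger: q(K) = e^{rT} * d^2C/dK^2 (central difference)."""
    c = lambda k: bs_call(S0, k, r, sigma, T)
    return math.exp(r * T) * (c(K + h) - 2.0 * c(K) + c(K - h)) / h**2

def lognormal_density(K, S0=100.0, r=0.02, sigma=0.25, T=1.0):
    mu = math.log(S0) + (r - 0.5 * sigma**2) * T
    return math.exp(-(math.log(K) - mu) ** 2 / (2 * sigma**2 * T)) / (
        K * sigma * math.sqrt(2 * math.pi * T))

# second derivative of the call price recovers the lognormal risk-neutral density
for K in (80.0, 100.0, 120.0):
    print(K, round(implied_density(K), 6), round(lognormal_density(K), 6))
```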
1305.6797
1
By means of a novel version of the Continuous-Time Random Walk (CTRW) model with memory , we describe, for instance, the stochastic process of a single share price on a double-auction market within the high frequency time scale. The memory present in the model is understood as dependence between successive share price jumps , while waiting times between price changes are considered as i.i.d. random variables. The range of this memory is defined herein by dependence between three successive jumps of the process. This dependence is motivated both empirically, by analysis of empirical two-point histograms , and theoretically, by analysis of the bid-ask bounce mechanism containing some delay . Our model turns out to be analytically solvable, which enables a direct comparison of its predictions with empirical counterparts, for instance, with so significant and commonly used quantity as a velocity autocorrelation function. This work strongly extends the capabilities of the CTRW formalism.
A novel version of the Continuous-Time Random Walk (CTRW) model with memory is developed. This memory means the dependence between arbitrary number of successive jumps of the process , while waiting times between jumps are considered as i.i.d. random variables. The dependence was found by analysis of empirical histograms for the stochastic process of a single share price on a market within the high frequency time scale, and justified theoretically by considering bid-ask bounce mechanism containing some delay characteristic for any double-auction market . Our model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance, with empirical velocity autocorrelation function. Thus this paper significantly extends the capabilities of the CTRW formalism.
[ { "type": "R", "before": "By means of a", "after": "A", "start_char_pos": 0, "end_char_pos": 13 }, { "type": "R", "before": ", we describe, for instance, the stochastic process of a single share price on a double-auction market within the high frequency time scale. The memory present in the model is understood as dependence between successive share price jumps", "after": "is developed. This memory means the dependence between arbitrary number of successive jumps of the process", "start_char_pos": 88, "end_char_pos": 325 }, { "type": "R", "before": "price changes", "after": "jumps", "start_char_pos": 356, "end_char_pos": 369 }, { "type": "R", "before": "range of this memory is defined herein by dependence between three successive jumps of the process. This dependence is motivated both empirically,", "after": "dependence was found", "start_char_pos": 417, "end_char_pos": 563 }, { "type": "R", "before": "two-point histograms , and theoretically, by analysis of the", "after": "histograms for the stochastic process of a single share price on a market within the high frequency time scale, and justified theoretically by considering", "start_char_pos": 589, "end_char_pos": 649 }, { "type": "A", "before": null, "after": "characteristic for any double-auction market", "start_char_pos": 697, "end_char_pos": 697 }, { "type": "A", "before": null, "after": "exactly", "start_char_pos": 726, "end_char_pos": 726 }, { "type": "A", "before": null, "after": "their", "start_char_pos": 808, "end_char_pos": 808 }, { "type": "R", "before": "so significant and commonly used quantity as a", "after": "empirical", "start_char_pos": 852, "end_char_pos": 898 }, { "type": "R", "before": "This work strongly", "after": "Thus this paper significantly", "start_char_pos": 934, "end_char_pos": 952 } ]
[ 0, 228, 412, 516, 699, 933 ]
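Editorial note, not part of the record above: the CTRW abstract builds memory into the signs of successive price jumps, motivated by the bid-ask bounce, while waiting times stay i.i.d. A toy sketch with one-step sign memory only (the paper's dependence spans more than one jump; the reversal probability is an invented parameter):

```python
import random

def simulate_jumps(n, p_reverse=0.7, rate=1.0):
    """CTRW-style sequence: i.i.d. exponential waiting times and +/-1 price
    jumps whose sign reverses with probability p_reverse, a toy stand-in for
    the bid-ask bounce (one-step memory is an illustrative simplification)."""
    waits, jumps = [], []
    last = random.choice((-1, 1))
    for _ in range(n):
        waits.append(random.expovariate(rate))
        last = -last if random.random() < p_reverse else last
        jumps.append(last)
    return waits, jumps

def lag1_autocorr(xs):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

_, jumps = simulate_jumps(100_000)
print(round(lag1_autocorr(jumps), 3))   # ~ 1 - 2*p_reverse = -0.4
```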
1305.6868
1
In this article, we study the problem of pricing defaultable bond with discrete default intensity , discrete default barrier and exogenous default recovery. The motivation is the fact that the investor outside of the firm can know the firm information like default barrier only in some discrete dates such as announcing dates of firm management information and the credit rating of the firm that reflects the default intensity is generally not changed between the two adjacent announcing dates. In our model, the risk free short rate follows a generalized Hull-White model. The default event occurs in an expected or unexpected manner when the firm value reaches a certain lower threshold - the default barrier at predetermined discrete announcing dates or at the first jump time of a Poisson process with given default intensity , respectively . Then our pricing problem is derived to a solving problem of PDE with constant default intensity and terminal value of binary type in every subinterval between the two adjacent announcing dates. Our main method to give the pricing of defaultable bonds is to transform several PDEs for pricing in every subinterval between the two adjacent announcing dates into a pricing problem of a higher order binary option using several changes of variable and unknown function including change of numeraire and then to use the pricing formulae of higher binary options .
In this article, we consider a 2 factors-model for pricing defaultable bond with discrete default intensity and barrier where the 2 factors are stochastic risk free short rate process and firm value process. We assume that the default event occurs in an expected manner when the firm value reaches a given default barrier at predetermined discrete announcing dates or in an unexpected manner at the first jump time of a Poisson process with given default intensity given by a step function of time variable . Then our pricing model is given by a solving problem of several linear PDEs with variable coefficients and terminal value of binary type in every subinterval between the two adjacent announcing dates. Our main approach is to use higher order binaries. We first provide the pricing formulae of higher order binaries with time dependent coefficients and consider their integrals on the last expiry date variable. Then using the pricing formulae of higher binary options and their integrals, we give the pricing formulae of defaultable bonds in both cases of exogenous and endogenous default recoveries and credit spread analysis .
[ { "type": "R", "before": "study the problem of", "after": "consider a 2 factors-model for", "start_char_pos": 20, "end_char_pos": 40 }, { "type": "R", "before": ", discrete default barrier and exogenous default recovery. The motivation is the fact that the investor outside of the firm can know the firm information like default barrier only in some discrete dates such as announcing dates of firm management information and the credit rating of the firm that reflects the default intensity is generally not changed between the two adjacent announcing dates. In our model, the", "after": "and barrier where the 2 factors are stochastic", "start_char_pos": 98, "end_char_pos": 512 }, { "type": "R", "before": "follows a generalized Hull-White model. The", "after": "process and firm value process. We assume that the", "start_char_pos": 534, "end_char_pos": 577 }, { "type": "D", "before": "or unexpected", "after": null, "start_char_pos": 614, "end_char_pos": 627 }, { "type": "R", "before": "certain lower threshold - the", "after": "given", "start_char_pos": 665, "end_char_pos": 694 }, { "type": "A", "before": null, "after": "in an unexpected manner", "start_char_pos": 757, "end_char_pos": 757 }, { "type": "R", "before": ", respectively", "after": "given by a step function of time variable", "start_char_pos": 831, "end_char_pos": 845 }, { "type": "R", "before": "problem is derived to", "after": "model is given by", "start_char_pos": 865, "end_char_pos": 886 }, { "type": "R", "before": "PDE with constant default intensity", "after": "several linear PDEs with variable coefficients", "start_char_pos": 908, "end_char_pos": 943 }, { "type": "R", "before": "method to give the pricing of defaultable bonds is to transform several PDEs for pricing in every subinterval between the two adjacent announcing dates into a pricing problem of a higher order binary option using several changes of variable and unknown function including change of numeraire and then to use the", "after": "approach is to use higher order binaries. We first provide the", "start_char_pos": 1051, "end_char_pos": 1362 }, { "type": "A", "before": null, "after": "formulae of higher order binaries with time dependent coefficients and consider their integrals on the last expiry date variable. Then using the pricing", "start_char_pos": 1371, "end_char_pos": 1371 }, { "type": "A", "before": null, "after": "and their integrals, we give the pricing formulae of defaultable bonds in both cases of exogenous and endogenous default recoveries and credit spread analysis", "start_char_pos": 1406, "end_char_pos": 1406 } ]
[ 0, 156, 494, 573, 847, 1041 ]
1305.6988
1
In this article, we study the problem of pricing defaultable bond with endogenous default recovery under discrete default information using higher order binary oprions and their integrals. In our credit risk model, the risk free short rate is a constant , the default event occurs in an expected when the firm value reaches a certain lower threshold - the default barrier at predetermined discrete announcing dates or unexpected manner at the first jump time of a Poisson process with given default intensity , respectively and default recovery is related to the firm value (endogenous recovery) . Our pricing problem is derived to a solving problem of inhomogeneous Black-Scholes PDEs with different coefficients and terminal value of binary type in every subinterval between the two adjacent announcing dates. In order to deal with the difference of coefficients in subintervals we establish a relation between prices of higher order binaries with different coefficients. In our model, due to the inhomogenous term related to endogenous recovery, our pricing formulae are represented by not only the prices of higher binary options but also the integrals of them. So we consider a special binary option called intergral of i-th binary or nothing and then we obtain the pricing formulae of our defaultable corporate bond by using the pricing formulae of higher binary options and integrals of them.
In this article, we study the problem of pricing defaultable bond with discrete default intensity and barrier under constant risk free short rate using higher order binary oprions and their integrals. In our credit risk model, the risk free short rate is a constant and the default event occurs in an expected manner when the firm value reaches a given default barrier at predetermined discrete announcing dates or in an unexpected manner at the first jump time of a Poisson process with given default intensity given by a step function of time variable, respectively. We consider both endogenous and exogenous default recovery . Our pricing problem is derived to a solving problem of inhomogeneous or homogeneous Black-Scholes PDEs with different coefficients and terminal value of binary type in every subinterval between the two adjacent announcing dates. In order to deal with the difference of coefficients in subintervals we use a relation between prices of higher order binaries with different coefficients. In our model, due to the inhomogenous term related to endogenous recovery, our pricing formulae are represented by not only the prices of higher binary options but also the integrals of them. So we consider a special binary option called intergral of i-th binary or nothing and then we obtain the pricing formulae of our defaultable corporate bond by using the pricing formulae of higher binary options and integrals of them.
[ { "type": "R", "before": "endogenous default recovery under discrete default information", "after": "discrete default intensity and barrier under constant risk free short rate", "start_char_pos": 71, "end_char_pos": 133 }, { "type": "R", "before": ",", "after": "and", "start_char_pos": 254, "end_char_pos": 255 }, { "type": "A", "before": null, "after": "manner", "start_char_pos": 296, "end_char_pos": 296 }, { "type": "R", "before": "certain lower threshold - the", "after": "given", "start_char_pos": 327, "end_char_pos": 356 }, { "type": "A", "before": null, "after": "in an", "start_char_pos": 419, "end_char_pos": 419 }, { "type": "R", "before": ", respectively and default recovery is related to the firm value (endogenous recovery)", "after": "given by a step function of time variable, respectively. We consider both endogenous and exogenous default recovery", "start_char_pos": 511, "end_char_pos": 597 }, { "type": "A", "before": null, "after": "or homogeneous", "start_char_pos": 669, "end_char_pos": 669 }, { "type": "R", "before": "establish", "after": "use", "start_char_pos": 887, "end_char_pos": 896 } ]
[ 0, 188, 599, 814, 976, 1168 ]
1305.6988
2
In this article, we study the problem of pricing defaultable bond with discrete default intensity and barrier under constant risk free short rate using higher order binary oprions and their integrals. In our credit risk model, the risk free short rate is a constant and the default event occurs in an expected manner when the firm value reaches a given default barrier at predetermined discrete announcing dates or in an unexpected manner at the first jump time of a Poisson process with given default intensity given by a step function of time variable, respectively. We consider both endogenous and exogenous default recovery. Our pricing problem is derived to a solving problem of inhomogeneous or homogeneous Black-Scholes PDEs with different coefficients and terminal value of binary type in every subinterval between the two adjacent announcing dates. In order to deal with the difference of coefficients in subintervals we use a relation between prices of higher order binaries with different coefficients. In our model, due to the inhomogenous term related to endogenous recovery, our pricing formulae are represented by not only the prices of higher binary options but also the integrals of them. So we consider a special binary option called intergral of i-th binary or nothing and then we obtain the pricing formulae of our defaultable corporate bond by using the pricing formulae of higher binary options and integrals of them.
In this article, we study the problem of pricing defaultable bond with discrete default intensity and barrier under constant risk free short rate using higher order binary options and their integrals. In our credit risk model, the risk free short rate is a constant and the default event occurs in an expected manner when the firm value reaches a given default barrier at predetermined discrete announcing dates or in an unexpected manner at the first jump time of a Poisson process with given default intensity given by a step function of time variable, respectively. We consider both endogenous and exogenous default recovery. Our pricing problem is derived to a solving problem of inhomogeneous or homogeneous Black-Scholes PDEs with different coefficients and terminal value of binary type in every subinterval between the two adjacent announcing dates. In order to deal with the difference of coefficients in subintervals we use a relation between prices of higher order binaries with different coefficients. In our model, due to the inhomogenous term related to endogenous recovery, our pricing formulae are represented by not only the prices of higher binary options but also the integrals of them. So we consider a special binary option called integral of i-th binary or nothing and then we obtain the pricing formulae of our defaultable corporate bond by using the pricing formulae of higher binary options and integrals of them.
[ { "type": "R", "before": "oprions", "after": "options", "start_char_pos": 172, "end_char_pos": 179 }, { "type": "R", "before": "intergral", "after": "integral", "start_char_pos": 1252, "end_char_pos": 1261 } ]
[ 0, 200, 568, 628, 857, 1013, 1205 ]
1306.0215
1
Two financial networks, namely, cross-border long-term debt and equity securities portfolio investment networks are analysed . They serve as proxies for measuring the interdependence of financial markets and the robustness of the global financial system from 2002 to 2012, covering the 2008 global financial crisis. Focusing on the largest strongly-connected core component of the threshold network, while the edge threshold is set according to the percolation properties of the long-term debt securities network, we identify two early-warning indicators for global financial distress. The spread of certain financial derivative products, such as credit default swaps and equity-linked derivatives, scales with the edge density of the long-term debt securities network . In addition, the algebraic connectivity of the equity securities network, taken as a measure for the robustness of financial markets, drops already sharply well ahead of the 2008 financial crisis .
Cross-border equity and long-term debt securities portfolio investment networks are analysed from 2002 to 2012, covering the 2008 global financial crisis . They serve as network-proxies for measuring the robustness of the global financial system and the interdependence of financial markets, respectively. Two early-warning indicators for financial crises are identified: First, the algebraic connectivity of the equity securities network, as a measure for structural robustness, drops close to zero already in 2005, while there is an over-representation of high-degree off-shore financial centres among the countries most-related to this observation, suggesting an investigation of such nodes with respect to the structural stability of the global financial system. Second, using a phenomenological model, the edge density of the debt securities network is found to describe, and even forecast, the proliferation of several over-the-counter-traded financial derivatives, most prominently credit default swaps, enabling one to detect potentially dangerous levels of market interdependence and systemic risk .
[ { "type": "R", "before": "Two financial networks, namely, cross-border", "after": "Cross-border equity and", "start_char_pos": 0, "end_char_pos": 44 }, { "type": "D", "before": "and equity", "after": null, "start_char_pos": 60, "end_char_pos": 70 }, { "type": "A", "before": null, "after": "from 2002 to 2012, covering the 2008 global financial crisis", "start_char_pos": 125, "end_char_pos": 125 }, { "type": "R", "before": "proxies", "after": "network-proxies", "start_char_pos": 142, "end_char_pos": 149 }, { "type": "D", "before": "interdependence of financial markets and the", "after": null, "start_char_pos": 168, "end_char_pos": 212 }, { "type": "R", "before": "from 2002 to 2012, covering the 2008 global financial crisis. Focusing on the largest strongly-connected core component of the threshold network, while the edge threshold is set according to the percolation properties of the long-term debt securities network, we identify two", "after": "and the interdependence of financial markets, respectively. Two", "start_char_pos": 255, "end_char_pos": 530 }, { "type": "R", "before": "indicators for global financial distress. The spread of certain financial derivative products, such as credit default swaps and equity-linked derivatives, scales with", "after": "indicators for financial crises are identified: First, the algebraic connectivity of the equity securities network, as a measure for structural robustness, drops close to zero already in 2005, while there is an over-representation of high-degree off-shore financial centres among the countries most-related to this observation, suggesting an investigation of such nodes with respect to the structural stability of the global financial system. Second, using a phenomenological model,", "start_char_pos": 545, "end_char_pos": 711 }, { "type": "D", "before": "long-term", "after": null, "start_char_pos": 736, "end_char_pos": 745 }, { "type": "R", "before": ". In addition, the algebraic connectivity of the equity securities network, taken as a measure for the robustness of financial markets, drops already sharply well ahead of", "after": "is found to describe, and even forecast,", "start_char_pos": 770, "end_char_pos": 941 }, { "type": "R", "before": "2008 financial crisis", "after": "proliferation of several over-the-counter-traded financial derivatives, most prominently credit default swaps, enabling one to detect potentially dangerous levels of market interdependence and systemic risk", "start_char_pos": 946, "end_char_pos": 967 } ]
[ 0, 127, 316, 586, 771 ]
1306.0887
1
We question the industry practice of economic scenario generation involving statistically dependent default times. In particular, we investigate under which conditions a single simulation of joint default times at a final time horizon can be decomposed in a set of simulations of joint defaults on subsequent adjacent sub-periods leading to that final horizon. As a reasonable trade-off between realistic stylized facts, practical demands, and mathematical tractability, we propose models leading to a Markovian multi-variate default-indicator process. The well-known "looping default" case is shown to be equipped with this property, to be linked to the classical "Freund distribution" , and to allow for a new construction with immediate multi-variate extensions. If, additionally, all sub-vectors of the default indicator process are also Markovian, this constitutes a new characterization of the Marshall-Olkin distribution, and hence of multi-variate lack-of-memory. A paramount property of the resulting model is stability of the type of multi-variate distribution with respect to elimination or insertion of a new marginal component with marginal distribution from the same family. The practical implications of this "nested margining" property are enormous. To implement this distribution we present an efficient and unbiased simulation algorithm based on the Levy-frailty construction. We highlight different pitfalls in the simulation of dependent default times and examine, within a numerical case study, the effect of inadequate simulation practices.
We investigate under which conditions a single simulation of joint default times at a final time horizon can be decomposed into a set of simulations of joint defaults on subsequent adjacent sub-periods leading to that final horizon. Besides the theoretical interest, this is also a practical problem as part of the industry has been working under the misleading assumption that the two approaches are equivalent for practical purposes. As a reasonable trade-off between realistic stylized facts, practical demands, and mathematical tractability, we propose models leading to a Markovian multi-variate survival--indicator process, and we investigate two instances of static models for the vector of default times from the statistical literature that fall into this class. On the one hand, the "looping default" case is known to be equipped with this property, and we point out that it coincides with the classical "Freund distribution" in the bivariate case. On the other hand, if all sub-vectors of the survival indicator process are Markovian, this constitutes a new characterization of the Marshall--Olkin distribution, and hence of multi-variate lack-of-memory. A paramount property of the resulting model is stability of the type of multi-variate distribution with respect to elimination or insertion of a new marginal component with marginal distribution from the same family. The practical implications of this "nested margining" property are enormous. To implement this distribution we present an efficient and unbiased simulation algorithm based on the L\'evy-frailty construction. We highlight different pitfalls in the simulation of dependent default times and examine, within a numerical case study, the effect of inadequate simulation practices.
[ { "type": "D", "before": "question the industry practice of economic scenario generation involving statistically dependent default times. In particular, we", "after": null, "start_char_pos": 3, "end_char_pos": 132 }, { "type": "R", "before": "in", "after": "into", "start_char_pos": 253, "end_char_pos": 255 }, { "type": "A", "before": null, "after": "Besides the theoretical interest, this is also a practical problem as part of the industry has been working under the misleading assumption that the two approaches are equivalent for practical purposes.", "start_char_pos": 361, "end_char_pos": 361 }, { "type": "R", "before": "default-indicator process. The well-known", "after": "survival--indicator process, and we investigate two instances of static models for the vector of default times from the statistical literature that fall into this class. On the one hand, the", "start_char_pos": 527, "end_char_pos": 568 }, { "type": "R", "before": "shown", "after": "known", "start_char_pos": 595, "end_char_pos": 600 }, { "type": "R", "before": "to be linked to", "after": "and we point out that it coincides with", "start_char_pos": 636, "end_char_pos": 651 }, { "type": "R", "before": ", and to allow for a new construction with immediate multi-variate extensions. If, additionally,", "after": "in the bivariate case. On the other hand, if", "start_char_pos": 688, "end_char_pos": 784 }, { "type": "R", "before": "default", "after": "survival", "start_char_pos": 808, "end_char_pos": 815 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 838, "end_char_pos": 842 }, { "type": "R", "before": "Marshall-Olkin", "after": "Marshall--Olkin", "start_char_pos": 901, "end_char_pos": 915 }, { "type": "R", "before": "Levy-frailty", "after": "L\\'evy-frailty", "start_char_pos": 1369, "end_char_pos": 1381 } ]
[ 0, 114, 360, 553, 766, 972, 1189, 1266, 1395 ]
1306.1390
1
We study kinetic model of Nuclear Receptor Binding to Promoter Regions. This model is for a system of ordinary differential equations. Model reduction techniques have been used to simplify chemical kinetics.In this case study, we apply the technique of pseudo-first order approximation to simplify the reaction rates. CellDesigner has been used to draw the structures of chemical reactions of Nuclear Receptor Binding to Promoter Regions .
We study kinetic model of Nuclear Receptor Binding to Promoter Regions. This model is written as a system of ordinary differential equations. Model reduction techniques have been used to simplify chemical kinetics.In this case study, the technique of Pseudo-first order approximation is applied to simplify the reaction rates. CellDesigner has been used to draw the structures of chemical reactions of Nuclear Receptor Binding to Promoter Regions . After model reduction, the general analytical solution for reduced model is given and the number of species and reactions are reduced from 9 species and 6 reactions to 6 species and 5 reactions .
[ { "type": "R", "before": "for", "after": "written as", "start_char_pos": 86, "end_char_pos": 89 }, { "type": "D", "before": "we apply", "after": null, "start_char_pos": 227, "end_char_pos": 235 }, { "type": "R", "before": "pseudo-first order approximation", "after": "Pseudo-first order approximation is applied", "start_char_pos": 253, "end_char_pos": 285 }, { "type": "A", "before": null, "after": ". After model reduction, the general analytical solution for reduced model is given and the number of species and reactions are reduced from 9 species and 6 reactions to 6 species and 5 reactions", "start_char_pos": 438, "end_char_pos": 438 } ]
[ 0, 71, 134, 207, 317 ]
1306.2412
1
In this short note, a correction is made to the recently proposed solution to a 1D biased diffusion model for linear DNA translocation and a new analysis will be given ] ] ] ] ] ] .
In this short note, a correction is made to the recently proposed solution 1 to a 1D biased diffusion model for linear DNA translocation and a new analysis will be given to the data in 1]. It was pointed out 2] by us recently that this 1D linear translocation model is equivalent to the one that was considered by Schrodinger 3] for the Enrenhaft-Millikan measurements 4,5] on electron charge. Here we apply Schrodinger's first-passage-time distribution formula to the data set in 1]. It is found that Schrodinger's formula can be used to describe the time distribution of DNA translocation in solid-state nanopores. These fittings yield two useful parameters: drift velocity of DNA translocation and diffusion constant of DNA inside the nanopore. The results suggest two regimes of DNA translocation: (I) at low voltages, there are clear deviations from Smoluchowski's linear law of electrophoresis 6] which we attribute to the entropic barrier effects; (II) at high voltages, the translocation velocity is a linear function of the applied electric field. In regime II, the apparent diffusion constant exhibits a quadratic dependence on applied electric field, suggesting a mechanism of Taylor dispersion effect likely due the electro-osmotic flow field in the nanopore channel. This analysis yields a dispersion-free diffusion constant value for the segment of DNA inside the nanopore which is in agreement with Stokes-Einstein theory quantitatively. The implication of Schrodinger's formula for DNA sequencing is discussed .
[ { "type": "A", "before": null, "after": "1", "start_char_pos": 75, "end_char_pos": 75 }, { "type": "A", "before": null, "after": "to the data in", "start_char_pos": 169, "end_char_pos": 169 }, { "type": "A", "before": null, "after": "1", "start_char_pos": 170, "end_char_pos": 170 }, { "type": "A", "before": null, "after": ". It was pointed out", "start_char_pos": 171, "end_char_pos": 171 }, { "type": "A", "before": null, "after": "2", "start_char_pos": 172, "end_char_pos": 172 }, { "type": "A", "before": null, "after": "by us recently that this 1D linear translocation model is equivalent to the one that was considered by Schrodinger", "start_char_pos": 174, "end_char_pos": 174 }, { "type": "A", "before": null, "after": "3", "start_char_pos": 175, "end_char_pos": 175 }, { "type": "A", "before": null, "after": "for the Enrenhaft-Millikan measurements", "start_char_pos": 177, "end_char_pos": 177 }, { "type": "A", "before": null, "after": "4,5", "start_char_pos": 178, "end_char_pos": 178 }, { "type": "A", "before": null, "after": "on electron charge. Here we apply Schrodinger's first-passage-time distribution formula to the data set in", "start_char_pos": 180, "end_char_pos": 180 }, { "type": "A", "before": null, "after": "1", "start_char_pos": 181, "end_char_pos": 181 }, { "type": "A", "before": null, "after": ". It is found that Schrodinger's formula can be used to describe the time distribution of DNA translocation in solid-state nanopores. These fittings yield two useful parameters: drift velocity of DNA translocation and diffusion constant of DNA inside the nanopore. The results suggest two regimes of DNA translocation: (I) at low voltages, there are clear deviations from Smoluchowski's linear law of electrophoresis", "start_char_pos": 182, "end_char_pos": 182 }, { "type": "A", "before": null, "after": "6", "start_char_pos": 183, "end_char_pos": 183 }, { "type": "A", "before": null, "after": "which we attribute to the entropic barrier effects; (II) at high voltages, the translocation velocity is a linear function of the applied electric field. In regime II, the apparent diffusion constant exhibits a quadratic dependence on applied electric field, suggesting a mechanism of Taylor dispersion effect likely due the electro-osmotic flow field in the nanopore channel. This analysis yields a dispersion-free diffusion constant value for the segment of DNA inside the nanopore which is in agreement with Stokes-Einstein theory quantitatively. The implication of Schrodinger's formula for DNA sequencing is discussed", "start_char_pos": 185, "end_char_pos": 185 } ]
[ 0 ]
1306.2719
1
For a given Markov process X and survival function %DIFDELCMD < \ovl %%% H on%DIFDELCMD < \mbb %%% R_+, the%DIFDELCMD < {\em %%% \overline inverse first-passage time problem (IFPT) is to find a barrier function b: %DIFDELCMD < \mbb %%% R_+ \to[-\infty,+\infty] such that the survival function of the first-passage time \tau_b=\inf\{t\ge0: X(t) \leq b(t)\} is given by %DIFDELCMD < \ovl %%% \overline H. In this paper we consider a version of the IFPT problem where the barrier is %DIFDELCMD < {\em %%% fixed at zero and the problem is to find an entrance law \mu and a time-change I such that for the time-changed process X\circ I the IFPT problem is solved by a constant barrier at the level zero. For any L\'{e}vy process X satisfying a Cram\'{e the solution of this problem , which is given in terms of a quasi-invariant distribution of the process X killed at the epoch of first entrance into the negative half-axis. For a given multi-variate survival function %DIFDELCMD < \ovl %%% H of generalised \overline frailty type we construct subsequently an explicit solution to the corresponding IFPT with the barrier level fixed at zero. We apply these results to the valuation of financial contracts that are subject to counterparty credit risk.
For a given Markov process X and survival function %DIFDELCMD < \ovl %%% %DIFDELCMD < \mbb %%% %DIFDELCMD < {\em %%% \overline H on \mathbb R^+, the inverse first-passage time problem (IFPT) is to find a barrier function b: %DIFDELCMD < \mbb %%% \mathbb R^+ \to[-\infty,+\infty] such that the survival function of the first-passage time \tau_b=\inf\{t\ge0: X(t) < b(t)\} is given by %DIFDELCMD < \ovl %%% \overline H. In this paper we consider a version of the IFPT problem where the barrier is %DIFDELCMD < {\em %%% fixed at zero and the problem is to find an initial distribution \mu and a time-change I such that for the time-changed process X\circ I the IFPT problem is solved by a constant barrier at the level zero. For any L\'{e}vy process X satisfying an exponential moment condition, we derive the solution of this problem in terms of \lambda-invariant distributions of the process X killed at the epoch of first entrance into the negative half-axis. We provide an explicit characterization of such distributions, which is a result of independent interest. For a given multi-variate survival function %DIFDELCMD < \ovl %%% \overline H of generalized frailty type we construct subsequently an explicit solution to the corresponding IFPT with the barrier level fixed at zero. We apply these results to the valuation of financial contracts that are subject to counterparty credit risk.
[ { "type": "D", "before": "H on", "after": null, "start_char_pos": 73, "end_char_pos": 77 }, { "type": "D", "before": "R_+, the", "after": null, "start_char_pos": 99, "end_char_pos": 107 }, { "type": "A", "before": null, "after": "H on \\mathbb R^+, the", "start_char_pos": 139, "end_char_pos": 139 }, { "type": "R", "before": "R_+", "after": "\\mathbb R^+", "start_char_pos": 237, "end_char_pos": 240 }, { "type": "R", "before": "\\leq", "after": "<", "start_char_pos": 345, "end_char_pos": 349 }, { "type": "R", "before": "entrance law", "after": "initial distribution", "start_char_pos": 547, "end_char_pos": 559 }, { "type": "R", "before": "a Cram\\'{e", "after": "an exponential moment condition, we derive", "start_char_pos": 738, "end_char_pos": 748 }, { "type": "D", "before": ", which is given", "after": null, "start_char_pos": 778, "end_char_pos": 794 }, { "type": "R", "before": "a quasi-invariant distribution", "after": "\\lambda-invariant distributions", "start_char_pos": 807, "end_char_pos": 837 }, { "type": "A", "before": null, "after": "We provide an explicit characterization of such distributions, which is a result of independent interest.", "start_char_pos": 922, "end_char_pos": 922 }, { "type": "D", "before": "H of generalised", "after": null, "start_char_pos": 989, "end_char_pos": 1005 }, { "type": "A", "before": null, "after": "H of generalized", "start_char_pos": 1016, "end_char_pos": 1016 } ]
[ 0, 214, 699, 921, 1140 ]
1306.2719
2
For a given Markov process X and survival function %DIFDELCMD < \overline %%% H on \mathbb R ^+, the inverse first-passage time problem (IFPT) is to find a barrier function b: \mathbb R ^+\to[-\infty,+\infty] such that the survival function of the first-passage time \tau_b=\inf \{t\ge0:X(t)<b(t)\} is given by %DIFDELCMD < \overline %%% H . In this paper we consider a version of the IFPT problem where the barrier is fixed at zero and the problem is to find an initial distribution \mu and a time-change I such that for the time-changed process X\circ I the IFPT problem is solved by a constant barrier at the level zero. For any L\'{e}vy process X satisfying an exponential moment condition, we derive the solution of this problem in terms of \lambda-invariant distributions of the process X killed at the epoch of first entrance into the negative half-axis. We provide an explicit characterization of such distributions, which is a result of independent interest. For a given multi-variate survival function %DIFDELCMD < \overline %%% H of generalized frailty type we construct subsequently an explicit solution to the corresponding IFPT with the barrier level fixed at zero. We apply these results to the valuation of financial contracts that are subject to counterparty credit risk.
For a given Markov process X and survival function %DIFDELCMD < \overline %%% on \mathbb{R ^+, the inverse first-passage time problem (IFPT) is to find a barrier function b: \mathbb{R ^+\to[-\infty,+\infty] such that the survival function of the first-passage time \tau_b=\inf \{t\ge0:X(t)<b(t)\} is given by %DIFDELCMD < \overline %%% . In this paper , we consider a version of the IFPT problem where the barrier is fixed at zero and the problem is to find an initial distribution \mu and a time-change I such that for the time-changed process X\circ I the IFPT problem is solved by a constant barrier at the level zero. For any L\'{e}vy process X satisfying an exponential moment condition, we derive the solution of this problem in terms of \lambda-invariant distributions of the process X killed at the epoch of first entrance into the negative half-axis. We provide an explicit characterization of such distributions, which is a result of independent interest. For a given multi-variate survival function %DIFDELCMD < \overline %%% of generalized frailty type , we construct subsequently an explicit solution to the corresponding IFPT with the barrier level fixed at zero. We apply these results to the valuation of financial contracts that are subject to counterparty credit risk.
[ { "type": "R", "before": "H on \\mathbb R", "after": "on \\mathbb{R", "start_char_pos": 78, "end_char_pos": 92 }, { "type": "R", "before": "\\mathbb R", "after": "\\mathbb{R", "start_char_pos": 176, "end_char_pos": 185 }, { "type": "D", "before": "H", "after": null, "start_char_pos": 338, "end_char_pos": 339 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 356, "end_char_pos": 356 }, { "type": "D", "before": "H", "after": null, "start_char_pos": 1040, "end_char_pos": 1041 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 1070, "end_char_pos": 1070 } ]
[ 0, 175, 624, 862, 968, 1181 ]
1306.3421
1
The intricate pattern of chemical modifications on the DNA and histones, the "histone code", is considered to be a gene regulation factor. Multivalency is seen by many as an essential instrument to mediate the transmission of the "encoded" information to the transcription machinery via multi- domain effector proteins and chromatin-associated complexes. However, the physical model explaining the critical role of multivalency for the histone code is still largely unknown . Here, we propose a theory that explains the energetic "payout" of multivalency in the formalism of statistical mechanics. The model quantifies the dependence of binding entropy in a multivalent system on the geometry and flexibility of the binding partners. The model explains how the histone-mediated signalling may rely upon weak individual histone-effector affinities while maintaining high probabilities for simultaneous binding of multiple partners and introduces an entropic "lock-and-key" as a critical component of the histone code .
The intricate pattern of chemical modifications on the DNA and histones, the "histone code", is considered to be a key gene regulation factor. Multivalency is seen by many as an essential instrument to mediate the transmission of the "encoded" information to the transcription machinery via multi- domain effector proteins and chromatin-associated complexes. However, the physical model explaining the critical role of multivalency for the histone code is still lacking . Here, we propose a theory that explains the energetic "payout" of multivalency in the formalism of statistical mechanics. The model quantifies the dependence of binding entropy in a multivalent system on the geometry and flexibility of the binding partners. An all-atom molecular dynamics study of a relevant biological system involving the multivalent chromatin effector UHRF1 and its target histone H3 demonstrate that this interaction conforms to the conditions for an optimal free-energy payout, as predicted by the model .
[ { "type": "A", "before": null, "after": "key", "start_char_pos": 115, "end_char_pos": 115 }, { "type": "R", "before": "largely unknown", "after": "lacking", "start_char_pos": 459, "end_char_pos": 474 }, { "type": "R", "before": "The model explains how the histone-mediated signalling may rely upon weak individual histone-effector affinities while maintaining high probabilities for simultaneous binding of multiple partners and introduces an entropic \"lock-and-key\" as a critical component of the histone code", "after": "An all-atom molecular dynamics study of a relevant biological system involving the multivalent chromatin effector UHRF1 and its target histone H3 demonstrate that this interaction conforms to the conditions for an optimal free-energy payout, as predicted by the model", "start_char_pos": 735, "end_char_pos": 1016 } ]
[ 0, 139, 355, 598, 734 ]
1306.3421
2
The intricate pattern of chemical modifications on the DNA and histones, the "histone code", is considered to be a key gene regulation factor. Multivalency is seen by many as an essential instrument to mediate the transmission of the "encoded" information to the transcription machinery via multi- domain effector proteins and chromatin-associated complexes. However, the physical model explaining the critical role of multivalency for the histone code is still lacking. Here, we propose a theory that explains the energetic " payout " of multivalency in the formalism of statistical mechanics. The model quantifies the dependence of binding entropy in a multivalent system on the geometry and flexibility of the binding partners . An all-atom molecular dynamics study of a relevant biological system involving the multivalent chromatin effector UHRF1 and its target histone H3 demonstrate that this interaction conforms to the conditions for an optimal free-energy payout, as predicted by the model .
The intricate pattern of chemical modifications on DNA and histones, the "histone code", is considered to be a key gene regulation factor. Multivalency is seen by many as an essential instrument to transmit the "encoded" information to the transcription machinery via multi-domain effector proteins and chromatin-associated complexes. However, as examples of multivalent histone engagement accumulate, an apparent contradiction is emerging. The isolated effector domains are notably weak binders, thus it is often asserted that the entropic cost of orienting multiple domains can be " prepaid " by a rigid tether. Meanwhile, evidence suggests that the tethers are largely disordered and offer little rigidity. Here we consider a mechanism to "prepay" the entropic costs of orienting the domains for binding, not through rigidity of the tether but through the careful spacing of the modifications on chromatin . An all-atom molecular dynamics study of the most fully characterized multivalent chromatin effector conforms to the conditions for an optimal free-energy payout, as predicted by the model discussed here .
[ { "type": "D", "before": "the", "after": null, "start_char_pos": 51, "end_char_pos": 54 }, { "type": "R", "before": "mediate the transmission of the", "after": "transmit the", "start_char_pos": 202, "end_char_pos": 233 }, { "type": "R", "before": "multi- domain", "after": "multi-domain", "start_char_pos": 291, "end_char_pos": 304 }, { "type": "R", "before": "the physical model explaining the critical role of multivalency for the histone code is still lacking. Here, we propose a theory that explains the energetic", "after": "as examples of multivalent histone engagement accumulate, an apparent contradiction is emerging. The isolated effector domains are notably weak binders, thus it is often asserted that the entropic cost of orienting multiple domains can be", "start_char_pos": 368, "end_char_pos": 524 }, { "type": "R", "before": "payout", "after": "prepaid", "start_char_pos": 527, "end_char_pos": 533 }, { "type": "R", "before": "of multivalency in the formalism of statistical mechanics. The model quantifies the dependence of binding entropy in a multivalent system on the geometry and flexibility of the binding partners", "after": "by a rigid tether. Meanwhile, evidence suggests that the tethers are largely disordered and offer little rigidity. Here we consider a mechanism to \"prepay\" the entropic costs of orienting the domains for binding, not through rigidity of the tether but through the careful spacing of the modifications on chromatin", "start_char_pos": 536, "end_char_pos": 729 }, { "type": "R", "before": "a relevant biological system involving the", "after": "the most fully characterized", "start_char_pos": 772, "end_char_pos": 814 }, { "type": "D", "before": "UHRF1 and its target histone H3 demonstrate that this interaction", "after": null, "start_char_pos": 846, "end_char_pos": 911 }, { "type": "A", "before": null, "after": "discussed here", "start_char_pos": 1000, "end_char_pos": 1000 } ]
[ 0, 142, 358, 470, 594, 731 ]
1306.3437
1
We first present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems, and use it to develop an algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. The cutting surface algorithm is also applicable to problems with non-differentiable semi-infinite constraints indexed by an infinite-dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of the central cutting surface algorithm even in the solution of traditional semi-infinite convex programming problems , whose constraints are differentiable , and are indexed by an index set of low dimension. Our primary motivation for the higher level of generality is to solve distributionally robust optimization problems with moment uncertainty. After the analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.
We present and analyze a central cutting surface algorithm for general semi-infinite convex optimization problems, and use it to develop a novel algorithm for distributionally robust optimization problems in which the uncertainty set consists of probability distributions with given bounds on their moments. Moments of arbitrary order, as well as non-polynomial moments can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum, and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is applicable to problems with non-differentiable semi-infinite constraints indexed by an infinite-dimensional index set. Examples comparing the cutting surface algorithm to the central cutting plane algorithm of Kortanek and No demonstrate the potential of our algorithm even in the solution of traditional semi-infinite convex programming problems whose constraints are differentiable and are indexed by an index set of low dimension. After the rate of convergence analysis of the cutting surface algorithm, we extend the authors' moment matching scenario generation algorithm to a probabilistic algorithm that finds optimal probability distributions subject to moment constraints. The combination of this distribution optimization method and the central cutting surface algorithm yields a solution to a family of distributionally robust optimization problems that are considerably more general than the ones proposed to date.
[ { "type": "D", "before": "first", "after": null, "start_char_pos": 3, "end_char_pos": 8 }, { "type": "R", "before": "an", "after": "a novel", "start_char_pos": 143, "end_char_pos": 145 }, { "type": "R", "before": "The cutting surface algorithm is also", "after": "Moments of arbitrary order, as well as non-polynomial moments can be included in the formulation. We show that this gives rise to a hierarchy of optimization problems with decreasing levels of risk-aversion, with classic robust optimization at one end of the spectrum, and stochastic programming at the other. Although our primary motivation is to solve distributionally robust optimization problems with moment uncertainty, the cutting surface method for general semi-infinite convex programs is also of independent interest. The proposed method is", "start_char_pos": 309, "end_char_pos": 346 }, { "type": "R", "before": "the central cutting surface", "after": "our", "start_char_pos": 602, "end_char_pos": 629 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 718, "end_char_pos": 719 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 757, "end_char_pos": 758 }, { "type": "R", "before": "Our primary motivation for the higher level of generality is to solve distributionally robust optimization problems with moment uncertainty. After the", "after": "After the rate of convergence", "start_char_pos": 809, "end_char_pos": 959 }, { "type": "A", "before": null, "after": "central", "start_char_pos": 1242, "end_char_pos": 1242 } ]
[ 0, 308, 465, 808, 949, 1176 ]
1306.4975
1
Financial time series exhibit a number of interesting properties that are difficult to explain with simple models. These properties include fat-tails in the distribution of price fluctuations (or returns) that are slowly removed at longer timescales, strong autocorrelations in absolute returns but zero autocorrelation in returns themselves, and multifractal scaling. Although the underlying cause of these features is unknown, there is growing evidence they originate in the behavior of volatility, i.e., in the behavior of the magnitude of price fluctuations. In this paper, we posit a feedback mechanism for volatility that reproduces many of the non-trivial properties of empirical prices. The model is parsimonious, requires only two parameters to fit a specific financial time series , and can be grounded in a straightforward framework where volatility fluctuations are driven by the estimation error of an exogenous Poisson rate.
Financial time series exhibit a number of interesting properties that are difficult to explain with simple models. These properties include fat-tails in the distribution of price fluctuations (or returns) that are slowly removed at longer timescales, strong autocorrelations in absolute returns but zero autocorrelation in returns themselves, and multifractal scaling. Although the underlying cause of these features is unknown, there is growing evidence they originate in the behavior of volatility, i.e., in the behavior of the magnitude of price fluctuations. In this paper, we posit a feedback mechanism for volatility that closely reproduces the non-trivial properties of empirical prices. The model is parsimonious, contains only two parameters that are easily estimated, fits empirical data better than standard models , and can be grounded in a straightforward framework where volatility fluctuations are driven by the estimation error of an exogenous Poisson rate.
[ { "type": "R", "before": "reproduces many of", "after": "closely reproduces", "start_char_pos": 628, "end_char_pos": 646 }, { "type": "R", "before": "requires", "after": "contains", "start_char_pos": 722, "end_char_pos": 730 }, { "type": "R", "before": "to fit a specific financial time series", "after": "that are easily estimated, fits empirical data better than standard models", "start_char_pos": 751, "end_char_pos": 790 } ]
[ 0, 114, 368, 562, 694 ]
1306.5145
1
The well-known theorem of Dybvig, Ingersoll and Ross shows that the long zero-coupon rate can never fall. This result, which---although undoubtedly correct---has been regarded by many as counterintuitive and even pathological , stems from the implicit assumption that the long-term discount function has an exponential tail. We revisit the problem in the setting of modern interest rate theory, and show that if the long "simple" interest rate (or Libor rate) is finite, then this rate (unlike the zero-coupon rate) acts viably as a state variable, the value of which can fluctuate randomly in line with other economic indicators. New interest rate models are constructed, under this hypothesis , that illustrate explicitly the good asymptotic behaviour of the resulting discount bond system . The conditions necessary for the existence of such "hyperbolic" long rates turn out to be those of so-called social discounting, which allow for long-term cash flows to be treated as broadly "just as important" as those of the short or medium term. As a consequence, we are able to provide a consistent arbitrage-free valuation framework for the cost-benefit analysis and risk management of long-term social projects, such as those associated with sustainable energy, resource conservation, and climate change.
The well-known theorem of Dybvig, Ingersoll and Ross shows that the long zero-coupon rate can never fall. This result, which, although undoubtedly correct, has been regarded by many as surprising , stems from the implicit assumption that the long-term discount function has an exponential tail. We revisit the problem in the setting of modern interest rate theory, and show that if the long "simple" interest rate (or Libor rate) is finite, then this rate (unlike the zero-coupon rate) acts viably as a state variable, the value of which can fluctuate randomly in line with other economic indicators. New interest rate models are constructed, under this hypothesis and certain generalisations thereof , that illustrate explicitly the good asymptotic behaviour of the resulting discount bond systems . The conditions necessary for the existence of such "hyperbolic" and "generalised hyperbolic" long rates are those of so-called social discounting, which allow for long-term cash flows to be treated as broadly "just as important" as those of the short or medium term. As a consequence, we are able to provide a consistent arbitrage-free valuation framework for the cost-benefit analysis and risk management of long-term social projects, such as those associated with sustainable energy, resource conservation, and climate change.
[ { "type": "R", "before": "which---although undoubtedly correct---has", "after": "which, although undoubtedly correct, has", "start_char_pos": 119, "end_char_pos": 161 }, { "type": "R", "before": "counterintuitive and even pathological", "after": "surprising", "start_char_pos": 187, "end_char_pos": 225 }, { "type": "A", "before": null, "after": "and certain generalisations thereof", "start_char_pos": 695, "end_char_pos": 695 }, { "type": "R", "before": "system", "after": "systems", "start_char_pos": 786, "end_char_pos": 792 }, { "type": "R", "before": "long rates turn out to be", "after": "and \"generalised hyperbolic\" long rates are", "start_char_pos": 859, "end_char_pos": 884 } ]
[ 0, 105, 324, 630, 794, 1043 ]
1306.5145
2
The well-known theorem of Dybvig, Ingersoll and Ross shows that the long zero-coupon rate can never fall. This result, which, although undoubtedly correct, has been regarded by many as surprising, stems from the implicit assumption that the long-term discount function has an exponential tail. We revisit the problem in the setting of modern interest rate theory, and show that if the long "simple" interest rate (or Libor rate) is finite, then this rate (unlike the zero-coupon rate) acts viably as a state variable, the value of which can fluctuate randomly in line with other economic indicators. New interest rate models are constructed, under this hypothesis and certain generalisations thereof, that illustrate explicitly the good asymptotic behaviour of the resulting discount bond systems. The conditions necessary for the existence of such "hyperbolic" and " generalised hyperbolic" long rates are those of so-called social discounting, which allow for long-term cash flows to be treated as broadly "just as important" as those of the short or medium term. As a consequence, we are able to provide a consistent arbitrage-free valuation framework for the cost-benefit analysis and risk management of long-term social projects, such as those associated with sustainable energy, resource conservation, and climate change.
The well-known theorem of Dybvig, Ingersoll and Ross shows that the long zero-coupon rate can never fall. This result, which, although undoubtedly correct, has been regarded by many as surprising, stems from the implicit assumption that the long-term discount function has an exponential tail. We revisit the problem in the setting of modern interest rate theory, and show that if the long "simple" interest rate (or Libor rate) is finite, then this rate (unlike the zero-coupon rate) acts viably as a state variable, the value of which can fluctuate randomly in line with other economic indicators. New interest rate models are constructed, under this hypothesis and certain generalizations thereof, that illustrate explicitly the good asymptotic behaviour of the resulting discount bond systems. The conditions necessary for the existence of such "hyperbolic" and " generalized hyperbolic" long rates are those of so-called social discounting, which allow for long-term cash flows to be treated as broadly "just as important" as those of the short or medium term. As a consequence, we are able to provide a consistent arbitrage-free valuation framework for the cost-benefit analysis and risk management of long-term social projects, such as those associated with sustainable energy, resource conservation, and climate change.
[ { "type": "R", "before": "generalisations", "after": "generalizations", "start_char_pos": 676, "end_char_pos": 691 }, { "type": "R", "before": "generalised", "after": "generalized", "start_char_pos": 868, "end_char_pos": 879 } ]
[ 0, 105, 293, 599, 797, 1065 ]
1307.0044
1
Numerous energy harvesting mobile and wireless devices that will serve as building blocks for the Internet of Things (IoT) are currently under development. However, there is still only limited understanding of the energy availability from various sources and its impact on energy harvesting-adaptive algorithms. Hence, we focus on characterizing the kinetic (motion) energy that can be harvested by a mobile device with an IoT form factor . We first discuss methods for estimating harvested energy from acceleration traces. We then briefly describe experiments with moving objects and provide insights into the suitability of different scenarios for harvesting. To characterize the energy availability associated with specific human activities (e.g., relaxing, walking, and cycling), we analyze a motion dataset with over 40 participants. Based on acceleration measurements that we collected for over 200 hours, we also study energy generation processes associated with day-long human routines. Finally, we use our measurement traces to evaluate the performance of energy harvesting-adaptive algorithms . Overall, the observations will provide insights into the design of networking algorithms and motion energy harvesters, which will be embedded in mobile devices .
Numerous energy harvesting wireless devices that will serve as building blocks for the Internet of Things (IoT) are currently under development. However, there is still only limited understanding of the properties of various energy sources and their impact on energy harvesting adaptive algorithms. Hence, we focus on characterizing the kinetic (motion) energy that can be harvested by a wireless node with an IoT form factor and on developing energy allocation algorithms for such nodes. In this paper, we describe methods for estimating harvested energy from acceleration traces. To characterize the energy availability associated with specific human activities (e.g., relaxing, walking, cycling), we analyze a motion dataset with over 40 participants. Based on acceleration measurements that we collected for over 200 hours, we study energy generation processes associated with day-long human routines. We also briefly summarize our experiments with moving objects. We develop energy allocation algorithms that take into account practical IoT node design considerations, and evaluate the algorithms using the collected measurements. Our observations provide insights into the design of motion energy harvesters, IoT nodes, and energy harvesting adaptive algorithms .
[ { "type": "D", "before": "mobile and", "after": null, "start_char_pos": 27, "end_char_pos": 37 }, { "type": "R", "before": "energy availability from various sources and its", "after": "properties of various energy sources and their", "start_char_pos": 214, "end_char_pos": 262 }, { "type": "R", "before": "harvesting-adaptive", "after": "harvesting adaptive", "start_char_pos": 280, "end_char_pos": 299 }, { "type": "R", "before": "mobile device", "after": "wireless node", "start_char_pos": 401, "end_char_pos": 414 }, { "type": "R", "before": ". We first discuss", "after": "and on developing energy allocation algorithms for such nodes. In this paper, we describe", "start_char_pos": 439, "end_char_pos": 457 }, { "type": "D", "before": "We then briefly describe experiments with moving objects and provide insights into the suitability of different scenarios for harvesting.", "after": null, "start_char_pos": 524, "end_char_pos": 661 }, { "type": "D", "before": "and", "after": null, "start_char_pos": 770, "end_char_pos": 773 }, { "type": "D", "before": "also", "after": null, "start_char_pos": 915, "end_char_pos": 919 }, { "type": "R", "before": "Finally, we use our measurement traces to evaluate the performance of energy harvesting-adaptive algorithms . Overall, the observations will", "after": "We also briefly summarize our experiments with moving objects. We develop energy allocation algorithms that take into account practical IoT node design considerations, and evaluate the algorithms using the collected measurements. Our observations", "start_char_pos": 995, "end_char_pos": 1135 }, { "type": "D", "before": "networking algorithms and", "after": null, "start_char_pos": 1172, "end_char_pos": 1197 }, { "type": "R", "before": "which will be embedded in mobile devices", "after": "IoT nodes, and energy harvesting adaptive algorithms", "start_char_pos": 1224, "end_char_pos": 1264 } ]
[ 0, 155, 311, 440, 523, 661, 838, 994, 1104 ]
1307.0220
1
A dipole-loaded monopole antenna is optimized for uniform hemispherical coverage using VSO, a new global search design and optimization algorithm. The antenna's performance is compared to a genetically optimized loaded monopole , and VSO is tested against two suites of benchmark functions .
A dipole-loaded monopole antenna is optimized for uniform hemispherical coverage using VSO, a new global search design and optimization algorithm. The antenna's performance is compared to genetic algorithm and hill-climber optimized loaded monopoles , and VSO is tested against two suites of benchmark functions and several other algorithms .
[ { "type": "R", "before": "a genetically optimized loaded monopole", "after": "genetic algorithm and hill-climber optimized loaded monopoles", "start_char_pos": 188, "end_char_pos": 227 }, { "type": "A", "before": null, "after": "and several other algorithms", "start_char_pos": 290, "end_char_pos": 290 } ]
[ 0, 146 ]
1307.0367
1
Monolayer spontaneous curvatures for cholesterol, DOPE, POPE, DOPC, DPPC, DSPC, POPC, SOPC, and egg sphingomyelin were obtained using small-angle X-ray scattering (SAXS) on inverted hexagonal phases (HII). Spontaneous curvatures of bilayer forming lipids were estimated by adding controlled amounts to a HII forming template following previously established protocols. In our analysis we compared two methods, based either on the calculation of the electron density map, or simple measurement of the lattice parameter. Within uncertainty of the measurement both methods yielded good agreement. Spontanous curvatures of both phosphatidylethanolamines and cholesterol were found to be one order of magnitude more negative than those of phosphatidylcholines, whose J0 is close to zero. Interestingly, a significant positive J0 value (+0.1 1/nm) was retrieved for DPPC at 35 {\deg}C. We further determined the temperature dependence of the spontaneous curvatures J0(T) in the range from 15 to 55 \deg C \degC , resulting in a quite narrow distribution of -1 to -3 * 10^-3 1/nm{\deg}C for all investigated lipids. The data allowed us to estimate the monolayer spontaneous curvatures of ternary lipid mixtures showing liquid ordered / liquid disordered phase coexistence. We report spontaneous curvature phase diagrams for DSPC/DOPC/Chol, DPPC/DOPC/Chol and SM/POPC/Chol and discuss effects on protein insertion and line tension.
Monolayer spontaneous curvatures for cholesterol, DOPE, POPE, DOPC, DPPC, DSPC, POPC, SOPC, and egg sphingomyelin were obtained using small-angle X-ray scattering (SAXS) on inverted hexagonal phases (HII). Spontaneous curvatures of bilayer forming lipids were estimated by adding controlled amounts to a HII forming template following previously established protocols. Spontanous curvatures of both phosphatidylethanolamines and cholesterol were found to be at least a factor of two more negative than those of phosphatidylcholines, whose J0 are closer to zero. Interestingly, a significant positive J0 value (+0.1 1/nm) was retrieved for DPPC at 25 {\deg}C. We further determined the temperature dependence of the spontaneous curvatures J0(T) in the range from 15 to 55 \degC , resulting in a quite narrow distribution of -1 to -3 * 10^-3 1/nm{\deg}C for most investigated lipids. The data allowed us to estimate the monolayer spontaneous curvatures of ternary lipid mixtures showing liquid ordered / liquid disordered phase coexistence. We report spontaneous curvature phase diagrams for DSPC/DOPC/Chol, DPPC/DOPC/Chol and SM/POPC/Chol and discuss effects on protein insertion and line tension.
[ { "type": "D", "before": "In our analysis we compared two methods, based either on the calculation of the electron density map, or simple measurement of the lattice parameter. Within uncertainty of the measurement both methods yielded good agreement.", "after": null, "start_char_pos": 369, "end_char_pos": 593 }, { "type": "R", "before": "one order of magnitude", "after": "at least a factor of two", "start_char_pos": 683, "end_char_pos": 705 }, { "type": "R", "before": "is close", "after": "are closer", "start_char_pos": 765, "end_char_pos": 773 }, { "type": "R", "before": "35", "after": "25", "start_char_pos": 868, "end_char_pos": 870 }, { "type": "D", "before": "\\deg", "after": null, "start_char_pos": 992, "end_char_pos": 996 }, { "type": "D", "before": "C", "after": null, "start_char_pos": 997, "end_char_pos": 998 }, { "type": "R", "before": "all", "after": "most", "start_char_pos": 1084, "end_char_pos": 1087 } ]
[ 0, 205, 368, 518, 593, 782, 879, 1108, 1265 ]
1307.0444
2
An on-going debate in the energy economics and power market community has raised the question if energy-only power markets are increasingly failing due to growing in-feed shares from subsidized RES . The short answer to this is: no , they are not failing. Energy-based power markets are, however, facing several market distortions, namely from the gap between the electricity volume traded at spot markets versus the overall electricity consumption as well as the (wrong) regulatory assumption that variable RES generation, i.e., wind and PV , truly have zero marginal operation costs. We show that both effects overamplify the well-known merit-order effect of RES power in-feed beyond a level that is explainable by underlying physical realities, i.e., thermal power plants being willing to accept negative electricity prices to be able to stay online due to considerations of wear & tear and start-stop constraints. In this paper we analyze the impacts of wind and PV power in-feed on the spot market for a region that is already today experiencing significant FIT-subsidized RES power in-feed (%DIFDELCMD < \approx20%%% \%), the German-Austrian market zone of the EPEX. We show a comparison of the FIT-subsidized RES energy production volume to the spot market volume and the overall load demand. Furthermore, a spot market analysis based on the assumption that RES units have to feed-in with their assumed true marginal costs, i.e., operation, maintenance and balancing costs, is performed. Our analysis results show that, if the necessary regulatory adaptations are taken, i.e., increasing the spot market's share of overall load demand and using the true marginal costs of RES units in the merit-order, energy-based power markets can remain functional despite high RES power in-feed .
An on-going debate in the energy economics and power market community has raised the question if energy-only power markets are increasingly failing due to growing feed-in shares from subsidized renewable energy sources (RES) . The short answer to this is: No , they are not failing. Energy-based power markets are, however, facing several market distortions, namely from the gap between the electricity volume traded at day-ahead markets versus the overall electricity consumption as well as the (wrong) regulatory assumption that variable RES generation, i.e., wind and photovoltaic (PV) , truly have zero marginal operation costs. In this paper we show that both effects over-amplify the well-known merit-order effect of RES power feed-in beyond a level that is explainable by underlying physical realities, i.e., thermal power plants being willing to accept negative electricity prices to be able to stay online due to considerations of wear & tear and start-stop constraints. We analyze the impacts of wind and PV power feed-in on the day-ahead market for a region that is already today experiencing significant feed-in tariff (FIT)-subsidized RES power %DIFDELCMD < \approx20%%% feed-in, the EPEX German-Austrian market zone (\approx\,20\% FIT share). Our analysis shows that, if the necessary regulatory adaptations are taken, i.e., increasing the day-ahead market's share of overall load demand and using the true marginal costs of RES units in the merit-order, energy-based power markets can remain functional despite high RES power feed-in .
[ { "type": "R", "before": "in-feed", "after": "feed-in", "start_char_pos": 163, "end_char_pos": 170 }, { "type": "R", "before": "RES", "after": "renewable energy sources (RES)", "start_char_pos": 194, "end_char_pos": 197 }, { "type": "R", "before": "no", "after": "No", "start_char_pos": 229, "end_char_pos": 231 }, { "type": "R", "before": "spot", "after": "day-ahead", "start_char_pos": 393, "end_char_pos": 397 }, { "type": "R", "before": "PV", "after": "photovoltaic (PV)", "start_char_pos": 539, "end_char_pos": 541 }, { "type": "R", "before": "We", "after": "In this paper we", "start_char_pos": 586, "end_char_pos": 588 }, { "type": "R", "before": "overamplify", "after": "over-amplify", "start_char_pos": 612, "end_char_pos": 623 }, { "type": "R", "before": "in-feed", "after": "feed-in", "start_char_pos": 671, "end_char_pos": 678 }, { "type": "R", "before": "In this paper we", "after": "We", "start_char_pos": 918, "end_char_pos": 934 }, { "type": "R", "before": "in-feed on the spot", "after": "feed-in on the day-ahead", "start_char_pos": 976, "end_char_pos": 995 }, { "type": "R", "before": "FIT-subsidized", "after": "feed-in tariff (FIT)-subsidized", "start_char_pos": 1063, "end_char_pos": 1077 }, { "type": "D", "before": "in-feed (", "after": null, "start_char_pos": 1088, "end_char_pos": 1097 }, { "type": "R", "before": "\\%), the", "after": "feed-in, the EPEX", "start_char_pos": 1123, "end_char_pos": 1131 }, { "type": "R", "before": "of the EPEX. We show a comparison of the FIT-subsidized RES energy production volume to the spot market volume and the overall load demand. Furthermore, a spot market analysis based on the assumption that RES units have to feed-in with their assumed true marginal costs, i.e., operation, maintenance and balancing costs, is performed. Our analysis results show", "after": "(\\approx\\,20\\% FIT share). Our analysis shows", "start_char_pos": 1160, "end_char_pos": 1520 }, { "type": "R", "before": "spot", "after": "day-ahead", "start_char_pos": 1599, "end_char_pos": 1603 }, { "type": "R", "before": "in-feed", "after": "feed-in", "start_char_pos": 1781, "end_char_pos": 1788 } ]
[ 0, 255, 585, 917, 1172, 1299, 1494 ]
1307.0715
1
The effect of magnesium ion Mg2+ on the dielectric relaxation of semidilute DNA aqueous solutions has been studied by means of dielectric spectroscopy . Two dielectric relaxations in the 100 Hz - 100 MHz frequency range , originating in the motion of DNA counterions, were probed as a function of DNA and Mg2+ ion concentration in added MgCl2 salt. The high-frequency mode in the MHz range, stemming from the URLanization of the DNA network, reveals de Gennes-Pfeuty-Dobrynin correlation length as the pertinent fundamental length scale for sufficiently low concentration of added salt . No relaxation fingerprint of DNA denaturation bubbles, leading to exposed hydrophobic core scaling, was detected at low DNA concentrations, thus indicating an increased stability of the double-stranded conformation as compared to the case of DNA solutionswith univalent counterions. The presence of Mg2+ does not change qualitatively the low frequency mode in the kHz range correlated with single DNA conformational properties. It does, however, introduce some changes in the effective size of the DNA molecule and in the electrostatic screening effects of the Odijk-Skolnick-Fixman type. All results consistently demonstrate that Mg2+ ions interact with DNA in a similar way as Na1+ ions do, their effect being mostly describable through an enhanced screening.
The effect of magnesium ion Mg2+ on the dielectric relaxation of semidilute DNA aqueous solutions has been studied by means of dielectric spectroscopy in the 100 Hz- 100 MHz frequency range . De Gennes-Pfeuty-Dobrynin semidilute solution correlation length is the pertinent fundamental length scale for sufficiently low concentration of added salt , describing the collective properties of Mg-DNA solutions . No relaxation fingerprint of the DNA denaturation bubbles, leading to exposed hydrophobic core scaling, was detected at low DNA concentrations, thus indicating an increased stability of the double-stranded conformation in Mg-DNA solutions as compared to the case of Na-DNA solutions. Some changes are detected in the behavior of the fundamental length scale pertaining to the single molecule DNA properties, reflecting modified electrostatic screening effects of the Odijk-Skolnick-Fixman type. All results consistently demonstrate that Mg2+ ions interact with DNA in a similar way as Na1+ ions do, their effect being mostly describable through an enhanced screening.
[ { "type": "D", "before": ". Two dielectric relaxations", "after": null, "start_char_pos": 151, "end_char_pos": 179 }, { "type": "R", "before": "Hz -", "after": "Hz-", "start_char_pos": 191, "end_char_pos": 195 }, { "type": "R", "before": ", originating in the motion of DNA counterions, were probed as a function of DNA and Mg2+ ion concentration in added MgCl2 salt. The high-frequency mode in the MHz range, stemming from the URLanization of the DNA network, reveals de", "after": ". De", "start_char_pos": 220, "end_char_pos": 452 }, { "type": "R", "before": "correlation length as", "after": "semidilute solution correlation length is", "start_char_pos": 476, "end_char_pos": 497 }, { "type": "A", "before": null, "after": ", describing the collective properties of Mg-DNA solutions", "start_char_pos": 586, "end_char_pos": 586 }, { "type": "A", "before": null, "after": "the", "start_char_pos": 618, "end_char_pos": 618 }, { "type": "A", "before": null, "after": "in Mg-DNA solutions", "start_char_pos": 805, "end_char_pos": 805 }, { "type": "R", "before": "DNA solutionswith univalent counterions. The presence of Mg2+ does not change qualitatively the low frequency mode in the kHz range correlated with single DNA conformational properties. It does, however, introduce some changes in the effective size of the DNA molecule and in the", "after": "Na-DNA solutions. Some changes are detected in the behavior of the fundamental length scale pertaining to the single molecule DNA properties, reflecting modified", "start_char_pos": 833, "end_char_pos": 1112 } ]
[ 0, 152, 348, 390, 873, 1018, 1179 ]
1307.0817
1
We represent an exchange economy in terms of statistical ensembles for complex networks by introducing the concept of market configuration. In this way, starting from economic reasoning, we obtain a sound interpretation of the typical network variables in terms of thermodynamic quantities together with a strong consistency with microeconomic theory, and in particular with Walrasian\} describing the flow of a given commodity from agent i to agent j. This sequence can be arranged in a nonnegative matrix W which we can regard as the representation of a weighted and directed network or digraph G. Our main result consists in showing that } general equilibrium theory . In our formalism, naturally arises the interpretation of the temperature T as a quantification of economic disequilibrium, which can indeed coexist with statistical equilibrium .
We represent an exchange economy in terms of statistical ensembles for complex networks by introducing the concept of market configuration. This is defined as a sequence of nonnegative discrete random variables \{w_{ij}\} describing the flow of a given commodity from agent i to agent j. This sequence can be arranged in a nonnegative matrix W which we can regard as the representation of a weighted and directed network or digraph G. Our main result consists in showing that general equilibrium theory imposes highly restrictive conditions upon market configurations, which are in most cases not fulfilled by real markets. An explicit example with reference to the e-MID interbank credit market is provided .
[ { "type": "R", "before": "In this way, starting from economic reasoning, we obtain a sound interpretation of the typical network variables in terms of thermodynamic quantities together with a strong consistency with microeconomic theory, and in particular with Walrasian", "after": "This is defined as a sequence of nonnegative discrete random variables \\{w_{ij", "start_char_pos": 140, "end_char_pos": 384 }, { "type": "R", "before": ". In our formalism, naturally arises the interpretation of the temperature T as a quantification of economic disequilibrium, which can indeed coexist with statistical equilibrium", "after": "imposes highly restrictive conditions upon market configurations, which are in most cases not fulfilled by real markets. An explicit example with reference to the e-MID interbank credit market is provided", "start_char_pos": 670, "end_char_pos": 848 } ]
[ 0, 139, 452, 671 ]
1307.1337
1
A methodology is proposed to automatically detect protein associations in genomic databases. A new statistical test is defined to assess the significance of a group of proteins when found in several genesets of a given database. Applied to protein pairs, the thresholded p-values of the test define a graph structure on the set of proteins . The cliques of that graph are significant protein groups , linked to a set of genesets where they can be found. The method can be applied to any database, and is illustrated on the KEGG database and on specific selections from the MSygDB C2 database. Most of the protein associations detected in KEGG and cancer-related genesets of MSigDB C2 match already known interactions. On more specific selections of C2, many previously unkown protein associations have been detected. They could indicate potentially interesting protein-protein interactions, if validated by biological evidence.
A methodology is proposed to automatically detect significant symbol associations in genomic databases. A new statistical test is proposed to assess the significance of a group of symbols when found in several genesets of a given database. Applied to symbol pairs, the thresholded p-values of the test define a graph structure on the set of symbols . The cliques of that graph are significant symbol associations , linked to a set of genesets where they can be found. The method can be applied to any database, and is illustrated MSigDB C2 database. Many of the symbol associations detected in C2 or in non-specific selections did correspond to already known interactions. On more specific selections of C2, many previously unkown symbol associations have been detected. These associations unveal new candidates for gene or protein interactions, needing further investigation for biological evidence.
[ { "type": "R", "before": "protein", "after": "significant symbol", "start_char_pos": 50, "end_char_pos": 57 }, { "type": "R", "before": "defined", "after": "proposed", "start_char_pos": 119, "end_char_pos": 126 }, { "type": "R", "before": "proteins", "after": "symbols", "start_char_pos": 168, "end_char_pos": 176 }, { "type": "R", "before": "protein", "after": "symbol", "start_char_pos": 240, "end_char_pos": 247 }, { "type": "R", "before": "proteins", "after": "symbols", "start_char_pos": 331, "end_char_pos": 339 }, { "type": "R", "before": "protein groups", "after": "symbol associations", "start_char_pos": 384, "end_char_pos": 398 }, { "type": "R", "before": "on the KEGG database and on specific selections from the MSygDB", "after": "MSigDB", "start_char_pos": 516, "end_char_pos": 579 }, { "type": "R", "before": "Most of the protein", "after": "Many of the symbol", "start_char_pos": 593, "end_char_pos": 612 }, { "type": "D", "before": "KEGG and cancer-related genesets of MSigDB", "after": null, "start_char_pos": 638, "end_char_pos": 680 }, { "type": "R", "before": "match", "after": "or in non-specific selections did correspond to", "start_char_pos": 684, "end_char_pos": 689 }, { "type": "R", "before": "protein", "after": "symbol", "start_char_pos": 776, "end_char_pos": 783 }, { "type": "R", "before": "They could indicate potentially interesting protein-protein interactions, if validated by", "after": "These associations unveal new candidates for gene or protein interactions, needing further investigation for", "start_char_pos": 817, "end_char_pos": 906 } ]
[ 0, 92, 228, 341, 453, 592, 717, 816 ]
1307.2035
1
We define and study periodic strategies in two player finite strategic form games . This concept can arise from some epistemic analysis of the rationalizability concept of Bernheim and Pearce. We analyze in detail the pure strategies and mixed strategies cases. In the pure strategies case, we prove that every two player finite action game has at least one periodic strategy, making the periodic strategies an inherent characteristic of these games . Applying the algorithm of periodic strategies in the case where mixed strategies are used, we find some very interesting outcomes with useful quantitative features for some classes of games. Particularly interesting are the implications of the algorithm to collective action games, for which we were able to establish the result that the collective action strategy can be incorporated in a purely non-cooperative context. Moreover, we address the periodicity issue for the case the players have a continuum set of strategies available. We also discuss whether periodic strategies can imply any sort of cooperativity. In addition, we put the periodic strategies in an epistemic framework .
We introduce a new solution concept for selecting optimal strategies in strategic form games which we call periodic strategies and the solution concept periodicity. As we will explicitly demonstrate, the periodicity solution concept has implications for non-trivial realistic games, which renders this solution concept very valuable. The most striking application of periodicity is that in mixed strategy strategic form games, we were able to find solutions that result to values for the utility function of each player, that are equal to the Nash equilibrium ones, with the difference that in the Nash strategies playing, the payoffs strongly depend on what the opponent plays, while in the periodic strategies case, the payoffs of each player are completely robust against what the opponent plays. We formally define and study periodic strategies in two player perfect information strategic form games, with pure strategies and generalize the results to include multiplayer games with perfect information. We prove that every non-trivial finite game has at least one periodic strategy, with non-trivial meaning a game with non-degenerate payoffs. In principle the algorithm we provide, holds true for every non-trivial game, because in degenerate games, inconsistencies can occur. In addition, we also address the incomplete information games in the context of Bayesian games, in which case generalizations of Bernheim's rationalizability offers us the possibility to embed the periodicity concept in the Bayesian games framework . Applying the algorithm of periodic strategies in the case where mixed strategies are used, we find some very interesting outcomes with useful quantitative features for some classes of games. We support all our results throughout the article by providing some illustrative examples .
[ { "type": "R", "before": "define and study periodic strategies in two player finite", "after": "introduce a new solution concept for selecting optimal strategies in", "start_char_pos": 3, "end_char_pos": 60 }, { "type": "R", "before": ". This concept can arise from some epistemic analysis of", "after": "which we call periodic strategies and the solution concept periodicity. As we will explicitly demonstrate, the periodicity solution concept has implications for non-trivial realistic games, which renders this solution concept very valuable. The most striking application of periodicity is that in mixed strategy strategic form games, we were able to find solutions that result to values for the utility function of each player, that are equal to the Nash equilibrium ones, with the difference that in", "start_char_pos": 82, "end_char_pos": 138 }, { "type": "R", "before": "rationalizability concept of Bernheim and Pearce. We analyze in detail the pure strategies and mixed strategies cases. In the pure", "after": "Nash strategies playing, the payoffs strongly depend on what the opponent plays, while in the periodic", "start_char_pos": 143, "end_char_pos": 273 }, { "type": "R", "before": "we", "after": "the payoffs of each player are completely robust against what the opponent plays. We formally define and study periodic strategies in two player perfect information strategic form games, with pure strategies and generalize the results to include multiplayer games with perfect information. We", "start_char_pos": 291, "end_char_pos": 293 }, { "type": "R", "before": "two player finite action", "after": "non-trivial finite", "start_char_pos": 311, "end_char_pos": 335 }, { "type": "R", "before": "making the periodic strategies an inherent characteristic of these games", "after": "with non-trivial meaning a game with non-degenerate payoffs. In principle the algorithm we provide, holds true for every non-trivial game, because in degenerate games, inconsistencies can occur. In addition, we also address the incomplete information games in the context of Bayesian games, in which case generalizations of Bernheim's rationalizability offers us the possibility to embed the periodicity concept in the Bayesian games framework", "start_char_pos": 377, "end_char_pos": 449 }, { "type": "R", "before": "Particularly interesting are the implications of the algorithm to collective action games, for which we were able to establish the result that the collective action strategy can be incorporated in a purely non-cooperative context. Moreover, we address the periodicity issue for the case the players have a continuum set of strategies available. We also discuss whether periodic strategies can imply any sort of cooperativity. In addition, we put the periodic strategies in an epistemic framework", "after": "We support all our results throughout the article by providing some illustrative examples", "start_char_pos": 643, "end_char_pos": 1138 } ]
[ 0, 83, 192, 261, 451, 642, 873, 987, 1068 ]
1307.2849
1
We study a continuous-time problem of optimal public good contribution under uncertainty for an economy with a finite number of agents. Each agent can allocate his wealth between private consumption and repeated but irreversible contributions to increase the stock of some public good. We study the corresponding social planner problem and the case of strategic interaction between the agents and we characterize the optimal investment policies by a set of necessary and sufficient stochastic Kuhn-Tucker conditions . Suitably combining arguments from Duality Theory and the General Theory of Stochastic Processes, we prove an abstract existence result for a Nash equilibrium of our public good contribution game. Also, we show that our model exhibits a dynamic free rider effect. We explicitly evaluate it in a symmetric Black-Scholes setting with Cobb-Douglas utilities and we show that uncertainty and irreversibility of public good provisions do not affect free-riding.
We study a continuous-time problem of public good contribution under uncertainty for an economy with a finite number of agents. Each agent aims to maximize his expected utility allocating his initial wealth over a given time period between private consumption and repeated but irreversible contributions to increase the stock of some public good. We study the corresponding social planner problem and the case of strategic interaction between the agents . These problems are set up as stochastic control problems with both monotone and classical controls representing the cumulative contribution into the public good and the consumption of the private good, respectively. We characterize the optimal investment policies by a set of necessary and sufficient stochastic Kuhn-Tucker conditions , which in turn allow to identify a universal signal process that triggers the public good investments. Further we show that our model exhibits a dynamic free rider effect. We explicitly evaluate it in a symmetric Black-Scholes setting with Cobb-Douglas utilities and we show that uncertainty and irreversibility of public good provisions need not affect the degree of free-riding.
[ { "type": "D", "before": "optimal", "after": null, "start_char_pos": 38, "end_char_pos": 45 }, { "type": "R", "before": "can allocate his wealth", "after": "aims to maximize his expected utility allocating his initial wealth over a given time period", "start_char_pos": 147, "end_char_pos": 170 }, { "type": "R", "before": "and we", "after": ". These problems are set up as stochastic control problems with both monotone and classical controls representing the cumulative contribution into the public good and the consumption of the private good, respectively. We", "start_char_pos": 393, "end_char_pos": 399 }, { "type": "R", "before": ". Suitably combining arguments from Duality Theory and the General Theory of Stochastic Processes, we prove an abstract existence result for a Nash equilibrium of our public good contribution game. Also,", "after": ", which in turn allow to identify a universal signal process that triggers the public good investments. Further", "start_char_pos": 516, "end_char_pos": 719 }, { "type": "R", "before": "do not affect", "after": "need not affect the degree of", "start_char_pos": 947, "end_char_pos": 960 } ]
[ 0, 135, 285, 713, 780 ]
1307.2849
2
We study a continuous-time problem of public good contribution under uncertainty for an economy with a finite number of agents. Each agent aims to maximize his expected utility allocating his initial wealth over a given time period between private consumption and repeated but irreversible contributions to increase the stock of some public good. We study the corresponding social planner problem and the case of strategic interaction between the agents . These problems are set up as stochastic control problems with both monotone and classical controls representing the cumulative contribution into the public good and the consumption of the private good, respectively. We characterize the optimal investment policies by a set of necessary and sufficient stochastic Kuhn-Tucker conditions , which in turn allow to identify a universal signal process that triggers the public good investments. Further we show that our model exhibits a dynamic free rider effect. We explicitly evaluate it in a symmetric Black-Scholes setting with Cobb-Douglas utilities and we show that uncertainty and irreversibility of public good provisions need not affect the degree of free-riding .
In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents , i.e. the public good contribution game. We show existence and uniqueness of the social planner's optimal policy, we characterize it by necessary and sufficient stochastic Kuhn-Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation, although the latter is not related to a meaningful control problem. We finally also provide a detailed analysis of the so-called free rider effect .
[ { "type": "R", "before": "We study a", "after": "In this paper we study", "start_char_pos": 0, "end_char_pos": 10 }, { "type": "R", "before": "problem of", "after": "stochastic control problems with both monotone and classical controls motivated by the so-called", "start_char_pos": 27, "end_char_pos": 37 }, { "type": "R", "before": "under uncertainty for an economy with a finite number of agents. Each agent aims to maximize his", "after": "problem. That is the problem of n economic agents aiming to maximize their", "start_char_pos": 63, "end_char_pos": 159 }, { "type": "D", "before": "his", "after": null, "start_char_pos": 188, "end_char_pos": 191 }, { "type": "D", "before": "repeated but", "after": null, "start_char_pos": 264, "end_char_pos": 276 }, { "type": "R", "before": "stock", "after": "level", "start_char_pos": 320, "end_char_pos": 325 }, { "type": "R", "before": "study", "after": "investigate", "start_char_pos": 350, "end_char_pos": 355 }, { "type": "R", "before": ". These problems are set up as stochastic control problems with both monotone and classical controls representing the cumulative contribution into the public good and the consumption of the private good, respectively. We characterize the optimal investment policies by a set of", "after": ", i.e. the public good contribution game. We show existence and uniqueness of the social planner's optimal policy, we characterize it by", "start_char_pos": 454, "end_char_pos": 731 }, { "type": "R", "before": ", which in turn allow to identify a universal signal process that triggers", "after": "and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of", "start_char_pos": 791, "end_char_pos": 865 }, { "type": "R", "before": "investments. Further we show that our model exhibits a dynamic free rider effect. We explicitly evaluate it in a symmetric Black-Scholes setting with Cobb-Douglas utilities and we show that uncertainty and irreversibility of public good provisions need not affect the degree of free-riding", "after": "contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation, although the latter is not related to a meaningful control problem. We finally also provide a detailed analysis of the so-called free rider effect", "start_char_pos": 882, "end_char_pos": 1171 } ]
[ 0, 127, 346, 455, 671, 894, 963 ]
1307.3426
1
We present an analytical model of the cell actin cytoskeleton as a finite droplet of polar active matter. Using hydrodynamic theory, we calculate the steady state flows that result from a splayed polarisation of the actin filaments. We relate this to a spherical cell embedded in a 3D environment by imposing a viscous friction at the fixed droplet boundary. We show that the droplet has non-zero force dipole and quadrupole moments, the latter of which is essential for self-propelled motion of the droplet at low Reynolds' number. Therefore, our model describes a simple mechanism for cell motility in a 3D environment . Our analytical results predict how the system depends on various parameters such as the effective friction coefficient, the phenomenological activity parameter and the splay of the imposed polarisation.
We present a continuum level analytical model of a droplet of active contractile fluid consisting of filaments and motors. We calculate the steady state flows that result from a splayed polarisation of the filaments. We account for the interaction with an arbitrary external medium by imposing a viscous friction at the fixed droplet boundary. We then show that the droplet has non-zero force dipole and quadrupole moments, the latter of which is essential for self-propelled motion of the droplet at low Reynolds' number. Therefore, this calculation describes a simple mechanism for the motility of a droplet of active contractile fluid embedded in a 3D environment , which is relevant to cell migration in confinement (for example, embedded within a gel or tissue) . Our analytical results predict how the system depends on various parameters such as the effective friction coefficient, the phenomenological activity parameter and the splay of the imposed polarisation.
[ { "type": "R", "before": "an", "after": "a continuum level", "start_char_pos": 11, "end_char_pos": 13 }, { "type": "R", "before": "the cell actin cytoskeleton as a finite droplet of polar active matter. Using hydrodynamic theory, we", "after": "a droplet of active contractile fluid consisting of filaments and motors. We", "start_char_pos": 34, "end_char_pos": 135 }, { "type": "D", "before": "actin", "after": null, "start_char_pos": 216, "end_char_pos": 221 }, { "type": "R", "before": "relate this to a spherical cell embedded in a 3D environment", "after": "account for the interaction with an arbitrary external medium", "start_char_pos": 236, "end_char_pos": 296 }, { "type": "A", "before": null, "after": "then", "start_char_pos": 362, "end_char_pos": 362 }, { "type": "R", "before": "our model", "after": "this calculation", "start_char_pos": 545, "end_char_pos": 554 }, { "type": "R", "before": "cell motility", "after": "the motility of a droplet of active contractile fluid embedded", "start_char_pos": 588, "end_char_pos": 601 }, { "type": "A", "before": null, "after": ", which is relevant to cell migration in confinement (for example, embedded within a gel or tissue)", "start_char_pos": 622, "end_char_pos": 622 } ]
[ 0, 105, 232, 358, 533, 624 ]
1307.4276
1
Addressing the functionality of genomes is one of the most important and challenging tasks of today's biology. In particular the ability to link genotypes to corresponding phenotypes is of particular interest in the reconstruction and biotechnological manipulation of metabolic pathways. Over the last years, the OmnilogTM Phenotype Microarray (PM) technology has been used to address many specific issues related to the metabolic functionality of URLanisms. However, software that could directly link PM data with the gene(s) of interest followed by the extraction of information on gene-phenotype correlation is still missing. Here we present DuctApe, a suite that allows the analysis of both genomic sequences and PM data, to find any metabolic difference among PM experiments and to correlate them with KEGG pathways and gene presence/absence patterns. As example, an application of the program to four Sinorhizobium meliloti strains is also presented. The source code and tutorials are available at URL
Addressing the functionality of genomes is one of the most important and challenging tasks of today's biology. In particular the ability to link genotypes to corresponding phenotypes is of interest in the reconstruction and biotechnological manipulation of metabolic pathways. Over the last years, the OmniLogTM Phenotype Microarray (PM) technology has been used to address many specific issues related to the metabolic functionality of URLanisms. However, computational tools that could directly link PM data with the gene(s) of interest followed by the extraction of information on genephenotype correlation are still missing. Here we present DuctApe, a suite that allows the analysis of both genomic sequences and PM data, to find metabolic differences among PM experiments and to correlate them with KEGG pathways and gene presence/absence patterns. As example, an application of the program to four bacterial datasets is presented. The source code and tutorials are available at URL
[ { "type": "D", "before": "particular", "after": null, "start_char_pos": 189, "end_char_pos": 199 }, { "type": "R", "before": "OmnilogTM", "after": "OmniLogTM", "start_char_pos": 313, "end_char_pos": 322 }, { "type": "R", "before": "software", "after": "computational tools", "start_char_pos": 468, "end_char_pos": 476 }, { "type": "R", "before": "gene-phenotype correlation is", "after": "genephenotype correlation are", "start_char_pos": 584, "end_char_pos": 613 }, { "type": "R", "before": "any metabolic difference", "after": "metabolic differences", "start_char_pos": 734, "end_char_pos": 758 }, { "type": "R", "before": "Sinorhizobium meliloti strains is also", "after": "bacterial datasets is", "start_char_pos": 907, "end_char_pos": 945 } ]
[ 0, 110, 287, 458, 628, 856, 956 ]
1307.4566
1
This paper considers models of large-scale networks where nodes are characterized by a set of states describing their local behavior , and by an explicit mobility model over a two-dimensional lattice. A stochastic model is given as a Markov population process that is, in general, infeasible to analyze due to the massive state space sizes involved. Building on recent results on fluid approximation, such a process admits a limit behavior as a system of ordinary differential equations , whose size is, unfortunately, dependent on the number of points in the lattice . Assuming an unbiased random walk model of nodes mobility, we prove convergence of the stochastic process to the solution of a system of partial differential equations of reaction-diffusion type . This provides a macroscopic view of the model which becomes independent of the lattice granularity, by approximating inherently discrete stochastic movements with continuous , deterministic diffusions. We illustrate the practical applicability of this result by modeling a network of mobile nodes with on/off behavior performing file transfers with connectivity to 802.11 access points. A numerical validation shows high quality of the approximation even for low populations and coarse lattices , and excellent speed of convergence with increasing system sizes .
We consider Markov models of large-scale networks where nodes are characterized by their local behavior and by a mobility model over a two-dimensional lattice. By assuming random walk, we prove convergence to a system of partial differential equations (PDEs) whose size depends neither on the lattice size nor on the population of nodes . This provides a macroscopic view of the model which approximates discrete stochastic movements with continuous deterministic diffusions. We illustrate the practical applicability of this result by modeling a network of mobile nodes with on/off behavior performing file transfers with connectivity to 802.11 access points. By means of an empirical validation against discrete-event simulation we show high quality of the PDE approximation even for low populations and coarse lattices . In addition, we confirm the computational advantage in using the PDE limit over a traditional ordinary differential equation limit where the lattice is modeled discretely, yielding speed-ups of up to two orders of magnitude .
[ { "type": "R", "before": "This paper considers", "after": "We consider Markov", "start_char_pos": 0, "end_char_pos": 20 }, { "type": "D", "before": "a set of states describing", "after": null, "start_char_pos": 85, "end_char_pos": 111 }, { "type": "R", "before": ", and by an explicit", "after": "and by a", "start_char_pos": 133, "end_char_pos": 153 }, { "type": "R", "before": "A stochastic model is given as a Markov population process that is, in general, infeasible to analyze due to the massive state space sizes involved. Building on recent results on fluid approximation, such a process admits a limit behavior as a system of ordinary differential equations , whose size is, unfortunately, dependent on the number of points in the lattice . Assuming an unbiased random walk model of nodes mobility, we prove convergence of the stochastic process to the solution of a system of partial differential equations of reaction-diffusion type", "after": "By assuming random walk, we prove convergence to a system of partial differential equations (PDEs) whose size depends neither on the lattice size nor on the population of nodes", "start_char_pos": 201, "end_char_pos": 763 }, { "type": "R", "before": "becomes independent of the lattice granularity, by approximating inherently", "after": "approximates", "start_char_pos": 818, "end_char_pos": 893 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 940, "end_char_pos": 941 }, { "type": "R", "before": "A numerical validation shows", "after": "By means of an empirical validation against discrete-event simulation we show", "start_char_pos": 1153, "end_char_pos": 1181 }, { "type": "A", "before": null, "after": "PDE", "start_char_pos": 1202, "end_char_pos": 1202 }, { "type": "R", "before": ", and excellent speed of convergence with increasing system sizes", "after": ". In addition, we confirm the computational advantage in using the PDE limit over a traditional ordinary differential equation limit where the lattice is modeled discretely, yielding speed-ups of up to two orders of magnitude", "start_char_pos": 1262, "end_char_pos": 1327 } ]
[ 0, 200, 349, 569, 765, 967, 1152 ]
1307.4581
1
Practical video streaming systems all use some form of progressive downloading to let users download the video at a faster rate than the playback rate . Since users may quit before viewing the complete video, however, much of the downloaded video may be "wasted". To the extent that users' departure behavior can be predicted, smart progressive downloading can be used to significantly improve performance for fixed server bandwidth . Through measurement, we extract certain user behavior properties for implementing such smart progressive downloading , and demonstrate its advantage using prototype implementation as well as simulations.
Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate . Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with limited server bandwidth or save bandwidth cost with unlimited server bandwidth . Through measurement, we extract certain user behavior properties for implementing such smart streaming , and demonstrate its advantage using prototype implementation as well as simulations.
[ { "type": "A", "before": null, "after": "Bandwidth consumption is a significant concern for online video service providers.", "start_char_pos": 0, "end_char_pos": 0 }, { "type": "R", "before": "all", "after": "usually", "start_char_pos": 35, "end_char_pos": 38 }, { "type": "R", "before": "progressive downloading", "after": "HTTP streaming (progressive download)", "start_char_pos": 56, "end_char_pos": 79 }, { "type": "R", "before": "playback rate", "after": "video bitrate", "start_char_pos": 138, "end_char_pos": 151 }, { "type": "R", "before": "may", "after": "will", "start_char_pos": 248, "end_char_pos": 251 }, { "type": "R", "before": "smart progressive downloading", "after": "we develop smart streaming that", "start_char_pos": 328, "end_char_pos": 357 }, { "type": "R", "before": "significantly improve performance for fixed server bandwidth", "after": "improve user QoE with limited server bandwidth or save bandwidth cost with unlimited server bandwidth", "start_char_pos": 373, "end_char_pos": 433 }, { "type": "R", "before": "progressive downloading", "after": "streaming", "start_char_pos": 529, "end_char_pos": 552 } ]
[ 0, 153, 264, 435 ]
1307.5163
1
We describe an abstract control-theoretic setting in which the validity of the dynamic programming principle can be established in continuous time by a verification of a small number of structural properties. As an application we treat several cases of interest, most notably the lower-hedging and utility-maximization problems of financial mathematics both of which are naturally posed over " sets of martingale measures " .
We describe an abstract control-theoretic framework in which the validity of the dynamic programming principle can be established in continuous time by a verification of a small number of structural properties. As an application we treat several cases of interest, most notably the lower-hedging and utility-maximization problems of financial mathematics both of which are naturally posed over `` sets of martingale measures '' .
[ { "type": "R", "before": "setting", "after": "framework", "start_char_pos": 42, "end_char_pos": 49 }, { "type": "R", "before": "\"", "after": "``", "start_char_pos": 392, "end_char_pos": 393 }, { "type": "R", "before": "\"", "after": "''", "start_char_pos": 422, "end_char_pos": 423 } ]
[ 0, 208 ]
1307.5617
1
Comparative statics is a well established research field where one analyzes how changes in parameters of a strategic game affect the resulting equilibria. Examples of such parameter changes include tax/subsidy changes or production cost shifts in oligopoly models. While classic comparative statics is mainly concerned with qualitative approaches (e.g., deciding whether a marginal parameter change improves or hurts equilibrium profits or welfare), we aim at quantifying the possible extend of such an effect . We apply our quantitative approach to the multimarket oligopoly model introduced by Bulow, Geanakoplos and Klemperer (1985). In this model, there are two firms competing on two markets with one firm having a monopoly on one market. Bulow et al. describe the counterintuitive example of a positive price shock in the firm's monopoly market resulting in a reduction of the firm's equilibrium profit. We quantify for the first time the worst-case profit reduction for the case of two markets with affine price functions and firms with convex cost technologies . We show that the relative loss of the monopoly firm is at most 25\% no matter how many firms compete on the second market. In particular, we show for the setting of Bulow et al. involving affine price functions and only one additional firm on the second market that the worst case loss in profit is bounded by 6.25\%. We further investigate a dual effect: How much can a firm gain from a negative price shock in its monopoly market? Our results imply that this gain is at most 33\% . We complement our bounds by concrete examples of markets where these bounds are attained .
Comparative statics is a well established research field where one analyzes how marginal changes in parameters of a strategic game affect the resulting equilibria. While classic comparative statics is mainly concerned with qualitative approaches (e.g., deciding whether a parameter change improves or hurts equilibrium profits or welfare), we provide a framework to expose the extend (not monotonicity) of a discrete (not marginal) parameter change, with the additional benefit that our results can even be used when there is uncertainty about the exact model instance . We apply our quantitative approach to the multimarket oligopoly model introduced by Bulow, Geanakoplos and Klemperer (1985). They describe the counterintuitive example of a positive price shock in the firm's monopoly market resulting in a reduction of the firm's equilibrium profit. We quantify for the first time the worst case profit reduction for multimarket oligopolies with an arbitrary number of markets exhibiting arbitrary positive price shocks. For markets with affine price functions and firms with convex cost technologies , we show that the relative loss of any firm is at most 25\% no matter how many firms compete in the oligopoly. We further investigate the impact of positive price shocks on total profit of all firms as well as on consumer surplus. We find tight bounds also for these measures showing that total profit and consumer surplus decreases by at most 25\% and 16.6\%, respectively .
[ { "type": "A", "before": null, "after": "marginal", "start_char_pos": 80, "end_char_pos": 80 }, { "type": "D", "before": "Examples of such parameter changes include tax/subsidy changes or production cost shifts in oligopoly models.", "after": null, "start_char_pos": 156, "end_char_pos": 265 }, { "type": "D", "before": "marginal", "after": null, "start_char_pos": 374, "end_char_pos": 382 }, { "type": "R", "before": "aim at quantifying the possible extend of such an effect", "after": "provide a framework to expose the extend (not monotonicity) of a discrete (not marginal) parameter change, with the additional benefit that our results can even be used when there is uncertainty about the exact model instance", "start_char_pos": 454, "end_char_pos": 510 }, { "type": "R", "before": "In this model, there are two firms competing on two markets with one firm having a monopoly on one market. Bulow et al.", "after": "They", "start_char_pos": 638, "end_char_pos": 757 }, { "type": "R", "before": "worst-case", "after": "worst case", "start_char_pos": 946, "end_char_pos": 956 }, { "type": "R", "before": "the case of two markets", "after": "multimarket oligopolies with an arbitrary number of markets exhibiting arbitrary positive price shocks. For markets", "start_char_pos": 978, "end_char_pos": 1001 }, { "type": "R", "before": ". We", "after": ", we", "start_char_pos": 1070, "end_char_pos": 1074 }, { "type": "R", "before": "the monopoly", "after": "any", "start_char_pos": 1106, "end_char_pos": 1118 }, { "type": "R", "before": "on the second market. In particular, we show for the setting of Bulow et al. involving affine price functions and only one additional firm on the second market that the worst case loss in profit is bounded by 6.25\\%. We further investigate a dual effect: How much can a firm gain from a negative price shock in its monopoly market? Our results imply that this gain is at most 33\\% . We complement our bounds by concrete examples of markets where these bounds are attained", "after": "in the oligopoly. We further investigate the impact of positive price shocks on total profit of all firms as well as on consumer surplus. We find tight bounds also for these measures showing that total profit and consumer surplus decreases by at most 25\\% and 16.6\\%, respectively", "start_char_pos": 1173, "end_char_pos": 1644 } ]
[ 0, 155, 265, 512, 637, 744, 910, 1071, 1194, 1389, 1504, 1555 ]
1307.5617
2
Comparative statics is a well established research field where one analyzes how marginal changes in parameters of a strategic game affect the resulting equilibria. While classic comparative statics is mainly concerned with qualitative approaches (e.g., deciding whether a parameter change improves or hurts equilibrium profits or welfare), we provide a framework to expose the extend (not monotonicity) of a discrete (not marginal) parameter change , with the additional benefit that our results can even be used when there is uncertainty about the exact model instance. We apply our quantitative approach to the multimarket oligopoly model introduced by Bulow, Geanakoplos and Klemperer (1985). They describe the counterintuitive example of a positive price shock in the firm's monopoly market resulting in a reduction of the firm's equilibrium profit. We quantify for the first time the worst case profit reduction for multimarket oligopolies with an arbitrary number of markets exhibiting arbitrary positive price shocks. For markets with affine price functions and firms with convex cost technologies, we show that the relative loss of any firm is at most 25\% no matter how many firms compete in the oligopoly. We further investigate the impact of positive price shocks on total profit of all firms as well as on consumer surplus . We find tight bounds also for these measures showing that total profit and consumer surplus decreases by at most 25\% and 16.6\%, respectively .
We introduce a quantitative approach to comparative statics that allows to bound the maximum effect of an exogenous parameter change on a system's equilibrium. The motivation for this approach is a well known paradox in multimarket Cournot competition, where a positive price shock on a monopoly market may actually reduce the monopolist's profit. We use our approach to quantify for the first time the worst case profit reduction for multimarket oligopolies exposed to arbitrary positive price shocks. For markets with affine price functions and firms with convex cost technologies, we show that the relative profit loss of any firm is at most 25\% no matter how many firms compete in the oligopoly. We further investigate the impact of positive price shocks on total profit of all firms as well as on social welfare . We find tight bounds also for these measures showing that total profit and social welfare decreases by at most 25\% and 16.6\%, respectively . Finally, we show that in our model, mixed, correlated and coarse correlated equilibria are essentially unique, thus, all our bounds apply to these game solutions as well .
[ { "type": "R", "before": "Comparative statics is a well established research field where one analyzes how marginal changes in parameters of a strategic game affect the resulting equilibria. While classic comparative statics is mainly concerned with qualitative approaches (e.g., deciding whether a parameter change improves or hurts equilibrium profits or welfare), we provide a framework to expose the extend (not monotonicity) of a discrete (not marginal) parameter change , with the additional benefit that our results can even be used when there is uncertainty about the exact model instance. We apply our quantitative approach to the multimarket oligopoly model introduced by Bulow, Geanakoplos and Klemperer (1985). They describe the counterintuitive example of", "after": "We introduce a quantitative approach to comparative statics that allows to bound the maximum effect of an exogenous parameter change on a system's equilibrium. The motivation for this approach is a well known paradox in multimarket Cournot competition, where", "start_char_pos": 0, "end_char_pos": 741 }, { "type": "R", "before": "in the firm's monopoly market resulting in a reduction of the firm's equilibrium", "after": "on a monopoly market may actually reduce the monopolist's", "start_char_pos": 765, "end_char_pos": 845 }, { "type": "A", "before": null, "after": "use our approach to", "start_char_pos": 857, "end_char_pos": 857 }, { "type": "R", "before": "with an arbitrary number of markets exhibiting arbitrary", "after": "exposed to arbitrary", "start_char_pos": 946, "end_char_pos": 1002 }, { "type": "A", "before": null, "after": "profit", "start_char_pos": 1133, "end_char_pos": 1133 }, { "type": "R", "before": "consumer surplus", "after": "social welfare", "start_char_pos": 1320, "end_char_pos": 1336 }, { "type": "R", "before": "consumer surplus", "after": "social welfare", "start_char_pos": 1414, "end_char_pos": 1430 }, { "type": "A", "before": null, "after": ". Finally, we show that in our model, mixed, correlated and coarse correlated equilibria are essentially unique, thus, all our bounds apply to these game solutions as well", "start_char_pos": 1482, "end_char_pos": 1482 } ]
[ 0, 163, 570, 695, 853, 1025, 1217, 1338 ]
1307.5981
1
Oil is widely perceived as a good diversification tool for stock markets. To fully understand the potential, we propose a new empirical methodology which combines generalized autoregressive score copula functions with high frequency data , and allows us to capture and forecast the conditional time-varying joint distribution of the oil -- stocks pair accurately. Our realized GARCH with time-varying copula yields statistically better forecasts of the dependence as well as quantiles of the distribution when compared to competing models. Using recently proposed conditional diversification benefits measure which take into account higher-order moments and nonlinear dependence , we document reducing benefits from diversification over the past ten years. Diversification benefits implied by our empirical model are moreover strongly varying over time. These findings have important implications for portfolio management .
Oil is perceived as a good diversification tool for stock markets. To fully understand this potential, we propose a new empirical methodology that combines generalized autoregressive score copula functions with high frequency data and allows us to capture and forecast the conditional time-varying joint distribution of the oil -- stocks pair accurately. Our realized GARCH with time-varying copula yields statistically better forecasts of the dependence and quantiles of the distribution relative to competing models. Employing a recently proposed conditional diversification benefits measure that considers higher-order moments and nonlinear dependence from tail events , we document decreasing benefits from diversification over the past ten years. The diversification benefits implied by our empirical model are , moreover, strongly varied over time. These findings have important implications for asset allocation, as the benefits of including oil in stock portfolios may not be as large as perceived .
[ { "type": "D", "before": "widely", "after": null, "start_char_pos": 7, "end_char_pos": 13 }, { "type": "R", "before": "the", "after": "this", "start_char_pos": 94, "end_char_pos": 97 }, { "type": "R", "before": "which", "after": "that", "start_char_pos": 148, "end_char_pos": 153 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 238, "end_char_pos": 239 }, { "type": "R", "before": "as well as", "after": "and", "start_char_pos": 464, "end_char_pos": 474 }, { "type": "R", "before": "when compared", "after": "relative", "start_char_pos": 505, "end_char_pos": 518 }, { "type": "R", "before": "Using", "after": "Employing a", "start_char_pos": 540, "end_char_pos": 545 }, { "type": "R", "before": "which take into account", "after": "that considers", "start_char_pos": 609, "end_char_pos": 632 }, { "type": "A", "before": null, "after": "from tail events", "start_char_pos": 679, "end_char_pos": 679 }, { "type": "R", "before": "reducing", "after": "decreasing", "start_char_pos": 694, "end_char_pos": 702 }, { "type": "R", "before": "Diversification", "after": "The diversification", "start_char_pos": 758, "end_char_pos": 773 }, { "type": "R", "before": "moreover strongly varying", "after": ", moreover, strongly varied", "start_char_pos": 818, "end_char_pos": 843 }, { "type": "R", "before": "portfolio management", "after": "asset allocation, as the benefits of including oil in stock portfolios may not be as large as perceived", "start_char_pos": 902, "end_char_pos": 922 } ]
[ 0, 73, 363, 539, 757, 854 ]
1307.6322
1
Volatility clustering, long-range dependence, non-Gaussianity and anomalous scaling are all well-known stylized facts of financial assets return dynamics. These elements have a relevant impact on the aptness of models for the pricing of options written on financial assets. We make us of a model developed in physics that captures the previously cited returns features . The model allows deriving closed form equations for option pricing . We present the model providing a financial interpretation of its components and discuss the parameters estimation. We then derive pricing equations and use them in an empirical application based on a major equity index option dataset .
Volatility clustering, long-range dependence, and non-Gaussian scaling are stylized facts of financial assets dynamics. They are ignored in the Black Scholes framework, but have a relevant impact on the pricing of options written on financial assets. Using a recent model for market dynamics which adequately captures the above stylized facts, we derive closed form equations for option pricing , obtaining the Black Scholes as a special case. By applying our pricing equations to a major equity index option dataset , we show that inclusion of stylized features in financial modeling moves derivative prices about 30\% closer to the market values without the need of calibrating models parameters on available derivative prices .
[ { "type": "R", "before": "non-Gaussianity and anomalous scaling are all well-known", "after": "and non-Gaussian scaling are", "start_char_pos": 46, "end_char_pos": 102 }, { "type": "R", "before": "return dynamics. These elements", "after": "dynamics. They are ignored in the Black", "start_char_pos": 138, "end_char_pos": 169 }, { "type": "A", "before": null, "after": "Scholes framework, but", "start_char_pos": 170, "end_char_pos": 170 }, { "type": "D", "before": "aptness of models for the", "after": null, "start_char_pos": 201, "end_char_pos": 226 }, { "type": "R", "before": "We make us of a model developed in physics that captures the previously cited returns features . The model allows deriving", "after": "Using a recent model for market dynamics which adequately captures the above stylized facts, we derive", "start_char_pos": 275, "end_char_pos": 397 }, { "type": "R", "before": ". We present the model providing a financial interpretation of its components and discuss the parameters estimation. We then derive pricing equations and use them in an empirical application based on", "after": ", obtaining the Black", "start_char_pos": 439, "end_char_pos": 638 }, { "type": "A", "before": null, "after": "Scholes as a special case. By applying our pricing equations to", "start_char_pos": 639, "end_char_pos": 639 }, { "type": "A", "before": null, "after": ", we show that inclusion of stylized features in financial modeling moves derivative prices about 30\\% closer to the market values without the need of calibrating models parameters on available derivative prices", "start_char_pos": 676, "end_char_pos": 676 } ]
[ 0, 154, 274, 371, 440, 555 ]
1307.6373
1
While the performance of maximum ratio combining (MRC) is well understood for a single isolated link, the same is not true in the presence of interference, which is typically correlated across antennas due to the common locations of interferers. For tractability, prior work focuses on the two extreme cases where the interference power across antennas is either assumed to be fully correlated or fully uncorrelated. In this paper, we address this shortcoming and characterize the performance of MRC in the presence of spatially-correlated interference across antennas. Modeling the interference field as a Poisson point process (PPP) , we derive the exact distribution of the signal-to-interference ratio (SIR) for the case of two receive antennas and upper and lower bounds for the general case. Using these results, we study the diversity behavior of MRC in the high-reliability regime and obtain the critical density of simultaneous transmissions for a given outage constraint. The exact SIR distribution is also useful in benchmarking simpler correlation models. We show that the full-correlation assumption is considerably pessimistic (up to 30\% higher outage probability for typical values) and the no-correlation assumption is significantly optimistic compared to the true performance.
While the performance of maximum ratio combining (MRC) is well understood for a single isolated link, the same is not true in the presence of interference, which is typically correlated across antennas due to the common locations of interferers. For tractability, prior work focuses on the two extreme cases where the interference power across antennas is either assumed to be fully correlated or fully uncorrelated. In this paper, we address this shortcoming and characterize the performance of MRC in the presence of spatially-correlated interference across antennas. Modeling the interference field as a Poisson point process , we derive the exact distribution of the signal-to-interference ratio (SIR) for the case of two receive antennas , and upper and lower bounds for the general case. Using these results, we study the diversity behavior of MRC and characterize the critical density of simultaneous transmissions for a given outage constraint. The exact SIR distribution is also useful in benchmarking simpler correlation models. We show that the full-correlation assumption is considerably pessimistic (up to 30\% higher outage probability for typical values) and the no-correlation assumption is significantly optimistic compared to the true performance.
[ { "type": "D", "before": "(PPP)", "after": null, "start_char_pos": 629, "end_char_pos": 634 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 749, "end_char_pos": 749 }, { "type": "R", "before": "in the high-reliability regime and obtain", "after": "and characterize", "start_char_pos": 859, "end_char_pos": 900 } ]
[ 0, 245, 416, 569, 798, 982, 1068 ]
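The record above studies maximum ratio combining when the interference seen by the two antennas is correlated because the interferers' locations are common to both. A minimal Monte Carlo sketch of that single ingredient (not the paper's exact SIR analysis): the path-loss exponent, intensity and disk radius below are arbitrary illustrative values, fading is i.i.d. Rayleigh per antenna, and the resulting correlation of the per-antenna interference powers falls strictly between the two extreme modeling assumptions of no correlation and full correlation.

import numpy as np

rng = np.random.default_rng(0)

def interference_powers(lam=1.0, radius=20.0, alpha=4.0, n_trials=5000):
    """Aggregate interference power at two colocated antennas.

    Interferer locations are shared (one PPP realization per trial);
    small-scale fading is i.i.d. exponential (Rayleigh power) per antenna.
    """
    i1 = np.empty(n_trials)
    i2 = np.empty(n_trials)
    area = np.pi * radius**2
    for t in range(n_trials):
        n = rng.poisson(lam * area)
        # uniform points in a disk of the given radius
        r = radius * np.sqrt(rng.random(n))
        path_loss = np.maximum(r, 1.0) ** (-alpha)   # bounded path loss near the origin
        g1 = rng.exponential(1.0, n)                 # fading to antenna 1
        g2 = rng.exponential(1.0, n)                 # independent fading to antenna 2
        i1[t] = np.sum(g1 * path_loss)
        i2[t] = np.sum(g2 * path_loss)
    return i1, i2

I1, I2 = interference_powers()
rho = np.corrcoef(I1, I2)[0, 1]
print(f"correlation of per-antenna interference powers: {rho:.2f}")
# rho lies strictly between 0 (the no-correlation assumption) and 1 (the
# full-correlation assumption), which is the regime the exact analysis targets.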
1307.6695
1
The literature of heavy tails starts with a random walk and finds mechanisms that lead to fat tails under aggregation. We follow the inverse route and show how starting with fat tails we get to thin-tails when deriving the probability distribution of the response to a random variable. We introduce a general dose-response curve and argue that the left and right-boundedness or saturation of the reponse in natural things leads to thin-tails, even when the "underlying" random variable at the source of the exposure is fat-tailed.
The literature of heavy tails (typically) starts with a random walk and finds mechanisms that lead to fat tails under aggregation. We follow the inverse route and show how starting with fat tails we get to thin-tails when deriving the probability distribution of the response to a random variable. We introduce a general dose-response curve and argue that the left and right-boundedness or saturation of the response in natural things leads to thin-tails, even when the "underlying" random variable at the source of the exposure is fat-tailed.
[ { "type": "A", "before": null, "after": "(typically)", "start_char_pos": 30, "end_char_pos": 30 }, { "type": "R", "before": "reponse", "after": "response", "start_char_pos": 397, "end_char_pos": 404 } ]
[ 0, 119, 286 ]
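A quick numerical check of the mechanism described above: a fat-tailed exposure pushed through a saturating (left- and right-bounded) dose-response curve yields a bounded, thin-tailed response. The Pareto tail index and the Hill-type curve below are illustrative choices, not taken from the paper.

import numpy as np

rng = np.random.default_rng(1)

# Fat-tailed exposure: Pareto with tail index 1.5 (infinite variance).
alpha = 1.5
dose = (1.0 / rng.random(1_000_000)) ** (1.0 / alpha)

# Saturating dose-response curve, values in [0, 1).
def response(x, half_sat=5.0):
    return x / (x + half_sat)          # Hill-type curve

resp = response(dose)

for q in (0.9, 0.99, 0.999, 0.9999):
    print(f"q={q}: dose quantile {np.quantile(dose, q):10.1f}   "
          f"response quantile {np.quantile(resp, q):.4f}")
# Dose quantiles explode as q grows (fat tail); response quantiles pile up
# below the saturation level 1, i.e. the tail of the response is thin/bounded.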
1307.6801
1
Energy landscape theory describes how a full-length protein can attain its native fold by sampling only a tiny fraction of all possible structures. Although protein folding is now understood to be concomitant with synthesis on the ribosome , there have been few attempts to modify energy landscape theory by accounting for cotranslational folding. This paper introduces a model for cotranslational folding that leads to a natural definition of a nested energy landscapes . By applying concepts drawn from submanifold differential geometry , the dynamics of protein folding on the ribosome can be explored in a quantitative manner and conditions on the nested potential energy landscapes for a good cotranslational folder are obtained. A generalisation of diffusion rate theory using van Kampen's technique of composite stochastic processes is then used to account for entropic contributions and the effects of variable translation rates on cotranslational folding. This stochastic approach agrees well with experimental results and Hamiltonian formalism in the deterministic limit.
Energy landscape theory describes how a full-length protein can attain its native fold after sampling only a tiny fraction of all possible structures. Although protein folding is now understood to be concomitant with synthesis on the ribosome there have been few attempts to modify energy landscape theory by accounting for cotranslational folding. This paper introduces a model for cotranslational folding that leads to a natural definition of a nested energy landscape . By applying concepts drawn from submanifold differential geometry the dynamics of protein folding on the ribosome can be explored in a quantitative manner and conditions on the nested potential energy landscapes for a good cotranslational folder are obtained. A generalisation of diffusion rate theory using van Kampen's technique of composite stochastic processes is then used to account for entropic contributions and the effects of variable translation rates on cotranslational folding. This stochastic approach agrees well with experimental results and Hamiltonian formalism in the deterministic limit.
[ { "type": "R", "before": "by", "after": "after", "start_char_pos": 87, "end_char_pos": 89 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 240, "end_char_pos": 241 }, { "type": "R", "before": "landscapes", "after": "landscape", "start_char_pos": 460, "end_char_pos": 470 }, { "type": "D", "before": ",", "after": null, "start_char_pos": 539, "end_char_pos": 540 } ]
[ 0, 147, 347, 472, 734, 964 ]
1307.7337
1
We explore the effect of an attractive interaction between parallel aligned polymers, which are perpendicularly grafted on a substrate. Such an attractive interaction could , e.g., be due to reversible cross-links. The competition between permanent grafting favoring a homogeneous state of the polymer brush and the attraction, which tends to induce in-plane collapse of the aligned polymers, gives rise to an instability of the homogeneous phase to a bundled state. In this latter state the in-plane translational symmetry is spontaneously broken and the density is modulated with a finite wavelength, which is set by the length scale of transverse fluctuations of the grafted polymers. We analyse the instability for two models of aligned polymers: directed polymers with a line tension and weakly bending chains with a bending stiffness.
We explore the effect of an attractive interaction between parallel-aligned polymers, which are perpendicularly grafted on a substrate. Such an attractive interaction could be due to , e.g., reversible cross-links. The competition between permanent grafting favoring a homogeneous state of the polymer brush and the attraction, which tends to induce in-plane collapse of the aligned polymers, gives rise to an instability of the homogeneous phase to a bundled state. In this latter state the in-plane translational symmetry is spontaneously broken and the density is modulated with a finite wavelength, which is set by the length scale of transverse fluctuations of the grafted polymers. We analyze the instability for two models of aligned polymers: directed polymers with a line tension and weakly bending chains with a bending stiffness.
[ { "type": "R", "before": "parallel aligned", "after": "parallel-aligned", "start_char_pos": 59, "end_char_pos": 75 }, { "type": "A", "before": null, "after": "be due to", "start_char_pos": 173, "end_char_pos": 173 }, { "type": "D", "before": "be due to", "after": null, "start_char_pos": 182, "end_char_pos": 191 }, { "type": "R", "before": "analyse", "after": "analyze", "start_char_pos": 692, "end_char_pos": 699 } ]
[ 0, 135, 215, 467, 688 ]
1307.8075
1
Based on the measurements of noise in gene expression performed during the last decade, it has become customary to think of gene regulation in terms of a two-state model, where the promoter of a gene can stochastically switch between an ON and an OFF state. As experiments are becoming increasingly precise and the deviations from the two-state model start to be observable, we ask about the experimental signatures of complex multi-state promoters, as well as their functional consequences . In detail, we (i) extend the calculations for noise in gene expression to promoters described by state transition diagrams with multiple states, (ii) systematically compute the experimentally accessible noise characteristics for these complex promoters, and (iii) use information theory to evaluate the channel capacities of complex promoter architectures and compare them to the baseline provided by the two-state model. We find that adding internal states to the promoter generically decreases channel capacity, except in certain cases, three of which (cooperativity, dual-role regulation, promoter cycling) we analyze in detail.
Based on the measurements of noise in gene expression performed during the last decade, it has become customary to think of gene regulation in terms of a two-state model, where the promoter of a gene can stochastically switch between an ON and an OFF state. As experiments are becoming increasingly precise and the deviations from the two-state model start to be observable, we ask about the experimental signatures of complex multi-state promoters, as well as the functional consequences of this additional complexity . In detail, we (i) extend the calculations for noise in gene expression to promoters described by state transition diagrams with multiple states, (ii) systematically compute the experimentally accessible noise characteristics for these complex promoters, and (iii) use information theory to evaluate the channel capacities of complex promoter architectures and compare them to the baseline provided by the two-state model. We find that adding internal states to the promoter generically decreases channel capacity, except in certain cases, three of which (cooperativity, dual-role regulation, promoter cycling) we analyze in detail.
[ { "type": "R", "before": "their functional consequences", "after": "the functional consequences of this additional complexity", "start_char_pos": 461, "end_char_pos": 490 } ]
[ 0, 257, 492, 914 ]
1308.0210
1
We analyze the dynamics of the prices of gold, oil, and stocks over 26 years (1987-2012) using both intra-day and daily data and employing a variety of methodologies including a novel time-frequency approach. We account for structural breaks and show radical change in correlations between assets after the 2007-2008 crisis in terms of time-frequency behavior. No strong evidence for a specific asset leading any other one emerges and the assets under research do not share the long-term equilibrium relationship. Strong implication is that after the structural change gold, oil, and stocks cannot be used together for risk diversification .
We employ a wavelet approach and conduct a time-frequency analysis of dynamic correlations between pairs of key traded assets ( gold, oil, and stocks ) covering the period from 1987 to 2012. The analysis is performed on both intra-day and daily data . We show that heterogeneity in correlations across a number of investment horizons between pairs of assets is a dominant feature during times of economic downturn and financial turbulence for all three pairs of the assets under research . Heterogeneity prevails in correlations between gold and stocks. After the 2008 crisis, correlations among all three assets increase and become homogenous: the timing differs for the three pairs but coincides with the structural breaks that are identified in specific correlation dynamics. A strong implication emerges: during the period under research, and from a different-investment-horizons perspective, all three assets could be used in a well-diversified portfolio only during relatively short periods .
[ { "type": "R", "before": "analyze the dynamics of the prices of", "after": "employ a wavelet approach and conduct a time-frequency analysis of dynamic correlations between pairs of key traded assets (", "start_char_pos": 3, "end_char_pos": 40 }, { "type": "R", "before": "over 26 years (1987-2012) using", "after": ") covering the period from 1987 to 2012. The analysis is performed on", "start_char_pos": 63, "end_char_pos": 94 }, { "type": "R", "before": "and employing a variety of methodologies including a novel time-frequency approach. We account for structural breaks and show radical change in correlations between assets after the 2007-2008 crisis in terms of time-frequency behavior. No strong evidence for a specific asset leading any other one emerges and", "after": ". We show that heterogeneity in correlations across a number of investment horizons between pairs of assets is a dominant feature during times of economic downturn and financial turbulence for all three pairs of", "start_char_pos": 125, "end_char_pos": 434 }, { "type": "R", "before": "do not share the long-term equilibrium relationship. Strong implication is that after the structural change gold, oil, and stocks cannot be used together for risk diversification", "after": ". Heterogeneity prevails in correlations between gold and stocks. After the 2008 crisis, correlations among all three assets increase and become homogenous: the timing differs for the three pairs but coincides with the structural breaks that are identified in specific correlation dynamics. A strong implication emerges: during the period under research, and from a different-investment-horizons perspective, all three assets could be used in a well-diversified portfolio only during relatively short periods", "start_char_pos": 461, "end_char_pos": 639 } ]
[ 0, 208, 360, 513 ]
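The analysis above rests on correlations computed scale by scale across investment horizons. A rough sketch of that idea with a discrete wavelet decomposition, assuming the PyWavelets package and simulated returns in place of the gold, oil and stock series; the paper's actual time-frequency methodology is richer than this single ingredient.

import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(2)

# Placeholder return series; in practice these would be the observed
# gold, oil and stock-index returns over the sample period.
T = 4096
common = rng.standard_normal(T)
x = 0.3 * common + rng.standard_normal(T)    # e.g. "gold"
y = 0.3 * common + rng.standard_normal(T)    # e.g. "stocks"

level = 5
cx = pywt.wavedec(x, 'db4', level=level)
cy = pywt.wavedec(y, 'db4', level=level)

# cx[0] is the coarse approximation; cx[1:] are detail coefficients ordered
# from the coarsest (longest horizon) to the finest (shortest horizon) scale.
for j, (dx, dy) in enumerate(zip(cx[1:], cy[1:]), start=1):
    horizon = 2 ** (level - j + 1)   # rough horizon in observations
    rho = np.corrcoef(dx, dy)[0, 1]
    print(f"scale ~{horizon:3d} obs: correlation {rho:.2f}")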
1308.0510
1
The survival and proliferation of cells URLanisms require a highly coordinated allocation of cellular resources to ensure the efficient synthesis of cellular components. In particular, the total enzymatic capacity for cellular metabolism is limited by a number of factors such as cytosolic space, energy expenditure , or nitrogen availability . While extensive work has been done to study constrained optimization problems based on stoichiometric information, mathematical results that characterize the optimal flux in kinetic metabolic networks are still scarce. Here, we study constrained enzyme allocation problems with general kinetics, using the theory of oriented matroids. We give a rigorous proof for the fact that a solution of the non-linear optimization problem is necessarily an elementary flux mode . This finding has significant consequences for our understanding of metabolic switches as well as for the computation of optimal fluxes in kinetic metabolic networks.
The survival and proliferation of cells URLanisms require a highly coordinated allocation of cellular resources to ensure the efficient synthesis of cellular components. In particular, the total enzymatic capacity for cellular metabolism is limited by finite resources that are shared between all enzymes, such as cytosolic space, energy expenditure for amino-acid synthesis, or micro-nutrients . While extensive work has been done to study constrained optimization problems based only on stoichiometric information, mathematical results that characterize the optimal flux in kinetic metabolic networks are still scarce. Here, we study constrained enzyme allocation problems with general kinetics, using the theory of oriented matroids. We give a rigorous proof for the fact that optimal solutions of the non-linear optimization problem are elementary flux modes . This finding has significant consequences for our understanding of optimality in metabolic networks as well as for the identification of metabolic switches and the computation of optimal flux distributions in kinetic metabolic networks.
[ { "type": "R", "before": "a number of factors", "after": "finite resources that are shared between all enzymes,", "start_char_pos": 252, "end_char_pos": 271 }, { "type": "R", "before": ", or nitrogen availability", "after": "for amino-acid synthesis, or micro-nutrients", "start_char_pos": 316, "end_char_pos": 342 }, { "type": "A", "before": null, "after": "only", "start_char_pos": 429, "end_char_pos": 429 }, { "type": "R", "before": "a solution", "after": "optimal solutions", "start_char_pos": 724, "end_char_pos": 734 }, { "type": "R", "before": "is necessarily an elementary flux mode", "after": "are elementary flux modes", "start_char_pos": 774, "end_char_pos": 812 }, { "type": "R", "before": "metabolic switches", "after": "optimality in metabolic networks", "start_char_pos": 882, "end_char_pos": 900 }, { "type": "A", "before": null, "after": "identification of metabolic switches and the", "start_char_pos": 920, "end_char_pos": 920 }, { "type": "R", "before": "fluxes", "after": "flux distributions", "start_char_pos": 944, "end_char_pos": 950 } ]
[ 0, 169, 344, 564, 680, 814 ]
1308.0931
1
In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics when the number of variables p\rightarrow\infty and the sample size n\rightarrow\infty so that p/n\rightarrow c\in (0, 1 ). The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. The recent results from the random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and estimate them consistently. The resulting distribution-free estimator has almost surely the minimum Frobenius loss. Additionally, we prove that the Frobenius norm of the inverse sample covariance matrix tends almost surely to a deterministic quantity and estimate it consistently. At the end, a simulation is provided where the suggested estimator is compared with the estimators for the precision matrix proposed in the literature. The optimal shrinkage estimator shows significant improvement and robustness even for non-normally distributed data.
In this work we construct an optimal shrinkage estimator for the precision matrix in high dimensions. We consider the general asymptotics when the number of variables p\rightarrow\infty and the sample size n\rightarrow\infty so that p/n\rightarrow c\in (0, +\infty ). The precision matrix is estimated directly, without inverting the corresponding estimator for the covariance matrix. The recent results from the random matrix theory allow us to find the asymptotic deterministic equivalents of the optimal shrinkage intensities and estimate them consistently. The resulting distribution-free estimator has almost surely the minimum Frobenius loss. Additionally, we prove that the Frobenius norms of the inverse and of the pseudo-inverse sample covariance matrices tend almost surely to deterministic quantities and estimate them consistently. At the end, a simulation is provided where the suggested estimator is compared with the estimators for the precision matrix proposed in the literature. The optimal shrinkage estimator shows significant improvement and robustness even for non-normally distributed data.
[ { "type": "R", "before": "1", "after": "+\\infty", "start_char_pos": 257, "end_char_pos": 258 }, { "type": "R", "before": "norm", "after": "norms", "start_char_pos": 685, "end_char_pos": 689 }, { "type": "R", "before": "sample covariance matrix tends", "after": "and of the pseudo-inverse sample covariance matrices tend", "start_char_pos": 705, "end_char_pos": 735 }, { "type": "R", "before": "a deterministic quantity and estimate it", "after": "deterministic quantities and estimate them", "start_char_pos": 753, "end_char_pos": 793 } ]
[ 0, 101, 261, 378, 554, 642, 807, 959 ]
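The estimator above shrinks the inverse sample covariance matrix so as to minimise the Frobenius loss, with the shrinkage intensities made feasible through random-matrix-theory deterministic equivalents. The sketch below illustrates only the underlying idea in simulation, using oracle (infeasible) intensities computed from the known true precision matrix; it is not the paper's estimator, and the dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(3)

p, n = 100, 300                      # p/n -> c = 1/3 in the asymptotic regime
A = rng.standard_normal((p, p)) / np.sqrt(p)
sigma = A @ A.T + np.eye(p)          # true covariance matrix
omega = np.linalg.inv(sigma)         # true precision matrix

X = rng.multivariate_normal(np.zeros(p), sigma, size=n)
S = np.cov(X, rowvar=False)          # sample covariance
S_inv = np.linalg.inv(S)             # naive estimator of the precision matrix
I = np.eye(p)

# Oracle intensities: minimise || a*S_inv + b*I - omega ||_F over (a, b).
# This is a two-variable linear least-squares problem in vectorized form.
M = np.column_stack([S_inv.ravel(), I.ravel()])
a, b = np.linalg.lstsq(M, omega.ravel(), rcond=None)[0]
omega_shrunk = a * S_inv + b * I

loss = lambda est: np.linalg.norm(est - omega) / np.linalg.norm(omega)
print(f"relative Frobenius loss, naive S^-1 : {loss(S_inv):.3f}")
print(f"relative Frobenius loss, shrunk     : {loss(omega_shrunk):.3f}")
print(f"oracle intensities a={a:.3f}, b={b:.3f}")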
1308.0958
1
Standard economic theory makes an allowance for the agency problem, but not the compounding of moral hazard in the presence of informational opacity, particularly in what concerns high-impact events in fat tailed domains. But the ancients did; so did many aspects of moral philosophy. We propose a global and morally mandatory heuristic that anyone involved in an action which can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. While perhaps not sufficient, the heuristic is certainly necessary hence mandatory. It is supposed to counter risk hiding in the tails. We link the rule to various philosophical approaches to ethics and moral luck.
Standard economic theory makes an allowance for the agency problem, but not the compounding of moral hazard in the presence of informational opacity, particularly in what concerns high-impact events in fat tailed domains. Nor did it look at exposure as a filter that removes bad risk takers from the system so they stop harming others. But the ancients did; so did many aspects of moral philosophy. We propose a global and morally mandatory heuristic that anyone involved in an action which can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. While perhaps not sufficient, the heuristic is certainly necessary hence mandatory. It is supposed to counter risk hiding and transfer in the tails. We link the rule to various philosophical approaches to ethics and moral luck.
[ { "type": "A", "before": null, "after": "Nor did it look at exposure as a filter that removes bad risk takers from the system so they stop harming others.", "start_char_pos": 222, "end_char_pos": 222 }, { "type": "A", "before": null, "after": "and transfer", "start_char_pos": 635, "end_char_pos": 635 } ]
[ 0, 221, 244, 285, 512, 596, 649 ]
1308.0958
2
Standard economic theory makes an allowance for the agency problem, but not the compounding of moral hazard in the presence of informational opacity, particularly in what concerns high-impact events in fat tailed domains . Nor did it look at exposure as a filter that removes bad risk takers from the system so they stop harming others. But the ancients did; so did many aspects of moral philosophy. We propose a global and morally mandatory heuristic that anyone involved in an action which can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. While perhaps not sufficient, the heuristic is certainly necessary hence mandatory. It is supposed to counter risk hiding and transfer in the tails. We link the rule to various philosophical approaches to ethics and moral luck.
Standard economic theory makes an allowance for the agency problem, but not the compounding of moral hazard in the presence of informational opacity, particularly in what concerns high-impact events in fat tailed domains (under slow convergence for the law of large numbers) . Nor did it look at exposure as a filter that removes nefarious risk takers from the system so they stop harming others. \textcolor{red But the ancients did; so did many aspects of moral philosophy. We propose a global and morally mandatory heuristic that anyone involved in an action which can possibly generate harm for others, even probabilistically, should be required to be exposed to some damage, regardless of context. While perhaps not sufficient, the heuristic is certainly necessary hence mandatory. It is supposed to counter voluntary and involuntary risk hiding- and risk transfer - in the tails. We link the rule to various philosophical approaches to ethics and moral luck.
[ { "type": "A", "before": null, "after": "(under slow convergence for the law of large numbers)", "start_char_pos": 221, "end_char_pos": 221 }, { "type": "R", "before": "bad", "after": "nefarious", "start_char_pos": 277, "end_char_pos": 280 }, { "type": "A", "before": null, "after": "\\textcolor{red", "start_char_pos": 338, "end_char_pos": 338 }, { "type": "R", "before": "risk hiding and transfer", "after": "voluntary and involuntary risk hiding- and risk transfer -", "start_char_pos": 739, "end_char_pos": 763 } ]
[ 0, 337, 360, 401, 628, 712, 777 ]
1308.1154
1
We study the dynamic evolution of cross-correlations in the Chinese stock market mainly based on the random matrix theory (RMT). The correlation matrices constructed from the return series of 367 A-share stocks traded on the Shanghai Stock Exchange from January 4, 1999 to December 30, 2011 are calculated over a rolling window with a size of 400 days. As a consequence, a thorough study of the variation of the interconnection among stocks and its underlying information in different time periods is conducted. The evolutions of the statistical properties of the correlation coefficients, eigenvalues, and eigenvectors of the correlation matrices are carefully analyzed. We find that the stock correlations are significantly increased in the periods of two market crashes in 2001 and 2008, and the systemic risk is higher in the volatile periods than calm periods. By investigating the significant contributors of the large eigenvectors in different rolling windows, we observe a dynamic evolution behavior in business sectors such as IT, electronics, and real estate, which are those industries leading the rise (drop) before (after) the crash .
We study the dynamic evolution of cross-correlations in the Chinese stock market mainly based on the random matrix theory (RMT). The correlation matrices constructed from the return series of 367 A-share stocks traded on the Shanghai Stock Exchange from January 4, 1999 to December 30, 2011 are calculated over a moving window with a size of 400 days. The evolutions of the statistical properties of the correlation coefficients, eigenvalues, and eigenvectors of the correlation matrices are carefully analyzed. We find that the stock correlations are significantly increased in the periods of two market crashes in 2001 and 2008, during which only five eigenvalues significantly deviate from the random correlation matrix, and the systemic risk is higher in these volatile periods than calm periods. By investigating the significant contributors of the deviating eigenvectors in different moving windows, we observe a dynamic evolution behavior in business sectors such as IT, electronics, and real estate, which lead the rise (drop) before (after) the crashes .
[ { "type": "R", "before": "rolling", "after": "moving", "start_char_pos": 313, "end_char_pos": 320 }, { "type": "D", "before": "As a consequence, a thorough study of the variation of the interconnection among stocks and its underlying information in different time periods is conducted.", "after": null, "start_char_pos": 353, "end_char_pos": 511 }, { "type": "A", "before": null, "after": "during which only five eigenvalues significantly deviate from the random correlation matrix,", "start_char_pos": 791, "end_char_pos": 791 }, { "type": "R", "before": "the", "after": "these", "start_char_pos": 827, "end_char_pos": 830 }, { "type": "R", "before": "large", "after": "deviating", "start_char_pos": 920, "end_char_pos": 925 }, { "type": "R", "before": "rolling", "after": "moving", "start_char_pos": 952, "end_char_pos": 959 }, { "type": "R", "before": "are those industries leading", "after": "lead", "start_char_pos": 1077, "end_char_pos": 1105 }, { "type": "R", "before": "crash", "after": "crashes", "start_char_pos": 1141, "end_char_pos": 1146 } ]
[ 0, 128, 352, 511, 671, 866 ]
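A small sketch of the random-matrix ingredient used above: correlation matrices computed over a moving window and compared against the Marchenko-Pastur upper edge expected for uncorrelated returns. The sizes and the simulated one-factor returns below are placeholders for the paper's 367-stock, 400-day setting.

import numpy as np

rng = np.random.default_rng(4)

N, T_total, window = 50, 2000, 400
market = rng.standard_normal(T_total)
returns = 0.4 * market[:, None] + rng.standard_normal((T_total, N))  # one common factor

# Marchenko-Pastur upper edge for the correlation matrix of i.i.d. data
q = window / N
lambda_max_mp = (1 + 1 / np.sqrt(q)) ** 2

for start in range(0, T_total - window + 1, 400):
    win = returns[start:start + window]
    corr = np.corrcoef(win, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    n_deviating = int(np.sum(eigvals > lambda_max_mp))
    print(f"window starting at {start:4d}: largest eigenvalue {eigvals[-1]:6.2f}, "
          f"{n_deviating} eigenvalue(s) above the MP edge {lambda_max_mp:.2f}")
# Eigenvalues above the MP edge carry genuine cross-correlation structure
# (here the single simulated market factor; in the paper, market and group
# modes); their evolution across windows is what the moving-window analysis tracks.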
1308.1221
1
Based on the negative and positive realized semivariances developed in Barndorff-Nielsen et al. (2010), we modify the volatility spillover index devised in Diebold and Yilmaz (2009). The resulting asymmetric volatility spillover indices are easy to compute and account well for negative and positive parts of volatility. We apply the modified indices on the 30 U.S. stocks with the highest market capitalization over the period 2004-2011 to study intra-market spillovers. We provide evidence of sizable volatility-spillover asymmetries and a markedly different pattern of spillovers during periods of economic ups and downs .
Asymmetries in volatility spillovers are highly relevant to risk valuation and portfolio diversification strategies in financial markets. Yet, the large literature studying information transmission mechanisms ignores the fact that bad and good volatility may spill over at different magnitudes. This paper fills this gap with two contributions. One, we suggest how to quantify asymmetries in volatility spillovers due to bad and good volatility. Two, using high frequency data covering most liquid U.S. stocks in seven sectors, we provide ample evidence of the asymmetric connectedness of stocks. We universally reject the hypothesis of symmetric connectedness at the disaggregate level but in contrast, we document the symmetric transmission of information in an aggregated portfolio. We show that bad and good volatility is transmitted at different magnitudes in different sectors, and the asymmetries sizably change over time. While negative spillovers are often of substantial magnitudes, they do not strictly dominate positive spillovers. We find that the overall intra-market connectedness of U.S. stocks increased substantially with the increased uncertainty of stock market participants during the financial crisis .
[ { "type": "R", "before": "Based on the negative and positive realized semivariances developed in Barndorff-Nielsen et al. (2010), we modify the volatility spillover index devised in Diebold and Yilmaz (2009). The resulting asymmetric volatility spillover indices are easy to compute and account well for negative and positive parts of volatility. We apply the modified indices on the 30", "after": "Asymmetries in volatility spillovers are highly relevant to risk valuation and portfolio diversification strategies in financial markets. Yet, the large literature studying information transmission mechanisms ignores the fact that bad and good volatility may spill over at different magnitudes. This paper fills this gap with two contributions. One, we suggest how to quantify asymmetries in volatility spillovers due to bad and good volatility. Two, using high frequency data covering most liquid", "start_char_pos": 0, "end_char_pos": 360 }, { "type": "R", "before": "with the highest market capitalization over the period 2004-2011 to study", "after": "in seven sectors, we provide ample evidence of the asymmetric connectedness of stocks. We universally reject the hypothesis of symmetric connectedness at the disaggregate level but in contrast, we document the symmetric transmission of information in an aggregated portfolio. We show that bad and good volatility is transmitted at different magnitudes in different sectors, and the asymmetries sizably change over time. While negative spillovers are often of substantial magnitudes, they do not strictly dominate positive spillovers. We find that the overall", "start_char_pos": 373, "end_char_pos": 446 }, { "type": "R", "before": "spillovers. We provide evidence of sizable volatility-spillover asymmetries and a markedly different pattern of spillovers during periods of economic ups and downs", "after": "connectedness of U.S. stocks increased substantially with the increased uncertainty of stock market participants during the financial crisis", "start_char_pos": 460, "end_char_pos": 623 } ]
[ 0, 182, 320, 471 ]
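The asymmetric spillover measures above are built from realized semivariances. A minimal sketch of that building block only (the VAR-based spillover index itself is not reproduced); the 78 five-minute intervals per day and the simulated returns are illustrative assumptions.

import numpy as np

def realized_semivariances(intraday_returns):
    """Daily negative and positive realized semivariances.

    intraday_returns: array of shape (days, intervals_per_day),
    e.g. five-minute log returns.
    """
    r = np.asarray(intraday_returns)
    rs_neg = np.sum(np.where(r < 0, r, 0.0) ** 2, axis=1)   # "bad" volatility
    rs_pos = np.sum(np.where(r > 0, r, 0.0) ** 2, axis=1)   # "good" volatility
    return rs_neg, rs_pos

# toy check: RS- + RS+ equals the realized variance of each day
rng = np.random.default_rng(5)
r = 0.001 * rng.standard_normal((10, 78))        # 78 five-minute returns per day
rs_neg, rs_pos = realized_semivariances(r)
assert np.allclose(rs_neg + rs_pos, np.sum(r ** 2, axis=1))
print(rs_neg[:3], rs_pos[:3])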
1308.1797
1
A Management Information System (MIS) is an information system that is intended to be used by the higher management of URLanization . The MIS generally collects summarized data from different departments or subsystems of URLanization and presents in a form that is helpful to the management for taking better decisions for URLanization .
A Management Information System (MIS) is a URLanization and presentation of information that is generally required by the management of URLanization for taking better decisions for URLanization . The MIS data may be derived from various units of URLanization or from other sources. However it is very difficult to say the exact structure of MIS as the structure and goals of different types URLanizations are different. Hence both the data and structure of MIS is dependent on the type URLanization and often customized to the specific requirement of the management .
[ { "type": "R", "before": "an information system that is intended to be used by the higher", "after": "a URLanization and presentation of information that is generally required by the", "start_char_pos": 41, "end_char_pos": 104 }, { "type": "A", "before": null, "after": "for taking better decisions for URLanization", "start_char_pos": 132, "end_char_pos": 132 }, { "type": "R", "before": "generally collects summarized data from different departments or subsystems of URLanization and presents in a form that is helpful to the management for taking better decisions for URLanization", "after": "data may be derived from various units of URLanization or from other sources. However it is very difficult to say the exact structure of MIS as the structure and goals of different types URLanizations are different. Hence both the data and structure of MIS is dependent on the type URLanization and often customized to the specific requirement of the management", "start_char_pos": 143, "end_char_pos": 336 } ]
[ 0, 134 ]
1308.1875
1
Protein translation is one of the most important processes in cell life but, despite being well understood biochemically, the implications of its intrinsic stochastic nature have not been fully elucidated. In this paper we develop a microscopic and stochastic model for a ribosome translating a protein , which explicitly takes into consideration tRNA recharging dynamics, spatial inhomogeneity and stochastic fluctuations in the number of charged tRNAs around the ribosome. By analyzing this non-equilibrium system we are able to derive the statistical distribution of the intervals between subsequent translation events , and to show that it deviates from an exponential due to the coupling between the fluctuations of charged and uncharged populations of tRNA.
Protein translation is one of the most important processes in cell life but, despite being well understood biochemically, the implications of its intrinsic stochastic nature have not been fully elucidated. In this paper we develop a microscopic and stochastic model which describes a crucial step in protein translation, namely the binding of the tRNA to the ribosome. Our model explicitly takes into consideration tRNA recharging dynamics, spatial inhomogeneity and stochastic fluctuations in the number of charged tRNAs around the ribosome. By analyzing this non-equilibrium system we are able to derive the statistical distribution of the times needed by the tRNAs to bind to the ribosome , and to show that it deviates from an exponential due to the coupling between the fluctuations of charged and uncharged populations of tRNA.
[ { "type": "R", "before": "for a ribosome translating a protein , which", "after": "which describes a crucial step in protein translation, namely the binding of the tRNA to the ribosome. Our model", "start_char_pos": 266, "end_char_pos": 310 }, { "type": "R", "before": "intervals between subsequent translation events", "after": "times needed by the tRNAs to bind to the ribosome", "start_char_pos": 574, "end_char_pos": 621 } ]
[ 0, 205, 474 ]
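A toy stochastic-simulation sketch in the spirit of the model above: a fixed pool of tRNAs cycles between charged and uncharged states, binding events consume charged tRNAs, and the waiting times between binding events are compared against the exponential benchmark. The rates and pool size are arbitrary illustrative values, and the model is far simpler than the paper's (in particular, no spatial inhomogeneity).

import numpy as np

rng = np.random.default_rng(6)

def gillespie_binding_times(n_trna=5, k_charge=1.0, k_bind=1.0, t_max=5000.0):
    """Gillespie simulation of a toy tRNA charging/binding cycle.

    State: number of charged tRNAs c (the remaining n_trna - c are uncharged).
    Reactions: recharge  (propensity k_charge * (n_trna - c)),  c -> c + 1
               binding   (propensity k_bind   * c),             c -> c - 1
    Returns the times of binding events.
    """
    c = n_trna
    t = 0.0
    events = []
    while t < t_max:
        a_charge = k_charge * (n_trna - c)
        a_bind = k_bind * c
        a_total = a_charge + a_bind
        if a_total == 0.0:
            break
        t += rng.exponential(1.0 / a_total)
        if rng.random() < a_bind / a_total:
            events.append(t)
            c -= 1
        else:
            c += 1
    return np.array(events)

times = gillespie_binding_times()
waits = np.diff(times)
cv = waits.std() / waits.mean()
print(f"{len(waits)} binding events, CV of waiting times = {cv:.2f} "
      "(a pure exponential distribution would give CV = 1)")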
1308.1988
1
The mechanical stretching of single poly-proteins is an emerging tool for the study of protein (un)folding, chemical catalysis and polymer physics at the single molecule level. The observed processes i.e unfolding or reduction events, are typically considered stochastic and by its nature are susceptible to be censored by the finite duration of the experiment. Here we provide a formal analytical and experimental description on the number of observed events under various conditions of practical interest. This analysis for the first time informs on the nature of the process under which a protein is attached between the substrate and pulling probe .
The mechanical stretching of single poly-proteins is an emerging tool for the study of protein (un)folding, chemical catalysis and polymer physics at the single molecule level. The observed processes i.e unfolding or reduction events, are typically considered to be stochastic and by its nature are susceptible to be censored by the finite duration of the experiment. Here we develop a formal analytical and experimental description on the number of observed events under various conditions of practical interest. We provide a rule of thumb to define the experiment protocol duration. Finally we provide a methodology to accurately estimate the number of stretched molecules based on the number of observed unfolding events. Using this analysis on experimental data we conclude for the first time that poly-ubiquitin binds at a random position both to the substrate and to the pulling probe and that observing all the existing modules is the less likely event .
[ { "type": "A", "before": null, "after": "to be", "start_char_pos": 260, "end_char_pos": 260 }, { "type": "R", "before": "provide", "after": "develop", "start_char_pos": 371, "end_char_pos": 378 }, { "type": "R", "before": "This analysis for the first time informs on the nature of", "after": "We provide a rule of thumb to define the experiment protocol duration. Finally we provide a methodology to accurately estimate the number of stretched molecules based on the number of observed unfolding events. Using this analysis on experimental data we conclude for the first time that poly-ubiquitin binds at a random position both to", "start_char_pos": 509, "end_char_pos": 566 }, { "type": "D", "before": "process under which a protein is attached between the", "after": null, "start_char_pos": 571, "end_char_pos": 624 }, { "type": "R", "before": "pulling probe", "after": "to the pulling probe and that observing all the existing modules is the less likely event", "start_char_pos": 639, "end_char_pos": 652 } ]
[ 0, 176, 362, 508 ]
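A minimal sketch of the censoring effect discussed above: modules with exponential unfolding times observed over a finite recording window, so that the number of observed events is binomial. The rate, window length and module count are illustrative, and the paper's additional ingredient (the random attachment position of the poly-protein between substrate and probe) is not modelled here.

import numpy as np

rng = np.random.default_rng(7)

def observed_events(n_modules=8, rate=0.2, duration=5.0, n_trials=100_000):
    """Number of unfolding events seen within a finite recording window.

    Each stretched module is assumed to unfold after an exponential waiting
    time with the given rate; events after `duration` are censored.
    """
    t_unfold = rng.exponential(1.0 / rate, size=(n_trials, n_modules))
    return np.sum(t_unfold <= duration, axis=1)

counts = observed_events()
values, freqs = np.unique(counts, return_counts=True)
for v, f in zip(values, freqs):
    print(f"{v} observed events: {f / counts.size:.3f}")
# Each module is observed with probability 1 - exp(-rate * duration), so the
# count of observed events is Binomial(n_modules, 1 - exp(-rate * duration));
# the tabulated frequencies match that distribution.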
1308.2250
1
In this paper, we develop a new mathematical technique which can be used to express the joint distribution of a Markov process and its running maximum (or minimum) through the distribution of the process itself. This technique is an extension of the classical reflection principle for Brownian motion, and it is obtained by weakening the assumptions of symmetry required for the standard reflection principle to work. We call this method a weak reflection principle and show that it provides solutions to many problems for which the classical reflection principle is typically used. In addition, unlike the standard reflection principle, the new method works for a much larger class of stochastic processes which, in particular, do not possess any strong symmetries. Here, we review the existing results which establish the weak reflection principle for a large class of time-homogeneous diffusions on a real line and, then, proceed to develop this method for all L\'evy processes with one-sided jumps (subject to some admissibility conditions). Finally, we demonstrate the applications of the weak reflection principle in Financial Mathematics, Computational Methods, and Inverse Problems.
In this paper, we develop a new mathematical technique which allows us to express the joint distribution of a Markov process and its running maximum (or minimum) through the marginal distribution of the process itself. This technique is an extension of the classical reflection principle for Brownian motion, and it is obtained by weakening the assumptions of symmetry required for the classical reflection principle to work. We call this method a weak reflection principle and show that it provides solutions to many problems for which the classical reflection principle is typically used. In addition, unlike the classical reflection principle, the new method works for a much larger class of stochastic processes which, in particular, do not possess any strong symmetries. Here, we review the existing results which establish the weak reflection principle for a large class of time-homogeneous diffusions on a real line and, then, proceed to extend this method to the Levy processes with one-sided jumps (subject to some admissibility conditions). Finally, we demonstrate the applications of the weak reflection principle in Financial Mathematics, Computational Methods, and Inverse Problems.
[ { "type": "R", "before": "can be used", "after": "allows us", "start_char_pos": 61, "end_char_pos": 72 }, { "type": "A", "before": null, "after": "marginal", "start_char_pos": 176, "end_char_pos": 176 }, { "type": "R", "before": "standard", "after": "classical", "start_char_pos": 380, "end_char_pos": 388 }, { "type": "R", "before": "standard", "after": "classical", "start_char_pos": 608, "end_char_pos": 616 }, { "type": "R", "before": "develop this method for all L\\'evy", "after": "extend this method to the Levy", "start_char_pos": 937, "end_char_pos": 971 } ]
[ 0, 212, 418, 583, 767, 1046 ]
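For reference, the symmetric baseline that the weak reflection principle above relaxes is the classical reflection principle for a standard Brownian motion W:
\[
\mathbb{P}\Big(\max_{0 \le s \le T} W_s \ge b,\; W_T \le a\Big) \;=\; \mathbb{P}\big(W_T \ge 2b - a\big), \qquad a \le b,\ b > 0.
\]
The abstract above concerns identities of this type, expressing the joint law of a process and its running maximum through the marginal law alone, for processes without the symmetry that makes the classical argument work.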
1308.2250
2
In this paper, we develop a new mathematical technique which allows us to express the joint distribution of a Markov process and its running maximum (or minimum) through the marginal distribution of the process itself. This technique is an extension of the classical reflection principle for Brownian motion, and it is obtained by weakening the assumptions of symmetry required for the classical reflection principle to work. We call this method a weak reflection principle and show that it provides solutions to many problems for which the classical reflection principle is typically used. In addition, unlike the classical reflection principle, the new method works for a much larger class of stochastic processes which, in particular, do not possess any strong symmetries. Here, we review the existing results which establish the weak reflection principle for a large class of time-homogeneous diffusions on a real line and , then , proceed to extend this method to the Levy processes with one-sided jumps (subject to some admissibility conditions). Finally, we demonstrate the applications of the weak reflection principle in Financial Mathematics, Computational Methods, and Inverse Problems .
In this paper, we develop a new mathematical technique which allows us to express the joint distribution of a Markov process and its running maximum (or minimum) through the marginal distribution of the process itself. This technique is an extension of the classical reflection principle for Brownian motion, and it is obtained by weakening the assumptions of symmetry required for the classical reflection principle to work. We call this method a weak reflection principle and show that it provides solutions to many problems for which the classical reflection principle is typically used. In addition, unlike the classical reflection principle, the new method works for a much larger class of stochastic processes which, in particular, do not possess any strong symmetries. Here, we review the existing results which establish the weak reflection principle for a large class of time-homogeneous diffusions on a real line and then proceed to extend this method to the L\'{e}vy processes with one-sided jumps (subject to some admissibility conditions). Finally, we demonstrate the applications of the weak reflection principle in financial mathematics, computational methods and inverse problems .
[ { "type": "R", "before": ", then ,", "after": "then", "start_char_pos": 927, "end_char_pos": 935 }, { "type": "R", "before": "Levy", "after": "L\'{e}vy", "start_char_pos": 973, "end_char_pos": 977 }, { "type": "R", "before": "Financial Mathematics, Computational Methods, and Inverse Problems", "after": "financial mathematics, computational methods and inverse problems", "start_char_pos": 1130, "end_char_pos": 1196 } ]
[ 0, 218, 425, 590, 775, 1052 ]
1308.2254
1
This paper is concerned with the axiomatic foundation and explicit construction of the optimality criteria which can be used for investment problems with multiple time horizons, or when the time horizon is not known in advance. Both the investment criterion and the optimal strategy are characterized by the Hamilton-Jacobi-Bellman equation on a semi-infinite time interval. In the case when this equation can be linearized, the problem reduces to a time-reversed parabolic equation, which , however, cannot be analyzed via the standard methods of partial differential equations. Under the additional uniform ellipticity condition, we make use of the available description of the minimal solutions to such equations, along with some basic facts from the potential theory and convex analysis, to obtain an explicit integral representation of all the positive solutions. These results allow us to construct a large family of optimality criteria, including some closed form examples in relevant financial models.
This paper is concerned with the axiomatic foundation and explicit construction of a general class of optimality criteria that can be used for investment problems with multiple time horizons, or when the time horizon is not known in advance. Both the investment criterion and the optimal strategy are characterized by the Hamilton-Jacobi-Bellman equation on a semi-infinite time interval. In the case when this equation can be linearized, the problem reduces to a time-reversed parabolic equation, which cannot be analyzed via the standard methods of partial differential equations. Under the additional uniform ellipticity condition, we make use of the available description of all minimal solutions to such equations, along with some basic facts from potential theory and convex analysis, to obtain an explicit integral representation of all positive solutions. These results allow us to construct a large family of the aforementioned optimality criteria, including some closed form examples in relevant financial models.
[ { "type": "R", "before": "the optimality criteria which", "after": "a general class of optimality criteria that", "start_char_pos": 83, "end_char_pos": 112 }, { "type": "D", "before": ", however,", "after": null, "start_char_pos": 490, "end_char_pos": 500 }, { "type": "R", "before": "the", "after": "all", "start_char_pos": 676, "end_char_pos": 679 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 750, "end_char_pos": 753 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 845, "end_char_pos": 848 }, { "type": "A", "before": null, "after": "the aforementioned", "start_char_pos": 923, "end_char_pos": 923 } ]
[ 0, 227, 374, 579, 868 ]
1308.3331
1
We study risk measures for financial positions in a multi-asset setting, representing the minimum amount of capital to raise and invest in eligible portfolios of traded assets in order to meet a prescribed acceptability constraint. We investigate finiteness and continuity properties of these multi-asset risk measures , highlighting the interplay between the acceptance set and the class of eligible portfolios. We develop a new approach to dual representations of convex multi-asset risk measures which relies on a characterization of the structure of closedconvex acceptance sets. To avoid degenerate cases we need to ensure the existence of extensions of the underlying pricing functional which belong to the effective domain of the support function of the chosen acceptance set. We provide a characterization of when such extensions exist. Finally, we discuss applications to conical market models and set-valued risk measures, optimal risk sharing, and superhedging with shortfall risk .
The risk of financial positions is measured by the minimum amount of capital to raise and invest in eligible portfolios of traded assets in order to meet a prescribed acceptability constraint. We investigate nondegeneracy, finiteness and continuity properties of these risk measures with respect to multiple eligible assets. Our finiteness and continuity results highlight the interplay between the acceptance set and the class of eligible portfolios. We present a simple, alternative approach to the dual representation of convex risk measures by directly applying to the acceptance set the external characterization of closed, convex sets. We prove that risk measures are nondegenerate if and only if the pricing functional admits a positive extension which is a supporting functional for the underlying acceptance set, and provide a characterization of when such extensions exist. Finally, we discuss applications to set-valued risk measures, superhedging with shortfall risk , and optimal risk sharing .
[ { "type": "R", "before": "We study risk measures for financial positions in a multi-asset setting, representing", "after": "The risk of financial positions is measured by", "start_char_pos": 0, "end_char_pos": 85 }, { "type": "A", "before": null, "after": "nondegeneracy,", "start_char_pos": 247, "end_char_pos": 247 }, { "type": "R", "before": "multi-asset risk measures , highlighting", "after": "risk measures with respect to multiple eligible assets. Our finiteness and continuity results highlight", "start_char_pos": 294, "end_char_pos": 334 }, { "type": "R", "before": "develop a new approach to dual representations of convex multi-asset risk measures which relies on a characterization of the structure of closedconvex acceptance sets. To avoid degenerate cases we need to ensure the existence of extensions of the underlying pricing functional which belong to the effective domain of the support function of the chosen acceptance set. We", "after": "present a simple, alternative approach to the dual representation of convex risk measures by directly applying to the acceptance set the external characterization of closed, convex sets. We prove that risk measures are nondegenerate if and only if the pricing functional admits a positive extension which is a supporting functional for the underlying acceptance set, and", "start_char_pos": 417, "end_char_pos": 787 }, { "type": "D", "before": "conical market models and", "after": null, "start_char_pos": 882, "end_char_pos": 907 }, { "type": "D", "before": "optimal risk sharing, and", "after": null, "start_char_pos": 934, "end_char_pos": 959 }, { "type": "A", "before": null, "after": ", and optimal risk sharing", "start_char_pos": 993, "end_char_pos": 993 } ]
[ 0, 231, 413, 584, 784, 845 ]
1308.4187
1
Van der Waals density functional theory is integrated with analysis of a non-redundant set of protein-DNA crystal structures from the Nucleic Acid Database to study the stacking energetics of CG:CG base-pair steps, specifically the role of cytosine 5-methylation. Principal component analysis of the steps reveals the dominant collective motions to correspond to a tensile 'opening' mode and two shear 'sliding' and 'tearing' modes in the orthogonal plane. The stacking interactions of the methyl groups are observed to globally inhibit CG:CG step overtwisting while simultaneously softening the modes locally via potential energy modulations that create metastable states . The results have implications for the epigenetic control of DNA mechanics.
Van der Waals density functional theory is integrated with analysis of a non-redundant set of protein-DNA crystal structures from the Nucleic Acid Database to study the stacking energetics of CG:CG base-pair steps, specifically the role of cytosine 5-methylation. Principal component analysis of the steps reveals the dominant collective motions to correspond to a tensile 'opening' mode and two shear 'sliding' and 'tearing' modes in the orthogonal plane. The stacking interactions of the methyl groups globally inhibit CG:CG step overtwisting while simultaneously softening the modes locally via potential energy modulations that create metastable states . Additionally, the indirect effects of the methyl groups on possible base-pair steps neighboring CG:CG are observed to be of comparable importance to their direct effects on CG:CG . The results have implications for the epigenetic control of DNA mechanics.
[ { "type": "D", "before": "are observed to", "after": null, "start_char_pos": 504, "end_char_pos": 519 }, { "type": "A", "before": null, "after": ". Additionally, the indirect effects of the methyl groups on possible base-pair steps neighboring CG:CG are observed to be of comparable importance to their direct effects on CG:CG", "start_char_pos": 673, "end_char_pos": 673 } ]
[ 0, 263, 456, 675 ]
1308.5064
1
Operational risk capital charge is very sensitive to the modeling assumptions. In this paper, we consider a class of exactly solvable models of operational risk and we obtain new results on the correlation problem. In particular, we show that incorporating model risk for correlations decreases the bank's capital charge .
We propose a portfolio approach for operational risk quantification based on a class of analytical models from which we derive new results on the correlation problem. In particular, we show that uniform correlation is a robust assumption for measuring capital charges in these models .
[ { "type": "R", "before": "Operational risk capital charge is very sensitive to the modeling assumptions. In this paper, we consider", "after": "We propose a portfolio approach for operational risk quantification based on", "start_char_pos": 0, "end_char_pos": 105 }, { "type": "R", "before": "exactly solvable models of operational risk and we obtain", "after": "analytical models from which we derive", "start_char_pos": 117, "end_char_pos": 174 }, { "type": "R", "before": "incorporating model risk for correlations decreases the bank's capital charge", "after": "uniform correlation is a robust assumption for measuring capital charges in these models", "start_char_pos": 243, "end_char_pos": 320 } ]
[ 0, 78, 214 ]
1308.5376
1
We introduce a framework to analyze the relative performance of a portfolio with respect to a benchmark market index. We show that this relative performance has three components: a term that can be interpreted as energy coming from the market fluctuations , a relative entropy term that measures "distance " between the portfolio holdings and the market capital distribution, and another entropy term that can be controlled by the trader by choosing a suitable strategy. The first aids growth in the portfolio value, and the second poses as relative risk of being too far from the market. We give several explicit controls of the third term that allows one to outperform a diverse volatile market in the long run . Named energy-entropy portfolios, these strategies work in both discrete and continuous time, and require essentially no probabilistic or structural assumptions. They are well-suited to analyze a hierarchical portfolio of portfolios and attribute relative risk and reward to different levels of the hierarchy. We also consider functionally generated portfolios (introduced by Fernholz) in the case of two assets and the binary tree model and give a novel explanation of their efficacy .
We introduce a pathwise approach to analyze the relative performance of an equity portfolio with respect to a benchmark market portfolio. In this energy-entropy framework, the relative performance is decomposed into three components: a volatility term , a relative entropy term measuring the distance between the portfolio weights and the market capital distribution, and another entropy term that can be controlled by the investor by adopting a suitable rebalancing strategy. This framework leads to a class of portfolio strategies that allows one to outperform , in the long run , a market that is diverse and sufficiently volatile in the sense of stochastic portfolio theory. The framework is illustrated with several empirical examples .
[ { "type": "R", "before": "framework", "after": "pathwise approach", "start_char_pos": 15, "end_char_pos": 24 }, { "type": "R", "before": "a", "after": "an equity", "start_char_pos": 64, "end_char_pos": 65 }, { "type": "R", "before": "index. We show that this relative performance has", "after": "portfolio. In this energy-entropy framework, the relative performance is decomposed into", "start_char_pos": 111, "end_char_pos": 160 }, { "type": "R", "before": "term that can be interpreted as energy coming from the market fluctuations", "after": "volatility term", "start_char_pos": 181, "end_char_pos": 255 }, { "type": "R", "before": "that measures \"distance \"", "after": "measuring the distance", "start_char_pos": 282, "end_char_pos": 307 }, { "type": "R", "before": "holdings", "after": "weights", "start_char_pos": 330, "end_char_pos": 338 }, { "type": "R", "before": "trader by choosing a suitable strategy. The first aids growth in the portfolio value, and the second poses as relative risk of being too far from the market. We give several explicit controls of the third term", "after": "investor by adopting a suitable rebalancing strategy. This framework leads to a class of portfolio strategies", "start_char_pos": 431, "end_char_pos": 640 }, { "type": "R", "before": "a diverse volatile market", "after": ",", "start_char_pos": 671, "end_char_pos": 696 }, { "type": "R", "before": ". Named energy-entropy portfolios, these strategies work in both discrete and continuous time, and require essentially no probabilistic or structural assumptions. They are well-suited to analyze a hierarchical portfolio of portfolios and attribute relative risk and reward to different levels of the hierarchy. We also consider functionally generated portfolios (introduced by Fernholz) in the case of two assets and the binary tree model and give a novel explanation of their efficacy", "after": ", a market that is diverse and sufficiently volatile in the sense of stochastic portfolio theory. The framework is illustrated with several empirical examples", "start_char_pos": 713, "end_char_pos": 1198 } ]
[ 0, 117, 470, 588, 875, 1023 ]
1308.5836
1
Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. A substantial body of research deals with various techniques for fitting relatively basic SV models, which assume the returns to be conditionally normally distributed or Student-t-distributed , given the volatility . In this manuscript, we consider a frequentist framework for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape . More specifically, we suggest to represent the density of the conditional distribution as a linear combination of standardized B-spline basis functions, imposing a penalty term in order to arrive at a good balance between goodness of fit and smoothness. This allows us to employ the efficient hidden Markov model machinery in order to fit the model and to assess its predictive performance. We demonstrate the feasibility of the approach in a simulation study before applying it to three series of returns on stocks and one series of stock index returns. The nonparametric approach leads to an improved predictive capacity in some cases, and we find evidence for the conditional distributions being leptokurtic and negatively skewed .
Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. The commonly made assumption of conditionally normally distributed or Student-t-distributed returns , given the volatility , has however been questioned . In this manuscript, we discuss a penalized maximum likelihood approach for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape .
[ { "type": "R", "before": "A substantial body of research deals with various techniques for fitting relatively basic SV models, which assume the returns to be", "after": "The commonly made assumption of", "start_char_pos": 152, "end_char_pos": 283 }, { "type": "A", "before": null, "after": "returns", "start_char_pos": 344, "end_char_pos": 344 }, { "type": "A", "before": null, "after": ", has however been questioned", "start_char_pos": 368, "end_char_pos": 368 }, { "type": "R", "before": "consider a frequentist framework", "after": "discuss a penalized maximum likelihood approach", "start_char_pos": 394, "end_char_pos": 426 }, { "type": "D", "before": ". More specifically, we suggest to represent the density of the conditional distribution as a linear combination of standardized B-spline basis functions, imposing a penalty term in order to arrive at a good balance between goodness of fit and smoothness. This allows us to employ the efficient hidden Markov model machinery in order to fit the model and to assess its predictive performance. We demonstrate the feasibility of the approach in a simulation study before applying it to three series of returns on stocks and one series of stock index returns. The nonparametric approach leads to an improved predictive capacity in some cases, and we find evidence for the conditional distributions being leptokurtic and negatively skewed", "after": null, "start_char_pos": 574, "end_char_pos": 1308 } ]
[ 0, 151, 370, 575, 829, 966, 1130 ]
1308.5836
2
Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. The commonly made assumption of conditionally normally distributed or Student-t-distributed returns, given the volatility, has however been questioned. In this manuscript, we discuss a penalized maximum likelihood approach for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape .
Stochastic volatility (SV) models mimic many of the stylized facts attributed to time series of asset returns, while maintaining conceptual simplicity. The commonly made assumption of conditionally normally distributed or Student-t-distributed returns, given the volatility, has however been questioned. In this manuscript, we introduce a novel maximum penalized likelihood approach for estimating the conditional distribution in an SV model in a nonparametric way, thus avoiding any potentially critical assumptions on the shape. The considered framework exploits the strengths both of the powerful hidden Markov model machinery and of penalized B-splines, and constitutes a powerful and flexible alternative to recently developed Bayesian approaches to semiparametric SV modelling. We demonstrate the feasibility of the approach in a simulation study before outlining its potential in applications to three series of returns on stocks and one series of stock index returns.
[ { "type": "R", "before": "discuss a penalized maximum", "after": "introduce a novel maximum penalized", "start_char_pos": 327, "end_char_pos": 354 }, { "type": "A", "before": null, "after": ". The considered framework exploits the strengths both of the powerful hidden Markov model machinery and of penalized B-splines, and constitutes a powerful and flexible alternative to recently developed Bayesian approaches to semiparametric SV modelling. We demonstrate the feasibility of the approach in a simulation study before outlining its potential in applications to three series of returns on stocks and one series of stock index returns", "start_char_pos": 522, "end_char_pos": 522 } ]
[ 0, 151, 303 ]
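The two records above (revisions 1 and 2 of 1308.5836) replace the parametric conditional return density of a stochastic volatility model by a penalized B-spline representation fitted with hidden Markov model machinery. For orientation only, the sketch below simulates the basic SV data-generating process (AR(1) log-volatility, conditionally Gaussian returns) to which such an estimator would be applied; the parameter names and values are hypothetical and the penalized-spline estimation step itself is not reproduced here.

import numpy as np

def simulate_sv(T=1000, phi=0.95, sigma_eta=0.2, beta=0.01, seed=0):
    # Basic SV model: h_t = phi * h_{t-1} + eta_t with eta_t ~ N(0, sigma_eta^2),
    # and returns y_t = beta * exp(h_t / 2) * eps_t with eps_t ~ N(0, 1).
    rng = np.random.default_rng(seed)
    h = np.zeros(T)
    h[0] = rng.normal(0.0, sigma_eta / np.sqrt(1.0 - phi**2))  # stationary start
    for t in range(1, T):
        h[t] = phi * h[t - 1] + rng.normal(0.0, sigma_eta)
    returns = beta * np.exp(h / 2.0) * rng.standard_normal(T)
    return returns, h

returns, log_vol = simulate_sv()
print(returns.std(), np.exp(log_vol / 2.0).mean())   # return scale and average volatility factor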
1308.6120
1
One of the findings of the recent literature is that the 2008 financial crisis caused reduction in international diversification benefits. To fully understand the possible potential from diversification, we build an empirical model which combines generalised autoregressive score copula functions with high frequency data, and allows us to capture and forecast the conditional time-varying joint distribution of stock returns. Using this novel methodology and fresh data covering five years after the crisis, we compute the conditional diversification benefits to answer the question, whether it is still interesting for an international investor to diversify. As a diversification tools, we consider the Czech PX and the German DAX broad stock indices, and we find that the diversification benefits strongly vary over the 2008-2013 post-crisis years.
One of the findings of the recent literature is that the 2008 financial crisis caused a reduction in international diversification benefits. To fully understand the possible potential from diversification, we build an empirical model which combines generalised autoregressive score copula functions with high frequency data, and allows us to capture and forecast the conditional time-varying joint distribution of stock returns. Using this novel methodology and fresh data covering five years after the crisis, we compute the conditional diversification benefits to answer the question of whether it is still interesting for an international investor to diversify. As diversification tools, we consider the Czech PX and the German DAX broad stock indices, and we find that the diversification benefits strongly vary over the 2008--2013 crisis years.
[ { "type": "D", "before": "a", "after": null, "start_char_pos": 664, "end_char_pos": 665 }, { "type": "R", "before": "2008-2013 post-crisis", "after": "2008--2013 crisis", "start_char_pos": 823, "end_char_pos": 844 } ]
[ 0, 138, 426, 660 ]
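The record above (1308.6120) quantifies conditional diversification benefits with a time-varying copula model estimated on high-frequency data. As a rough, purely static proxy (not the paper's conditional measure), the sketch below compares the empirical expected shortfall of a two-index portfolio with the weighted average of the stand-alone expected shortfalls; the correlation, weights and simulated return series are hypothetical.

import numpy as np

def expected_shortfall(x, alpha=0.05):
    # Average loss beyond the empirical alpha-quantile of the return series x.
    q = np.quantile(x, alpha)
    return -x[x <= q].mean()

def diversification_benefit(r1, r2, w=0.5, alpha=0.05):
    # 0 means no benefit (tail risks simply add up); values closer to 1 mean more benefit.
    es_port = expected_shortfall(w * r1 + (1.0 - w) * r2, alpha)
    es_avg = w * expected_shortfall(r1, alpha) + (1.0 - w) * expected_shortfall(r2, alpha)
    return 1.0 - es_port / es_avg

rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
r_px, r_dax = 0.010 * z[:, 0], 0.012 * z[:, 1]   # hypothetical daily index returns
print(diversification_benefit(r_px, r_dax))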
1308.6387
1
An investor faced with a contingent claim may eliminate risk by perfect hedging, but as it is often quite expensive, he seeks partial hedging (quantile hedging or efficient hedging) that requires less capital and reduces the risk. Efficient hedging for European call option was considered in the standard Black-Scholes model with constant drift and volatility coefficients. In this paper we considered the efficient hedging for European call option in general Black-Scholes model dX_t=X_t(m(t)dt+\sigma (t)dw(t)) with time-varying drift and volatility coefficients and in fractional Black-Scholes model dX_t=X_t( mdt+\sigma dB _H(t) ) with constant coefficients.
An investor faced with a contingent claim may eliminate risk by perfect hedging, but as it is often quite expensive, he seeks partial hedging (quantile hedging or efficient hedging) that requires less capital and reduces the risk. Efficient hedging for European call option was considered in the standard Black-Scholes model with constant drift and volatility coefficients. In this paper we considered the efficient hedging for European call option in general Black-Scholes model dX_t=X_t(m(t)dt+\sigma(t)dw(t)) with time-varying drift and volatility coefficients and in fractional Black-Scholes model dX_t=X_t(\sigma dB_H(t)+mdt) with constant coefficients.
[ { "type": "R", "before": "mdt+\\sigma dB", "after": "\\sigma B", "start_char_pos": 613, "end_char_pos": 626 }, { "type": "A", "before": null, "after": "+mdt", "start_char_pos": 633, "end_char_pos": 633 } ]
[ 0, 230, 373 ]
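For orientation, the two price dynamics appearing in the record above (1308.6387) can be written in display form together with the usual Föllmer--Leukert shortfall-risk formulation of partial hedging; the loss function $\ell$ is schematic and the exact criterion used in the paper may differ. With $C_T=(X_T-K)^+$ the call payoff and $v$ an initial capital smaller than the perfect-hedging price,
\[
dX_t = X_t\bigl(m(t)\,dt + \sigma(t)\,dw(t)\bigr),
\qquad
dX_t = X_t\bigl(\sigma\,dB_H(t) + m\,dt\bigr), \quad H\in(0,1),
\]
and efficient hedging seeks a self-financing strategy with $V_0\le v$ minimizing the expected shortfall
\[
\min\ \mathbb{E}\bigl[\ell\bigl((C_T - V_T)^+\bigr)\bigr],
\]
where $\ell$ is an increasing loss function; quantile hedging corresponds instead to maximizing the probability $\mathbb{P}(V_T \ge C_T)$ of a successful hedge.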
1308.6465
1
Most decision theories including expected utility theory, rank dependent utility theory and the cumulative prospect theory assume that investors are only interested in the distribution of returns and not about the states of the economy in which income is received. Optimal payoffs have their lowest outcomes when the economy is in a downturn, and this is often at odds with the needs of many investors. We introduce a framework for portfolio selection that permits to deal with state-dependent preferences . We are able to characterize optimal payoffs in explicit form. Some applications in security design are discussed in detail. We extend the classical expected utility optimization problem of Merton to the state-dependent situation and also give some stochastic extensions of the target probability optimization problem.
Most decision theories, including expected utility theory, rank dependent utility theory and cumulative prospect theory, assume that investors are only interested in the distribution of returns and not in the states of the economy in which income is received. Optimal payoffs have their lowest outcomes when the economy is in a downturn, and this feature is often at odds with the needs of many investors. We introduce a framework for portfolio selection within which state-dependent preferences can be accommodated. Specifically, we assume that investors care about the distribution of final wealth and its interaction with some benchmark. In this context, we are able to characterize optimal payoffs in explicit form. Furthermore, we extend the classical expected utility optimization problem of Merton to the state-dependent situation. Some applications in security design are discussed in detail and we also solve some stochastic extensions of the target probability optimization problem.
[ { "type": "A", "before": null, "after": ",", "start_char_pos": 23, "end_char_pos": 23 }, { "type": "D", "before": "the", "after": null, "start_char_pos": 93, "end_char_pos": 96 }, { "type": "A", "before": null, "after": ",", "start_char_pos": 124, "end_char_pos": 124 }, { "type": "R", "before": "about", "after": "in", "start_char_pos": 206, "end_char_pos": 211 }, { "type": "A", "before": null, "after": "feature", "start_char_pos": 354, "end_char_pos": 354 }, { "type": "R", "before": "that permits to deal with", "after": "within which", "start_char_pos": 455, "end_char_pos": 480 }, { "type": "R", "before": ". We", "after": "can be accommodated. Specifically, we assume that investors care about the distribution of final wealth and its interaction with some benchmark. In this context, we", "start_char_pos": 509, "end_char_pos": 513 }, { "type": "R", "before": "Some applications in security design are discussed in detail. We", "after": "Furthermore, we", "start_char_pos": 573, "end_char_pos": 637 }, { "type": "R", "before": "and also give", "after": ". Some applications in security design are discussed in detail and we also solve", "start_char_pos": 740, "end_char_pos": 753 } ]
[ 0, 266, 405, 510, 572, 634 ]
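A schematic way to contrast the two problems mentioned in the record above (1308.6465); the notation and the precise form of the benchmark dependence are illustrative and not the paper's exact formulation. With pricing kernel $\xi_T$ and initial budget $x_0$, the classical (law-invariant) problem reads
\[
\max_{X_T}\ \mathbb{E}\bigl[U(X_T)\bigr]
\quad\text{s.t.}\quad \mathbb{E}\bigl[\xi_T X_T\bigr]\le x_0,
\]
whereas a state-dependent version lets preferences interact with a benchmark $A_T$ (for instance the market index or an economic state variable),
\[
\max_{X_T}\ \mathbb{E}\bigl[U(X_T, A_T)\bigr]
\quad\text{s.t.}\quad \mathbb{E}\bigl[\xi_T X_T\bigr]\le x_0 .
\]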
1308.6619
1
In this article, we introduce a new method to model stochastic gene expression . The protein concentration dynamics follows a backward stochastic differential equation (BSDE). To validate our approach we employ the Gillespiemethod to generate benchmark data. The numerical simulation shows that the data produced by both methods agree quite well .
In this article, we introduce a novel backward method to model stochastic gene expression and protein level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of endpoint ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time reversed simulations, allowing, for example, the assessment of the biological conditions (e.g. protein concentrations) that preceded an experimentally measured event of interest (e.g. mitosis, apoptosis, etc.).
[ { "type": "R", "before": "new", "after": "novel backward", "start_char_pos": 32, "end_char_pos": 35 }, { "type": "A", "before": null, "after": "and protein level dynamics", "start_char_pos": 79, "end_char_pos": 79 }, { "type": "R", "before": "concentration dynamics follows a", "after": "amount is regarded as a diffusion process and is described by a", "start_char_pos": 94, "end_char_pos": 126 }, { "type": "A", "before": null, "after": "Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of endpoint (\"final\") conditions, in addition to the model parametrization.", "start_char_pos": 177, "end_char_pos": 177 }, { "type": "R", "before": "the Gillespiemethod to generate benchmark data. The numerical simulation shows that the data produced by both methods agree quite well", "after": "Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time reversed simulations, allowing, for example, the assessment of the biological conditions (e.g. protein concentrations) that preceded an experimentally measured event of interest (e.g. mitosis, apoptosis, etc.)", "start_char_pos": 213, "end_char_pos": 347 } ]
[ 0, 81, 176, 260 ]
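The benchmark data mentioned in the record above (1308.6619) are produced with Gillespie's stochastic simulation algorithm. Below is a minimal SSA sketch for a hypothetical one-species birth-death model of protein level (constant production at rate k_prod, first-order degradation at rate k_deg per molecule); the rate constants are illustrative and the BSDE inference step described in the abstract is not shown.

import numpy as np

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_max=100.0, seed=0):
    # Exact stochastic simulation of: 0 -> protein (rate k_prod),
    # protein -> 0 (rate k_deg * x). Returns jump times and copy numbers.
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)        # waiting time to the next reaction
        if t >= t_max:
            break
        if rng.random() < rates[0] / total:      # which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, protein = gillespie_birth_death()
print(protein[-1])   # fluctuates around the stationary mean k_prod / k_deg = 100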
1308.6756
1
We present a careful analysis of a set of effects that lead to significant biases in the estimation of the branching ratio n that quantifies the degree of endogeneity of how much past events trigger future events. We report (i) evidence of strong upward biases on the estimation of n when using power law memory kernels in the presence of a few outliers, (ii) strong effects on nresulting from the form of the regularization part of the power law kernel, (iii) strong edge effects on the estimated n when using power law kernels, and (iv) the need for an exhaustive search of the absolute maximum of the log-likelihood function due to its complicated shape. Moreover, we demonstrate that the calibration of the Hawkes process on mixtures of pure Poisson process with changes of regime leads to completely spurious apparent critical values for the branching ratio (n = 1) while the true value is actually n=0. More generally, regime shifts on the parameters of the Hawkes model and/or on the generating process itself are shown to systematically lead to a significant upward bias in the estimation of the branching ratio. Many of these effects are present in high-frequency financial data , which is studied as an illustration . Altogether, our careful exploration of the caveats of the calibration of the Hawkes process stresses the need for considering all the above issues before any conclusion can be sustained. In this respect, because the above effects are plaguing their analyses, the claim by Hardiman, Bercot and Bouchaud (2013) that financial market have been continuously functioning at or close to criticality (n = 1) cannot be supported. In contrast, our previous results on E-mini S&P 500 Futures Contracts and on major commodity future contracts are upheld.
We present a careful analysis of possible issues of the application of the self-excited Hawkes process to high-frequency financial data and carefully analyze a set of effects that lead to significant biases in the estimation of the "criticality index" n that quantifies the degree of endogeneity of how much past events trigger future events. We report a number of model biases that are intrinsic to the estimation of brnaching ratio (n) when using power law memory kernels. We demonstrate that the calibration of the Hawkes process on mixtures of pure Poisson process with changes of regime leads to completely spurious apparent critical values for the branching ratio (n \simeq 1) while the true value is actually n=0. More generally, regime shifts on the parameters of the Hawkes model and/or on the generating process itself are shown to systematically lead to a significant upward bias in the estimation of the branching ratio. We also demonstrate the importance of the preparation of the high-frequency financial data and give special care to the decrease of quality of the timestamps of tick data due to latency and grouping of messages to packets by the stock exchange . Altogether, our careful exploration of the caveats of the calibration of the Hawkes process stresses the need for considering all the above issues before any conclusion can be sustained. In this respect, because the above effects are plaguing their analyses, the claim by Hardiman, Bercot and Bouchaud (2013) that financial market have been continuously functioning at or close to criticality (n \simeq 1) cannot be supported. In contrast, our previous results on E-mini S&P 500 Futures Contracts and on major commodity future contracts are upheld.
[ { "type": "A", "before": null, "after": "possible issues of the application of the self-excited Hawkes process to high-frequency financial data and carefully analyze", "start_char_pos": 33, "end_char_pos": 33 }, { "type": "R", "before": "branching ratio", "after": "\"criticality index\"", "start_char_pos": 108, "end_char_pos": 123 }, { "type": "R", "before": "(i) evidence of strong upward biases on", "after": "a number of model biases that are intrinsic to", "start_char_pos": 225, "end_char_pos": 264 }, { "type": "R", "before": "n when using power law memory kernels in the presence of a few outliers, (ii) strong effects on nresulting from the form of the regularization part of the power law kernel, (iii) strong edge effects on the estimated n", "after": "brnaching ratio (n)", "start_char_pos": 283, "end_char_pos": 500 }, { "type": "R", "before": "kernels, and (iv) the need for an exhaustive search of the absolute maximum of the log-likelihood function due to its complicated shape. Moreover, we", "after": "memory kernels. We", "start_char_pos": 522, "end_char_pos": 671 }, { "type": "R", "before": "=", "after": "\\simeq", "start_char_pos": 867, "end_char_pos": 868 }, { "type": "R", "before": "Many of these effects are present in", "after": "We also demonstrate the importance of the preparation of the", "start_char_pos": 1122, "end_char_pos": 1158 }, { "type": "R", "before": ", which is studied as an illustration", "after": "and give special care to the decrease of quality of the timestamps of tick data due to latency and grouping of messages to packets by the stock exchange", "start_char_pos": 1189, "end_char_pos": 1226 }, { "type": "R", "before": "=", "after": "\\simeq", "start_char_pos": 1625, "end_char_pos": 1626 } ]
[ 0, 214, 658, 909, 1121, 1228, 1415, 1650 ]
1308.6756
2
We present a careful analysis of possible issues of the application of the self-excited Hawkes process to high-frequency financial data and carefully analyze a set of effects that lead to significant biases in the estimation of the "criticality index" n that quantifies the degree of endogeneity of how much past events trigger future events. We report a number of model biases that are intrinsic to the estimation of brnaching ratio (n) when using power law memory kernels. We demonstrate that the calibration of the Hawkes process on mixtures of pure Poisson process with changes of regime leads to completely spurious apparent critical values for the branching ratio (n \simeq 1) while the true value is actually n=0. More generally, regime shifts on the parameters of the Hawkes model and/or on the generating process itself are shown to systematically lead to a significant upward bias in the estimation of the branching ratio. We also demonstrate the importance of the preparation of the high-frequency financial data and give special care to the decrease of quality of the timestamps of tick data due to latency and grouping of messages to packets by the stock exchange. Altogether, our careful exploration of the caveats of the calibration of the Hawkes process stresses the need for considering all the above issues before any conclusion can be sustained. In this respect, because the above effects are plaguing their analyses, the claim by Hardiman, Bercot and Bouchaud (2013) that financial market have been continuously functioning at or close to criticality (n \simeq 1) cannot be supported. In contrast, our previous results on E-mini S&P 500 Futures Contracts and on major commodity future contracts are upheld.
We present a careful analysis of possible issues on the application of the self-excited Hawkes process to high-frequency financial data. We carefully analyze a set of effects leading to significant biases in the estimation of the "criticality index" n that quantifies the degree of endogeneity of how much past events trigger future events. We report a number of model biases that are intrinsic to the estimation of the branching ratio (n) when using power law memory kernels. We demonstrate that the calibration of the Hawkes process on mixtures of pure Poisson processes with changes of regime leads to completely spurious apparent critical values for the branching ratio (n ~ 1) while the true value is actually n=0. More generally, regime shifts on the parameters of the Hawkes model and/or on the generating process itself are shown to systematically lead to a significant upward bias in the estimation of the branching ratio. We also demonstrate the importance of the preparation of the high-frequency financial data and give special care to the decrease of quality of the timestamps of tick data due to latency and grouping of messages into packets by the stock exchange. Altogether, our careful exploration of the caveats of the calibration of the Hawkes process stresses the need for considering all the above issues before any conclusion can be sustained. In this respect, because the above effects are plaguing their analyses, the claim by Hardiman, Bercot and Bouchaud (2013) that financial markets have been continuously functioning at or close to criticality (n ~ 1) cannot be supported. In contrast, our previous results on E-mini S&P 500 Futures Contracts and on major commodity future contracts are upheld.
[ { "type": "R", "before": "of", "after": "on", "start_char_pos": 49, "end_char_pos": 51 }, { "type": "R", "before": "and", "after": ". We", "start_char_pos": 136, "end_char_pos": 139 }, { "type": "R", "before": "that lead", "after": "leading", "start_char_pos": 175, "end_char_pos": 184 }, { "type": "R", "before": "\\simeq", "after": "~", "start_char_pos": 673, "end_char_pos": 679 }, { "type": "R", "before": "\\simeq", "after": "~", "start_char_pos": 1574, "end_char_pos": 1580 } ]
[ 0, 342, 474, 720, 932, 1177, 1364, 1604 ]
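The branching ratio n analyzed in the two records above (revisions of 1308.6756) is the integral of the Hawkes excitation kernel, i.e. the expected number of directly triggered events per event, with n approaching 1 marking criticality. The sketch below simulates a Hawkes process by Ogata's thinning method with an exponential kernel (chosen here only because the intensity then decays monotonically between events; the abstracts discuss power-law kernels, whose calibration is precisely where the reported biases arise); all parameter values are hypothetical, giving n = alpha / beta = 0.8.

import numpy as np

def simulate_hawkes_exp(mu=0.5, alpha=0.8, beta=1.0, t_max=1000.0, seed=0):
    # Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    # Branching ratio n = alpha / beta (must be < 1 for a stationary process).
    rng = np.random.default_rng(seed)
    events, t, excitation = [], 0.0, 0.0     # excitation = lambda(t) - mu at the current time
    while t < t_max:
        lam_bar = mu + excitation            # valid bound: the intensity only decays until the next event
        w = rng.exponential(1.0 / lam_bar)
        t += w
        excitation *= np.exp(-beta * w)      # decay the excitation over the waiting time
        if t >= t_max:
            break
        if rng.random() <= (mu + excitation) / lam_bar:   # accept the candidate time (Ogata thinning)
            events.append(t)
            excitation += alpha              # each accepted event adds alpha to the intensity
    return np.array(events)

events = simulate_hawkes_exp()
print(len(events))   # roughly mu * t_max / (1 - alpha / beta) = 2500 events on average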