aid (string, 9-15 chars) | mid (string, 7-10 chars) | abstract (string, 78-2.56k chars) | related_work (string, 92-1.77k chars) | ref_abstract (dict) |
---|---|---|---|---|
cs0603115 | 1539159366 | The Graphic Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphic processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results. | Other libraries represent multiprecision numbers as the unevaluated sum of several double-precision FP numbers such as Briggs' double-double @cite_18 , Bailey's quad-doubles @cite_5 and Daumas' floating-point expansions @cite_0 . This representation format is based on the IEEE-754 features that lead to simple algorithms for arithmetic operators. However, this format is confined to low precision (2 to 3 floating-point numbers) as the complexity of algorithms increases quadratically with the precision. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_18"
],
"mid": [
"2104380208",
"2624158749",
"1975244381"
],
"abstract": [
"A quad-double number is an unevaluated sum of four IEEE double precision numbers, capable of representing at least 212 bits of significand. We present the algorithms for various arithmetic operations (including the four basic operations and various algebraic and transcendental operations) on quad-double numbers. The performance of the algorithms, implemented in C++, is also presented.",
"The crlibm project aims at developing a portable, proven, correctly rounded, and efficient mathematical library (libm) for double precision. Current libm implementation do not always return the floating-point number that is closest to the exact mathematical result. As a consequence, different libm implementation will return different results for the same input, which prevents full portability of floating-point ap- plications. In addition, few libraries support but the round-to-nearest mode of the IEEE754 IEC 60559 standard for floating-point arithmetic (hereafter usually referred to as the IEEE-754 stan- dard). crlibm provides the four rounding modes: To nearest, to +∞, to −∞ and to zero.",
"Multiple-precision integer operations are key components of many security applications; but unfortunately they are computationally expensive on contemporary CPUs. In this paper, we present our design and implementation of a multiple-precision integer library for GPUs which is implemented by CUDA. We report our experimental results which show that a significant speedup can be achieved by GPUs as compared with the GNU MP library on CPUs."
]
} |
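As an aside on the row above: the "unevaluated sum of several double-precision FP numbers" idea (double-double) rests on error-free transformations such as Knuth's two-sum. The sketch below is a minimal Python illustration of the simplest ("sloppy") double-double addition; the function names are ours, and this is not the GPU implementation described in the paper.

```python
def two_sum(a, b):
    """Knuth's error-free addition: returns (s, e) with s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def quick_two_sum(a, b):
    """Error-free addition assuming |a| >= |b|."""
    s = a + b
    e = b - (s - a)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers x = (hi, lo), y = (hi, lo).

    The result is again an unevaluated pair of doubles, i.e. the
    representation format discussed in the related-work text above.
    """
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return quick_two_sum(s, e)  # renormalise so that |lo| is tiny relative to hi

# Example: 1.0 + 1e-30 cannot be held in one double, but survives as a pair.
print(dd_add((1.0, 0.0), (1e-30, 0.0)))  # (1.0, 1e-30)
```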
cs0603115 | 1539159366 | The Graphic Processing Unit (GPU) has evolved into a powerful and flexible processor. The latest graphic processors provide fully programmable vertex and pixel processing units that support vector operations up to single floating-point precision. This computational power is now being used for general-purpose computations. However, some applications require higher precision than single precision. This paper describes the emulation of a 44-bit floating-point number format and its corresponding operations. An implementation is presented along with performance and accuracy results. | For example, Strzodka @cite_14 proposed a 16-bit fixed-point representation and operation out of the 8-bit fixed-point format. In his work, two 8-bit numbers were used to emulate 16-bit. The author claimed that operators in his representation format were only $50 | {
"cite_N": [
"@cite_14"
],
"mid": [
"2291160084"
],
"abstract": [
"Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5-bits, which achieves higher final test accuracy than linear at 5-bits."
]
} |
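A rough sketch of the idea in the row above (building a wider fixed-point format out of two narrower words); this is our own illustration with explicit carry propagation, not Strzodka's GPU formulation:

```python
def add16(a, b):
    """Add two 16-bit fixed-point values, each stored as a (high byte, low byte) pair."""
    lo_sum = a[1] + b[1]
    lo = lo_sum & 0xFF
    carry = lo_sum >> 8                  # carry out of the low byte
    hi = (a[0] + b[0] + carry) & 0xFF    # overflow past 16 bits wraps, as in the emulated format
    return (hi, lo)

# 0x01FF + 0x0001 = 0x0200
print(add16((0x01, 0xFF), (0x00, 0x01)))  # (2, 0)
```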
quant-ph0601097 | 1545991018 | In this note we consider optimised circuits for implementing Shor's quantum factoring algorithm. First I give a circuit for which none of the about 2n qubits need to be initialised (though we still have to make the usual 2n measurements later on). Then I show how the modular additions in the algorithm can be carried out with a superposition of an arithmetic sequence. This makes parallelisation of Shor's algorithm easier. Finally I show how one can factor with only about 1.5n qubits, and maybe even fewer. | Also I understand that John Watrous @cite_4 has been using uniform superpositions of subgroups (and cosets) in his work on quantum algorithms for solvable groups. Thus he also used coset superpositions to represent elements of the factor group (and probably also to carry out factor group operations on them). In our case the overall group are the integers, the (normal) subgroup are the multiples of @math . The factor group whose elements we want to represent is @math . We now represent these elements by superpositions over the cosets of the form @math . A problem in our case is that we can do things only approximately as the integers and the cosets are infinite sets. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1967088292"
],
"abstract": [
"In this paper we give a polynomial-time quantum algorithm for computing orders of solvable groups. Several other problems, such as testing membership in solvable groups, testing equality of subgroups in a given solvable group, and testing normality of a subgroup in a given solvable group, reduce to computing orders of solvable groups and therefore admit polynomial-time quantum algorithms as well. Our algorithm works in the setting of black-box groups, wherein none of these problems have polynomial-time classical algorithms. As an important byproduct, our algorithm is able to produce a pure quantum state that is uniform over the elements in any chosen subgroup of a solvable group, which yields a natural way to apply existing quantum algorithms to factor groups of solvable groups."
]
} |
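In our notation (not the paper's), the coset-superposition idea in the row above amounts to representing an element k of the factor group Z/NZ by a uniform superposition over (a finite slice of) its coset:

```latex
\[
  \lvert \overline{k} \rangle \;\propto\; \sum_{j=0}^{M-1} \lvert k + jN \rangle ,
  \qquad k \in \{0,\dots,N-1\},
\]
```

with some finite cutoff M, which is the "only approximately" caveat in the text above: the true coset k + NZ is infinite.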
cs0601044 | 2950161596 | Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be assimilated to a learning task, with the inference of models from a limited number of samples. This paper is an investigation of two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three data sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that while the accuracy on the test sets is preserved, with less variance compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced. | Some GP learning applications @cite_11 @cite_2 @cite_18 have made use of a three data sets methodology, but without making a thorough analysis of its effects. Panait and Luke @cite_25 conducted some experiments on different approaches to increase the robustness of the solutions generated by GP, using a three data sets methodology to evaluate the efficiency of each approach. Rowland @cite_21 and Kushchu @cite_4 conducted studies on generalization in EC and GP. Both of their arguments converge toward testing solutions in previously unseen situations to improve robustness. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_21",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2027433115",
"2964115178",
"2149273154",
"1808652302",
"2109449402",
"2129373702"
],
"abstract": [
"In genetic programming (GP), learning problems can be classified broadly into two types: those using data sets, as in supervised learning, and those using an environment as a source of feedback. An increasing amount of research has concentrated on the robustness or generalization ability of the programs evolved using GP. While some of the researchers report on the brittleness of the solutions evolved, others proposed methods of promoting robustness generalization. It is important that these methods are not ad hoc and are applicable to other experimental setups. In this paper, learning concepts from traditional machine learning and a brief review of research on generalization in GP are presented. The paper also identifies problems with brittleness of solutions produced by GP and suggests a method for promoting robustness generalization of the solutions in simulating learning behaviors using GP.",
"Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.",
"Gaussian process (GP) models are very popular for machine learning and regression and they are widely used to account for spatial or temporal relationships between multivariate random variables. In this paper, we propose a general formulation of underdetermined source separation as a problem involving GP regression. The advantage of the proposed unified view is first to describe the different underdetermined source separation problems as particular cases of a more general framework. Second, it provides a flexible means to include a variety of prior information concerning the sources such as smoothness, local stationarity or periodicity through the use of adequate covariance functions. Third, given the model, it provides an optimal solution in the minimum mean squared error (MMSE) sense to the source separation problem. In order to make the GP models tractable for very large signals, we introduce framing as a GP approximation and we show that computations for regularly sampled and locally stationary GPs can be done very efficiently in the frequency domain. These findings establish a deep connection between GP and nonnegative tensor factorizations (NTF) with the Itakura-Saito distance and lead to effective methods to learn GP hyperparameters for very large and regularly sampled signals.",
"Objective To summarize literature describing approaches aimed at automatically identifying patients with a common phenotype. @PARASPLIT Materials and methods We performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association , (2) Journal of Biomedical Informatics , (3) Proceedings of the Annual American Medical Informatics Association Symposium , and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included. @PARASPLIT Results Ninety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients. @PARASPLIT Discussion We observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems. @PARASPLIT Conclusions There are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses.",
"Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ) error term combined with a sparseness-inducing regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is de-emphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.",
"Researchers who make tutoring systems would like to know which sequences of educational content lead to the most effective learning by their students. The majority of data collected in many ITS systems consist of answers to a group of questions of a given skill often presented in a random sequence. Following work that identifies which items produce the most learning we propose a Bayesian method using similar permutation analysis techniques to determine if item learning is context sensitive and if so which orderings of questions produce the most learning. We confine our analysis to random sequences with three questions. The method identifies question ordering rules such as, question A should go before B, which are statistically reliably beneficial to learning. Real tutor data from five random sequence problem sets were analyzed. Statistically reliable orderings of questions were found in two of the five real data problem sets. A simulation consisting of 140 experiments was run to validate the method's accuracy and test its reliability. The method succeeded in finding 43 of the underlying item order effects with a 6 false positive rate using a p value threshold of <= 0.05. Using this method, ITS researchers can gain valuable knowledge about their problem sets and feasibly let the ITS automatically identify item order effects and optimize student learning by restricting assigned sequences to those prescribed as most beneficial to learning."
]
} |
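A generic sketch of the three-data-set methodology discussed in the row above (train on one set, pick the best-of-run individual on a validation set, report on an untouched test set); `evolve` and `fitness` are assumed callables, not the authors' code:

```python
import random

def three_set_run(evolve, fitness, dataset, seed=0):
    """Best-of-run selection with a train / validation / test split (illustrative)."""
    rng = random.Random(seed)
    data = list(dataset)
    rng.shuffle(data)
    n = len(data)
    train, valid, test = data[: n // 2], data[n // 2 : 3 * n // 4], data[3 * n // 4 :]

    population = evolve(train)                                   # fitness cases taken from `train`
    best = max(population, key=lambda ind: fitness(ind, valid))  # best-of-run chosen on `valid`
    return best, fitness(best, test)                             # generalization reported on `test`
```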
cs0601044 | 2950161596 | Fitness functions based on test cases are very common in Genetic Programming (GP). This process can be assimilated to a learning task, with the inference of models from a limited number of samples. This paper is an investigation of two methods to improve generalization in GP-based learning: 1) the selection of the best-of-run individuals using a three data sets methodology, and 2) the application of parsimony pressure in order to reduce the complexity of the solutions. Results using GP in a binary classification setup show that while the accuracy on the test sets is preserved, with less variance compared to baseline results, the mean tree size obtained with the tested methods is significantly reduced. | Because of the bloat phenomenon, typical in GP, parsimony pressure has been more widely studied @cite_19 @cite_12 @cite_15 @cite_23 . In particular, several papers @cite_5 @cite_22 @cite_1 have produced interesting results around the idea of using a parsimony pressure to increase the generalization capability of GP-evolved solutions. However, a counter-argument is given in @cite_13 , where solutions biased toward low complexity have, in some circumstances, increased generalization error. This is in accordance with the argument given in @cite_17 , which states that less complex solutions are not always more robust. | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"2803852436",
"1982359839",
"2763281404",
"2962919088",
"2149273154",
"2074208271",
"2810518847",
"1549918636",
"2034600319"
],
"abstract": [
"For theoretical analyses there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable size representations, in particular yielding a possible bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover had a surprisingly little share in this work.",
"We propose and analyze a discontinuous Galerkin approximation for the Stokes problem. The finite element triangulation employed is not required to be conforming and we use discontinuous pressures and velocities. No additional unknown fields need to be introduced, but only suitable bilinear forms defined on the interfaces between the elements, involving the jumps of the velocity and the average of the pressure. We consider hp approximations using ℚk′–ℚk velocity-pressure pairs with k′ = k + 2, k + 1, k. Our methods show better stability properties than the corresponding conforming ones. We prove that our first two choices of velocity spaces ensure uniform divergence stability with respect to the mesh size h. Numerical results show that they are uniformly stable with respect to the local polynomial degree k, a property that has no analog in the conforming case. An explicit bound in k which is not sharp is also proven. Numerical results show that if equal order approximation is chosen for the velocity and pressure, no spurious pressure modes are present but the method is not uniformly stable either with respect to h or k. We derive a priori error estimates generalizing the abstract theory of mixed methods. Optimal error estimates in h are proven. As for discontinuous Galerkin methods for scalar diffusive problems, half of the power of k is lost for p and hp pproximations independently of the divergence stability.",
"When solving consensus optimization problems over a graph, there is often an explicit characterization of the convergence rate of Gradient Descent (GD) using the spectrum of the graph Laplacian. The same type of problems under the Alternating Direction Method of Multipliers (ADMM) are, however, poorly understood. For instance, simple but important non-strongly-convex consensus problems have not yet being analyzed, especially concerning the dependency of the convergence rate on the graph topology. Recently, for a non-strongly-convex consensus problem, a connection between distributed ADMM and lifted Markov chains was proposed, followed by a conjecture that ADMM is faster than GD by a square root factor in its convergence time, in close analogy to the mixing speedup achieved by lifting several Markov chains. Nevertheless, a proof of such a claim is is still lacking. Here we provide a full characterization of the convergence of distributed over-relaxed ADMM for the same type of consensus problem in terms of the topology of the underlying graph. Our results provide explicit formulas for optimal parameter selection in terms of the second largest eigenvalue of the transition matrix of the graph's random walk. Another consequence of our results is a proof of the aforementioned conjecture, which interestingly, we show it is valid for any graph, even the ones whose random walks cannot be accelerated via Markov chain lifting.",
"We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramer GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"Gaussian process (GP) models are very popular for machine learning and regression and they are widely used to account for spatial or temporal relationships between multivariate random variables. In this paper, we propose a general formulation of underdetermined source separation as a problem involving GP regression. The advantage of the proposed unified view is first to describe the different underdetermined source separation problems as particular cases of a more general framework. Second, it provides a flexible means to include a variety of prior information concerning the sources such as smoothness, local stationarity or periodicity through the use of adequate covariance functions. Third, given the model, it provides an optimal solution in the minimum mean squared error (MMSE) sense to the source separation problem. In order to make the GP models tractable for very large signals, we introduce framing as a GP approximation and we show that computations for regularly sampled and locally stationary GPs can be done very efficiently in the frequency domain. These findings establish a deep connection between GP and nonnegative tensor factorizations (NTF) with the Itakura-Saito distance and lead to effective methods to learn GP hyperparameters for very large and regularly sampled signals.",
"We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993 in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535 success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993 success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.",
"In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimate the probability that the given real data is more realistic than a randomly sampled fake data. We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400 ), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.",
"The formulation @math minx,yf(x)+g(y),subjecttoAx+By=b,where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous gradient. On this kind of problem, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of f g-decoupled subproblems. However, its effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1 k) and @math O(1 k2) were recently established in the literature, though the O(1 k) rates do not require strong convexity. This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B. The result applies to various generalizations of ADM that allow the subproblems to be solved faster and less exactly in certain manners. The derived rate of convergence also provides some theoretical guidance for optimizing the ADM parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of ADM generalizations.",
"In citepapa, Papadimitriou formalized the notion of routing stability in BGP as the following coalitional game theoretic problem: Given a network with a multicommodity flow satisfying node capacity and demand constraints, the payoff of a node is the total flow originated or terminated at it. A payoff allocation is in the core if and only if there is no subset of nodes that can increase their payoff by seceding from the network. We answer one of the open problems in citepapa by proving that for any network, the core is non-empty in both the transferable (where the nodes can compensate each other with side payments) and the non-transferable case. In the transferable case we show that such an allocation can be computed in polynomial time. We also generalize this result to the case where a strictly concave utility function is associated with each commodity."
]
} |
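Parsimony pressure, as discussed in the row above, is in its simplest form just a size penalty folded into the fitness; a minimal sketch with an assumed weight `alpha` (not a scheme from any of the cited papers):

```python
def parsimonious_fitness(accuracy, tree_size, alpha=0.001):
    """Higher is better: classification accuracy minus a small penalty per tree node."""
    return accuracy - alpha * tree_size
```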
cs0601051 | 1489954224 | This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, independently proposed This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP). | This notion of unfolding derives from the work on unfolding of intensional sets @cite_8 , and has been independently described in @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_8"
],
"mid": [
"2004142419",
"2126998703"
],
"abstract": [
"Abstract Two conditions on a collection of simple orders - unimodality and straightness - are necessary but not jointly sufficient for unidimensional unfolding representations. From the analysis of these conditions, a polynomial time algorithm is derived for the testing of unidimensionality and for the construction of a representation when one exists.",
"Here, we propose a planning method for knotting unknotting of deformable linear objects. First, we propose a topological description of the state of a linear object. Second, transitions between these states are defined by introducing four basic operations. Then, possible sequences of crossing state transitions, i.e. possible manipulation processes, can be generated once the initial and the objective states are given. Third, a method for determining grasping points and their directions of movement is proposed to realize derived manipulation processes. Our proposed method indicated that it is theoretically possible for any knotting manipulation of a linear object placed on a table to be realized by a one-handed robot with three translational DOF and one rotational DOF. Furthermore, criteria for evaluation of generated plans are introduced to reduce the candidates of manipulation plans. Fourth, a planning method for tying knots tightly is established because they fulfill their fixing function by tightening them. Finally, we report knotting unknotting manipulation performed by a vision-guided system to demonstrate the usefulness of our approach."
]
} |
cs0601051 | 1489954224 | This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, independently proposed This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP). | The work of @cite_5 @cite_2 @cite_11 contains an elegant generalization of several semantics of logic programs to logic programs with aggregates. The key idea in this work is the use of approximation theory in defining several semantics for logic programs with aggregates (e.g., two-valued semantics, ultimate three-valued stable semantics, three-valued stable model semantics). In particular, in @cite_11 , the authors describe a fixpoint operator, called @math , operating on 3-valued interpretations and parameterized by the choice of approximating aggregates. | {
"cite_N": [
"@cite_5",
"@cite_11",
"@cite_2"
],
"mid": [
"1598820892",
"1520574003",
"342706626"
],
"abstract": [
"In this paper, we propose an extension of the well-founded and stable model semantics for logic programs with aggregates. Our approach uses Approximation Theory, a fixpoint theory of stable and well-founded fixpoints of non-monotone operators in a complete lattice. We define the syntax of logic programs with aggregates and define the immediate consequence operator of such programs. We investigate the well-founded and stable semantics generated by Approximation Theory. We show that our approach extends logic programs with stratified aggregation and that it correctly deals with well-known benchmark problems such as the shortest path program and the company control problem.",
"We introduce a family of partial stable model semantics for logic programs with arbitrary aggregate relations. The semantics are parametrized by the interpretation of aggregate relations in three-valued logic. Any semantics in this family satisfies two important properties: (i) it extends the partial stable semantics for normal logic programs and (ii) total stable models are always minimal. We also give a specific instance of the semantics and show that it has several attractive features.",
"Aggregates are functions that take sets as arguments. Examples are the function that maps a set to the number of its elements or the function which maps a set to its minimal element. Aggregates are frequently used in relational databases and have many applications in combinatorial search problems and knowledge representation. Aggregates are of particular importance for several extensions of logic programming which are used for declarative programming like Answer Set Programming, Abductive Logic Programming, and the logic of inductive definitions (ID-Logic). Aggregate atoms not only allow a broader class of problems to be represented in a natural way but also allow a more compact representation of problems which often leads to faster solving times. Extensions of specific semantics of logic programs with, in many cases, specific aggregate relations have been proposed before. The main contributions of this thesis are: (i) we extend all major semantics of logic programs: the least model semantics of definite logic programs, the standard model semantics of stratified programs, the Clark completion semantics, the well-founded semantics, the stable models semantics, and the three-valued stable semantics; (ii) our framework admits arbitrary aggregate relations in the bodies of rules. We follow a denotational approach in which a semantics is defined as a (set of) fixpoint(s) of an operator associated with a program. The main tool of this work is Approximation Theory. This is an algebraic theory which defines different types of fixpoints of an approximating operator associated with a logic program. All major semantics of a logic program correspond to specific types of fixpoints of an approximating operator introduced by Fitting. We study different approximating operators for aggregate programs and investigate the precision and complexity of the semantics generated by them. We study in detail one specific operator which extends the Fitting operator and whose semantics extends the three-valued stable semantics of logic programs without aggregates. We look at algorithms, complexity, transformations of aggregate atoms and programs, and an implementation in XSB Prolog."
]
} |
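The "monotone and continuous fixpoint operator" in the abstract above can be computed by ordinary Kleene iteration from the least element; the sketch below is a generic illustration (the operator `T` and the interpretations it acts on are placeholders, not the paper's definitions):

```python
def least_fixpoint(T, bottom):
    """Iterate a monotone operator T from `bottom` until nothing changes.

    For a continuous operator on a complete lattice this reaches the least
    fixpoint, which is how a consequence operator like the one above is used.
    """
    current = bottom
    while True:
        nxt = T(current)
        if nxt == current:
            return current
        current = nxt

# Toy usage: T adds the heads of definite rules whose bodies are already satisfied.
rules = {"b": set(), "a": {"b"}}   # b <- (fact), a <- b
T = lambda I: frozenset(h for h, body in rules.items() if body <= I)
print(least_fixpoint(T, frozenset()))  # frozenset containing 'a' and 'b'
```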
cs0601051 | 1489954224 | This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, independently proposed This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP). | For the sake of completeness, we will review the translation of @cite_5 , presented using the notation of our paper. Given a ground logic program with aggregates @math , @math denotes the ground normal logic program obtained after the translation. The process begins with the translation of each aggregate atom @math of the form @math into a disjunction @math , where @math , and each @math is a conjunction of the form [ l s_1 l l H ( ) s_2 l ] The construction of @math considers only the pairs @math that satisfy the following condition: each interpretation @math such that @math and @math must satisfy @math . The translation @math is then created by replacing rules with disjunction in the body by a set of standard rules in a straightforward way. For example, the rule [ a (b c), d ] is replaced by the two rules [ ] From the definitions of @math and of aggregate solutions, we have the following simple lemma: We next show that fixed point answer sets of @math are answer sets of @math . | {
"cite_N": [
"@cite_5"
],
"mid": [
"2064864283"
],
"abstract": [
"The addition of aggregates has been one of the most relevant enhancements to the language of answer set programming (ASP). They strengthen the modelling power of ASP in terms of natural and concise problem representations. Previous semantic definitions typically agree in the case of non-recursive aggregates, but the picture is less clear for aggregates involved in recursion. Some proposals explicitly avoid recursive aggregates, most others differ, and many of them do not satisfy desirable criteria, such as minimality or coincidence with answer sets in the aggregate-free case. In this paper we define a semantics for programs with arbitrary aggregates (including monotone, antimonotone, and nonmonotone aggregates) in the full ASP language allowing also for disjunction in the head (disjunctive logic programming - DLP). This semantics is a genuine generalization of the answer set semantics for DLP, it is defined by a natural variant of the Gelfond-Lifschitz transformation, and treats aggregate and non-aggregate literals in a uniform way. This novel transformation is interesting per se also in the aggregate-free case, since it is simpler than the original transformation and does not need to differentiate between positive and negative literals. We prove that our semantics guarantees the minimality (and therefore the incomparability) of answer sets, and we demonstrate that it coincides with the standard answer set semantics on aggregate-free programs. Moreover, we carry out an in-depth study of the computational complexity of the language. The analysis pays particular attention to the impact of syntactical restrictions on programs in the form of limited use of aggregates, disjunction, and negation. While the addition of aggregates does not affect the complexity of the full DLP language, it turns out that their presence does increase the complexity of normal (i.e., non-disjunctive) ASP programs up to the second level of the polynomial hierarchy. However, we show that there are large classes of aggregates the addition of which does not cause any complexity gap even for normal programs, including the fragment allowing for arbitrary monotone, arbitrary antimonotone, and stratified (i.e., non-recursive) nonmonotone aggregates. The analysis provides some useful indications on the possibility to implement aggregates in existing reasoning engines."
]
} |
cs0601051 | 1489954224 | This technical note describes a monotone and continuous fixpoint operator to compute the answer sets of programs with aggregates. The fixpoint operator relies on the notion of aggregate solution. Under certain conditions, this operator behaves identically to the three-valued immediate consequence operator @math for aggregate programs, independently proposed This operator allows us to closely tie the computational complexity of the answer set checking and answer sets existence problems to the cost of checking a solution of the aggregates in the program. Finally, we relate the semantics described by the operator to other proposals for logic programming with aggregates. To appear in Theory and Practice of Logic Programming (TPLP). | In @cite_5 , it is shown that answer sets of @math coincide with the of @math (defined by the operator @math ). This, together with the above lemma and Theorem , allows us to conclude the following theorem. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2551334724"
],
"abstract": [
"Let @math be a partially ordered set. If the Boolean lattice @math can be partitioned into copies of @math for some positive integer @math , then @math must satisfy the following two trivial conditions: (1) the size of @math is a power of @math , (2) @math has a unique maximal and minimal element. Resolving a conjecture of Lonc, it was shown by Gruslys, Leader and Tomon that these conditions are sufficient as well. In this paper, we show that if @math only satisfies condition (2), we can still almost partition @math into copies of @math . We prove that if @math has a unique maximal and minimal element, then there exists a constant @math such that all but at most @math elements of @math can be covered by disjoint copies of @math ."
]
} |
cs0601068 | 1628238937 | In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user kernel pointer checks, and race-conditions. On implementing these checks, we were able to uncover previously-unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework, and found numerous bugs. | Static compile-time analysis with programmer written compiler-extensions was used to catch around 500 bugs in the linux kernel @cite_6 , @cite_3 . Using static data flow analysis and domain specific knowledge, many bugs were found in the heavily audited kernel. Ways have also been suggested to automatically detect anomalies as deviant behavior in the source code @cite_2 . Most of the bugs checked by static analysis are local to a single file, sometimes even local to a single procedure. This is due to the complexity involved in performing global compile time analysis. This limits the power of static analysis tools to surface bugs . Our approach, on the other hand, can track data flow across many different software components possibly written by different vendors and can thus target a different variety of errors. However, static analysis has the huge advantage of being able to check all possible code paths, while our execution-driven approach can only check bugs along the path of execution in the system. | {
"cite_N": [
"@cite_3",
"@cite_6",
"@cite_2"
],
"mid": [
"1600965014",
"1992371286",
"2124666592"
],
"abstract": [
"This paper shows how system-specific static analysis can find security errors that violate rules such as \"integers from untrusted sources must be sanitized before use\" and \"do not dereference user-supplied pointers.\" In our approach, programmers write system-specific extensions that are linked into the compiler and check their code for errors. We demonstrate the approach's effectiveness by using it to find over 100 security errors in Linux and OpenBSD, over 50 of which have led to kernel patches. An unusual feature of our approach is the use of methods to automatically detect when we miss code actions that should be checked.",
"Static checking can verify the absence of errors in a program, but often requires written annotations or specifications. As a result, static checking can be difficult to use effectively: it can be difficult to determine a specification and tedious to annotate programs. Automated tools that aid the annotation process can decrease the cost of static checking and enable it to be more widely used.This paper describes an evaluation of the effectiveness of two techniques, one static and one dynamic, to assist the annotation process. We quantitatively and qualitatively evaluate 41 programmers using ESC Java in a program verification task over three small programs, using Houdini for static inference and Daikon for dynamic inference. We also investigate the effect of unsoundness in the dynamic analysis.Statistically significant results show that both inference tools improve task completion; Daikon enables users to express more correct invariants; unsoundness of the dynamic analysis is little hindrance to users; and users imperfectly exploit Houdini. Interviews indicate that beginning users found Daikon to be helpful; Houdini to be neutral; static checking to be of potential practical use; and both assistance tools to have unique benefits.Our observations not only provide a critical evaluation of these two techniques, but also highlight important considerations for creating future assistance tools.",
"A great deal of attention has lately been given to addressing software bugs such as errors in operating system drivers or security bugs. However, there are many other lesser known errors specific to individual applications or APIs and these violations of application-specific coding rules are responsible for a multitude of errors. In this paper we propose DynaMine, a tool that analyzes source code check-ins to find highly correlated method calls as well as common bug fixes in order to automatically discover application-specific coding patterns. Potential patterns discovered through mining are passed to a dynamic analysis tool for validation; finally, the results of dynamic analysis are presented to the user.The combination of revision history mining and dynamic analysis techniques leveraged in DynaMine proves effective for both discovering new application-specific patterns and for finding errors when applied to very large applications with many man-years of development and debugging effort behind them. We have analyzed Eclipse and jEdit, two widely-used, mature, highly extensible applications consisting of more than 3,600,000 lines of code combined. By mining revision histories, we have discovered 56 previously unknown, highly application-specific patterns. Out of these, 21 were dynamically confirmed as very likely valid patterns and a total of 263 pattern violations were found."
]
} |
cs0601068 | 1628238937 | In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user kernel pointer checks, and race-conditions. On implementing these checks, we were able to uncover previously-unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework, and found numerous bugs. | Recently, model checking was used to find serious file system errors @cite_8 . Using an abstract model and intelligent reduction of the state space, they could check for errors which would have required an exponential number of search paths through traditional testing. Model checking can check for deeper semantic bugs than possible with static compile-time analysis. We intend to use similar ideas to model check entire system images, thus allowing us to search a larger number of execution paths while performing our shadow machine analysis. One of the obstacles in this direction is the slow speed of machine simulation that makes execution of speculative paths almost infeasible. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2124877509"
],
"abstract": [
"This article shows how to use model checking to find serious errors in file systems. Model checking is a formal verification technique tuned for finding corner-case errors by comprehensively exploring the state spaces defined by a system. File systems have two dynamics that make them attractive for such an approach. First, their errors are some of the most serious, since they can destroy persistent data and lead to unrecoverable corruption. Second, traditional testing needs an impractical, exponential number of test cases to check that the system will recover if it crashes at any point during execution. Model checking employs a variety of state-reducing techniques that allow it to explore such vast state spaces efficiently.We built a system, FiSC, for model checking file systems. We applied it to four widely-used, heavily-tested file systems: ext3, JFS, ReiserFS and XFS. We found serious bugs in all of them, 33 in total. Most have led to patches within a day of diagnosis. For each file system, FiSC found demonstrable events leading to the unrecoverable destruction of metadata and entire directories, including the file system root directory “ ”."
]
} |
cs0601068 | 1628238937 | In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user kernel pointer checks, and race-conditions. On implementing these checks, we were able to uncover previously-unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework, and found numerous bugs. | Shadow machine simulation has been previously used to perform taint analysis to determine the data lifetime of sensitive data @cite_0 . This work reported a startling observation that sensitive data like passwords and credit card numbers may reside in computer's memory and disk long after the user has logged out. Such leaks occur at caches, I O buffers, kernel queues, and other places which are not under the control of the application developer. Our work uses a similar taint analysis by marking all bytes received over the network as untrusted and checking if they are used in unwanted ways (eg. formatstring). | {
"cite_N": [
"@cite_0"
],
"mid": [
"1499241274"
],
"abstract": [
"Strictly limiting the lifetime (i.e. propagation and duration of exposure) of sensitive data (e.g. passwords) is an important and well accepted practice in secure software development. Unfortunately, there are no current methods available for easily analyzing data lifetime, and very little information available on the quality of today's software with respect to data lifetime. We describe a system we have developed for analyzing sensitive data lifetime through whole system simulation called TaintBochs. TaintBochs tracks sensitive data by \"tainting\" it at the hardware level. Tainting information is then propagated across operating system, language, and application boundaries, permitting analysis of sensitive data handling at a whole system level. We have used TaintBochs to analyze sensitive data handling in several large, real world applications. Among these were Mozilla, Apache, and Perl, which are used to process millions of passwords, credit card numbers, etc. on a daily basis. Our investigation reveals that these applications and the components they rely upon take virtually no measures to limit the lifetime of sensitive data they handle, leaving passwords and other sensitive data scattered throughout user and kernel memory. We show how a few simple and practical changes can greatly reduce sensitive data lifetime in these applications."
]
} |
cs0601068 | 1628238937 | In this paper, we present a system called Checkbochs, a machine simulator that checks rules about its guest operating system and applications at the hardware level. The properties to be checked can be implemented as plugins' in the Checkbochs simulator. Some of the properties that were checked using Checkbochs include null-pointer checks, format-string vulnerabilities, user kernel pointer checks, and race-conditions. On implementing these checks, we were able to uncover previously-unknown bugs in widely used Linux distributions. We also tested our tools on undergraduate coursework, and found numerous bugs. | Recently, @cite_11 used taint-analysis on untrusted data to check for security violations such as buffer overflows and formatstring attacks in applications. By implementing a valgrind skin, they were able to restrict the overhead of their taint-analysis tool to 10-25x. Considering that the computation power is relatively cheap, they suggest using their tool in production runs of the software. This will detect and prevent any online attacks on the system. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1849042743"
],
"abstract": [
"We investigate the limitations of using dynamic taint analysis for tracking privacy-sensitive information on Android-based mobile devices. Taint tracking keeps track of data as it propagates through variables, interprocess messages and files, by tagging them with taint marks. A popular taint-tracking system, TaintDroid, uses this approach in Android mobile applications to mark private information, such as device identifiers or user's contacts details, and subsequently issue warnings when this information is misused (e.g., sent to an un-desired third party). We present a collection of attacks on Android-based taint tracking. Specifically, we apply generic classes of anti-taint methods in a mobile device environment to circumvent this security technique. We have implemented the presented techniques in an Android application, ScrubDroid. We successfully tested our app with the TaintDroid implementations for Android OS versions 2.3 to 4.1.1, both using the emulator and with real devices. Finally, we evaluate the success rate and time to complete of the presented attacks. We conclude that, although taint tracking may be a valuable tool for software developers, it will not effectively protect sensitive data from the black-box code of a motivated attacker applying any of the presented anti-taint tracking methods."
]
} |
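The taint-analysis approach summarized in the record above works by tagging data from untrusted sources and propagating the tag through copies and computations, raising an alarm when tainted data reaches a sensitive sink such as a format string. A minimal sketch of that idea in Python (a toy value-wrapper model for illustration only; the cited tools instrument binaries via a simulator or a Valgrind skin rather than wrapping values):

    # Toy taint propagation: values carry a taint flag that survives
    # assignments and arithmetic; sinks reject tainted data.
    class Tainted:
        def __init__(self, value, tainted=False):
            self.value = value
            self.tainted = tainted

        def __add__(self, other):
            other_t = other.tainted if isinstance(other, Tainted) else False
            other_v = other.value if isinstance(other, Tainted) else other
            # Taint propagates through any operation involving tainted input.
            return Tainted(self.value + other_v, self.tainted or other_t)

    def read_network_input(data):
        # Anything arriving from an untrusted source starts out tainted.
        return Tainted(data, tainted=True)

    def printf_sink(fmt):
        # Attacker-controlled data used as a format string is the classic bug.
        if isinstance(fmt, Tainted) and fmt.tainted:
            raise RuntimeError("tainted data reached format-string sink")
        print(fmt.value if isinstance(fmt, Tainted) else fmt)

    user = read_network_input("%x%x%x")
    greeting = Tainted("hello ") + user      # taint propagates through the copy
    try:
        printf_sink(greeting)                # flagged: tainted format string
    except RuntimeError as e:
        print("violation detected:", e)
    printf_sink(Tainted("static format"))    # untainted data passes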
cs0601073 | 1535620672 | In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances between any source and any destination in the network, hence we are able to deduce a length parameter that is unique for each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical Large-Scale Wireless Ad-Hoc Networks: 1) We obtain the distribution of the lengths of all the paths in a network for any given routing strategy, 2) We are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N, increases to infinity, 3) For any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as √N bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks. | The distribution of distances between source and destination nodes has been calculated before @cite_8 @cite_6 . Both cited approaches depend on a two-dimensional geometry, which is justifiable to some extent. In this work we opt for a three-dimensional formulation of the problem in order not to restrict the topology analyzed. But we are aware that the dimensionality of the routing problem in Wireless Ad-Hoc Networks is not a well-defined problem. | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2126003739",
"2156689181"
],
"abstract": [
"Since ad hoc and sensor networks can be composed of a very large number of devices, the scalability of network protocols is a major design concern. Furthermore, network protocols must be designed to prolong the battery lifetime of the devices. However, most existing routing techniques for ad hoc networks are known not to scale well. On the other hand, the so-called geographical routing algorithms are known to be scalable but their energy efficiency has never been extensively and comparatively studied. In a geographical routing algorithm, data packets are forwarded by a node to its neighbor based on their respective positions. The neighborhood of each node is constituted by the nodes that lie within a certain radio range. Thus, from the perspective of a node forwarding a packet, the next hop depends on the width of the neighborhood it perceives. The analytical framework proposed in this paper allows to analyze the relationship between the energy efficiency of the routing tasks and the extension of the range of the topology knowledge for each node. A wider topology knowledge may improve the energy efficiency of the routing tasks but increases the cost of topology information due to signaling packets needed to acquire this information. The problem of determining the optimal topology knowledge range for each node to make energy efficient geographical routing decisions is tackled by integer linear programming. It is shown that the problem is intrinsically localized, i.e., a limited topology knowledge is sufficient to make energy efficient forwarding decisions. The leading forwarding rules for geographical routing are compared in this framework, and the energy efficiency of each of them is studied. Moreover, a new forwarding scheme, partial topology knowledge forwarding (PTKF), is introduced, and shown to outperform other existing schemes in typical application scenarios. A probe-based distributed protocol for knowledge range adjustment (PRADA) is finally introduced that allows each node to efficiently select online its topology knowledge range. PRADA is shown to rapidly converge to a near-optimal solution.",
"We consider routing problems in ad hoc wireless networks modeled as unit graphs in which nodes are points in the plane and two nodes can communicate if the distance between them is less than some fixed unit. We describe the first distributed algorithms for routing that do not require duplication of packets or memory at the nodes and yet guarantee that a packet is delivered to its destination. These algorithms can be extended to yield algorithms for broadcasting and geocasting that do not require packet duplication. A by product of our results is a simple distributed protocol for extracting a planar subgraph of a unit graph. We also present simulation results on the performance of our algorithms."
]
} |
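The related-work paragraph of the record above argues that the choice between a two- and a three-dimensional formulation changes the source-to-destination distance statistics. A quick Monte Carlo illustration (uniform node placement in a unit square versus a unit cube is an assumption made here for the sketch, not the model of the cited papers):

    import random, math

    def mean_pair_distance(dim, trials=100_000, seed=0):
        # Average Euclidean distance between two independent uniform points
        # in the unit hypercube of the given dimension.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            p = [rng.random() for _ in range(dim)]
            q = [rng.random() for _ in range(dim)]
            total += math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
        return total / trials

    # The 2D mean is about 0.5214 and the 3D mean about 0.6617, so the
    # assumed dimensionality noticeably shifts the distance distribution.
    print("2D:", round(mean_pair_distance(2), 4))
    print("3D:", round(mean_pair_distance(3), 4))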
cs0601073 | 1535620672 | In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances between any source and any destination in the network, hence we are able to deduce a length parameter that is unique for each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical Large-Scale Wireless Ad-Hoc Networks: 1) We obtain the distribution of the lengths of all the paths in a network for any given routing strategy, 2) We are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N, increases to infinity, 3) For any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as √N bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks. | The analysis of the routing problem in terms of a length scale that characterizes the awareness of the distributed routing protocol of its environment is not original and has been used before in the work by Melodia @cite_0 . The authors of this work introduce a phenomenological quantity called the "Knowledge Range", which represents the physical extent up to which the routing strategy is capable of finding the shortest path. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2126003739"
],
"abstract": [
"Since ad hoc and sensor networks can be composed of a very large number of devices, the scalability of network protocols is a major design concern. Furthermore, network protocols must be designed to prolong the battery lifetime of the devices. However, most existing routing techniques for ad hoc networks are known not to scale well. On the other hand, the so-called geographical routing algorithms are known to be scalable but their energy efficiency has never been extensively and comparatively studied. In a geographical routing algorithm, data packets are forwarded by a node to its neighbor based on their respective positions. The neighborhood of each node is constituted by the nodes that lie within a certain radio range. Thus, from the perspective of a node forwarding a packet, the next hop depends on the width of the neighborhood it perceives. The analytical framework proposed in this paper allows to analyze the relationship between the energy efficiency of the routing tasks and the extension of the range of the topology knowledge for each node. A wider topology knowledge may improve the energy efficiency of the routing tasks but increases the cost of topology information due to signaling packets needed to acquire this information. The problem of determining the optimal topology knowledge range for each node to make energy efficient geographical routing decisions is tackled by integer linear programming. It is shown that the problem is intrinsically localized, i.e., a limited topology knowledge is sufficient to make energy efficient forwarding decisions. The leading forwarding rules for geographical routing are compared in this framework, and the energy efficiency of each of them is studied. Moreover, a new forwarding scheme, partial topology knowledge forwarding (PTKF), is introduced, and shown to outperform other existing schemes in typical application scenarios. A probe-based distributed protocol for knowledge range adjustment (PRADA) is finally introduced that allows each node to efficiently select online its topology knowledge range. PRADA is shown to rapidly converge to a near-optimal solution."
]
} |
cs0601073 | 1535620672 | In this work we develop a new theory to analyse the process of routing in large-scale ad-hoc wireless networks. We use a path integral formulation to examine the properties of the paths generated by different routing strategies in these kinds of networks. Using this theoretical framework, we calculate the statistical distribution of the distances between any source and any destination in the network, hence we are able to deduce a length parameter that is unique for each routing strategy. This parameter, defined as the effective radius, effectively encodes the routing information required by a node. Analysing the aforementioned statistical distribution for different routing strategies, we obtain a threefold result for practical Large-Scale Wireless Ad-Hoc Networks: 1) We obtain the distribution of the lengths of all the paths in a network for any given routing strategy, 2) We are able to identify "good" routing strategies depending on the evolution of their effective radius as the number of nodes, N, increases to infinity, 3) For any routing strategy with finite effective radius, we demonstrate that, in a large-scale network, it is equivalent to a random routing strategy and that its transport capacity scales as √N bit-meters per second, thus retrieving the scaling law that Gupta and Kumar (2000) obtained as the limit for single-route large-scale wireless networks. | The use of random walks as an effective (or even the only) strategy for routing in Large-Scale Ad-Hoc Networks has been suggested in some works @cite_1 @cite_13 . In these works, the common motivation for this strategy is the conclusion that effective distributed routing in a large-scale network is infeasible, as it would require solving an NP-complete problem @cite_9 . | {
"cite_N": [
"@cite_13",
"@cite_9",
"@cite_1"
],
"mid": [
"2107082801",
"2120361596",
"2068094787"
],
"abstract": [
"We quantify the effectiveness of random walks for searching and construction of unstructured peer-to-peer (P2P) networks. We have identified two cases where the use of random walks for searching achieves better results than flooding: a) when the overlay topology is clustered, and h) when a client re-issues the same query while its horizon does not change much. For construction, we argue that an expander can he maintained dynamically with constant operations per addition. The key technical ingredient of our approach is a deep result of stochastic processes indicating that samples taken from consecutive steps of a random walk can achieve statistical properties similar to independent sampling (if the second eigenvalue of the transition matrix is hounded away from 1, which translates to good expansion of the network; such connectivity is desired, and believed to hold, in every reasonable network and network model). This property has been previously used in complexity theory for construction of pseudorandom number generators. We reveal another facet of this theory and translate savings in random bits to savings in processing overhead.",
"We consider a routing problem in the context of large scale networks with uncontrolled dynamics. A case of uncontrolled dynamics that has been studied extensively is that of mobile nodes, as this is typically the case in cellular and mobile ad-hoc networks. In this paper however we study routing in the presence of a different type of dynamics: nodes do not move, but instead switch between active and inactive states at random times. Our interest in this case is motivated by the behavior of sensor nodes powered by renewable sources, such as solar cells or ambient vibrations. In this paper we formalize the corresponding routing problem as a problem of constructing suitably constrained random walks on random dynamic graphs. We argue that these random walks should be designed so that their resulting invariant distribution achieves a certain load balancing property, and we give simple distributed algorithms to compute the local parameters for the random walks that achieve the sought behavior. A truly novel feature of our formulation is that the algorithms we obtain are able to route messages along all possible routes between a source and a destination node, without performing explicit route discovery repair computations, and without maintaining explicit state information about available routes at the nodes. To the best of our knowledge, these are the first algorithms that achieve true multipath routing (in a statistical sense), at the complexity of simple stateless operations.",
"Recent literature has presented evidence that the study of navigation in complex networks is useful to understand their dynamics and topology. Two main approaches are usually considered: navigation of random walkers and navigation of directed walkers. Unlike these approaches ours supposes that a traveler walks optimally in order to minimize the cost of the walking. If this happens, two extreme regimes arise—one dominated by directed walkers and the other by random walkers. We try to characterize the critical point of the transition from one regime to the other in function of the connectivity and the size of the network. Furthermore, we show that this approach can be used to generalize several concepts presented in the literature concerning random navigation and direct navigation. Finally, we defend that investigating the extreme regimes dominated by random walkers and directed walkers is not sufficient to correctly assess the characteristics of navigation in complex networks."
]
} |
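The record above cites random walks as a routing strategy whose appeal is that it requires essentially no routing state. A small sketch contrasting the hop count of a blind random walk with the shortest path on a grid network (the grid topology and uniform neighbor choice are illustrative assumptions, not the random geometric graphs usually assumed for ad-hoc networks):

    import random
    from collections import deque

    def neighbors(node, n):
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < n and 0 <= y + dy < n:
                yield (x + dx, y + dy)

    def shortest_hops(src, dst, n):
        # Breadth-first search gives the optimal hop count.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                return dist[u]
            for v in neighbors(u, n):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)

    def random_walk_hops(src, dst, n, rng):
        # A memoryless walk: forward to a uniformly chosen neighbor.
        node, hops = src, 0
        while node != dst:
            node = rng.choice(list(neighbors(node, n)))
            hops += 1
        return hops

    rng = random.Random(1)
    n, src, dst = 10, (0, 0), (9, 9)
    walks = [random_walk_hops(src, dst, n, rng) for _ in range(50)]
    print("shortest path hops:", shortest_hops(src, dst, n))
    print("mean random-walk hops:", sum(walks) / len(walks))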
cs0601089 | 1912679131 | This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting. | Distributed learning has been addressed in a variety of other works. Reference @cite_6 considered a PAC-like model for learning with many individually trained hypotheses in a distribution-specific learning framework. Reference @cite_14 considered the classical model for decentralized detection @cite_12 in a nonparametric setting. Reference @cite_0 studied the existence of consistent estimators in several models for distributed learning. From a data mining perspective, @cite_7 and @cite_2 derived algorithms for distributed boosting. Most similar to the research presented here, @cite_1 presented a general framework for distributed linear regression motivated by WSNs. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_12"
],
"mid": [
"2017753243",
"2095374884",
"2109154616",
"2949243244",
"2048679005",
"2116137244",
"2061177108"
],
"abstract": [
"Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Gine´, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a Gine´, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or “agnostic”) framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.",
"We introduce and investigate a new model of learning probability distributions from independent draws. Our model is inspired by the popular Probably Approximately Correct (PAC) model for learning boolean functions from labeled examples [24], in the sense that we emphasize efficient and approximate learning, and we study the learnability of restricted classes of target distributions. The dist ribut ion classes we examine are often defined by some simple computational mechanism for transforming a truly random string of input bits (which is not visible to the learning algorithm) into the stochastic observation (output) seen by the learning algorithm. In this paper, we concentrate on discrete distributions over O, I n. The problem of inferring an approximation to an unknown probability distribution on the basis of independent draws has a long and complex history in the pattern recognition and statistics literature. For instance, the problem of estimating the parameters of a Gaussian density in highdimensional space is one of the most studied statistical problems. Distribution learning problems have often been investigated in the context of unsupervised learning, in which a linear mixture of two or more distributions is generating the observations, and the final goal is not to model the distributions themselves, but to predict from which distribution each observation was drawn. Data clustering methods are a common tool here. There is also a large literature on nonpararnetric density estimation, in which no assumptions are made on the unknown target density. Nearest-neighbor approaches to the unsupervised learning problem often arise in the nonparametric setting. While we obviously cannot do justice to these areas here, the books of Duda and Hart [9] and Vapnik [25] provide excellent overviews and introductions to the pattern recognition work, as well as many pointers for further reading. See also Izenman’s recent survey article [16]. Roughly speaking, our work departs from the traditional statistical and pattern recognition approaches in two ways. First, we place explicit emphasis on the comput ationrd complexity of distribution learning. It seems fair to say that while previous research has provided an excellent understanding of the information-theoretic issues involved in dis-",
"We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.",
"The problem of distributed or decentralized detection and estimation in applications such as wireless sensor networks has often been considered in the framework of parametric models, in which strong assumptions are made about a statistical description of nature. In certain applications, such assumptions are warranted and systems designed from these models show promise. However, in other scenarios, prior knowledge is at best vague and translating such knowledge into a statistical model is undesirable. Applications such as these pave the way for a nonparametric study of distributed detection and estimation. In this paper, we review recent work of the authors in which some elementary models for distributed learning are considered. These models are in the spirit of classical work in nonparametric statistics and are applicable to wireless sensor networks.",
"We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 mitchell+@cs.cmu.edu",
"We describe distributed algorithms for two widely-used topic models, namely the Latent Dirichlet Allocation (LDA) model, and the Hierarchical Dirichet Process (HDP) model. In our distributed algorithms the data is partitioned across separate processors and inference is done in a parallel, distributed fashion. We propose two distributed algorithms for LDA. The first algorithm is a straightforward mapping of LDA to a distributed processor setting. In this algorithm processors concurrently perform Gibbs sampling over local data followed by a global update of topic counts. The algorithm is simple to implement and can be viewed as an approximation to Gibbs-sampled LDA. The second version is a model that uses a hierarchical Bayesian extension of LDA to directly account for distributed data. This model has a theoretical guarantee of convergence but is more complex to implement than the first algorithm. Our distributed algorithm for HDP takes the straightforward mapping approach, and merges newly-created topics either by matching or by topic-id. Using five real-world text corpora we show that distributed learning works well in practice. For both LDA and HDP, we show that the converged test-data log probability for distributed learning is indistinguishable from that obtained with single-processor learning. Our extensive experimental results include learning topic models for two multi-million document collections using a 1024-processor parallel computer.",
"Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process among members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center and thus, the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble; the answers to this question present new issues to consider with regard to universal consistency. This paper addresses the issue of whether or not the guarantees provided by Stone's theorem in centralized environments hold in distributed settings."
]
} |
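The abstract of the record above is built around regularized kernel least-squares regression; the distributed algorithm it derives coordinates several agents, but the single-site building block is ordinary kernel ridge regression. A minimal centralized sketch (the RBF kernel and the lambda*n regularization convention are assumptions made here for illustration, not the paper's exact formulation):

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # k(x, z) = exp(-gamma * ||x - z||^2)
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_kernel_least_squares(X, y, lam=0.1, gamma=1.0):
        # Regularized kernel least squares: alpha = (K + lam * n * I)^{-1} y.
        n = len(X)
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
        return lambda Xnew: rbf_kernel(Xnew, X, gamma) @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    predict = fit_kernel_least_squares(X, y, lam=0.01, gamma=0.5)
    grid = np.linspace(-3, 3, 5).reshape(-1, 1)
    print(np.round(predict(grid), 2), np.round(np.sin(grid[:, 0]), 2))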
cs0601089 | 1912679131 | This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting. | Ongoing research in the machine learning community seeks to design statistically sound learning algorithms that scale to large data sets (e.g., @cite_10 and references therein). One approach is to decompose the database into smaller "chunks", and subsequently parallelize the learning process by assigning distinct processors/agents to each of the chunks. In principle, algorithms for parallelizing learning may be useful for distributed learning, and vice versa. To our knowledge, there has not been an attempt to parallelize reproducing kernel methods using the approach outlined below. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2408432900"
],
"abstract": [
"In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nystrom Online Gradient Descent (NOGD) algorithm that applies the Nystrom method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches."
]
} |
cs0601089 | 1912679131 | This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting. | A related area of research lies in the study of ensemble methods in machine learning; examples of these techniques include bagging, boosting, and mixtures of experts (e.g., @cite_13 and others). Typically, the focus of these works is on the statistical and algorithmic advantages of learning with an ensemble and not on the problem of learning under communication constraints. To our knowledge, the methods derived here have not been derived in this related context, though future work in distributed learning may benefit from the many insights gleaned from this important area. | {
"cite_N": [
"@cite_13"
],
"mid": [
"605727707"
],
"abstract": [
"An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity and bias-variance decompositions, and recent progress in information theoretic diversity. Moving on to more advanced topics, the author explains how to achieve better performance through ensemble pruning and how to generate better clustering results by combining multiple clusterings. In addition, he describes developments of ensemble methods in semi-supervised learning, active learning, cost-sensitive learning, class-imbalance learning, and comprehensibility enhancement."
]
} |
cs0601089 | 1912679131 | This paper addresses the problem of distributed learning under communication constraints, motivated by distributed signal processing in wireless sensor networks and data mining with distributed databases. After formalizing a general model for distributed learning, an algorithm for collaboratively training regularized kernel least-squares regression estimators is derived. Noting that the algorithm can be viewed as an application of successive orthogonal projection algorithms, its convergence properties are investigated and the statistical behavior of the estimator is discussed in a simplified theoretical setting. | The research presented here generalizes the model and algorithm discussed in @cite_15 , which focused exclusively on the WSN application. Distinctions between the current and former work are discussed in more detail below. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2109659895"
],
"abstract": [
"We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b=0 corresponds to the classical online model, and b=@?log|A|@?, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1@?b@?@Q(logn), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio @W(log(n) b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n) b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^O^(^1^ ^b^) for any choice of @Q(1)@?b@?logk."
]
} |
cs0601127 | 2159112912 | The access graph model for paging, defined by (, 1991) and studied in (, 1992), has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. We present a truly online strongly competitive paging algorithm in the access graph model that does not have any prior information on the access sequence. We give both strongly competitive deterministic and strongly competitive randomized algorithms. Our algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., no more memory than needed to store the virtual translation tables for pages in memory. In fact, we can reduce this to O(k log k) bits using appropriate probabilistic data structures. We also extend the locality of reference concept captured by the access graph model to allow changes in the behavior of the underlying process. We formalize this by introducing the concept of an "extended access graph". We consider a graph parameter Δ that captures the degree of change allowed. We study this new model and give algorithms that are strongly competitive for the (unknown) extended access graph. We can do so for almost all values of Δ for which it is possible. | Borodin et al. @cite_8 also consider deterministic uniform paging algorithms. They prove the existence of an optimal paging algorithm in PSPACE( @math ). They give a natural uniform paging algorithm, called , and prove that obtains a competitive ratio no worse than @math times the asymptotic competitive ratio for the graph. This result is improved in a paper by Irani, Karlin and Phillips @cite_6 in which it is shown that is very strongly competitive. The same paper also presents a very strongly competitive algorithm for a sub-class of access graphs, called . | {
"cite_N": [
"@cite_6",
"@cite_8"
],
"mid": [
"2777690836",
"2079669094"
],
"abstract": [
"In this paper we study the classic online matching problem, introduced in the seminal work of Karp, Vazirani and Vazirani (STOC 1990), in regular graphs. For such graphs, an optimal deterministic algorithm as well as efficient algorithms under stochastic input assumptions were known. In this work, we present a novel randomized algorithm with competitive ratio tending to one on this family of graphs, under adversarial arrival order. Our main contribution is a novel algorithm which achieves competitive ratio [EQUATION] in expectation on d-regular graphs. In contrast, we show that all previously-studied online algorithms have competitive ratio strictly bounded away from one. Moreover, we show the convergence rate of our algorithm's competitive ratio to one is nearly tight, as no algorithm achieves competitive ratio better than [EQUATION]. Finally, we show that our algorithm yields a similar competitive ratio with high probability, as well as guaranteeing each vertex a probability of being matched tending to one.",
"In a seminal paper, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves a competitive ratio of 1-1 e for the online bipartite matching problem in the standard adversarial model, where the ratio of 1-1 e is also shown to be optimal. Their result also implies that in the random arrivals model defined by Goel and Mehta, where the online nodes arrive in a random order, a simple greedy algorithm achieves a competitive ratio of 1-1 e. In this paper, we study the ranking algorithm in the random arrivals model, and show that it has a competitive ratio of at least 0.696, beating the 1-1 e ≈ 0.632 barrier in the adversarial model. Our result also extends to the i.i.d. distribution model of , removing the assumption that the distribution is known. Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the ranking algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random arrivals model, we have the monotonicity property on both sides of the bipartite graph, giving good \"strength\" to the LPs. Second, to obtain a good lower bound on the optimal values of all these LPs and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of any single one of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP solvers to solve for large instances of the new LPs to establish bounds that would otherwise be difficult to attain by human analysis."
]
} |
cs0601127 | 2159112912 | The access graph model for paging, defined by (, 1991) and studied in (, 1992), has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. We present a truly online strongly competitive paging algorithm in the access graph model that does not have any prior information on the access sequence. We give both strongly competitive deterministic and strongly competitive randomized algorithms. Our algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., no more memory than needed to store the virtual translation tables for pages in memory. In fact, we can reduce this to O(k log k) bits using appropriate probabilistic data structures. We also extend the locality of reference concept captured by the access graph model to allow changes in the behavior of the underlying process. We formalize this by introducing the concept of an "extended access graph". We consider a graph parameter Δ that captures the degree of change allowed. We study this new model and give algorithms that are strongly competitive for the (unknown) extended access graph. We can do so for almost all values of Δ for which it is possible. | Fiat and Rosen @cite_11 present an access graph based heuristic that is truly online and makes use of a (weighted) dynamic access graph. In this sense we emulate their concept. While the Fiat and Rosen algorithm is experimentally interesting in that it seems to beat , it is certainly not strongly competitive, and is known to have a competitive ratio of @math . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2079669094"
],
"abstract": [
"In a seminal paper, Karp, Vazirani, and Vazirani show that a simple ranking algorithm achieves a competitive ratio of 1-1 e for the online bipartite matching problem in the standard adversarial model, where the ratio of 1-1 e is also shown to be optimal. Their result also implies that in the random arrivals model defined by Goel and Mehta, where the online nodes arrive in a random order, a simple greedy algorithm achieves a competitive ratio of 1-1 e. In this paper, we study the ranking algorithm in the random arrivals model, and show that it has a competitive ratio of at least 0.696, beating the 1-1 e ≈ 0.632 barrier in the adversarial model. Our result also extends to the i.i.d. distribution model of , removing the assumption that the distribution is known. Our analysis has two main steps. First, we exploit certain dominance and monotonicity properties of the ranking algorithm to derive a family of factor-revealing linear programs (LPs). In particular, by symmetry of the ranking algorithm in the random arrivals model, we have the monotonicity property on both sides of the bipartite graph, giving good \"strength\" to the LPs. Second, to obtain a good lower bound on the optimal values of all these LPs and hence on the competitive ratio of the algorithm, we introduce the technique of strongly factor-revealing LPs. In particular, we derive a family of modified LPs with similar strength such that the optimal value of any single one of these new LPs is a lower bound on the competitive ratio of the algorithm. This enables us to leverage the power of computer LP solvers to solve for large instances of the new LPs to establish bounds that would otherwise be difficult to attain by human analysis."
]
} |
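Both paging records above judge algorithms by their competitive ratio, i.e. the number of page faults incurred relative to an optimal offline algorithm. A small experiment comparing LRU with Belady's offline-optimal eviction rule on a random request sequence (uniformly random requests are an illustrative assumption; the access graph model constrains request sequences far more):

    import random

    def lru_faults(requests, k):
        cache, faults = [], 0
        for p in requests:
            if p in cache:
                cache.remove(p)
            else:
                faults += 1
                if len(cache) == k:
                    cache.pop(0)          # evict the least recently used page
            cache.append(p)               # most recently used kept at the back
        return faults

    def opt_faults(requests, k):
        # Belady's rule: evict the page whose next use is farthest in the future.
        cache, faults = set(), 0
        for i, p in enumerate(requests):
            if p in cache:
                continue
            faults += 1
            if len(cache) == k:
                def next_use(q):
                    for j in range(i + 1, len(requests)):
                        if requests[j] == q:
                            return j
                    return float("inf")
                cache.remove(max(cache, key=next_use))
            cache.add(p)
        return faults

    rng = random.Random(0)
    requests = [rng.randrange(20) for _ in range(2000)]
    k = 8
    lru, opt = lru_faults(requests, k), opt_faults(requests, k)
    print("LRU faults:", lru, "OPT faults:", opt, "ratio:", round(lru / opt, 2))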
quant-ph0512258 | 2951669051 | We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols. | One of the most popular proof techniques was proposed by Shor and Preskill @cite_37 , based on ideas of Lo and Chau @cite_66 . It uses a connection between key distribution and entanglement purification @cite_55 pointed out by Ekert @cite_15 (see also @cite_13 ). The proof technique of Shor and Preskill was later refined and applied to other protocols (see, e.g., @cite_56 @cite_3 ). | {
"cite_N": [
"@cite_37",
"@cite_55",
"@cite_3",
"@cite_56",
"@cite_15",
"@cite_13",
"@cite_66"
],
"mid": [
"2071764857",
"2165923712",
"1980534149",
"2121957834",
"2807350215",
"1988304269",
"2119617924"
],
"abstract": [
"We prove that the 1984 protocol of Bennett and Brassard (BB84) for quantum key distribution is secure. We first give a key distribution protocol based on entanglement purification, which can be proven secure using methods from Lo and Chau's proof of security for a similar protocol. We then show that the security of this protocol implies the security of BB84. The entanglement purification based protocol uses Calderbank-Shor-Steane codes, and properties of these codes are used to remove the use of quantum computation from the Lo-Chau protocol.",
"Shor and Preskill (see Phys. Rev. Lett., vol.85, p.441, 2000) have provided a simple proof of security of the standard quantum key distribution scheme by Bennett and Brassard (1984) by demonstrating a connection between key distribution and entanglement purification protocols (EPPs) with one-way communications. Here, we provide proofs of security of standard quantum key distribution schemes, Bennett and Brassard and the six-state scheme, against the most general attack, by using the techniques of two-way entanglement purification. We demonstrate clearly the advantage of classical post-processing with two-way classical communications over classical post-processing with only one-way classical communications in quantum key distribution (QKD). This is done by the explicit construction of a new protocol for (the error correction detection and privacy amplification of) Bennett and Brassard that can tolerate a bit error rate of up to 18.9 , which is higher than what any Bennett and Brassard scheme with only one-way classical communications can possibly tolerate. Moreover, we demonstrate the advantage of the six-state scheme over Bennett and Brassard by showing that the six-state scheme can strictly tolerate a higher bit error rate than Bennett and Brassard. In particular, our six-state protocol can tolerate a bit error rate of 26.4 , which is higher than the upper bound of 25 bit error rate for any secure Bennett and Brassard protocol. Consequently, our protocols may allow higher key generation rate and remain secure over longer distances than previous protocols. Our investigation suggests that two-way entanglement purification is a useful tool in the study of advantage distillation, error correction, and privacy amplification protocols.",
"We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.",
"Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states @math and @math can contribute to key generation, and the third state @math is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by founding on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.",
"We show that any language in nondeterministic time @math , where the number of iterated exponentials is an arbitrary function @math , can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness @math and soundness @math , where the number of iterated exponentials is @math and @math is a universal constant. The result was previously known for @math and @math ; we obtain it for any time-constructible function @math . The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC'17). As a separate consequence of this technique we obtain a different proof of Slofstra's recent result (unpublished) on the uncomputability of the entangled value of multiprover games. Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson's problem on the relation between the commuting operator and tensor product models for quantum correlations.",
"We construct a practically implementable classical processing for the Bennett-Brassard 1984 (BB84) protocol and the six-state protocol that fully utilizes the accurate channel estimation method, which is also known as the quantum tomography. Our proposed processing yields at least as high a key rate as the standard processing by Shor and Preskill. We show two examples of quantum channels over which the key rate of our proposed processing is strictly higher than the standard processing. In the second example, the BB84 protocol with our proposed processing yields a positive key rate even though the so-called error rate is higher than the 25 limit.",
"We describe a proof method for cryptographic protocols, based on a strong secrecy invariant that catalogues conditions under which messages can be published. For typical protocols, a suitable first-order invariant can be generated automatically from the program text, independent of the properties being verified, allowing safety properties to be proved by ordinary first-order reasoning. We have implemented the method in an automatic verifier, TAPS, that proves safety properties roughly equivalent to those in published Isabelle verifications, but does so much faster (usually within a few seconds) and with little or no guidance from the user. We have used TAPS to analyze about 60 protocols, including all but three protocols from the Clark and Jacob survey; on average, these verifications each require less than 4 seconds of CPU time and less than 4 bytes of hints from the user."
]
} |
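The Shor-Preskill technique discussed in the record above leads to the well-known asymptotic one-way BB84 key rate 1 - 2h(Q), where h is the binary entropy and Q the quantum bit error rate; the rate stays positive up to roughly 11% error. A quick numerical check (this is only the simple one-way rate; the refinements cited in the surrounding records, such as two-way post-processing or added noise, tolerate higher error rates):

    import math

    def binary_entropy(q):
        if q in (0.0, 1.0):
            return 0.0
        return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

    def shor_preskill_rate(q):
        # Asymptotic one-way BB84 key rate: error correction and privacy
        # amplification each cost h(Q) bits per sifted bit.
        return 1 - 2 * binary_entropy(q)

    for q in (0.01, 0.05, 0.11, 0.12):
        print(f"QBER {q:.2f}: rate {shor_preskill_rate(q):+.3f}")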
quant-ph0512258 | 2951669051 | We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols. | In @cite_54 , we have presented a general method for proving the security of QKD which does not rely on entanglement purification. Instead, it is based on a result on the security of privacy amplification in the context of quantum adversaries @cite_30 @cite_52 . Later, this method was extended and applied to prove the security of new variants of the BB84 and the six-state protocol @cite_44 @cite_67 . In @cite_44 @cite_67 , we use an alternative technique (different from the quantum de Finetti theorem) to show that collective attacks are equivalent to coherent attacks for certain QKD protocols. The security proof given in this thesis is based on ideas developed in these papers. | {
"cite_N": [
"@cite_30",
"@cite_67",
"@cite_54",
"@cite_52",
"@cite_44"
],
"mid": [
"1980534149",
"2121957834",
"2079729767",
"2071764857",
"2165923712"
],
"abstract": [
"We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.",
"Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states @math and @math can contribute to key generation, and the third state @math is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by founding on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.",
"We investigate a general class of quantum key distribution (QKD) protocols using one-way classical communication. We show that full security can be proven by considering only collective attacks. We derive computable lower and upper bounds on the secret-key rate of those QKD protocols involving only entropies of two-qubit density operators. As an illustration of our results, we determine new bounds for the Bennett-Brassard 1984, the 6-state, and the Bennett 1992 protocols. We show that in all these cases the first classical processing that the legitimate partners should apply consists in adding noise.",
"We prove that the 1984 protocol of Bennett and Brassard (BB84) for quantum key distribution is secure. We first give a key distribution protocol based on entanglement purification, which can be proven secure using methods from Lo and Chau's proof of security for a similar protocol. We then show that the security of this protocol implies the security of BB84. The entanglement purification based protocol uses Calderbank-Shor-Steane codes, and properties of these codes are used to remove the use of quantum computation from the Lo-Chau protocol.",
"Shor and Preskill (see Phys. Rev. Lett., vol.85, p.441, 2000) have provided a simple proof of security of the standard quantum key distribution scheme by Bennett and Brassard (1984) by demonstrating a connection between key distribution and entanglement purification protocols (EPPs) with one-way communications. Here, we provide proofs of security of standard quantum key distribution schemes, Bennett and Brassard and the six-state scheme, against the most general attack, by using the techniques of two-way entanglement purification. We demonstrate clearly the advantage of classical post-processing with two-way classical communications over classical post-processing with only one-way classical communications in quantum key distribution (QKD). This is done by the explicit construction of a new protocol for (the error correction detection and privacy amplification of) Bennett and Brassard that can tolerate a bit error rate of up to 18.9 , which is higher than what any Bennett and Brassard scheme with only one-way classical communications can possibly tolerate. Moreover, we demonstrate the advantage of the six-state scheme over Bennett and Brassard by showing that the six-state scheme can strictly tolerate a higher bit error rate than Bennett and Brassard. In particular, our six-state protocol can tolerate a bit error rate of 26.4 , which is higher than the upper bound of 25 bit error rate for any secure Bennett and Brassard protocol. Consequently, our protocols may allow higher key generation rate and remain secure over longer distances than previous protocols. Our investigation suggests that two-way entanglement purification is a useful tool in the study of advantage distillation, error correction, and privacy amplification protocols."
]
} |
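The error-rate thresholds quoted in the abstracts above (18.9%, 26.4%, and the 25% one-way bound) are easier to place next to the familiar one-way BB84 key rate. The lines below are a hedged back-of-the-envelope sketch in the Shor–Preskill style, not a formula taken verbatim from any of the cited papers; R denotes the asymptotic secret-key fraction, δ the bit error rate, and h the binary entropy.

```latex
% Hedged sketch of the one-way BB84 key-rate bound (Shor--Preskill style).
% R: asymptotic secret-key fraction, \delta: bit error rate, h: binary entropy.
\[
  h(\delta) = -\delta \log_2 \delta - (1-\delta)\log_2 (1-\delta),
  \qquad
  R \;\ge\; 1 - 2\,h(\delta) .
\]
% R > 0 only while h(\delta) < 1/2, i.e. roughly \delta \lesssim 11\%; two-way
% post-processing is what pushes the tolerable rate up to the 18.9\% figure above.
```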
quant-ph0512258 | 2951669051 | We propose various new techniques in quantum information theory, including a de Finetti style representation theorem for finite symmetric quantum states. As an application, we give a proof for the security of quantum key distribution which applies to arbitrary protocols. | Our new approach for proving the security of QKD has already found various applications. For example, it is used for the analysis of protocols based on continuous systems (continuous-variable QKD), as well as to improve the analysis of known practical protocols by exploiting the fact that an adversary cannot control the noise in the physical devices owned by Alice and Bob (see, e.g., @cite_38 @cite_41 @cite_4 ). | {
"cite_N": [
"@cite_41",
"@cite_38",
"@cite_4"
],
"mid": [
"1980534149",
"2079729767",
"2121957834"
],
"abstract": [
"We present a technique for proving the security of quantum-key-distribution (QKD) protocols. It is based on direct information-theoretic arguments and thus also applies if no equivalent entanglement purification scheme can be found. Using this technique, we investigate a general class of QKD protocols with one-way classical post-processing. We show that, in order to analyze the full security of these protocols, it suffices to consider collective attacks. Indeed, we give new lower and upper bounds on the secret-key rate which only involve entropies of two-qubit density operators and which are thus easy to compute. As an illustration of our results, we analyze the Bennett-Brassard 1984, the six-state, and the Bennett 1992 protocols with one-way error correction and privacy amplification. Surprisingly, the performance of these protocols is increased if one of the parties adds noise to the measurement data before the error correction. In particular, this additional noise makes the protocols more robust against noise in the quantum channel.",
"We investigate a general class of quantum key distribution (QKD) protocols using one-way classical communication. We show that full security can be proven by considering only collective attacks. We derive computable lower and upper bounds on the secret-key rate of those QKD protocols involving only entropies of two-qubit density operators. As an illustration of our results, we determine new bounds for the Bennett-Brassard 1984, the 6-state, and the Bennett 1992 protocols. We show that in all these cases the first classical processing that the legitimate partners should apply consists in adding noise.",
"Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states @math and @math can contribute to key generation, and the third state @math is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by founding on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used."
]
} |
cs0512060 | 2950348024 | We propose efficient distributed algorithms to aid navigation of a user through a geographic area covered by sensors. The sensors sense the level of danger at their locations and we use this information to find a safe path for the user through the sensor field. Traditional distributed navigation algorithms rely upon flooding the whole network with packets to find an optimal safe path. To reduce the communication expense, we introduce the concept of a skeleton graph which is a sparse subset of the true sensor network communication graph. Using skeleton graphs we show that it is possible to find approximate safe paths with much lower communication cost. We give tight theoretical guarantees on the quality of our approximation and by simulation, show the effectiveness of our algorithms in realistic sensor network situations. | Navigating a sensor field in the presence of danger zones is a problem which is similar to path planning in the presence of obstacles. There are two obvious ways one can approach this problem: a greedy geographic scheme similar to GPSR routing @cite_13 and exhaustive search. In a geographic scheme, one would greedily move towards the destination and traverse around the danger zones encountered on the way. This scheme has very low communication overhead, but can lead to highly suboptimal paths as shown in Fig. . The global exhaustive search algorithm floods the network with packets to carry out a Breadth-First-Search (BFS) on the communication graph. Obviously this algorithm is optimal in terms of path length, but very expensive in terms of communication cost. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2073137959"
],
"abstract": [
"Inter-vehicle communication is regarded as one of the major applications of mobile ad hoc networks (MANETs). Compared to MANETs or wireless sensor networks (WSNs), these so-called vehicular ad hoc networks (VANETs) have unique requirements on network protocols. The requirements result mainly from node mobility and the demands of position-dependent applications. On the routing layer, those requirements are well met by geographic routing protocols. Functional research on geographic routing has already reached a considerable level, whereas security aspects have only been recently taken into account. Position information dissemination has been identified as being crucial for geographic routing since forged position information has severe impact regarding both performance and security. In this work, we first summarize the problems that arise from falsified position data. We then propose a framework that contains different detection mechanisms in order to mitigate or lessen these problems. Our developed mechanisms are capable of recognizing nodes cheating about their position in beacons (periodic position dissemination in most single-path geographic routing protocols, e.g., GPSR). Unlike other proposals described in the literature, our detection system does not rely on additional hardware or special nodes, which would contradict the ad hoc approach. Instead, we use a number of different independent sensors to quickly give an estimation of the trustworthiness of other nodes' position claims. The different sensors run either autonomously on every single node, or they require cooperation between neighboring nodes. The simulation evaluation proves that the combination of autonomous and cooperative position verification mechanisms successfully discloses most nodes disseminating false position information, and thereby widely prevents attacks using position cheating."
]
} |
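The related-work passage above contrasts a greedy geographic scheme (GPSR-like) with exhaustive BFS flooding. The sketch below illustrates only the greedy step: forward to the safe neighbor that is geographically closest to the destination, skipping nodes that sense danger. It is a simplification, not GPSR itself; real GPSR adds perimeter (face) routing to escape local minima, which is exactly where the suboptimal detours mentioned above come from. All names and data structures here are illustrative.

```python
import math

def greedy_next_hop(current, dest, neighbors, positions, dangerous):
    """Pick the safe neighbor geographically closest to the destination.

    positions: dict node -> (x, y); dangerous: set of nodes sensing danger.
    Returns None when no safe neighbor makes progress (a local minimum --
    real GPSR would fall back to perimeter routing at this point).
    """
    def dist(a, b):
        ax, ay = positions[a]
        bx, by = positions[b]
        return math.hypot(ax - bx, ay - by)

    best, best_d = None, dist(current, dest)
    for n in neighbors[current]:
        if n in dangerous:
            continue                      # detour around danger zones
        d = dist(n, dest)
        if d < best_d:                    # only accept forward progress
            best, best_d = n, d
    return best
```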
cs0512060 | 2950348024 | We propose efficient distributed algorithms to aid navigation of a user through a geographic area covered by sensors. The sensors sense the level of danger at their locations and we use this information to find a safe path for the user through the sensor field. Traditional distributed navigation algorithms rely upon flooding the whole network with packets to find an optimal safe path. To reduce the communication expense, we introduce the concept of a skeleton graph which is a sparse subset of the true sensor network communication graph. Using skeleton graphs we show that it is possible to find approximate safe paths with much lower communication cost. We give tight theoretical guarantees on the quality of our approximation and by simulation, show the effectiveness of our algorithms in realistic sensor network situations. | The concept of the minimum exposure path was introduced by Meguerdichian et al. @cite_6 . Veltri et al. @cite_8 have given heuristics to distributedly compute minimal and maximal exposure paths in sensor networks. Path planning in the context of sensor networks was addressed by Li et al. @cite_3 , who consider the problem of finding the minimum exposure path. Their approach involves exhaustive search over the whole network to find the minimal exposure path. Recently, Liu et al. @cite_2 have used the concept of searching a sparse subgraph to implement algorithms for resource discovery in sensor networks. This work, which was carried out independently of us, does not, however, address the problem of path finding when parts of the sensor network are blocked due to danger. Some of our work is inspired by the mesh generation problem @cite_4 @cite_5 in computational geometry. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_2",
"@cite_5"
],
"mid": [
"2167477163",
"1991523989",
"2128445255",
"2125242971",
"2011532811",
"2512707330"
],
"abstract": [
"Sensor networks not only have the potential to change the way we use, interact with, and view computers, but also the way we use, interact with, and view the world around us. In order to maximize the effectiveness of sensor networks, one has to identify, examine, understand, and provide solutions for the fundamental problems related to wireless embedded sensor networks. We believe that one of such problems is to determine how well the sensor network monitors the instrumented area. These problems are usually classified as coverage problems. There already exist several methods that have been proposed to evaluate a sensor network's coverage.We start from one of such method and provide a new approach to complement it. The method of using the minimal exposure path to quantify coverage has been optimally solved using a numerical approximation approach. The minimal exposure path can be thought of as the worst-case coverage of a sensor network. Our first goal is to develop an efficient localized algorithm that enables a sensor network to determine its minimal exposure path. The theoretical highlight of this paper is the closed-form solution for minimal exposure in the presence of a single sensor. This solution is the basis for the new and significantly faster localized approximation algorithm that reduces the theoretical complexity of the previous algorithm. On the other hand, we introduce a new coverage problem - the maximal exposure path - which is in a sense the best-case coverage path for a sensor network. We prove that the maximal exposure path problem is NP-hard, and thus, we provide heuristics to generate approximate solutions.In addition, we demonstrate the effectiveness of our algorithms through several simulations. In the case of the minimal single-source minimal exposure path, we use variational calculus to determine exact solutions. For the case of maximal exposure, we use networks with varying numbers of sensors and exposure models.",
"Wireless ad-hoc sensor networks will provide one of the missing connections between the Internet and the physical world. One of the fundamental problems in sensor networks is the calculation of coverage. Exposure is directly related to coverage in that it is a measure of how well an object, moving on an arbitrary path, can be observed by the sensor network over a period of time. In addition to the informal definition, we formally define exposure and study its properties. We have developed an efficient and effective algorithm for exposure calculation in sensor networks, specifically for finding minimal exposure paths. The minimal exposure path provides valuable information about the worst case exposure-based coverage in sensor networks. The algorithm works for any given distribution of sensors, sensor and intensity models, and characteristics of the network. It provides an unbounded level of accuracy as a function of run time and storage. We provide an extensive collection of experimental results and study the scaling behavior of exposure and the proposed algorithm for its calculation.",
"We explore fundamental performance limits of tracking a target in a two-dimensional field of binary proximity sensors, and design algorithms that attain those limits. In particular, using geometric and probabilistic analysis of an idealized model, we prove that the achievable spatial resolution Δ in localizing a target's trajectory is of the order of 1overρ R, where R is the sensing radius and ρ is the sensor density per unit area. Using an Occam's razor approach, we then design a geometric algorithm for computing an economical (in descriptive complexity) piecewise linear path that approximates the trajectory within this fundamental limit of accuracy. We employ analogies between binary sensing and sampling theory to contend that only a \"lowpass\" approximation of the trajectory is attainable, and explore the implications of this obervation for estimating the target's velocity.We show through simulation the effectiveness of the geometric algorithm in tracking both the trajectory and the velocity of the target for idealized models. For non-ideal sensors exhibiting sensing errors, the geometric algorithm can yield poor performance. We show that non-idealities can be handled well using a particle filter based approach, and that geometric post-processing of the output of the Particle Filter algorithm yields an economical path description as in the idealized setting. Finally, we report on our lab-scale experiments using motes with acoustic sensors to validate our theoretical and simulation results.",
"We study the problem of achieving maximum barrier coverage by sensors on a barrier modeled by a line segment, by moving the minimum possible number of sensors, initially placed at arbitrary positions on the line containing the barrier. We consider several cases based on whether or not complete coverage is possible, and whether non-contiguous coverage is allowed in the case when complete coverage is impossible. When the sensors have unequal transmission ranges, we show that the problem of finding a minimum-sized subset of sensors to move in order to achieve maximum contiguous or non-contiguous coverage on a finite line segment barrier is NP-complete. In contrast, if the sensors all have the same range, we give efficient algorithms to achieve maximum contiguous as well as non-contiguous coverage. For some cases, we reduce the problem to finding a maximum-hop path of a certain minimum (maximum) weight on a related graph, and solve it using dynamic programming.",
"Proposing a new approach to barrier coverage in wireless sensor network.Modeling barrier coverage with stochastic edge-weighted graph.Finding an optimal solution for the network stochastic edge-weighted coverage graph.Comparing the performance of the proposed method with the greedy and optimal methods. Barrier coverage is one of the most important applications of wireless sensor networks. It is used to detect mobile objects are entering into the boundary of a sensor network field. Energy efficiency is one of the main concerns in barrier coverage for wireless sensor networks and its solution can be widely used in sensor barrier applications, such as intrusion detectors and border security. In this work, we take the energy efficiency as objectives of the study on barrier coverage. The cost in the present paper can be any performance measurement and normally is defined as any resource which is consumed by sensor barrier. In this paper, barrier coverage problem is modeled based on stochastic coverage graph first. Then, a distributed learning automata-based method is proposed to find a near optimal solution to the stochastic barrier coverage problem. The stochastic barrier coverage problem seeks to find minimum required number of sensor nodes to construct sensor barrier path. To study the performance of the proposed method, computer simulations are conducted. The simulation results show that the proposed algorithm significantly outperforms the greedy based algorithm and optimal method in terms of number of network barrier paths.",
"We consider wireless sensor networks under a heterogeneous random key predistribution scheme and an on-off channel model. The heterogeneous key predistribution scheme has recently been introduced by Yagan - as an extension to the Eschenauer and Gligor scheme - for the cases when the network consists of sensor nodes with varying level of resources and or connectivity requirements, e.g., regular nodes vs. cluster heads. The network is modeled by the intersection of the inhomogeneous random key graph (induced by the heterogeneous scheme) with an Erdős-Renyi graph (induced by the on off channel model). We present conditions (in the form of zero-one laws) on how to scale the parameters of the intersection model so that with high probability all of its nodes are connected to at least k other nodes; i.e., the minimum node degree of the graph is no less than k. We also present numerical results to support our results in the finite-node regime. The numerical results suggest that the conditions that ensure k-connectivity coincide with those ensuring the minimum node degree being no less than k."
]
} |
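Minimum exposure paths of the kind cited above are usually computed numerically: discretize the field into a grid graph and run a shortest-path search whose edge weights accumulate exposure. The sketch below does this with Dijkstra; the exposure function (sum of inverse squared distances to sensors) is a stand-in sensing model for illustration, not the exact intensity model of the cited papers, and it assumes a unit-spacing grid so edge length can be folded into the weight.

```python
import heapq

def exposure(point, sensors):
    """Illustrative sensing intensity: sum of inverse squared distances to sensors."""
    return sum(1.0 / max((point[0] - sx) ** 2 + (point[1] - sy) ** 2, 1e-9)
               for sx, sy in sensors)

def min_exposure_path(points, edges, sensors, src, dst):
    """Dijkstra on a unit-spacing grid graph; an edge costs the mean exposure of its ends.

    points: node -> (x, y); edges: node -> iterable of neighbor nodes.
    Returns (path, cost), or (None, inf) if dst is unreachable.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == dst:
            break
        for v in edges[u]:
            w = 0.5 * (exposure(points[u], sensors) + exposure(points[v], sensors))
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]
```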
cs0512069 | 1646304050 | Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches. | In regards to archiving websites, organizations like the Internet Archive and national libraries are currently engaged in archiving the external (or client's) view of selected websites @cite_26 and improving that process by building better web crawlers and tools @cite_7 . Systems have been developed to ensure long-term access to Web content within repositories and digital libraries @cite_36 . | {
"cite_N": [
"@cite_36",
"@cite_26",
"@cite_7"
],
"mid": [
"300178524",
"2100612254",
"1985731768"
],
"abstract": [
"The World Wide Web is becoming a source of information for researchers, who are more aware of the possibilities for collections of Internet content as resources. Some have begun creating archives of web content for social science and humanities research. However, there is a growing gulf between policies shared between global and national institutions creating web archives and the practices of researchers making use of the archives. Each set of stakeholders finds the others’ web archiving contributions less applicable to their own field. Institutions find the contributions of researchers to be too narrow to meet the needs of the institution’s audience, and researchers find the contributions of institutions to be too broad to meet the needs of their research methods. Resources are extended to advance both institutional and researcher tools, but the gulf between the two is persistent. Institutions generally produce web archives that are broad in scope but with limited access and enrichment tools. The design of common access interfaces, such as the Internet Archive’s Wayback Machine, limit access points to archives to only URL and date. This narrow access limits the ways in which web archives can be valuable for exploring research questions in the humanities and social sciences. Individual scholars, in catering to their own disciplinary and methodological needs, produce web archives that are narrow in scope, and whose access and enrichment tools are personalized to work within the boundaries of the project for which the web archive was built. There is no way to explore a subset of an archive by topic, event, or idea. The current search paradigm in web archiving access tools is built primarily on retrieval, not discovery. We suggest that there is a need for extensible tools to enhance access to and enrichment of web archives to make them more readily reusable and so, more valuable for both institutions and researchers, and that annotation activities can serve as one potential guide for development of such tools to bridge the divide.",
"Some large scale topical digital libraries, such as CiteSeer, harvest online academic documents by crawling open-access archives, university and author homepages, and authors' self-submissions. While these approaches have so far built reasonable size libraries, they can suffer from having only a portion of the documents from specific publishing venues. We propose to use alternative online resources and techniques that maximally exploit other resources to build the complete document collection of any given publication venue. We investigate the feasibility of using publication metadata to guide the crawler towards authors' homepages to harvest what is missing from a digital library collection. We collect a real-world dataset from two Computer Science publishing venues, involving a total of 593 unique authors over a time frame of 1998 to 2004. We then identify the missing papers that are not indexed by CiteSeer. Using a fully automatic heuristic-based system that has the capability of locating authors' homepages and then using focused crawling to download the desired papers, we demonstrate that it is practical to harvest using a focused crawler academic papers that are missing from our digital library. Our harvester achieves a performance with an average recall level of 0.82 overall and 0.75 for those missing documents. Evaluation of the crawler's performance based on the harvest rate shows definite advantages over other crawling approaches and consistently outperforms a defined baseline crawler on a number of measures",
"While the Internet community recognized early on the need to store and preserve past content of the Web for future use, the tools developed so far for retrieving information from Web archives are still difficult to use and far less efficient than those developed for the \"live Web.\" We expect that future information retrieval systems will utilize both the \"live\" and \"past Web\" and have thus developed a general framework for a past Web browser. A browser built using this framework would be a client-side system that downloads, in real time, past page versions from Web archives for their customized presentation. It would use passive browsing, change detection and change animation to provide a smooth and satisfactory browsing experience. We propose a meta-archive approach for increasing the coverage of past Web pages and for providing a unified interface to the past Web. Finally, we introduce query-based and localized approaches for filtered browsing that enhance and speed up browsing and information retrieval from Web archives."
]
} |
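Warrick predates today's archive APIs, so purely as an illustration of the "recover it from the Web infrastructure" idea, the sketch below asks the present-day Wayback Machine availability endpoint whether an archived copy of a URL exists. The endpoint name and the response shape are assumptions about the current public service, not something described in the paper above, and this is not how Warrick itself queried the archive.

```python
import json
import urllib.parse
import urllib.request

def closest_wayback_snapshot(url, timestamp=None):
    """Return the URL of the closest archived copy of `url`, or None.

    Assumes the public endpoint https://archive.org/wayback/available, which
    returns JSON shaped like
    {"archived_snapshots": {"closest": {"url": ..., "timestamp": ...}}}.
    """
    query = {"url": url}
    if timestamp:                      # e.g. "20060101" to aim near a date
        query["timestamp"] = timestamp
    endpoint = "https://archive.org/wayback/available?" + urllib.parse.urlencode(query)
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None
```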
cs0512069 | 1646304050 | Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches. | Numerous systems have been built to archive individual websites and web pages. InfoMonitor archives the server-side components (e.g., CGI scripts and datafiles) and filesystem of a web server @cite_34 . It requires an administrator to configure the system and a separate server with adequate disk space to hold the archives. Other systems like TTApache @cite_27 and iPROXY @cite_16 archive requested pages from a web server but not the server-side components. TTApache is an Apache module which archives different versions of web resources as they are requested from a web server. Users can view archived content through specially formatted URLs. iPROXY is similar to TTApache except that it uses a proxy server and archives requested resources for the client from any number of web servers. A similar approach using a proxy server with a content management system for storing and accessing Web resources was proposed in @cite_21 . Commercial systems like Furl ( http://furl.net ) and Spurl.net ( http://spurl.net ) also allow users to archive selected web resources that they deem important. | {
"cite_N": [
"@cite_27",
"@cite_34",
"@cite_21",
"@cite_16"
],
"mid": [
"2091712605",
"1510484544",
"2117044215",
"1994530080"
],
"abstract": [
"The Web contains so much information that it is almost beyond measure. How do users manage the useful information that they have seen while screening out the rest that doesn't interest them? Bookmarks help, but bookmarking a page doesn't guarantee that it will be available forever. Search engines are becoming more powerful, but they can't be customized based on the access history of individual users. This paper suggests that a better alternative to managing web information is through a middleware approach based on iPROXY, a programmable proxy server. iPROXY offers a suite of archiving, retrieval, and searching services. It can extend a URL to include commands that archive and retrieve pages. Its modular architecture allows users to plug in new features without having to change existing browsers or servers. Once installed on a network, iPROXY can be accessed by users using different browsers and devices. Internet service providers who offer customers iPROXY will be free to develop new services without having to wait for the dominant browsers to be updated.",
"The Web is ephemeral. Many resources have representations that change over time, and many of those representations are lost forever. A lucky few manage to reappear as archived resources that carry their own URIs. For example, some content management systems maintain version pages that reect a frozen prior state of their changing resources. Archives recurrently crawl the web to obtain the actual representation of resources, and subsequently make those available via special-purpose archived resources. In both cases, the archival copies have URIs that are protocolwise disconnected from the URI of the resource of which they represent a prior state. Indeed, the lack of temporal capabilities in the most common Web protocol, HTTP, prevents getting to an archived resource on the basis of the URI of its original. This turns accessing archived resources into a signicant discovery challenge for both human and software agents, which typically involves following a multitude of links from the original to the archival resource, or of searching archives for the original URI. This paper proposes the protocol-based Memento solution to address this problem, and describes a proof-of-concept experiment that includes major servers of archival content, including Wikipedia and the Internet Archive. The Memento solution is based on existing HTTP capabilities applied in a novel way to add the temporal dimension. The result is a framework in which archived resources can seamlessly be reached via the URI of their original: protocol-based time travel for the Web.",
"This paper presents a transaction-time HTTP server, called TTApache that supports document versioning. A document often consists of a main file formatted in HTML or XML and several included files such as images and stylesheets. A change to any of the files associated with a document creates a new version of that document. To construct a document version history, snapshots of the document's files are obtained over time. Transaction times are associated with each file version to record the version's lifetime. The transaction time is the system time of the edit that created the version. Accounting for transaction time is essential to supporting audit queries that delve into past document versions and differential queries that pinpoint differences between two versions. TTApache performs automatic versioning when a document is read thereby removing the burden of versioning from document authors. Since some versions may be created but never read, TTApache distinguishes between known and assumed versions of a document. TTApache has a simple query language to retrieve desired versions. A browser can request a specific version, or the entire history of a document. Queries can also rewrite links and references to point to current or past versions. Over time, the version history of a document continually grows. To free space, some versions can be vacuumed. Vacuuming a version however changes the semantics of requests for that version. This paper presents several policies for vacuuming versions and strategies for accounting for vacuumed versions in queries.",
"Scholars are increasingly citing electronic “web references” which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To “webcite” a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its “instructions for authors” accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) “prospectively” before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted “citing articles” (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research assessment exercises, being able to measure the impact of Web services and published Web documents through access and Web citation metrics. @PARASPLIT [J Med Internet Res 2005;7(5):e60]"
]
} |
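The systems surveyed above all share one mechanism: intercept a request, serve the live resource, and quietly keep a timestamped copy. The fragment below is a minimal sketch of that store-on-fetch step, written for illustration only; it is not the actual InfoMonitor, TTApache, or iPROXY code, and the local storage layout is an arbitrary choice.

```python
import pathlib
import time
import urllib.parse
import urllib.request

ARCHIVE_ROOT = pathlib.Path("archive")   # illustrative local store

def fetch_and_archive(url):
    """Fetch a URL, keep a timestamped copy on disk, and return the live bytes."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    stamp = time.strftime("%Y%m%d%H%M%S")
    safe = urllib.parse.quote(url, safe="")           # flatten the URL into a filename
    target = ARCHIVE_ROOT / safe / stamp
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(body)
    return body                                        # hand the live copy back to the client
```

A proxy built around this function would call it on every request and grow a version history per URL, which is essentially the behavior TTApache provides as an Apache module and iPROXY provides as a proxy.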
cs0512069 | 1646304050 | Backup or preservation of websites is often not considered until after a catastrophic event has occurred. In the face of complete website loss, “lazy” webmasters or concerned third parties may be able to recover some of their website from the Internet Archive. Other pages may also be salvaged from commercial search engine caches. We introduce the concept of “lazy preservation”- digital preservation performed as a result of the normal operations of the Web infrastructure (search engines and caches). We present Warrick, a tool to automate the process of website reconstruction from the Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed 24 websites of varying sizes and composition to demonstrate the feasibility and limitations of website reconstruction from the public Web infrastructure. To measure Warrick’s window of opportunity, we have profiled the time required for new Web resources to enter and leave search engine caches. | Estimates of SE coverage of the indexable Web have been performed most recently in @cite_19 , but no measurement of SE cache sizes or types of files stored in the SE caches has been performed. We are also unaware of any research that documents the crawling and caching behavior of commercial SEs. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2037169571"
],
"abstract": [
"Optimal cache content placement in a wireless small cell base station (sBS) with limited backhaul capacity is studied. The sBS has a large cache memory and provides content-level selective offloading by delivering high data rate contents to users in its coverage area. The goal of the sBS content controller (CC) is to store the most popular contents in the sBS cache memory such that the maximum amount of data can be fetched directly form the sBS, not relying on the limited backhaul resources during peak traffic periods. If the popularity profile is known in advance, the problem reduces to a knapsack problem. However, it is assumed in this work that, the popularity profile of the files is not known by the CC, and it can only observe the instantaneous demand for the cached content. Hence, the cache content placement is optimised based on the demand history. By refreshing the cache content at regular time intervals, the CC tries to learn the popularity profile, while exploiting the limited cache capacity in the best way possible. Three algorithms are studied for this cache content placement problem, leading to different exploitation-exploration trade-offs. We provide extensive numerical simulations in order to study the time-evolution of these algorithms, and the impact of the system parameters, such as the number of files, the number of users, the cache size, and the skewness of the popularity profile, on the performance. It is shown that the proposed algorithms quickly learn the popularity profile for a wide range of system parameters."
]
} |
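Coverage estimates of the kind mentioned above are typically obtained by capture–recapture style sampling: draw pages from one engine, test whether the other engine contains them, and compare the containment rates. The lines below are a hedged sketch of the standard estimators in the spirit of the Lawrence–Giles and Bharat–Broder studies; they are not necessarily the exact method of @cite_19.

```latex
% Lincoln--Petersen: sample n_1 items via engine 1, n_2 via engine 2, observe m in both.
\[
  \widehat{N} \;=\; \frac{n_1\, n_2}{m} .
\]
% Relative index sizes from conditional containment rates:
\[
  \frac{|E_1|}{|E_2|} \;\approx\;
  \frac{\Pr\big[x \in E_1 \,\big|\, x \text{ sampled from } E_2\big]}
       {\Pr\big[x \in E_2 \,\big|\, x \text{ sampled from } E_1\big]} .
\]
```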
cs0511008 | 1770382502 | A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server. | Table summarizes the properties that are provided by the combination of a traffic model, chosen from t.a.c, v.b.c and m.b.c stochastic arrival curves, and a server model, chosen from weak stochastic service curve and stochastic service curve, without any additional constraints on the traffic model or the server model. In Section , we have discussed that in the context of network calculus, most traffic models used in the literature @cite_32 @cite_7 @cite_4 @cite_14 @cite_1 @cite_18 @cite_5 @cite_12 @cite_28 @cite_21 @cite_11 belong to t.a.c and v.b.c stochastic arrival curve, and most server models @cite_17 @cite_7 @cite_1 @cite_18 @cite_5 @cite_12 @cite_11 belong to weak stochastic service curve. Table shows that without additional constraints, these works can only support part of the five required properties for the stochastic network calculus. In contrast, with m.b.c stochastic arrival curve and stochastic service curve, all these properties have been proved in this section. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_17",
"@cite_5",
"@cite_12",
"@cite_11"
],
"mid": [
"2140314025",
"1971739135",
"2158352245",
"1600548747",
"2082841997",
"1998816621",
"2059060883",
"2039423859",
"1589801689",
"2398179450",
"2020634547",
"2136340918"
],
"abstract": [
"A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m. b. c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m. b. c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +)algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i)superposition of flows, (ii)concatenation of servers, (iii) output characterization, (iv)per-flow service under aggregation, and (v)stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i)-(v)under the independent case, but also provides a convenient way to find the stochastic service curve of a serve. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.",
"In 1991, D. J. Bertsimas and G. van Ryzin introduced and analyzed a model for stochastic and dynamic vehicle routing in which a single, uncapacitated vehicle traveling at a constant velocity in a Euclidean region must service demands whose time of arrival, location and on-site service are stochastic. The objective is to find a policy to service demands over an infinite horizon that minimizes the expected system time (wait plus service) of the demands. This paper extends our analysis in several directions. First, we analyze the problem of m identical vehicles with unlimited capacity and show that in heavy traffic the system time is reduced by a factor of 1 m2 over the single-server case. One of these policies improves by a factor of two on the best known system time for the single-server case. We then consider the case in which each vehicle can serve at most q customers before returning to a depot. We show that the stability condition in this case depends strongly on the geometry of the region. Several pol...",
"The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by spl Oscr (H log H), where H is the number of nodes traversed by a flow. Using currently available techniques, which compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by spl Oscr (H sup 3 ).",
"Many communication networks such as wireless networks only provide stochastic service guarantees. For analyzing stochastic service guarantees, research efforts have been made in the past few years to develop stochastic network calculus, a probabilistic version of (min, +) deterministic network calculus. However, many challenges have made the development difficult. Some of them are closely related to server modeling, which include output characterization, concatenation property, stochastic backlog guarantee, stochastic delay guarantee, and per-flow service under aggregation. In this paper, we propose a server model, called stochastic service curve to facilitate stochastic service guarantee analysis. We show that with the concept of stochastic service curve, these challenges can be well addressed. In addition, we introduce strict stochastic server to help find the stochastic service curve of a stochastic server, which characterizes the service of the server by two stochastic processes: an ideal service process and an impairment process.",
"The stochastic network calculus is an evolving new methodology for backlog and delay analysis of networks that can account for statistical multiplexing gain. This paper advances the stochastic network calculus by deriving a network service curve, which expresses the service given to a flow by the network as a whole in terms of a probabilistic bound. The presented network service curve permits the calculation of statistical end-to-end delay and backlog bounds for broad classes of arrival and service distributions. The benefits of the derived service curve are illustrated for the exponentially bounded burstiness (EBB) traffic model. It is shown that end-to-end performance measures computed with a network service curve are bounded by O(Hlog H), where H is the number of nodes traversed by a flow. Using currently available techniques that compute end-to-end bounds by adding single node results, the corresponding performance measures are bounded by O(H3).",
"The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15 . Moreover, improvements of up to 41 can be achieved for some test instances.",
".We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89–148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5–25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks. 1. Introduction. This paper is a continuation of Dai and Lin (2005), in which maximum pressure policies are shown to be throughput optimal for a class of stochastic processing networks. Throughput optimality is an important, first-order objective for many networks, but it ignores some key secondary performance measures like queueing delays experienced by jobs in these networks. In this paper we show that maximum pressure policies enjoy additional optimality properties; they are asymptotically optimal in minimizing a certain workload or holding cost of a stochastic processing network. Stochastic processing networks have been introduced in a series of three papers by Harrison (2000, 2002, 2003). In Dai and Lin (2005) and this paper we consider a special class of Harrison’s model. This class of stochastic processing networks is much more general than multiclass queueing networks that have been a subject of intensive study in the last 20 years; see, for example, Harrison (1988), Williams",
"This paper contains a quantitative evaluation of probabilistic traffic assignment models and proposes an alternate formulation. First, the concept of stochastic-user-equilibration (S-U-E) is formalized as an extension of Wardrop's user-equilibration criterion. Then, the stochastic-network-loading (S-N-L) problem (a special case of S-U-E for networks with constant link costs) is analyzed in detail and an expression for the probability of route choice which is based on two general postulates of user behavior is derived. The paper also discusses the weaknesses of existing S-N-L techniques with special attention paid to Dial's multipath method and compares them to the suggested approach. The proposed model seems reasonable and does not exhibit the inherent weaknesses of the logit model when applied to sets of routes which overlap heavily. The discussion is supported by several numerical examples on small contrived networks. The paper concludes with the discussion of two techniques that can be used to approximate the link flows resulting from the proposed model in large networks.",
"Network calculus, a theory dealing with queuing systems found in computer networks, focuses on performance guarantees. The development of an information theory for stochastic service-guarantee analysis has been identified as a grand challenge for future networking research. Towards that end, stochastic network calculus, the probabilistic version or generalization of the (deterministic) Network Calculus, has been recognized by researchers as a crucial step. Stochastic Network Calculus presents a comprehensive treatment for the state-of-the-art in stochastic service-guarantee analysis research and provides basic introductory material on the subject, as well as discusses the most recent research in the area. This helpful volume summarizes results for stochastic network calculus, which can be employed when designing computer networks to provide stochastic service guarantees. Features and Topics: Provides a solid introductory chapter, providing useful background knowledge Reviews fundamental concepts and results of deterministic network calculus Includes end-of-chapter problems, as well as summaries and bibliographic comments Defines traffic models and server models for stochastic network calculus Summarizes the basic properties of stochastic network calculus under different combinations of traffic and server models Highlights independent case analysis Discusses stochastic service guarantees under different scheduling disciplines Presents applications to admission control and traffic conformance study using the analysis results Offers an overall summary and some open research challenges for further study of the topic Key Topics: Queuing systems Performance analysis and guarantees Independent case analysis Traffic and server models Analysis of scheduling disciplines Generalized processor sharing Open research challenges Researchers and graduates in the area of performance evaluation of computer communication networks will benefit substantially from this comprehensive and easy-to-follow volume. Professionals will also find it a worthwhile reference text. Professor Yuming Jiang at the Norwegian University of Science and Technology (NTNU) has lectured using the material presented in this text since 2006. Dr Yong Liu works at the Optical Network Laboratory, National University of Singapore, where he researches QoS for optical communication networks and Metro Ethernet networks.",
"We consider the stochastic on-time arrival (SOTA) problem of finding the optimal routing strategy for reaching a given destination within a pre-specified time budget and provide the first results on using preprocessing techniques for speeding up the query time. We start by identifying some properties of the SOTA problem that limit the types of preprocessing techniques that can be used in this setting, and then define the stochastic variants of two deterministic shortest path preprocessing techniques that can be adapted to the SOTA problem, namely reach and arc-flags. We present the preprocessing and query algorithms for each technique, and also present an extension to the standard reach based preprocessing method that provides additional pruning. Finally, we explain the limitations of this approach due to the inefficiency of the preprocessing phase and present a fast heuristic preprocessing scheme. Numerical results for San Francisco, Luxembourg and a synthetic road network show up to an order of magnitude improvement in the query time for short queries, with even larger gains expected for longer queries.",
"ACM Sigcomm 2006 published a paper [26] which was perceived to unify the deterministic and stochastic branches of the network calculus (abbreviated throughout as DNC and SNC) [39]. Unfortunately, this seemingly fundamental unification---which has raised the hope of a straightforward transfer of all results from DNC to SNC---is invalid. To substantiate this claim, we demonstrate that for the class of stationary and ergodic processes, which is prevalent in traffic modelling, the probabilistic arrival model from [26] is quasi-deterministic, i.e., the underlying probabilities are either zero or one. Thus, the probabilistic framework from [26] is unable to account for statistical multiplexing gain, which is in fact the raison d'etre of packet-switched networks. Other previous formulations of SNC can capture statistical multiplexing gain, yet require additional assumptions [12], [22] or are more involved [14], [9] [28], and do not allow for a straightforward transfer of results from DNC. So, in essence, there is no free lunch in this endeavor. Our intention in this paper is to go beyond presenting a negative result by providing a comprehensive perspective on network calculus. To that end, we attempt to illustrate the fundamental concepts and features of network calculus in a systematic way, and also to rigorously clarify some key facts as well as misconceptions. We touch in particular on the relationship between linear systems, classical queueing theory, and network calculus, and on the lingering issue of tightness of network calculus bounds. We give a rigorous result illustrating that the statistical multiplexing gain scales as Ω(√N), as long as some small violations of system performance constraints are tolerable. This demonstrates that the network calculus can capture actual system behavior tightly when applied carefully. Thus, we positively conclude that it still holds promise as a valuable systematic methodology for the performance analysis of computer and communication systems, though the unification of DNC and SNC remains an open, yet quite elusive task.",
"We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users."
]
} |
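For readers skimming the property comparison discussed above, the three traffic models differ mainly in how far the supremum reaches. The lines below are a sketch of how t.a.c, v.b.c, and m.b.c stochastic arrival curves are usually written (arrival process A, arrival curve α, bounding function f); the exact definitions should be taken from the definition section of the paper rather than from this aside.

```latex
% Hedged sketch; A(s,t) is the traffic arriving in (s,t], \alpha the arrival curve,
% f the bounding function, x \ge 0.
\begin{align*}
  \text{t.a.c:}\quad & P\{ A(s,t) - \alpha(t-s) > x \} \le f(x) \quad \forall\, 0 \le s \le t, \\
  \text{v.b.c:}\quad & P\Big\{ \sup_{0 \le s \le t} \big[ A(s,t) - \alpha(t-s) \big] > x \Big\} \le f(x), \\
  \text{m.b.c:}\quad & P\Big\{ \sup_{0 \le s \le t} \; \sup_{0 \le u \le s} \big[ A(u,s) - \alpha(s-u) \big] > x \Big\} \le f(x).
\end{align*}
```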
cs0511008 | 1770382502 | A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server. | One type uses a sequence of random variables to stochastically bound the arrival process @cite_34 or the service process @cite_38 . Properties similar to (P.1), (P.3), (P.4) and (P.5) have been studied @cite_34 @cite_38 . These studies generally need the independence assumption. Under this type of traffic and service models, several problems remain open, which are beyond the scope of this paper. One is the concatenation property (P.2); another is the general case analysis; and the third is designing approaches to map known traffic and service characterizations to the required sequences of random variables. | {
"cite_N": [
"@cite_38",
"@cite_34"
],
"mid": [
"1998816621",
"1971739135"
],
"abstract": [
"The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15 . Moreover, improvements of up to 41 can be achieved for some test instances.",
"In 1991, D. J. Bertsimas and G. van Ryzin introduced and analyzed a model for stochastic and dynamic vehicle routing in which a single, uncapacitated vehicle traveling at a constant velocity in a Euclidean region must service demands whose time of arrival, location and on-site service are stochastic. The objective is to find a policy to service demands over an infinite horizon that minimizes the expected system time (wait plus service) of the demands. This paper extends our analysis in several directions. First, we analyze the problem of m identical vehicles with unlimited capacity and show that in heavy traffic the system time is reduced by a factor of 1 m2 over the single-server case. One of these policies improves by a factor of two on the best known system time for the single-server case. We then consider the case in which each vehicle can serve at most q customers before returning to a depot. We show that the stability condition in this case depends strongly on the geometry of the region. Several pol..."
]
} |
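For orientation alongside the record above (cs0511008): the deterministic network calculus quantities that the m.b.c stochastic arrival curve and stochastic service curve generalize are usually summarized as below. This is a standard textbook-style sketch added only for illustration, not a formula set taken from the cited works; the symbols are generic.

```latex
% Standard deterministic network calculus bounds (illustrative only).
% A flow has arrival curve \alpha if A(t) - A(s) \le \alpha(t - s) for all s \le t;
% a server offers service curve \beta if A^*(t) \ge (A \otimes \beta)(t), where
% (f \otimes g)(t) = \inf_{0 \le s \le t} [ f(s) + g(t - s) ] is the (min,+) convolution.
\begin{align*}
  B(t) &\le \sup_{s \ge 0} \bigl[\alpha(s) - \beta(s)\bigr]
       && \text{(backlog bound)} \\
  D(t) &\le \inf \bigl\{ d \ge 0 : \alpha(s) \le \beta(s + d) \ \forall s \ge 0 \bigr\}
       && \text{(delay bound)} \\
  \alpha^{*}(t) &= (\alpha \oslash \beta)(t) = \sup_{s \ge 0} \bigl[\alpha(t + s) - \beta(s)\bigr]
       && \text{(output characterization)} \\
  \beta_{\mathrm{net}} &= \beta_{1} \otimes \beta_{2}
       && \text{(concatenation of two servers)}
\end{align*}
```

The stochastic versions described in the record replace these worst-case inequalities with bounds that hold up to bounding functions on the violation probabilities.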
cs0511008 | 1770382502 | A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m.b.c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +) algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i) superposition of flows, (ii) concatenation of servers, (iii) output characterization, (iv) per-flow service under aggregation, and (v) stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i) -- (v) under the independent case, but also provides a convenient way to find the stochastic service curve of a server. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server. | Another type is built upon moments or moment generating functions. This type was initially used for traffic @cite_10 @cite_8 and has also been extended to service @cite_20 @cite_15 . An independence assumption is generally required between arrival and service processes. Extensive study has been conducted for deriving the characteristics of a process under this type of model from some known characterization of the process @cite_10 @cite_24 @cite_20 . The main open problems for this type are the concatenation property (P.2) and the general case analysis. Although these problems are out of the scope of this paper, we prove in Section results that relate the moment generating function model to the proposed m.b.c stochastic arrival curve and stochastic service curve. These results will allow us to further relate known traffic and service characterizations to the proposed traffic and service models in this paper. | {
"cite_N": [
"@cite_8",
"@cite_24",
"@cite_15",
"@cite_10",
"@cite_20"
],
"mid": [
"2140314025",
"1998816621",
"1971739135",
"2099205567",
"1510598052"
],
"abstract": [
"A basic calculus is presented for stochastic service guarantee analysis in communication networks. Central to the calculus are two definitions, maximum-(virtual)-backlog-centric (m. b. c) stochastic arrival curve and stochastic service curve, which respectively generalize arrival curve and service curve in the deterministic network calculus framework. With m. b. c stochastic arrival curve and stochastic service curve, various basic results are derived under the (min, +)algebra for the general case analysis, which are crucial to the development of stochastic network calculus. These results include (i)superposition of flows, (ii)concatenation of servers, (iii) output characterization, (iv)per-flow service under aggregation, and (v)stochastic backlog and delay guarantees. In addition, to perform independent case analysis, stochastic strict server is defined, which uses an ideal service process and an impairment process to characterize a server. The concept of stochastic strict server not only allows us to improve the basic results (i)-(v)under the independent case, but also provides a convenient way to find the stochastic service curve of a serve. Moreover, an approach is introduced to find the m.b.c stochastic arrival curve of a flow and the stochastic service curve of a server.",
"The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15 . Moreover, improvements of up to 41 can be achieved for some test instances.",
"In 1991, D. J. Bertsimas and G. van Ryzin introduced and analyzed a model for stochastic and dynamic vehicle routing in which a single, uncapacitated vehicle traveling at a constant velocity in a Euclidean region must service demands whose time of arrival, location and on-site service are stochastic. The objective is to find a policy to service demands over an infinite horizon that minimizes the expected system time (wait plus service) of the demands. This paper extends our analysis in several directions. First, we analyze the problem of m identical vehicles with unlimited capacity and show that in heavy traffic the system time is reduced by a factor of 1 m2 over the single-server case. One of these policies improves by a factor of two on the best known system time for the single-server case. We then consider the case in which each vehicle can serve at most q customers before returning to a depot. We show that the stability condition in this case depends strongly on the geometry of the region. Several pol...",
"We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; the latter is analogous conceptually to the classical interpretation of a Gaussian stationary process as filtered white noise. The two specific features of our approach are 1) signal generation is driven by a random stream of Dirac impulses (Poisson noise) instead of Gaussian white noise, and 2) the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete characterization of these finite-rate-of-innovation signals within Gelfand's framework of generalized stochastic processes. We then focus on the class of scale-invariant whitening operators which correspond to unstable systems. We show that these can be solved by introducing proper boundary conditions, which leads to the specification of random, spline-type signals that are piecewise-smooth. These processes are the Poisson counterpart of fractional Brownian motion; they are nonstationary and have the same 1 ω-type spectral signature. We prove that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. We also present a limit example of sparse process that yields a MAP signal estimator that is equivalent to the popular TV-denoising algorithm.",
"We study a processing system comprised of parallel queues, whose individual service rates are specified by a global service mode (configuration). The issue is how to switch the system between various possible service modes, so as to maximize its throughput and maintain stability under the most workload-intensive input traffic traces (arrival processes). Stability preserves the job inflow–outflow balance at each queue on the traffic traces. Two key families of service policies are shown to maximize throughput, under the mild condition that traffic traces have long-term average workload rates. In the first family of cone policies, the service mode is chosen based on the system backlog state belonging to a corresponding cone. Two distinct policy classes of that nature are investigated, MaxProduct and FastEmpty. In the second family of batch policies (BatchAdapt), jobs are collectively scheduled over adaptively chosen horizons, according to an asymptotically optimal, robust schedule. The issues of nonpreemptive job processing and non-negligible switching times between service modes are addressed. The analysis is extended to cover feed-forward networks of such processing systems nodes. The approach taken unifies and generalizes prior studies, by developing a general trace-based modeling framework (sample-path approach) for addressing the queueing stability problem. It treats the queueing structure as a deterministic dynamical system and analyzes directly its evolution trajectories. It does not require any probabilistic superstructure, which is typically used in previous approaches. Probability can be superposed later to address finer performance questions (e.g., delay). The throughput maximization problem is seen to be primarily of structural nature. The developed methodology appears to have broader applicability to other queueing systems."
]
} |
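The moment-generating-function models discussed in the related-work paragraph of the record above are typically exploited through a Chernoff bound. The line below is a generic illustration of that step (standard probability, not a result quoted from the cited papers); A(s,t) denotes the cumulative arrivals in (s,t] and the notation is generic.

```latex
% Generic Chernoff bound used with MGF-based traffic models (illustrative only).
\begin{equation*}
  P\{ A(s,t) \ge x \} \;\le\; e^{-\theta x}\, \mathbb{E}\bigl[ e^{\theta A(s,t)} \bigr]
  \;=\; e^{-\theta x}\, M_{A(s,t)}(\theta), \qquad \theta > 0,
\end{equation*}
% with the bound then optimized over \theta; independence between arrival and service
% processes lets expectations of products factor, which is what makes the
% independent-case analysis tractable.
```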
cs0511043 | 2952861886 | We present Poseidon, a new anomaly based intrusion detection system. Poseidon is payload-based, and presents a two-tier architecture: the first stage consists of a Self-Organizing Map, while the second one is a modified PAYL system. Our benchmarks on the 1999 DARPA data set show a higher detection rate and lower number of false positives than PAYL and PHAD. | Cannady @cite_32 proposes a SOM-based IDS in which network packets are first classified according to nine features and then presented to the neural network. Attack traffic is generated using a security audit tool. The author extends this work in Cannady @cite_29 @cite_14 . | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_32"
],
"mid": [
"2414564754",
"2267339884",
"2531448500"
],
"abstract": [
"A novel intrusion detection system (IDS) using a deep neural network (DNN) is proposed to enhance the security of in-vehicular network. The parameters building the DNN structure are trained with probability-based feature vectors that are extracted from the in-vehicular network packets. For a given packet, the DNN provides the probability of each class discriminating normal and attack packets, and, thus the sensor can identify any malicious attack to the vehicle. As compared to the traditional artificial neural network applied to the IDS, the proposed technique adopts recent advances in deep learning studies such as initializing the parameters through the unsupervised pre-training of deep belief networks (DBN), therefore improving the detection accuracy. It is demonstrated with experimental results that the proposed technique can provide a real-time response to the attack with a significantly improved detection ratio in controller area network (CAN) bus.",
"Redundant and irrelevant features in data have caused a long-term problem in network traffic classification. These features not only slow down the process of classification but also prevent a classifier from making accurate decisions, especially when coping with big data. In this paper, we propose a mutual information based algorithm that analytically selects the optimal feature for classification. This mutual information based feature selection algorithm can handle linearly and nonlinearly dependent data features. Its effectiveness is evaluated in the cases of network intrusion detection. An Intrusion Detection System (IDS), named Least Square Support Vector Machine based IDS (LSSVM-IDS), is built using the features selected by our proposed feature selection algorithm. The performance of LSSVM-IDS is evaluated using three intrusion detection evaluation datasets, namely KDD Cup 99, NSL-KDD and Kyoto 2006+ dataset. The evaluation results show that our feature selection algorithm contributes more critical features for LSSVM-IDS to achieve better accuracy and lower computational cost compared with the state-of-the-art methods.",
"The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks."
]
} |
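To make the two-tier idea in the Poseidon record above concrete, here is a minimal, self-contained Python sketch. All function names, parameters and data are made up for illustration; this is not the authors' implementation. Stage one finds the best-matching unit of a small self-organizing map for a payload's byte-frequency histogram; stage two scores the payload against that unit's mean/standard-deviation profile in the spirit of PAYL's simplified Mahalanobis distance.

```python
import numpy as np

def byte_histogram(payload: bytes) -> np.ndarray:
    """Relative frequency of each of the 256 byte values in a payload."""
    counts = np.bincount(np.frombuffer(payload, dtype=np.uint8), minlength=256)
    return counts / max(len(payload), 1)

def best_matching_unit(som_weights: np.ndarray, x: np.ndarray) -> int:
    """Stage 1: index of the SOM neuron whose weight vector is closest to x."""
    return int(np.argmin(np.linalg.norm(som_weights - x, axis=1)))

def anomaly_score(x: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Stage 2: simplified Mahalanobis-like distance (PAYL-style)."""
    return float(np.sum(np.abs(x - mean) / (std + 1e-3)))

# Toy model: 4 SOM neurons with random weights, plus a per-neuron (mean, std) profile.
rng = np.random.default_rng(0)
som = rng.random((4, 256))
profiles = {i: (rng.random(256), rng.random(256) + 0.1) for i in range(4)}

payload = b"GET /index.html HTTP/1.0\r\nHost: example.org\r\n\r\n"
x = byte_histogram(payload)
unit = best_matching_unit(som, x)
mean, std = profiles[unit]
print("BMU:", unit, "anomaly score:", round(anomaly_score(x, mean, std), 2))
```

In a real system the SOM and the per-unit profiles would of course be trained on attack-free traffic rather than initialized at random.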
cs0511043 | 2952861886 | We present Poseidon, a new anomaly based intrusion detection system. Poseidon is payload-based, and presents a two-tier architecture: the first stage consists of a Self-Organizing Map, while the second one is a modified PAYL system. Our benchmarks on the 1999 DARPA data set show a higher detection rate and lower number of false positives than PAYL and PHAD. | Zanero @cite_18 presents a two-tier payload-based system that combines a self-organizing map with a modified version of SmartSifter @cite_6 . While this architecture is similar to POSEIDON, a full comparison is not possible because the benchmarks in @cite_18 concern only the FTP service and no details are given about the execution of the experiments. A two-tier architecture for intrusion detection is also outlined in Zanero and Savaresi @cite_19 . | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_6"
],
"mid": [
"191098608",
"2060977605",
"2490096331"
],
"abstract": [
"We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter's infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections.",
"This paper proposes a pose-based algorithm to solve the full Simultaneous Localization And Mapping (SLAM) problem for an Autonomous Underwater Vehicle (AUV), navigating in an unknown and possibly unstructured environment. A probabilistic scan matching technique using range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) is used together with the robot dead-reckoning displacements. The proposed method utilizes two Extended Kalman Filters (EKFs). The first, estimates the local path traveled by the robot while forming the scan as well as its uncertainty, providing position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented state EKF that estimates and keeps the registered scans poses. The raw data from the sensors are processed and fused in-line. No priory structural information or initial pose are considered. Also, a method of estimating the uncertainty of the scan matching estimation is provided. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.",
"This paper presents a semantics-aware rule recommendation and enforcement (SARRE) system for taming information leakage on Android. SARRE leverages statistical analysis and a novel application of minimum path cover algorithm to identify system event paths from dynamic runtime monitoring. Then, an online recommendation system is developed to automatically assign a fine-grained security rule to each event path, capitalizing on both known security rules and application semantic information. The proposed SARRE system is prototyped on Android devices and evaluated using real-world malware samples and popular apps from Google Play spanning multiple categories. Our results show that SARRE achieves 93.8 precision and 96.4 recall in identifying the event paths, compared with tainting technique. Also, the average difference between rule recommendation and manual configuration is less than 5 , validating the effectiveness of the automatic rule recommendation. It is also demonstrated that by enforcing the recommended security rules through a camouflage engine, SARRE can effectively prevent information leakage and enable fine-grained protection over private data with very small performance overhead."
]
} |
cs0511102 | 2122887675 | Because a delay tolerant network (DTN) can often be partitioned, routing is a challenge. However, routing benefits considerably if one can take advantage of knowledge concerning node mobility. This paper addresses this problem with a generic algorithm based on the use of a high-dimensional Euclidean space, that we call MobySpace, constructed upon nodes' mobility patterns. We provide here an analysis and a large scale evaluation of this routing scheme in the context of ambient networking by replaying real mobility traces. The specific MobySpace evaluated is based on the frequency of visits of nodes to each possible location. We show that routing based on MobySpace can achieve good performance compared to that of a number of standard algorithms, especially for nodes that are present in the network a large portion of the time. We determine that the degree of homogeneity of node mobility patterns has a high impact on routing. And finally, we study the ability of nodes to learn their own mobility patterns. | Some work concerning routing in DTNs has been performed with scheduled contacts, such as the paper by @cite_28 that tries to improve the connectivity of an isolated village to the internet based on knowledge of when a low-earth orbiting relay satellite and a motor bike might be available to make the necessary connections. Also of interest, work on interplanetary networking @cite_18 @cite_26 uses predicted contacts such as the ones between planets within the framework of a DTN architecture. | {
"cite_N": [
"@cite_28",
"@cite_18",
"@cite_26"
],
"mid": [
"2097625638",
"2113397920",
"2147830904"
],
"abstract": [
"Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a \"least common denominator\" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. We identify three fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and describe the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling. We also examine Internet infrastructure adaptations that might yield comparable performance but conclude that the simplicity of the DTN architecture promises easier deployment and extension.",
"Unpredictable node mobility, low node density, and lack of global information make it challenging to achieve effective data forwarding in Delay-Tolerant Networks (DTNs). Most of the current data forwarding schemes choose the nodes with the best cumulative capability of contacting others as relays to carry and forward data, but these nodes may not be the best relay choices within a short time period due to the heterogeneity of transient node contact characteristics. In this paper, we propose a novel approach to improve the performance of data forwarding with a short time constraint in DTNs by exploiting the transient social contact patterns. These patterns represent the transient characteristics of contact distribution, network connectivity and social community structure in DTNs, and we provide analytical formulations on these patterns based on experimental studies of realistic DTN traces. We then propose appropriate forwarding metrics based on these patterns to improve the effectiveness of data forwarding. When applied to various data forwarding strategies, our proposed forwarding metrics achieve much better performance compared to existing schemes with similar forwarding cost.",
"Delay-tolerant networks (DTNs) have the potential to connect devices and areas of the world that are under-served by current networks. A critical challenge for DTNs is determining routes through the network without ever having an end-to-end connection, or even knowing which \"routers\" will be connected at any given time. Prior approaches have focused either on epidemic message replication or on knowledge of the connectivity schedule. The epidemic approach of replicating messages to all nodes is expensive and does not appear to scale well with increasing load. It can, however, operate without any prior network configuration. The alternatives, by requiring a priori connectivity knowledge, appear infeasible for a self-configuring network.In this paper we present a practical routing protocol that only uses observed information about the network. We designed a metric that estimates how long a message will have to wait before it can be transferred to the next hop. The topology is distributed using a link-state routing protocol, where the link-state packets are \"flooded\" using epidemic routing. The routing is recomputed when connections are established. Messages are exchanged if the topology suggests that a connected node is \"closer\" than the current node.We demonstrate through simulation that our protocol provides performance similar to that of schemes that have global knowledge of the network topology, yet without requiring that knowledge. Further, it requires a significantly smaller quantity of buffer, suggesting that our approach will scale with the number of messages in the network, where replication approaches may not."
]
} |
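As an illustration of the MobySpace idea described in the record above, the sketch below (toy traces and made-up helper names; not the authors' implementation) builds each node's coordinate vector as its frequency of visits to a fixed set of locations and forwards a bundle to the neighbour whose mobility pattern is closest, in Euclidean distance, to that of the destination.

```python
from collections import Counter
from math import sqrt

LOCATIONS = ["lab", "cafeteria", "library", "dorm"]  # toy location set

def mobility_pattern(visits):
    """Normalized visit frequencies over LOCATIONS (one MobySpace coordinate per location)."""
    counts = Counter(visits)
    total = sum(counts.values()) or 1
    return [counts[loc] / total for loc in LOCATIONS]

def distance(p, q):
    """Euclidean distance between two mobility patterns."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def choose_next_hop(neighbours, destination_pattern):
    """Forward to the neighbour whose pattern is closest to the destination's."""
    return min(neighbours, key=lambda n: distance(neighbours[n], destination_pattern))

# Toy traces of visited locations for four nodes; D is the destination.
traces = {
    "A": ["lab", "lab", "cafeteria", "dorm"],
    "B": ["library", "library", "lab", "library"],
    "C": ["dorm", "dorm", "cafeteria", "dorm"],
    "D": ["library", "lab", "library", "library"],
}
patterns = {node: mobility_pattern(t) for node, t in traces.items()}
neighbours_of_A = {n: patterns[n] for n in ("B", "C")}
print("A forwards towards D via:", choose_next_hop(neighbours_of_A, patterns["D"]))
```

The open questions studied in the paper (how well nodes can learn their own patterns, and how homogeneous patterns are across nodes) determine how reliable such coordinates are in practice.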
cs0510090 | 1553225280 | To definite and compute differential invariants, like curvatures, for triangular meshes (or polyhedral surfaces) is a key problem in CAGD and the computer vision. The Gaussian curvature and the mean curvature are determined by the differential of the Gauss map of the underlying surface. The Gauss map assigns to each point in the surface the unit normal vector of the tangent plane to the surface at this point. We follow the ideas developed in Chen and Wu Chen2 (2004) and Wu, Chen and Chi Wu (2005) to describe a new and simple approach to estimate the differential of the Gauss map and curvatures from the viewpoint of the gradient and the centroid weights. This will give us a much better estimation of curvatures than Taubin's algorithm Taubin (1995). | Flynn and Jain @cite_9 (1989) used a suitable sphere passing through four vertices to estimate curvatures. Meek and Walton @cite_2 (2000) examined several methods and compared them with the discretization and interpolation method. Gatzke and Grim @cite_3 (2003) systematically analyzed the results of computation of curvatures of surfaces represented by triangular meshes and recommended the surface fitting methods. See also Petitjean @cite_7 (2002) for the surface fitting methods. @cite_5 (2003) employed the Gauss-Bonnet theorem to estimate the Gaussian curvatures and introduced the Laplace-Beltrami operator to approximate the mean curvature. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_2",
"@cite_5"
],
"mid": [
"2155553113",
"2165995633",
"2517665947",
"2000214666",
"2104776149"
],
"abstract": [
"This paper takes a systematic look at methods for estimating the curvature of surfaces represented by triangular meshes. We have developed a suite of test cases for assessing both the detailed behavior of these methods, and the error statistics that occur for samples from a general mesh. Detailed behavior is represented by the sensitivity of curvature calculation methods to noise, mesh resolution, and mesh regularity factors. Statistical analysis breaks out the effects of valence, triangle shape, and curvature sign. These tests are applied to existing discrete curvature approximation techniques and common surface fitting methods. We provide a summary of existing curvature estimation methods, and also look at alternatives to the standard parameterization techniques. The results illustrate the impact of noise and mesh related issues on the accuracy of these methods and provide guidance in choosing an appropriate method for applications requiring curvature estimates.",
"This paper takes a systematic look at calculating the curvature of surfaces represented by triangular meshes. We have developed a suite of test cases for assessing the sensitivity of curvature calculations, to noise, mesh resolution, and mesh regularity. These tests are applied to existing discrete curvature approximation techniques and three common surface fitting methods (polynomials, radial basis functions and conics). We also introduce a modification to the standard parameterization technique. Finally, we examine the behaviour of the curvature calculation techniques in the context of segmentation.",
"While it is usually not difficult to compute principal curvatures of a smooth surface of sufficient differentiability, it is a rather difficult task when only a polygonal approximation of the surface is available, because of the inherent ambiguity of such representation. A number of different approaches has been proposed in the past that tackle this problem using various techniques. Most papers tend to focus on a particular method, while an comprehensive comparison of the different approaches is usually missing. We present results of a large experiment, involving both common and recently proposed curvature estimation techniques, applied to triangle meshes of varying properties. It turns out that none of the approaches provides reliable results under all circumstances. Motivated by this observation, we investigate mesh statistics, which can be computed from vertex positions and mesh connectivity information only, and which can help in deciding which estimator will work best for a particular case. Finally, we propose a meta-estimator, which makes a choice between existing algorithms based on the value of the mesh statistics, and we demonstrate that such meta-estimator, despite its simplicity, provides considerably more robust results than any existing approach.",
"In this paper, we develop methods to rapidly remove rough features from irregularly triangulated data intended to portray a smooth surface. The main task is to remove undesirable noise and uneven edges while retaining desirable geometric features. The problem arises mainly when creating high-fidelity computer graphics objects using imperfectly-measured data from the real world. Our approach contains three novel features: an implicit integration method to achieve efficiency, stability, and large time-steps; a scale-dependent Laplacian operator to improve the diffusion process; and finally, a robust curvature flow operator that achieves a smoothing of the shape itself, distinct from any parameterization. Additional features of the algorithm include automatic exact volume preservation, and hard and soft constraints on the positions of the points in the mesh. We compare our method to previous operators and related algorithms, and prove that our curvature and Laplacian operators have several mathematically-desirable qualities that improve the appearance of the resulting surface. In consequence, the user can easily select the appropriate operator according to the desired type of fairing. Finally, we provide a series of examples to graphically and numerically demonstrate the quality of our results.",
"An empirical study of the accuracy of five different curvature estimation techniques, using synthetic range images and images obtained from three range sensors, is presented. The results obtained highlight the problems inherent in accurate estimation of curvatures, which are second-order quantities, and thus highly sensitive to noise contamination. The numerical curvature estimation methods are found to perform about as accurately as the analytic techniques, although ensemble estimates of overall surface curvature such as averages are unreliable unless trimmed estimates are used. The median proved to be the best estimator of location. As an exception, it is shown theoretically that zero curvature can be fairly reliably detected, with appropriate selection of threshold values. >"
]
} |
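For orientation, the Gauss-Bonnet and Laplace-Beltrami estimators mentioned in the related-work paragraph above are usually written as below. These are the standard discrete formulas widely used in the mesh-processing literature, given here only as an illustration; they are not the centroid-weight scheme proposed in the paper itself, and the symbols are generic.

```latex
% Discrete curvature estimates at a mesh vertex v_i and its incident triangle fan.
% \theta_j : interior angles at v_i;  A_i : (mixed/Voronoi) area around v_i;
% \alpha_{ij}, \beta_{ij} : angles opposite edge (v_i, v_j) in the two adjacent triangles.
\begin{align*}
  K(v_i) &\approx \frac{1}{A_i}\Bigl(2\pi - \sum_{j} \theta_j\Bigr)
           && \text{(angle deficit / Gauss-Bonnet)} \\[2pt]
  \mathbf{K}(v_i) &= \frac{1}{2A_i}\sum_{j}\bigl(\cot\alpha_{ij}+\cot\beta_{ij}\bigr)\,(v_i - v_j),
  \qquad H(v_i) \approx \tfrac{1}{2}\,\bigl\lVert \mathbf{K}(v_i) \bigr\rVert
           && \text{(cotangent Laplace-Beltrami)}
\end{align*}
```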
physics0510151 | 1969383134 | The knowledge of real-life traffic patterns is crucial for a good understanding and analysis of transportation systems. These data are quite rare. In this paper we propose an algorithm for extracting both the real physical topology and the network of traffic flows from timetables of public mass transportation systems. We apply this algorithm to timetables of three large transportation networks. This enables us to make a systematic comparison between three different approaches to construct a graph representation of a transportation network; the resulting graphs are fundamentally different. We also find that the real-life traffic pattern is very heterogeneous, in both space and traffic flow intensities, which makes it very difficult to approximate the node load with a number of topological estimators. | Another class of networks that can be constructed with the help of timetables are airport networks @cite_17 @cite_35 @cite_30 @cite_27 . There, the nodes are the airports, and edges are the flight connections. The weight of an edge reflects the traffic on this connection, which can be approximated by the number of flights that use it during one week. In this case, both the topology and the traffic information are given by timetables. This is because the routes of planes are not constrained to any physical infrastructure, as opposed to roads for cars or rail-tracks for trains. So there are no ``real'' links and ``shortcut'' links. In a sense all links are real, and the topologies in the space-of-stops and in the space-of-stations actually coincide. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_27",
"@cite_17"
],
"mid": [
"2049706966",
"2055654888",
"2620724943",
"2037403365"
],
"abstract": [
"An edge-scheduled network N is a multigraph G = (V, E), where each edge e ϵ E has been assigned two real weights: a start time α(e) and a finish time β(e). Such a multigraph models a communication or transportation network. A multiedge joining vertices u and v represents a direct communication (transportation) link between u and v, and the edges of the multiedge represent potential communications (transportations) between u and v over a fixed period of time. For a, b ϵ V, and k a nonnegative integer, we say that N is k-failure ab-invulnerable for the time period [0, t] if information can be relayed from a to b within that time period, even if up to k edges are deleted, i.e., “fail.” The k-failure ab-vulnerability threshold νab(k) is the earliest time t such that N is k-failure ab-invulnerable for the time period [0, t] [where νab(k) = ∞ if no such t exists]. Let κ denote the smallest k such that νab(k) = ∞. In this paper, we present an O(κ|E|) algorithm for computing νab(i), i = 0, …, κ −1. The latter algorithm constructs a set of κ pairwise edge-disjoint schedule-conforming paths P0, …, Pκ −1 such that the finish time of Pi is νab(i), i = 0, 1, …, κ −1. (A path P = ae1u1e2 ··· Upp−1epb is schedule-conforming if the finish time of edge ei is no greater than the start time of the next edge ei + 1.) The existence of such paths when α(e) = β(e) = 0, for all e ϵ E, implies Menger's Theorem. In this paper, we also show that the obvious analogs of these results for either multiedge deletions or vertex deletions do not hold. In fact, we show that the problem of finding k schedule-conforming paths such that no two paths pass through the same vertex (multiedge) is NP-complete, even for k = 2. © 1996 John Wiley & Sons, Inc.",
"We report a study of the correlations among topological, weighted and spatial properties of large infrastructure networks. We review the empirical results obtained for the air-transportation infrastructure that motivates a network modeling approach which integrates the various attributes of this network. In particular, we describe a class of models which include a weight-topology coupling and the introduction of geographical attributes during the network evolution. The inclusion of spatial features is able to capture the appearance of non-trivial correlations between the traffic flows, the connectivity pattern and the actual distances of vertices. The anomalous fluctuations in the betweenness-degree correlation function observed in empirical studies are also recovered in the model. The presented results suggest that the interplay between topology, weights and geographical constraints is a key ingredient in order to understand the structure and evolution of many real-world networks.",
"Networks provide an informative, yet non-redundant description of complex systems only if links represent truly dyadic relationships that cannot be directly traced back to node-specific properties such as size, importance, or coordinates in some embedding space. In any real-world network, some links may be reducible, and others irreducible, to such local properties. This dichotomy persists despite the steady increase in data availability and resolution, which actually determines an even stronger need for filtering techniques aimed at discerning essential links from non-essential ones. Here we introduce a rigorous method that, for any desired level of statistical significance, outputs the network backbone that is irreducible to the local properties of nodes, i.e. their degrees and strengths. Unlike previous approaches, our method employs an exact maximum-entropy formulation guaranteeing that the filtered network encodes only the links that cannot be inferred from local information. Extensive empirical analysis confirms that this approach uncovers essential backbones that are otherwise hidden amidst many redundant relationships and inaccessible to other methods. For instance, we retrieve the hub-and-spoke skeleton of the US airport network and many specialised patterns of international trade. Being irreducible to local transportation and economic constraints of supply and demand, these backbones single out genuinely higher-order wiring principles.",
"A public transportation network can often be modeled as a timetable graph where (i) each node represents a station; and (ii) each directed edge (u,v) is associated with a timetable that records the departure (resp. arrival) time of each vehicle at station u (resp. v). Several techniques have been proposed for various types of route planning on timetable graphs, e.g., retrieving the route from a node to another with the shortest travel time. These techniques, however, either provide insufficient query efficiency or incur significant space overheads. This paper presents Timetable Labelling (TTL), an efficient indexing technique for route planning on timetable graphs. The basic idea of TTL is to associate each node @math with a set of labels, each of which records the shortest travel time from u to some other node v given a certain departure time from u; such labels would then be used during query processing to improve efficiency. In addition, we propose query algorithms that enable TTL to support three popular types of route planning queries, and investigate how we reduce the space consumption of TTL with advanced preprocessing and label compression methods. By conducting an extensive set of experiments on real world datasets, we demonstrate that TTL significantly outperforms the states of the art in terms of query efficiency, while incurring moderate preprocessing and space overheads."
]
} |
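As a toy illustration of how different graph representations arise from the same timetable data (related to the record above, but not the paper's actual extraction algorithm; the route data and function names are made up), the Python sketch below builds two weighted graphs from vehicle runs given as ordered stop sequences: one linking consecutive stops of each run, and one linking every pair of stops served by the same run. Edge weights count how many vehicles use each link, mimicking a traffic-flow weight.

```python
from collections import defaultdict
from itertools import combinations

# Toy timetable: each entry is one vehicle run given as its ordered list of stops.
runs = [
    ["A", "B", "C", "D"],
    ["A", "B", "C", "D"],
    ["B", "C", "E"],
]

def consecutive_stop_graph(runs):
    """Link consecutive stops of each run; weight = number of vehicles using the link."""
    w = defaultdict(int)
    for run in runs:
        for u, v in zip(run, run[1:]):
            w[frozenset((u, v))] += 1
    return dict(w)

def same_vehicle_graph(runs):
    """Link every pair of stops served by the same run (no transfer needed between them)."""
    w = defaultdict(int)
    for run in runs:
        for u, v in combinations(run, 2):
            w[frozenset((u, v))] += 1
    return dict(w)

print("consecutive-stop links:", {tuple(sorted(k)): v for k, v in consecutive_stop_graph(runs).items()})
print("same-vehicle links:   ", {tuple(sorted(k)): v for k, v in same_vehicle_graph(runs).items()})
```

The physical (track-level) topology cannot be recovered this directly, which is exactly the gap the paper's extraction algorithm addresses.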
cs0510065 | 1619974530 | This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations. | Most systems that provide anonymity are not designed to allow tracing of the user under any circumstances. Such systems, e.g., proxy servers, have not been designed to provide accountability. For mobile ad hoc networks, approaches exist that provide unconditional anonymity, again without any accountability @cite_0 . | {
"cite_N": [
"@cite_0"
],
"mid": [
"1560625027"
],
"abstract": [
"Mobile ad-hoc networks rely on the cooperation of nodes for routing and forwarding. For individual nodes there are however several advantages resulting from noncooperation, the most obvious being power saving. Nodes that act selfishly or even maliciously pose a threat to availability in mobile adhoc networks. Several approaches have been proposed to detect noncooperative nodes. In this paper, we investigate the effect of using rumors with respect to the detection time of misbehaved nodes as well as the robustness of the reputation system against wrong accusations. We propose a Bayesian approach for reputation representation, updates, and view integration. We also present a mechanism to detect and exclude potential lies. The simulation results indicate that by using this Bayesian approach, the reputation system is robust against slander while still benefitting from the speed-up in detection time provided by the use of rumors."
]
} |
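Since the abstract in the record above names Merkle's puzzles as one of its building blocks, here is a deliberately tiny and insecure toy version in Python showing the mechanics (standard library only; the parameter values, the XOR-based "encryption" and all names are made up for illustration and are not the paper's protocol): one party publishes many puzzles that are each cheap to break individually, the other party breaks a single random puzzle and uses the key it contains, so an eavesdropper must on average break half of them.

```python
import os, hashlib, random

N_PUZZLES = 64             # toy parameters: a real deployment would use far larger values
PUZZLE_KEY_SPACE = 2**12   # each individual puzzle is cheap to brute-force

def toy_encrypt(key_int: int, data: bytes) -> bytes:
    """XOR 'encryption' with a hash-derived keystream -- illustration only, NOT secure."""
    stream = hashlib.sha256(key_int.to_bytes(4, "big")).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def make_puzzles():
    """Alice prepares N puzzles; each hides (puzzle_id, session_key) under a weak key."""
    table, puzzles = {}, []
    for pid in range(N_PUZZLES):
        session_key = os.urandom(16)
        table[pid] = session_key
        weak_key = random.randrange(PUZZLE_KEY_SPACE)
        payload = b"PZL" + pid.to_bytes(2, "big") + session_key
        puzzles.append(toy_encrypt(weak_key, payload))
    return table, puzzles

def solve_one(puzzles):
    """Bob picks one puzzle and brute-forces its weak key."""
    chosen = random.choice(puzzles)
    for guess in range(PUZZLE_KEY_SPACE):
        plain = toy_encrypt(guess, chosen)
        if plain.startswith(b"PZL"):
            pid = int.from_bytes(plain[3:5], "big")
            if pid < N_PUZZLES:
                return pid, plain[5:21]
    raise RuntimeError("no key found")

table, puzzles = make_puzzles()
pid, key = solve_one(puzzles)   # Bob announces pid in the clear
assert table[pid] == key        # Alice looks up the same session key
print("shared session key established via puzzle", pid)
```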
cs0510065 | 1619974530 | This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations. | An area that requires both anonymity and accountability is that of agent systems ( @cite_19 ). Most of the security architectures for those systems do not provide any anonymity, e.g., @cite_1 , @cite_4 , @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_19",
"@cite_1",
"@cite_4"
],
"mid": [
"2100006177",
"2071198236",
"2102714612",
"2904010205"
],
"abstract": [
"Autonomous agents may encapsulate their principals' personal data attributes. These attributes may be disclosed to other agents during agent interactions, producing a loss of privacy. Thus, agents need self-disclosure decision-making mechanisms to autonomously decide whether disclosing personal data attributes to other agents is acceptable or not. Current self-disclosure decision-making mechanisms consider the direct benefit and the privacy loss of disclosing an attribute. However, there are many situations in which the direct benefit of disclosing an attribute is a priori unknown. This is the case in human relationships, where the disclosure of personal data attributes plays a crucial role in their development. In this paper, we present self-disclosure decision-making mechanisms based on psychological findings regarding how humans disclose personal information in the building of their relationships. We experimentally demonstrate that, in most situations, agents following these decision-making mechanisms lose less privacy than agents that do not use them.",
"In mobile agent systems, program code together with some process state can autonomously migrate to new hosts. Despite its many practical benefits, mobile agent technology results in significant new security threats from malicious agents and hosts. In this paper, we propose a security architecture to achieve three goals: certification that a server has the authority to execute an agent on behalf of its sender; flexible selection of privileges, so that an agent arriving at a server may be given the privileges necessary to carry out the task for which it has come to the server; and state appraisal, to ensure that an agent has not become malicious as a consequence of alterations to its state. The architecture models the trust relations between the principals of mobile agent systems and includes authentication and authorization mechanisms.",
"Anonymity services hide user identity at the network or address level but are vulnerable to attacks involving repeated observations of the user. Quantifying the number of observations required for an attack is a useful measure of anonymity.",
"With the proliferation of communication networks and mobile devices, the privacy and security concerns on their information flow are raised. Given a critical system that may leak confidential information, the problem consists of verifying and also enforcing opacity by designing supervisors, to conceal confidential information from unauthorized persons. To find out what the intruder sees, it is required to construct an observer of the system. In this paper, we consider incremental observer generation of modular systems, for verification and enforcement of current state opacity. The synchronization of the subsystems generate a large state space. Moreover, the observer generation with exponential complexity adds even larger state space. To tackle the complexity problem, we prove that observer generation can be done locally before synchronizing the subsystems. The incremental local observer generation along with an abstraction method lead to a significant state space reduction compared to traditional monolithic methods. The existence of shared unobservable events is also considered in the incremental approach. Moreover, we present an illustrative example, where the results of verification and enforcement of current state opacity are shown on a modular multiple floor elevator building with an intruder. Furthermore, we extend the current state opacity, current state anonymity, and language based opacity formulations for verification of modular systems."
]
} |
cs0510065 | 1619974530 | This paper describes a new protocol for authentication in ad-hoc networks. The protocol has been designed to meet specialized requirements of ad-hoc networks, such as lack of direct communication between nodes or requirements for revocable anonymity. At the same time, an ad-hoc authentication protocol must be resistant to spoofing, eavesdropping and playback, and man-in-the-middle attacks. The article analyzes existing authentication methods based on the Public Key Infrastructure, and finds that they have several drawbacks in ad-hoc networks. Therefore, a new authentication protocol, based on established cryptographic primitives (Merkle's puzzles and zero-knowledge proofs), is proposed. The protocol is studied for a model ad-hoc chat application that provides private conversations. | A different scheme that preserves anonymity is proposed in @cite_3 . The scheme is based on a credential system and offers optional anonymity revocation. Its main idea is based on oblivious protocols, circular encryption and the strong RSA assumption. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2165210192"
],
"abstract": [
"A credential system is a system in which users can obtain credentials from organizations and demonstrate possession of these credentials. Such a system is anonymous when transactions carried out by the same user cannot be linked. An anonymous credential system is of significant practical relevance because it is the best means of providing privacy for users. In this paper we propose a practical anonymous credential system that is based on the strong RSA assumption and the decisional Diffie-Hellman assumption modulo a safe prime product and is considerably superior to existing ones: (1) We give the first practical solution that allows a user to unlinkably demonstrate possession of a credential as many times as necessary without involving the issuing organization. (2) To prevent misuse of anonymity, our scheme is the first to offer optional anonymity revocation for particular transactions. (3) Our scheme offers separability: all organizations can choose their cryptographic keys independently of each other. Moreover, we suggest more effective means of preventing users from sharing their credentials, by introducing all-or-nothing sharing: a user who allows a friend to use one of her credentials once, gives him the ability to use all of her credentials, i.e., taking over her identity. This is implemented by a new primitive, called circular encryption, which is of independent interest, and can be realized from any semantically secure cryptosystem in the random oracle model."
]
} |
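The other primitive named in the abstract above is a zero-knowledge proof. As a rough illustration (toy parameters, standard library only; this is neither the protocol of the paper nor the credential system of @cite_3), here is one round of Fiat-Shamir identification, in which the prover shows knowledge of a square root s of v modulo n without revealing it.

```python
import random

# Toy RSA-style modulus with tiny primes -- for illustration only, never for real use.
p, q = 1009, 1013
n = p * q

s = random.randrange(2, n)        # prover's secret
v = (s * s) % n                   # public value v = s^2 mod n

def fiat_shamir_round() -> bool:
    r = random.randrange(2, n)    # prover picks a random commitment
    x = (r * r) % n               # ... and sends x = r^2 mod n
    e = random.randrange(2)       # verifier sends a random challenge bit
    y = (r * pow(s, e, n)) % n    # prover answers y = r * s^e mod n
    return (y * y) % n == (x * pow(v, e, n)) % n  # verifier checks y^2 = x * v^e mod n

# Repeating the round k times bounds a cheating prover's success probability by 2^-k.
assert all(fiat_shamir_round() for _ in range(20))
print("prover convinced the verifier over 20 independent rounds")
```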
math0509333 | 2020416768 | A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed. | For multidimensional scalar ( @math ) conservation laws with arbitrary @math , @cite_5 (generalizing earlier work) shows that a global EEF solution exists, is unique, satisfies the VV condition as well, and is stable under @math perturbations of the initial data. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2130710172"
],
"abstract": [
"This paper considers a trapped characteristic initial value problem for the spherically symmetric Einstein-Maxwell-scalar field equations. For an open set of initial data whose closure contains in particular Reissner-Nordstrdata, the future boundary of the maximal domain of development is found to be a light-like surface along which the curvature blows up, and yet the metric can be continuously extended beyond it. This result is related to the strong cosmic censorship conjecture of Roger Penrose. The principle of determinism in classical physics is expressed mathemat- ically by the uniqueness of solutions to the initial value problem for certain equations of evolution. Indeed, in the context of the Einstein equations of general relativity, where the unknown is the very structure of space and time, uniqueness is equivalent on a fundamental level to the validity of this principle. The question of uniqueness may thus be termed the issue of the predictability of the equation. The present paper explores the issue of predictability in general relativity. Since the work of Leray, it has been known that for the Einstein equations, contrary to common experience, uniqueness for the Cauchy problem in the large does not generally hold even within the class of smooth solutions. In other words, uniqueness may fail without any loss in regularity; such failure is thus a global phenomenon. The central question is whether this violation of predictability may occur in solutions representing actual physical processes. Physical phenomena and concepts related to the general theory of relativity, namely gravitational collapse, black holes, angular momentum, etc., must cer- tainly come into play in the study of this problem. Unfortunately, the math- ematical analysis of this exciting problem is very difficult, at present beyond reach for the vacuum Einstein equations in the physical dimension. Conse-"
]
} |
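For reference, the entropy-solution notion referred to in the record above (the EEF, or weak entropy inequality, condition in Kruzhkov's form for scalar laws) is usually stated as follows. This is the standard textbook formulation, added only for orientation and not a statement taken from the paper.

```latex
% Scalar conservation law u_t + div f(u) = 0.  For every convex entropy \eta with
% entropy flux q satisfying q' = \eta' f', a weak solution u is an entropy solution if
\begin{equation*}
  \partial_t \eta(u) + \nabla_x \cdot q(u) \;\le\; 0
  \qquad \text{in the sense of distributions,}
\end{equation*}
% equivalently, with the Kruzhkov entropy pairs
% \eta_k(u) = |u - k|, \quad q_k(u) = \mathrm{sgn}(u - k)\,(f(u) - f(k)), \quad k \in \mathbb{R}.
```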
math0509333 | 2020416768 | A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed. | @cite_26 proposes the EEF condition for scalar conservation laws ( @math ), proves that it is implied by the VV condition under some circumstances and notes that there is a large set of convex entropies. Apparently independently, @cite_5 obtained analogous results for systems. @cite_32 contains the first use of the term ``entropy condition'' for the EEF condition. Various forms of the EEF condition had been known and in use for special systems such as the Euler equations for a long time (e.g. by the name of Clausius-Duhem inequality), especially as shock relations; however, the above references seem to be the first to define the general notion of strictly convex EEF pairs, to propose the EEF condition as a mathematical tool for arbitrary systems of conservation laws and to formulate it in the weak form rather than the special case . | {
"cite_N": [
"@cite_5",
"@cite_26",
"@cite_32"
],
"mid": [
"205360716",
"2014405238",
"2130710172"
],
"abstract": [
"Publisher Summary This chapter provides an overview of shock waves and entropy. It describes systems of the first order partial differential equations in conservation form: ∂ t U + ∂ X F = 0, F = F(u). In many cases, all smooth solutions of the first order partial differential equations in conservation form satisfy an additional conservation law where U is a convex function of u. The chapter discusses that for all weak solutions of ∂ t u j +∂ x f j = 0, j=1,…, m, f j =f j (u 1 ,…, u m ), which are limits of solutions of modifications ∂ t u j +∂ x f j = 0, j=1,…, m, f j =f j (u 1 ,…, u m ) , by the introduction of various kinds of dissipation, satisfy the entropy inequality, that is, ∂ t U + ∂ x F≦ 0. The chapter also explains that for weak solutions, which contain discontinuities of moderate strength, ∂ t U + ∂ x F≦ 0 is equivalent to the usual shock condition involving the number of characteristics impinging on the shock. The chapter also describes all possible entropy conditions of ∂ t U + ∂ x F≦ 0 that can be associated to a given hyperbolic system of two conservation laws.",
"We study a class of semi-Lagrangian schemes which can be interpreted as a discrete version of the Hopf-Lax-Oleinik representation formula for the exact viscosity solution of first order evolutive Hamilton-Jacobi equations. That interpretation shows that the scheme is potentially accurate to any prescribed order. We discuss how the method can be implemented for convex and coercive Hamiltonians with a particular structure and how this method can be coupled with a discrete Legendre trasform. We also show that in one dimension, the first-order semi-Lagrangian scheme coincides with the integration of the Godunov scheme for the corresponding conservation laws. Several test illustrate the main features of semi-Lagrangian schemes for evolutive Hamilton-Jacobi equations.",
"This paper considers a trapped characteristic initial value problem for the spherically symmetric Einstein-Maxwell-scalar field equations. For an open set of initial data whose closure contains in particular Reissner-Nordstrdata, the future boundary of the maximal domain of development is found to be a light-like surface along which the curvature blows up, and yet the metric can be continuously extended beyond it. This result is related to the strong cosmic censorship conjecture of Roger Penrose. The principle of determinism in classical physics is expressed mathemat- ically by the uniqueness of solutions to the initial value problem for certain equations of evolution. Indeed, in the context of the Einstein equations of general relativity, where the unknown is the very structure of space and time, uniqueness is equivalent on a fundamental level to the validity of this principle. The question of uniqueness may thus be termed the issue of the predictability of the equation. The present paper explores the issue of predictability in general relativity. Since the work of Leray, it has been known that for the Einstein equations, contrary to common experience, uniqueness for the Cauchy problem in the large does not generally hold even within the class of smooth solutions. In other words, uniqueness may fail without any loss in regularity; such failure is thus a global phenomenon. The central question is whether this violation of predictability may occur in solutions representing actual physical processes. Physical phenomena and concepts related to the general theory of relativity, namely gravitational collapse, black holes, angular momentum, etc., must cer- tainly come into play in the study of this problem. Unfortunately, the math- ematical analysis of this exciting problem is very difficult, at present beyond reach for the vacuum Einstein equations in the physical dimension. Conse-"
]
} |
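For reference, the entropy and entropy-flux (EEF) pair condition discussed in the related-work passage above can be written in its standard textbook form (a generic formulation added for clarity; it is not quoted from the cited works): for a system of conservation laws admitting a strictly convex entropy pair (U, F), admissible weak solutions (in particular, vanishing-viscosity limits) are required to satisfy the entropy inequality in the sense of distributions:

    \partial_t u + \partial_x f(u) = 0, \qquad DF(u) = DU(u)\, Df(u), \quad U \ \text{strictly convex},

    \partial_t U(u) + \partial_x F(u) \le 0 \quad \text{in } \mathcal{D}'.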
math0509333 | 2020416768 | A particular case of initial data for the two-dimensional Euler equations is studied numerically. The results show that the Godunov method does not always converge to the physical solution, at least not on feasible grids. Moreover, they suggest that entropy solutions (in the weak entropy inequality sense) are not well posed. | (TODO: mention that @cite_28 fig 30 p. 296 is our example if the wedge is replaced by stagnation air; see p. 345 Fig 69. Quote: All these and other mathematically possible flow patterns with a singular center Z are at our disposal for interpreting experimental evidence. Which, if any, of these possibilities occurs under given circumstances is a question that cannot possibly be decided within the framework of a theory with such a high degree of indeterminacy. Here we have a typical instance of a theory incomplete and oversimplified in its basic assumptions; only by going more deeply into the physical basis of our theory, i.e. by accounting for heat conduction and viscosity, can we hope to clarify completely the phenomena at a three-shock singularity. It may well be that the boundary layer which develops along the constant discontinuity line modifies the flow pattern sufficiently to account for the observed deviation; [...quote Liepmann paper]'' | {
"cite_N": [
"@cite_28"
],
"mid": [
"2261902652"
],
"abstract": [
"In this article we consider large energy wave maps in dimension 2+1, as in the resolution of the threshold conjecture by Sterbenz and Tataru (Commun. Math. Phys. 298(1):139–230, 2010; Commun. Math. Phys. 298(1):231–264, 2010), but more specifically into the unit Euclidean sphere ( S ^ n-1 R ^ n ) with ( n ), and study further the dynamics of the sequence of wave maps that are obtained in Sterbenz and Tataru (Commun. Math. Phys. 298(1):231–264, 2010) at the final rescaling for a first, finite or infinite, time singularity. We prove that, on a suitably chosen sequence of time slices at this scaling, there is a decomposition of the map, up to an error with asymptotically vanishing energy, into a decoupled sum of rescaled solitons concentrating in the interior of the light cone and a term having asymptotically vanishing energy dispersion norm, concentrating on the null boundary and converging to a constant locally in the interior of the cone, in the energy space. Similar and stronger results have been recently obtained in the equivariant setting by several authors (Cote, Commun. Pure Appl. Math. 68(11):1946–2004, 2015; Cote, Commun. Pure Appl. Math. 69(4):609–612, 2016; Cote, Am. J. Math. 137(1):139–207, 2015; , Am. J. Math. 137(1):209–250, 2015; Krieger, Commun. Math. Phys. 250(3):507–580, 2004), where better control on the dispersive term concentrating on the null boundary of the cone is provided, and in some cases the asymptotic decomposition is shown to hold for all time. Here, however, we do not impose any symmetry condition on the map itself and our strategy follows the one from bubbling analysis of harmonic maps into spheres in the supercritical regime due to Lin and Riviere (Ann. Math. 149(2):785–829, 1999; Duke Math. J. 111:177–193, 2002), which we make work here in the hyperbolic context of Sterbenz and Tataru (Commun. Math. Phys. 298(1), 231–264, 2010)."
]
} |
physics0509217 | 2164680115 | We present a model for the diffusion of management fads and other technologies which lack clear objective evidence about their merits. The choices made by non-Bayesian adopters reflect both their own evaluations and the social influence of their peers. We show, both analytically and computationally, that the dynamics lead to outcomes that appear to be deterministic in spite of being governed by a stochastic process. In other words, when the objective evidence about a technology is weak, the evolution of this process quickly settles down to a fraction of adopters that is not predetermined. When the objective evidence is strong, the proportion of adopters is determined by the quality of the evidence and the adopters' competence. | In this paper we propose a model that is consistent with all of Camerer's observations and so is an alternative to canonical herding models. Thus our agents exhibit normatively desirable and empirically plausible monotonicity properties: in particular, the more the social cues favor innovation A over B, the more likely it is that an agent will select A, ceteris paribus. Yet the reasoning that underlies such choices is adaptively rational rather than fully rational. Moreover, unlike many adaptive models of fads, the present model generates analytical solutions, not just computational ones. Many---perhaps most---adaptive models of fads are what has come to be called "agent-based models" and it is virtually a defining feature of such models that they be computational. (For a survey of agent-based models, including several applied to fads, see @cite_12 .) | {
"cite_N": [
"@cite_12"
],
"mid": [
"2143933215"
],
"abstract": [
"This paper deals with Bayesian selection of models that can be specified using inequality constraints among the model parameters. The concept of encompassing priors is introduced, that is, a prior distribution for an unconstrained model from which the prior distributions of the constrained models can be derived. It is shown that the Bayes factor for the encompassing and a constrained model has a very nice interpretation: it is the ratio of the proportion of the prior and posterior distribution of the encompassing model in agreement with the constrained model. It is also shown that, for a specific class of models, selection based on encompassing priors will render a virtually objective selection procedure. The paper concludes with three illustrative examples: an analysis of variance with ordered means; a contingency table analysis with ordered odds-ratios; and a multilevel model with ordered slopes. 1 Inequality constrained statistical models Researchers often have one or more (competing) theories about their field of research. Consider, for example, theories about the effect of behavioral therapy versus medication for children with an attention deficit disorder (ADD). Some researchers in this area believe medication is the only effective treatment for ADD, some believe strongly in behavioral therapy, and others may expect an additive effect of both therapies. To test or compare the plausibility of these theories they need to be translated into statistical models. Subsequently, empirical data can be used to determine which model is best. Inequality constraints on model parameters can be useful in the specification of statistical models. This paper deals with competing models that have the same parameter vector, but in one or more of the models parameters are subjected to inequality constraints. To continue the example, consider an experiment where children with ADD are randomly assigned to one of four conditions: no treatment (1), behavioral therapy (2), medication (3), and behavioral therapy plus medication (4). Let the outcome"
]
} |
cs0509024 | 2951388382 | In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision. | A more elaborate definition of a stable semantics was given by @cite_29 for programs with weight constraints and implemented by the well-known smodels system. In our language, weight constraints correspond to aggregate atoms built with the @math and @math aggregate relations. An extensive comparison between the @math -stable semantics and the stable semantics of weight constraints can be found in @cite_8 @cite_15 and will not be repeated here. | {
"cite_N": [
"@cite_29",
"@cite_15",
"@cite_8"
],
"mid": [
"1540263588",
"2011124182",
"2152131859"
],
"abstract": [
"We investigate a generalization of weight-constraint programs with stable semantics, as implemented in the ASP solver smodels. Our programs admit atoms of the form ( X, F ) where X is a finite set of propositional atoms and ( F ) is an arbitrary family of subsets of X. We call such atoms set constaints and show that the concept of stable model can be generalized to programs admitting set constraints both in the bodies and the heads of clauses. Natural tools to investigate the fixpoint semantics for such programs are nondeterministic operators in complete lattices. We prove two fixpoint theorems for such operators.",
"A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.",
"Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it bring advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variabledfree) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variabledfree program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating builtdin predicates and functions often needed in applications. It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., builtdin integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported."
]
} |
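As a rough illustration of the correspondence mentioned in the related-work text above (a generic rendering in the notation of the smodels literature, added for clarity rather than quoted from the cited papers): a weight constraint of the form

    l \le \{\, a_1 = w_1, \ldots, a_n = w_n \,\} \le u

is satisfied by an interpretation exactly when the sum of the weights w_i of those atoms a_i that are true lies between the lower bound l and the upper bound u. With all weights equal to 1 this reduces to a cardinality constraint, which is why such constraints map naturally to aggregate atoms built with the sum and count aggregate relations.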
cs0509024 | 2951388382 | In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision. | A novel feature of the language of weight constraints is that it allows weight constraints to be present also in the head of the rules. This approach has been further developed in different directions. One line of research was to consider different variations and extensions of weight constraints like abstract constraints @cite_23 , monotone cardinality atoms @cite_10 or set constraints @cite_20 . Such constraint atoms correspond in a natural way to aggregate atoms. The stable semantics of these extensions is also defined in terms of lattice operators. However, since constraint atoms are allowed in the heads of rules, the operators become non-deterministic and the algebraic theory is quite different from the approximation theory we used in this work. Nevertheless, all the semantics agree on the class of definite aggregate programs and its least model semantics. The equivalent of a definite logic program in @cite_20 is called a Horn SC-logic program and such programs are also characterized by a unique model which is the least fixpoint of a deterministic monotone operator @math , which is the equivalent of our @math operator. | {
"cite_N": [
"@cite_20",
"@cite_10",
"@cite_23"
],
"mid": [
"2011124182",
"1540263588",
"1854994931"
],
"abstract": [
"A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.",
"We investigate a generalization of weight-constraint programs with stable semantics, as implemented in the ASP solver smodels. Our programs admit atoms of the form ( X, F ) where X is a finite set of propositional atoms and ( F ) is an arbitrary family of subsets of X. We call such atoms set constaints and show that the concept of stable model can be generalized to programs admitting set constraints both in the bodies and the heads of clauses. Natural tools to investigate the fixpoint semantics for such programs are nondeterministic operators in complete lattices. We prove two fixpoint theorems for such operators.",
"We propose and study extensions of logic programming with constraints represented as generalized atoms of the form C(X), where X is a finite set of atoms and C is an abstract constraint (formally, a collection of sets of atoms). Atoms C(X) are satisfied by an interpretation (set of atoms) M, if M ∩ X ∈ C. We focus here on monotone constraints, that is, those collections C that are closed under the superset. They include, in particular, weight (or pseudo-boolean) constraints studied both by the logic programming and SAT communities. We show that key concepts of the theory of normal logic programs such as the one-step provability operator, the semantics of supported and stable models, as well as several of their properties including complexity results, can be lifted to such case."
]
} |
cs0509024 | 2951388382 | In this paper, we present a framework for the semantics and the computation of aggregates in the context of logic programming. In our study, an aggregate can be an arbitrary interpreted second order predicate or function. We define extensions of the Kripke-Kleene, the well-founded and the stable semantics for aggregate programs. The semantics is based on the concept of a three-valued immediate consequence operator of an aggregate program. Such an operator approximates the standard two-valued immediate consequence operator of the program, and induces a unique Kripke-Kleene model, a unique well-founded model and a collection of stable models. We study different ways of defining such operators and thus obtain a framework of semantics, offering different trade-offs between precision and tractability. In particular, we investigate conditions on the operator that guarantee that the computation of the three types of semantics remains on the same level as for logic programs without aggregates. Other results show that, in practice, even efficient three-valued immediate consequence operators which are very low in the precision hierarchy, still provide optimal precision. | Another proposal for a stable semantics of disjunctive logic programs extended with aggregates was given in @cite_27 . In the sequel we investigate in more detail the relationship between this semantics and the family of @math -stable semantics defined earlier. First, we recall the definitions of the stable semantics of @cite_27 . | {
"cite_N": [
"@cite_27"
],
"mid": [
"1520574003"
],
"abstract": [
"We introduce a family of partial stable model semantics for logic programs with arbitrary aggregate relations. The semantics are parametrized by the interpretation of aggregate relations in three-valued logic. Any semantics in this family satisfies two important properties: (i) it extends the partial stable semantics for normal logic programs and (ii) total stable models are always minimal. We also give a specific instance of the semantics and show that it has several attractive features."
]
} |
cs0509065 | 2952811798 | For generalized Reed-Solomon codes, it has been proved [GuruswamiVa05] that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code -- a property that practical codes do not usually possess. In this paper, we first present a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Schmidt and Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector @math for Reed-Solomon @math , @math , cannot be a deep hole, whenever @math is a polynomial of degree @math for @math . | The pursuit of efficient decoding algorithms for Reed-Solomon codes has yielded intriguing results. If the radius of a Hamming ball centered at some received word is less than half the minimum distance, there can be at most one codeword in the Hamming ball. Finding this codeword is called unambiguous decoding. It can be solved efficiently; see @cite_6 for a simple algorithm. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2130539706"
],
"abstract": [
"For an error-correcting code and a distance bound, the list decoding problem is to compute all the codewords within a given distance to a received message. The bounded distance decoding problem is to find one codeword if there is at least one codeword within the given distance, or to output the empty set if there is not. Obviously the bounded distance decoding problem is not as hard as the list decoding problem. For a Reed-Solomon code [n, k] sup q , a simple counting argument shows that for any integer 0 0. We show that the discrete logarithm problem over F sub qh can be efficiently reduced by a randomized algorithm to the bounded distance decoding problem of the Reed-Solomon code [q, g - h] sub q with radius q - g. These results show that the decoding problems for the Reed-Solomon code are at least as hard as the discrete logarithm problem over finite fields. The main tools to obtain these results are an interesting connection between the problem of list-decoding of Reed-Solomon code and the problem of discrete logarithm over finite fields, and a generalization of Katz's theorem on representations of elements in an extension finite field by products of distinct linear factors."
]
} |
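To make the unique-decoding threshold mentioned in the related-work text above concrete (a standard fact recalled here for reference): an [n, k] Reed-Solomon code is MDS, so its minimum distance is

    d = n - k + 1,

and unambiguous decoding is well defined whenever the number of errors is at most \lfloor (d-1)/2 \rfloor = \lfloor (n-k)/2 \rfloor; within this radius the Hamming ball centered at the received word contains at most one codeword.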
cs0509065 | 2952811798 | For generalized Reed-Solomon codes, it has been proved [GuruswamiVa05] that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code -- a property that practical codes do not usually possess. In this paper, we first present a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Schmidt and Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector @math for Reed-Solomon @math , @math , cannot be a deep hole, whenever @math is a polynomial of degree @math for @math . | The question of the decodability of Reed-Solomon codes has recently attracted attention, due to discoveries about the relationship between decoding Reed-Solomon codes and some number-theoretic problems. Allowing exponential alphabets, Guruswami and Vardy proved that the maximum likelihood decoding is NP-complete. They essentially showed that deciding deep holes is co-NP-complete. When the evaluation set is precisely the whole field or @math , an NP-completeness result is hard to obtain; Cheng and Wan @cite_1 managed to prove that the decoding problem of Reed-Solomon codes at a certain radius is at least as hard as the discrete logarithm problem over finite fields. In this paper, we wish to establish an additional connection between decoding of standard Reed-Solomon codes and a classical number-theoretic problem -- that of determining the number of rational points on an algebraic hypersurface. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1565759886"
],
"abstract": [
"For generalized Reed-Solomon codes, it has been proved [7] that the problem of determining if a received word is a deep hole is co-NP-complete. The reduction relies on the fact that the evaluation set of the code can be exponential in the length of the code - a property that practical codes do not usually possess. In this paper, we first present a much simpler proof of the same result. We then consider the problem for standard Reed-Solomon codes, i.e. the evaluation set consists of all the nonzero elements in the field. We reduce the problem of identifying deep holes to deciding whether an absolutely irreducible hypersurface over a finite field contains a rational point whose coordinates are pairwise distinct and nonzero. By applying Cafure-Matera estimation of rational points on algebraic varieties, we prove that the received vector (f(α))α∈Fpfor the Reed-Solomon [p - 1, k]p, k < p1 4-Ɛ, cannot be a deep hole, whenever f(x) is a polynomial of degree k + d for 1 ≤ d ≤ p3 13-Ɛ."
]
} |
cs0508009 | 1688492802 | We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit a high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that the number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope our study will have a great impact on realistic modeling of network usage and mobility patterns in wireless networks. | Driven by the growing popularity of wireless LANs in recent years, there has been increasing interest in studying their usage. Several previous works @cite_8 , @cite_15 , @cite_5 have provided extensive studies of wireless network usage statistics and made their traces available to the research community. Our work builds upon these findings and traces. | {
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_8"
],
"mid": [
"2160494326",
"1991407578",
"2167141469"
],
"abstract": [
"Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components.Several studies have recently been performed on wire-less university campus networks and public networks. In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics.We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws.",
"A user located in a congested area of a wireless LAN may benefit by moving to a less-crowded area and using a less-loaded access point. This idea has gained attention from researchers in recent literature [A. Balachandran, P. Bahl, G. Voelker, Hot-spot congestion relief in public-area wireless networks, in: IEEE WMCSA, Callicoon, NY, June 2002; M. Satyanarayanan, Pervasive computing: visions and challenges, IEEE Personal Communications 8 (4) (2001) 10-17]. However, its effectiveness and stability are questionable. Each user selects the access point that offers the optimal trade-off between load and distance to be traveled. Since users are selfish, a user's selection may adversely impact other users, in turn motivating them to change their selections. Also, future user arrivals and exits may invalidate current selections. This paper presents the first game-theoretic analysis of this idea. We model access point selection as a game, characterize the Nash equilibria of the system and examine distributed myopic selections that naturally mimic selfish users. We analytically and empirically assess the impact of user diversity and dynamic exit patterns on system behavior. The paper contributes to a deeper understanding of the costs, benefits and stability of such a solution in various usage scenarios, which is an essential pre-requisite for real-world deployment.",
"Many studies on measurement and characterization of wireless LANs (WLANs) have been performed recently. Most of these measurements have been conducted from the wired portion of the network based on wired monitoring (e.g. sniffer at some wired point) or SNMP statistics. More recently, wireless monitoring, the traffic measurement from a wireless vantage point, is also widely adopted in both wireless research and commercial WLAN management product development. Wireless monitoring technique can provide detailed PHY MAC information on wireless medium. For the network diagnosis purpose (e.g. anomaly detection and security monitoring) such detailed wireless information is more useful than the information provided by SNMP or wired monitoring. In this paper we have explored various issues in implementing the wireless monitoring system for an IEEE 802.11 based wireless network. We identify the pitfalls that such system needs to be aware of, and then provide feasible solutions to avoid those pitfalls. We implement an actual wireless monitoring system and demonstrate its effectiveness by characterizing a typical computer science department WLAN traffic. Our characterization reveals rich information about the PHY MAC layers of the IEEE 802.11 protocol such as the typical traffic mix of different frame types, their temporal characteristics and correlation with the user activities. Moreover, we identify various anomalies in protocol and security of the IEEE 802.11 MAC. Regarding the security, we identify malicious usages of WLAN, such as email worm and network scanning. Our results also show excessive retransmissions of some management frame types reducing the useful throughput of the wireless network."
]
} |
cs0508009 | 1688492802 | We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit a high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that the number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope our study will have a great impact on realistic modeling of network usage and mobility patterns in wireless networks. | With these traces available, more recent research efforts focus on modeling user behavior in wireless LANs. In @cite_11 the authors propose models to describe traffic flows generated by wireless LAN users, which is a different focus from that of this paper. In the first part of this paper we focus more on identifying metrics that capture important characteristics of user association behavior. We understand user associations as coarse-grained mobility at per-access-point granularity. A similar methodology has been used in @cite_8 and @cite_3 . In @cite_3 the authors propose a mobility model based on the association session length distribution and AP preferences. However, other important metrics are not included, such as user on-off behavior and repetitive patterns. We add these metrics to provide a more complete description of user behavior in wireless networks. | {
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_11"
],
"mid": [
"2160494326",
"2002169759",
"2137688035"
],
"abstract": [
"Wireless local-area networks are becoming increasingly popular. They are commonplace on university campuses and inside corporations, and they have started to appear in public areas [17]. It is thus becoming increasingly important to understand user mobility patterns and network usage characteristics on wireless networks. Such an understanding would guide the design of applications geared toward mobile environments (e.g., pervasive computing applications), would help improve simulation tools by providing a more representative workload and better user mobility models, and could result in a more effective deployment of wireless network components.Several studies have recently been performed on wire-less university campus networks and public networks. In this paper, we complement previous research by presenting results from a four week trace collected in a large corporate environment. We study user mobility patterns and introduce new metrics to model user mobility. We also analyze user and load distribution across access points. We compare our results with those from previous studies to extract and explain several network usage and mobility characteristics.We find that average user transfer-rates follow a power law. Load is unevenly distributed across access points and is influenced more by which users are present than by the number of users. We model user mobility with persistence and prevalence. Persistence reflects session durations whereas prevalence reflects the frequency with which users visit various locations. We find that the probability distributions of both measures follow power laws.",
"In this paper, we analyze the mobility patterns of users of wireless hand-held PDAs in a campus wireless network using an eleven week trace of wireless network activity. Our study has two goals. First, we characterize the high-level mobility and access patterns of hand-held PDA users and compare these characteristics to previous workload mobility studies focused on laptop users. Second, we develop two wireless network topology models for use in wireless mobility studies: an evolutionary topology model based on user proximity and a campus waypoint model that serves as a trace-based complement to the random waypoint model. We use our evolutionary topology model as a case study for preliminary evaluation of three ad hoc routing algorithms on the network topologies created by the access and mobility patterns of users of modern wireless PDAs. Based upon the mobility characteristics of our trace-based campus waypoint model, we find that commonly parameterized synthetic mobility models have overly aggressive mobility characteristics for scenarios where user movement is limited to walking. Mobility characteristics based on realistic models can have significant implications for evaluating systems designed for mobility. When evaluated using our evolutionary topology model, for example, popular ad hoc routing protocols were very successful at adapting to user mobility, and user mobility was not a key factor in their performance.",
"Understanding user mobility is critical for simula- tions of mobile devices in a wireless network, but current mobility models often do not reflect real user movements. In this paper, we provide a foundation for such work by exploring mobility characteristics in traces of mobile users. We present a method to estimate the physical location of users from a large trace of mobile devices associating with access points in a wireless network. Using this method, we extracted tracks of always-on Wi-Fi devices from a 13-month trace. We discovered that the speed and pause time each follow a log-normal distribution and that the direction of movements closely reflects the direction of roads and walkways. Based on the extracted mobility characteristics, we developed a mobility model, focusing on movements among popular regions. Our validation shows that synthetic tracks match real tracks with a median relative error of 17 ."
]
} |
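The encounter metrics discussed in the related-work text above can be illustrated with a small sketch. It assumes that an encounter means two users being associated with the same access point during overlapping time intervals, and that association records are available as (user, ap, start, end) tuples; both the record format and the function name are hypothetical and not taken from the cited traces or tools.

    from collections import defaultdict
    from itertools import combinations

    def count_encounters(associations):
        # associations: iterable of (user, ap, start, end) tuples
        by_ap = defaultdict(list)
        for user, ap, start, end in associations:
            by_ap[ap].append((user, start, end))
        encounters = defaultdict(int)  # (user_a, user_b) -> number of encounters
        for sessions in by_ap.values():
            for (u1, s1, e1), (u2, s2, e2) in combinations(sessions, 2):
                # same AP and overlapping time intervals
                if u1 != u2 and s1 < e2 and s2 < e1:
                    encounters[tuple(sorted((u1, u2)))] += 1
        return encounters

    # Example with made-up records:
    logs = [("alice", "ap1", 0, 50), ("bob", "ap1", 30, 80), ("carol", "ap2", 10, 20)]
    print(dict(count_encounters(logs)))  # {('alice', 'bob'): 1}

A friendship-style index could then be obtained by normalizing these counts, for example by each user's total online time, although that step is omitted here.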
cs0508009 | 1688492802 | We conduct the most comprehensive study of WLAN traces to date. Measurements collected from four major university campuses are analyzed with the aim of developing fundamental understanding of realistic user behavior in wireless networks. Both individual user and inter-node (group) behaviors are investigated and two classes of metrics are devised to capture the underlying structure of such behaviors. For individual user behavior we observe distinct patterns in which most users are 'on' for a small fraction of the time, the number of access points visited is very small and the overall on-line user mobility is quite low. We clearly identify categories of heavy and light users. In general, users exhibit a high degree of similarity over days and weeks. For group behavior, we define metrics for encounter patterns and friendship. Surprisingly, we find that a user, on average, encounters less than 6% of the network user population within a month, and that encounter and friendship relations are highly asymmetric. We establish that the number of encounters follows a biPareto distribution, while friendship indexes follow an exponential distribution. We capture the encounter graph using a small world model, the characteristics of which reach steady state after only one day. We hope our study will have a great impact on realistic modeling of network usage and mobility patterns in wireless networks. | Recent research on protocol design in wireless networks usually utilizes synthetic, random mobility models for performance evaluation @cite_9 , such as the random waypoint model or the random walk model. Mobile nodes (MNs) in such synthetic models are always on and homogeneous in their behavior. Neither of these characteristics is observed in real wireless traces. We argue that, to better serve the purpose of testing new protocols, we need models that capture the on-off and heterogeneous behavior we observed in the traces. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2002169759"
],
"abstract": [
"In this paper, we analyze the mobility patterns of users of wireless hand-held PDAs in a campus wireless network using an eleven week trace of wireless network activity. Our study has two goals. First, we characterize the high-level mobility and access patterns of hand-held PDA users and compare these characteristics to previous workload mobility studies focused on laptop users. Second, we develop two wireless network topology models for use in wireless mobility studies: an evolutionary topology model based on user proximity and a campus waypoint model that serves as a trace-based complement to the random waypoint model. We use our evolutionary topology model as a case study for preliminary evaluation of three ad hoc routing algorithms on the network topologies created by the access and mobility patterns of users of modern wireless PDAs. Based upon the mobility characteristics of our trace-based campus waypoint model, we find that commonly parameterized synthetic mobility models have overly aggressive mobility characteristics for scenarios where user movement is limited to walking. Mobility characteristics based on realistic models can have significant implications for evaluating systems designed for mobility. When evaluated using our evolutionary topology model, for example, popular ad hoc routing protocols were very successful at adapting to user mobility, and user mobility was not a key factor in their performance."
]
} |
cs0508132 | 2950600983 | We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences. | The work presented in this paper is the natural continuation of the work we presented in @cite_33 , where we rely on prioritized default theories to express limited classes of preferences between trajectories---a strict subset of the preferences covered in this paper. This work is also influenced by other works on exploiting domain-specific knowledge in planning (e.g., @cite_50 @cite_37 @cite_15 ), in which domain-specific knowledge is expressed as a constraint on the trajectories achieving the goal, and hence is a hard constraint. In subsection , we discuss different approaches to planning with preferences which are directly related to our work. In Subsections -- we present works that are somewhat related to our work and can be used to develop alternative implementations for @math . | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_50",
"@cite_15"
],
"mid": [
"2109910161",
"2170377262",
"2134153324",
"2069057437"
],
"abstract": [
"Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.",
"Between 1998 and 2004, the planning community has seen vast progress in terms of the sizes of benchmark examples that domain-independent planners can tackle successfully. The key technique behind this progress is the use of heuristic functions based on relaxing the planning task at hand, where the relaxation is to assume that all delete lists are empty. The unprecedented success of such methods, in many commonly used benchmark examples, calls for an understanding of what classes of domains these methods are well suited for. In the investigation at hand, we derive a formal background to such an understanding. We perform a case study covering a range of 30 commonly used STRIPS and ADL benchmark domains, including all examples used in the first four international planning competitions. We prove connections between domain structure and local search topology – heuristic cost surface properties – under an idealized version of the heuristic functions used in modern planners. The idealized heuristic function is called h + , and differs from the practically used functions in that it returns the length of an optimal relaxed plan, which is NP-hard to compute. We identify several key characteristics of the topology under h + , concerning the existence non-existence of unrecognized dead ends, as well as the existence non-existence of constant upper bounds on the difficulty of escaping local minima and benches. These distinctions divide the (set of all) planning domains into a taxonomy of classes of varying h + topology. As it turns out, many of the 30 investigated domains lie in classes with a relatively easy topology. Most particularly, 12 of the domains lie in classes where FF’s search algorithm, provided with h + , is a polynomial solving mechanism. We also present results relating h + to its approximation as implemented in FF. The behavior regarding dead ends is provably the same. We summarize the results of an empirical investigation showing that, in many domains, the topological qualities of h + are largely inherited by the approximation. The overall investigation gives a rare example of a successful analysis of the connections between typical-case problem structure, and search performance. The theoretical investigation also gives hints on how the topological phenomena might be automatically recognizable by domain analysis techniques. We outline some preliminary steps we made into that direction.",
"A longstanding goal in planning research is the ability to generalize plans developed for some set of environments to a new but similar environment, with minimal or no replanning. Such generalization can both reduce planning time and allow us to tackle larger domains than the ones tractable for direct planning. In this paper, we present an approach to the generalization problem based on a new framework of relational Markov Decision Processes (RMDPs). An RMDP can model a set of similar environments by representing objects as instances of different classes. In order to generalize plans to multiple environments, we define an approximate value function specified in terms of classes of objects and, in a multiagent setting, by classes of agents. This class-based approximate value function is optimized relative to a sampled subset of environments, and computed using an efficient linear programming method. We prove that a polynomial number of sampled environments suffices to achieve performance close to the performance achievable when optimizing over the entire space. Our experimental results show that our method generalizes plans successfully to new, significantly larger, environments, with minimal loss of performance relative to environment-specific planning. We demonstrate our approach on a real strategic computer war game.",
"We propose a multiple source domain adaptation method, referred to as Domain Adaptation Machine (DAM), to learn a robust decision function (referred to as target classifier) for label prediction of patterns from the target domain by leveraging a set of pre-computed classifiers (referred to as auxiliary source classifiers) independently learned with the labeled patterns from multiple source domains. We introduce a new data-dependent regularizer based on smoothness assumption into Least-Squares SVM (LS-SVM), which enforces that the target classifier shares similar decision values with the auxiliary classifiers from relevant source domains on the unlabeled patterns of the target domain. In addition, we employ a sparsity regularizer to learn a sparse target classifier. Comprehensive experiments on the challenging TRECVID 2005 corpus demonstrate that DAM outperforms the existing multiple source domain adaptation methods for video concept detection in terms of effectiveness and efficiency."
]
} |
cs0508132 | 2950600983 | We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences. | A framework for planning with action costs using logic programming was introduced in @cite_3 . The focus of their proposal is to express certain classes of quantitative preferences. Each action is assigned an integer cost, and plans with the minimal cost are considered to be optimal. Costs can be either static or relative to the time step in which the action is executed. @cite_3 also presents the encoding of different preferences, such as the shortest plan and the cheapest plan. Our approach also emphasizes the use of logic programming, but differs in several aspects. Here, we develop a declarative language for preference representation. Our language can express the preferences discussed in @cite_3 , but it is more high-level and flexible than the action costs approach. The approach in @cite_3 also does not allow the use of fully general dynamic preferences. On the other hand, while we only consider planning with complete information, @cite_3 deals with planning in the presence of incomplete information and non-deterministic actions. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1633032608"
],
"abstract": [
"Recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language κc, which extends the declarative planning language κ by action costs. κc provides the notion of admissible and optimal plans, which are plans whose overall action costs are within a given limit resp. minimum over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows for expressing some nontrivial planning tasks in a declarative way. Furthermore, it can be utilized for representing planning problems under other optimality criteria, such as computing \"shortest\" plans (with the least number of steps), and refinement combinations of cheapest and fastest plans. We study complexity aspects of the language κc and provide a transformation to logic programs, such that planning problems are solved via answer set programming. Furthermore, we report experimental results on selected problems. Our experience is encouraging that answer set planning may be a valuable approach to expressive planning systems in which intricate planning problems can be naturally specified and solved."
]
} |
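To spell out the cost-based notion of optimality described in the preceding related-work text (a generic formalization, not necessarily the exact definition used in the cited work): if each action a executed at time step i is assigned an integer cost c(a, i), with a static cost being the special case where c does not depend on i, then the cost of a plan p = a_1, ..., a_n is

    cost(p) = \sum_{i=1}^{n} c(a_i, i),

and a plan achieving the goal is optimal when it minimizes cost(p); the shortest plan is recovered by taking c(a, i) = 1 for every action.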
cs0508132 | 2950600983 | We present a declarative language, PP, for the high-level specification of preferences between possible solutions (or trajectories) of a planning problem. This novel language allows users to elegantly express non-trivial, multi-dimensional preferences and priorities over such preferences. The semantics of PP allows the identification of most preferred trajectories for a given goal. We also provide an answer set programming implementation of planning problems with PP preferences. | Considerable effort has been invested in introducing preferences in logic programming. In @cite_27 preferences are expressed at the level of atoms and used for parsing disambiguation in logic grammars. Rule-level preferences have been used in various proposals for selection of preferred answer sets in answer set programming @cite_23 @cite_18 @cite_36 @cite_28 . Some of the existing answer set solvers include limited forms of (numerical) optimization capabilities. smodels @cite_26 offers the ability to associate weights to atoms and to compute answer sets that minimize or maximize the total weight. DLV @cite_47 provides the notion of weak constraints, i.e., constraints of the form :~ l_1, ..., l_k. [w : l], where w is a numeric penalty for violating the constraint, and l is a priority level. The total cost of violating constraints at each priority level is computed, and answer sets are compared to minimize total penalty (according to a lexicographic ordering based on priority levels). | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_36",
"@cite_28",
"@cite_27",
"@cite_23",
"@cite_47"
],
"mid": [
"2124627636",
"1551250655",
"2174235632",
"2011124182",
"1540263588",
"1986318362",
"2004414305"
],
"abstract": [
"Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.",
"In this dissertation we show how conflict clause learning, a technique that has been very useful in improving the efficiency of Boolean logic satisfiability search, can be adapted to speed up dramatically the search for models of answer set programs. Answer set programming is a knowledge representation paradigm related to the areas of logic programming and nonmonotonic reasoning. Many of the applications of answer set programming come from the areas of artificial intelligence-related diagnosis and planning. The problem of finding an answer set for a normal or extended logic program is NP-hard. Current complete answer set solvers are patterned after the Davis-Putnam-Loveland-Logemann ( DPLL) algorithm for solving Boolean satisfiability (SAT) problems, but are adapted to the nonmonotonic semantics of answer set programming. Recent SAT solvers include improvements to the DPLL algorithm. Conflict clause learning has been particularly effective in this regard. A conflict clause represents a backtracking solver's analysis of why a conflict occurred. This analysis can be used to further prune the search space and to direct the search heuristic. The use of such clauses has improved significantly the efficiency of satisfiability solvers over the past few years, especially on structured problems arising from applications. In this dissertation we describe how we have adapted conflict clause techniques for use in the answer set solver Smodels. We experimentally compare the performance of the resulting program, Smodelscc, to that of the original Smodels program. Our tests show dramatic speedups for Smodelscc on a wide range of problems. We also compare the performance of Smodelscc with that of two other recent answer set solvers, ASSAT and Cmodels-2. ASSAT and Cmodels-2 directly call Boolean satisfiability solvers in order to search for answer sets. On so-called “non-tight” problems, Smodelscc showed substantially better performance than these solvers. The performance advantage that Smodels cc enjoys on non-tight problems is due to the unfounded set test that Smodelscc inherits from the Smodels solver, and the fact that this test is executed frequently throughout the program's search for answer sets.",
"We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.",
"A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules to a small subclass of the language called basic constraint rules is devised. An implementation of the language, the SMODELS system, is developed based on this embedding. It uses a two level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.",
"We investigate a generalization of weight-constraint programs with stable semantics, as implemented in the ASP solver smodels. Our programs admit atoms of the form ( X, F ) where X is a finite set of propositional atoms and ( F ) is an arbitrary family of subsets of X. We call such atoms set constaints and show that the concept of stable model can be generalized to programs admitting set constraints both in the bodies and the heads of clauses. Natural tools to investigate the fixpoint semantics for such programs are nondeterministic operators in complete lattices. We prove two fixpoint theorems for such operators.",
"The addition of preferences to normal logic programs is a convenient way to represent many aspects of default reasoning. If the derivation of an atom A1 is preferred to that of an atom A2, a preference rule can be defined so that A2 is derived only if A1 is not. Although such situations can be modelled directly using default negation, it is often easier to define preference rules than it is to add negation to the bodies of rules. As first noted by [Proc. Internat. Conf. on Logic Programming, 1995, pp. 731-746], for certain grammars, it may be easier to disambiguate parses using preferences than by enforcing disambiguation in the grammar rules themselves. In this paper we define a general fixed-point semantics for preference logic programs based on an embedding into the well-founded semantics, and discuss its features and relation to previous preference logic semantics. We then study how preference logic grammars are used in data standardization, the commercially important process of extracting useful information from poorly structured textual data. This process includes correcting misspellings and truncations that occur in data, extraction of relevant information via parsing, and correcting inconsistencies in the extracted information. The declarativity of Prolog offers natural advantages for data standardization, and a commercial standardizer has been implemented using Prolog. However, we show that the use of preference logic grammars allow construction of a much more powerful and declarative commercial standardizer, and discuss in detail how the use of the non-monotonic construct of preferences leads to improved commercial software.",
"We propose a new translation from normal logic programs with constraints under the answer set semantics to propositional logic. Given a normal logic program, we show that by adding, for each loop in the program, a corresponding loop formula to the program's completion, we obtain a one-to-one correspondence between the answer sets of the program and the models of the resulting propositional theory. In the worst case, there may be an exponential number of loops in a logic program. To address this problem, we propose an approach that adds loop formulas a few at a time, selectively. Based on these results, we implement a system called ASSAT(X), depending on the SAT solver X used, for computing one answer set of a normal logic program with constraints. We test the system on a variety of benchmarks including the graph coloring, the blocks world planning, and Hamiltonian Circuit domains. Our experimental results show that in these domains, for the task of generating one answer set of a normal logic program, our system has a clear edge over the state-of-art answer set programming systems Smodels and DLV."
]
} |
math0506336 | 2952239370 | The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem. | More than thirty years later, Crowe-Zweibel-Rosenbloom proved Eq. ) for @math on @math @cite_36 . They expressed a given continuous supermodular function @math on @math that vanishes on the boundary as the distribution function of a Borel measure @math , @math layer-cake representation | {
"cite_N": [
"@cite_36"
],
"mid": [
"2886824112"
],
"abstract": [
"We study the asymptotic behavior of the persistent homology of i.i.d. samples from a @math -Ahlfors regular measure --- one that satisfies uniform bounds of the form for some @math all @math in the support of @math and all sufficiently small @math Our main result is that if @math are sampled from a @math -Ahlfors regular measure on @math and @math denotes the @math -weight of the minimal spanning tree on @math [E_ (x_1, ,x_n )= e T (x_1, ,x_n ) |e|^ ] then [E_ (x_1, ,x_n ) n^ d- d ] with high probability as @math We also prove theorems about the asymptotic behavior of weighted sums defined in terms of higher-dimensional persistent homology. As an application, we exhibit hypotheses under which the fractal dimension of a measure can be computed from the persistent homology of i.i.d. samples from that space, in a manner similar to that proposed in the experimental work of (2018)."
]
} |
math0506336 | 2952239370 | The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem. | Carlier viewed maximizing the left hand side of Eq. ) for a given right hand side as an optimal transportation problem where the distribution functions of @math define mass distributions @math on @math , the joint distribution defines a transportation plan, and the functional represents the cost after multiplying by a minus sign @cite_35 . He showed that the functional achieves its maximum (i.e., the cost is minimized) when the joint distribution is concentrated on a curve in @math that is nondecreasing in all coordinate directions, and obtained Eq. ) as a corollary. His proof takes advantage of the dual problem of minimizing @math over @math , subject to the constraint that @math for all @math . | {
"cite_N": [
"@cite_35"
],
"mid": [
"2283275366"
],
"abstract": [
"The basic problem of optimal transportation consists in minimizing the expected costs E[c(X1,X2)] by varying the joint distribution (X1,X2) where the marginal distributions of the random variables X1 and X2 are fixed. Inspired by recent applications in mathematical finance and connections with the peacock problem, we study this problem under the additional condition that (Xi)i=1,2 is a martingale, that is, E[X2|X1]=X1. We establish a variational principle for this problem which enables us to determine optimal martingale transport plans for specific cost functions. In particular, we identify a martingale coupling that resembles the classic monotone quantile coupling in several respects. In analogy with the celebrated theorem of Brenier, the following behavior can be observed: If the initial distribution is continuous, then this “monotone martingale” is supported by the graphs of two functions T1,T2:R→R ."
]
} |
math0506336 | 2952239370 | The rearrangement inequalities of Hardy-Littlewood and Riesz say that certain integrals involving products of two or three functions increase under symmetric decreasing rearrangement. It is known that these inequalities extend to integrands of the form F(u_1,..., u_m) where F is supermodular; in particular, they hold when F has nonnegative mixed second derivatives. This paper concerns the regularity assumptions on F and the equality cases. It is shown here that extended Hardy-Littlewood and Riesz inequalities are valid for supermodular integrands that are just Borel measurable. Under some nondegeneracy conditions, all equality cases are equivalent to radially decreasing functions under transformations that leave the functionals invariant (i.e., measure-preserving maps for the Hardy-Littlewood inequality, translations for the Riesz inequality). The proofs rely on monotone changes of variables in the spirit of Sklar's theorem. | The Riesz inequality in Eq. ) is non-trivial even when @math is just a product of two functions. Ahlfors introduced two-point rearrangements to treat this case on @math @cite_22 , Baernstein-Taylor proved the corresponding result on @math @cite_19 , and Beckner noted that the proof remains valid on @math and @math @cite_9 . When @math is a product of @math functions, Eq. ) has applications to spectral invariants of heat kernels via the Trotter product formula @cite_15 . This case was settled by Friedberg-Luttinger @cite_13 , Burchard-Schmuckenschl\"ager @cite_25 , and by Morpurgo, who proved Eq. ) more generally for integrands of the form @math with @math convex (Theorem 3.13 of @cite_8 ). In the above situations, equality cases have been determined @cite_26 @cite_21 @cite_25 @cite_8 . Almgren-Lieb used the technique of Crowe-Zweibel-Rosenbloom to prove Eq. ) for @math @cite_5 . The special case where @math for some convex function @math was identified by Baernstein as a `master inequality' from which many classical geometric inequalities can be derived quickly @cite_32 . Eq. ) for continuous supermodular integrands with @math is due to Draghici @cite_0 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_25"
],
"mid": [
"2888452402",
"2609098951",
"2037982184",
"2963747906",
"1975625541",
"2162337698",
"1645016350",
"1999441752",
"2963587665",
"2964153620",
"2052974900",
"1993000235"
],
"abstract": [
"A version of the Riesz-Sobolev convolution inequality is formulated and proved for arbitrary compact connected Abelian groups. Maximizers are characterized and a quantitative stability theorem is proved, under natural hypotheses. A corresponding stability theorem for sets whose sumset has nearly minimal measure is also proved, sharpening recent results of other authors. For the special case of the group @math , a continuous deformation of sets is developed, under which an appropriately scaled Riesz-Sobolev functional is shown to be nondecreasing.",
"The full' edge isoperimetric inequality for the discrete cube (due to Harper, Bernstein, Lindsay and Hart) specifies the minimum size of the edge boundary @math of a set @math , as a function of @math . A weaker (but more widely-used) lower bound is @math , where equality holds iff @math is a subcube. In 2011, the first author obtained a sharp stability' version of the latter result, proving that if @math , then there exists a subcube @math such that @math . The weak' version of the edge isoperimetric inequality has the following well-known generalization for the @math -biased' measure @math on the discrete cube: if @math , or if @math and @math is monotone increasing, then @math . In this paper, we prove a sharp stability version of the latter result, which generalizes the aforementioned result of the first author. Namely, we prove that if @math , then there exists a subcube @math such that @math , where @math . This result is a central component in recent work of the authors proving sharp stability versions of a number of Erd o s-Ko-Rado type theorems in extremal combinatorics, including the seminal complete intersection theorem' of Ahlswede and Khachatrian. In addition, we prove a biased-measure analogue of the full' edge isoperimetric inequality, for monotone increasing sets, and we observe that such an analogue does not hold for arbitrary sets, hence answering a question of Kalai. We use this result to give a new proof of the full' edge isoperimetric inequality, one relying on the Kruskal-Katona theorem.",
"In a celebrated work by Hoeffding [J. Amer. Statist. Assoc. 58 (1963) 13-30], several inequalities for tail probabilities of sums M n = X 1 + ... + X n of bounded independent random variables X j were proved. These inequalities had a considerable impact on the development of probability and statistics, and remained unimproved until 1995 when Talagrand [Inst. Hautes Etudes Sci. Publ. Math. 81 (1995a) 73-205] inserted certain missing factors in the bounds of two theorems. By similar factors, a third theorem was refined by Pinelis [Progress in Probability 43 (1998) 257-314] and refined (and extended) by me. In this article, I introduce a new type of inequality. Namely, I show that P M n ≥ x ≤ cP S n ≥ x , where c is an absolute constant and S n = e 1 + ... + e n is a sum of independent identically distributed Bernoulli random variables (a random variable is called Bernoulli if it assumes at most two values). The inequality holds for those x ∈ Ρ where the survival function x → P S n ≥ x has a jump down. For the remaining x the inequality still holds provided that the function between the adjacent jump points is interpolated linearly or log-linearly. If it is necessary, to estimate P S n ≥ x special bounds can be used for binomial probabilities. The results extend to martingales with bounded differences. It is apparent that Theorem 1.1 of this article is the most important. The inequalities have applications to measure concentration, leading to results of the type where, up to an absolute constant, the measure concentration is dominated by the concentration in a simplest appropriate model, such results will be considered elsewhere.",
"By using optimal mass transportation and a quantitative Holder inequality, we provide estimates for the Borell–Brascamp–Lieb deficit on complete Riemannian manifolds. Accordingly, equality cases in Borell–Brascamp–Lieb inequalities (including Brunn–Minkowski and Prekopa–Leindler inequalities) are characterized in terms of the optimal transport map between suitable marginal probability measures. These results provide several qualitative applications both in the flat and non-flat frameworks. In particular, by using Caffarelli's regularity result for the Monge–Ampere equation, we give a new proof of Dubuc's characterization of the equality in Borell–Brascamp–Lieb inequalities in the Euclidean setting. When the n-dimensional Riemannian manifold has Ricci curvature Ric(M) ≥ (n-1)k for some k ⋲ ℝ, it turns out that equality in the Borell–Brascamp–Lieb inequality is expected only when a particular region of the manifold between the marginal supports has constant sectional curvature k. A precise characterization is provided for the equality in the Lott–Sturm–Villani-type distorted Brunn–Minkowski inequality on Riemannian manifolds. Related results for (not necessarily reversible) Finsler manifolds are also presented.",
"Let (λ,x) be an eigenpair of the matrix A of order n and let (µ,u) be a Ritz pair of A with respect to a subspace K. Saad has derived a simple priori error bound for sin ∠(x, u) in terms of sin ∠(x, K) for A Hermitian. Similar to Saad's result, Stewart has got an equally simple inequality for A non-Hermitian. In this paper, let (θ, w) be a harmonic Ritz pair from a subspace K, a similar priori error bound for sin ∠(x, w) is established in terms of sin ∠(x, K).",
"We study bounds on the exit time of Brownian motion from a set in terms of its size and shape, and the relation of such bounds with isoperimetric inequalities. The first result is an upper bound for the distribution function of the exit time from a subset of a sphere or hyperbolic space of constant curvature in terms of the exit time from a disc of the same volume. This amounts to a rearrangement inequality for the Dirichlet heat kernel. To connect this inequality with the classical isoperimetric inequality, we derive a formula for the perimeter of a set in terms of the heat flow over the boundary. An auxiliary result generalizes Riesz' rearrangement inequality to multiple integrals.",
"Let f be a polynomial of degree n in ZZ[x_1,..,x_n], typically reducible but squarefree. From the hypersurface f=0 one may construct a number of other subschemes Y by extracting prime components, taking intersections, taking unions, and iterating this procedure. We prove that if the number of solutions to f=0 in ^n is not a multiple of p, then all these intersections in ^n_ just described are reduced. (If this holds for infinitely many p, then it holds over as well.) More specifically, there is a_Frobenius splitting_ on ^n_ compatibly splitting all these subschemes Y . We determine when a Gr \"obner degeneration f_0=0 of such a hypersurface f=0 is again such a hypersurface. Under this condition, we prove that compatibly split subschemes degenerate to compatibly split subschemes, and stay reduced. Our results are strongest in the case that f's lexicographically first term is i=1 ^n x_i. Then for all large p, there is a Frobenius splitting that compatibly splits f's hypersurface and all the associated Y . The Gr \"obner degeneration Y' of each such Y is a reduced union of coordinate spaces (a Stanley-Reisner scheme), and we give a result to help compute its Gr \"obner basis. We exhibit an f whose associated Y include Fulton's matrix Schubert varieties, and recover much more easily the Gr \"obner basis theorem of [Knutson-Miller '05]. We show that in Bott-Samelson coordinates on an opposite Bruhat cell X^v_ in G B, the f defining the complement of the big cell also has initial term i=1 ^n x_i, and hence the Kazhdan-Lusztig subvarieties X^v_ w degenerate to Stanley-Reisner schemes. This recovers, in a weak form, the main result of [Knutson '08].",
"where de denotes normalized surface measure, V is the conformal gradient and q = (2n) (n 2). A modern folklore theorem is that by taking the infinitedimensional limit of this inequality, one obtains the Gross logarithmic Sobolev inequality for Gaussian measure, which determines Nelson's hypercontractive estimates for the Hermite semigroup (see [8]). One observes using conformal invariance that the above inequality is equivalent to the sharp Sobolev inequality on Rn for which boundedness and extremal functions can be easily calculated using dilation invariance and geometric symmetrization. The roots here go back to Hardy and Littlewood. The advantage of casting the problem on the sphere is that the role of the constants is evident, and one is led immediately to the conjecture that this inequality should hold whenever possible (for example, 2 < q < 0o if n = 2). This is in fact true and will be demonstrated in Section 2. A clear question at this point is \"What is the situation in dimension 2?\" Two important arguments ([25], [26], [27]) dealt with this issue, both motivated by geometric variational problems. Because q goes to infinity for dimension 2, the appropriate function space is the exponential class. Responding in part",
"We study barycenters in the space of probability measures on a Riemannian manifold, equipped with the Wasserstein metric. Under reasonable assumptions, we establish absolute continuity of the barycenter of general measures Ω∈P(P(M))Ω∈P(P(M)) on Wasserstein space, extending on one hand, results in the Euclidean case (for barycenters between finitely many measures) of Agueh and Carlier [1] to the Riemannian setting, and on the other hand, results in the Riemannian case of Cordero-Erausquin, McCann, Schmuckenschlager [12] for barycenters between two measures to the multi-marginal setting. Our work also extends these results to the case where Ω is not finitely supported. As applications, we prove versions of Jensen's inequality on Wasserstein space and a generalized Brunn–Minkowski inequality for a random measurable set on a Riemannian manifold.",
"We give an overview of results on shape optimization for low eigenvalues of the Laplacian on bounded planar domains with Neumann and Steklov boundary conditions. These results share a common feature: they are proved using methods of complex analysis. In particular, we present modernized proofs of the classical inequalities due to Szego and Weinstock for the first nonzero Neumann and Steklov eigenvalues. We also extend the inequality for the second nonzero Neumann eigenvalue, obtained recently by Nadirashvili and the authors, to nonhomogeneous membranes with log-subharmonic densities. In the homogeneous case, we show that this inequality is strict, which implies that the maximum of the second nonzero Neumann eigenvalue is not attained in the class of simply connected membranes of a given mass. The same is true for the second nonzero Steklov eigenvalue, as follows from our results on the Hersch–Payne–Schiffer inequalities. Copyright © 2009 John Wiley & Sons, Ltd.",
"where A = P1 + P2 + * .. + Pn Naturally, this inequality contains the classical Poisson limit law (Just set pi = A n and note that the right side simplifies to 2A2 n), but it also achieves a great deal more. In particular, Le Cam's inequality identifies the sum of the squares of the pi as a quantity governing the quality of the Poisson approximation. Le Cam's inequality also seems to be one of those facts that repeatedly calls to be proved-and improved. Almost before the ink was dry on Le Cam's 1960 paper, an elementary proof was given by Hodges and Le Cam [18]. This proof was followed by numerous generalizations and refinements including contributions by Kerstan [19], Franken [15], Vervatt [30], Galambos [17], Freedman [16], Serfling [24], and Chen [11, 12]. In fact, for raw simplicity it is hard to find a better proof of Le Cam's inequality than that given in the survey of Serfling [25]. One purpose of this note is to provide a proof of Le Cam's inequality using some basic facts from matrix analysis. This proof is simple, but simplicity is not its raison d'etre. It also serves as a concrete introduction to the semi-group method for approximation of probability distributions. This method was used in Le Cam [20], and it has been used again most recently by Deheuvels and Pfeifer [13] to provide impressively precise results. The semi-group method is elegant and powerful, but it faces tough competition, especially from the coupling method and the Chen-Stein method. The literature of these methods is reviewed, and it is shown how they also lead to proofs of Le Cam's inequality.",
"The best possible constant Dmt in the inequality | ∬ dx dyf(x)g(x —y) h(y)| |, 1 p + llq+ 1 t = 2, is determined; the equality is reached if , g, and h are appropriate Gaussians. The same is shown to be true for the converse inequality (0 < p, q < 1, t < 0), in which case the inequality is reversed. Furthermore, an analogous property is proved for an integral of k functions over n variables, each function depending on a linear combination of the n variables; some of the functions may be taken to be fixed Gaussians. Two applications are given, one of which is a pr∞f of Nelson’s hypercontractive inequality."
]
} |
cs0506002 | 1589766768 | We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer to peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery, that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of the Emulab, a large scale P2P network emulation testbed. | Schema-matching systems. Automated techniques for schema matching (e.g. CUPID @cite_1 , @cite_17 @cite_13 ) are able to output elementary schema-level associations by exploiting linguistic features, context-dependent type matching, similarity functions, etc. These associations could constitute the input of our rule inference algorithm if the user does not provide the arrows. | {
"cite_N": [
"@cite_13",
"@cite_1",
"@cite_17"
],
"mid": [
"2139135093",
"1588213250",
"2606149788"
],
"abstract": [
"Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems.",
"The purely manual specification of semantic correspondences between schemas is almost infeasible for very large schemas or when many different schemas have to be matched. Hence, solving such large-scale match tasks asks for automatic or semiautomatic schema matching approaches. Large-scale matching needs especially to be supported for XML schemas and different kinds of ontologies due to their increasing use and size, e.g., in e-business and web and life science applications. Unfortunately, correctly and efficiently matching large schemas and ontologies are very challenging, and most previous match systems have only addressed small match tasks. We provide an overview about recently proposed approaches to achieve high match quality or and high efficiency for large-scale matching. In addition to describing some recent matchers utilizing instance and usage data, we cover approaches on early pruning of the search space, divide and conquer strategies, parallel matching, tuning matcher combinations, the reuse of previous match results, and holistic schema matching. We also provide a brief comparison of selected match tools.",
"Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the deformable spatial pyramid or the proposal flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class."
]
} |
cs0506002 | 1589766768 | We study a collection of heterogeneous XML databases maintaining similar and related information, exchanging data via a peer to peer overlay network. In this setting, a mediated global schema is unrealistic. Yet, users/applications wish to query the databases via one peer using its schema. We have recently developed HepToX, a P2P Heterogeneous XML database system. A key idea is that whenever a peer enters the system, it establishes an acquaintance with a small number of peer databases, possibly with different schema. The peer administrator provides correspondences between the local schema and the acquaintance schema using an informal and intuitive notation of arrows and boxes. We develop a novel algorithm that infers a set of precise mapping rules between the schemas from these visual annotations. We pin down a semantics of query translation given such mapping rules, and present a novel query translation algorithm for a simple but expressive fragment of XQuery, that employs the mapping rules in either direction. We show the translation algorithm is correct. Finally, we demonstrate the utility and scalability of our ideas and algorithms with a detailed set of experiments on top of the Emulab, a large scale P2P network emulation testbed. | P2P systems with non-conventional lookups. Popular P2P networks, e.g. Kazaa, Gnutella, advertise simple lookup queries on file names. The idea of building a full-fledged P2P DBMS is being considered in many works. Internet-scale database queries and functionalities @cite_19 as well as approximate range queries in P2P @cite_18 and XPath queries in small communities of peers @cite_21 have been extensively dealt with. None of these works deals with reconciling schema heterogeneity. @cite_21 relies on a DHT-based network to address simple XPath queries, while @cite_30 realizes IR-style queries in an efficient P2P relational database. | {
"cite_N": [
"@cite_30",
"@cite_19",
"@cite_18",
"@cite_21"
],
"mid": [
"2098423524",
"1558940048",
"2162031302",
"2133843880"
],
"abstract": [
"We address the problem of querying XML data over a P2P network. In P2P networks, the allowed kinds of queries are usually exact-match queries over file names. We discuss the extensions needed to deal with XML data and XPath queries. A single peer can hold a whole document or a partial complete fragment of the latter. Each XML fragment document is identified by a distinct path expression, which is encoded in a distributed hash table. Our framework differs from content-based routing mechanisms, biased towards finding the most relevant peers holding the data. We perform fragments placement and enable fragments lookup by solely exploiting few path expressions stored on each peer. By taking advantage of quasi-zero replication of global catalogs, our system supports fast full and partial XPath querying. To this purpose, we have extended the Chord simulator and performed an experimental evaluation of our approach.",
"In this paper, we address the problem of designing a scalable, accurate query processor for peer-to-peer filesharing and similar distributed keyword search systems. Using a globally-distributed monitoring infrastructure, we perform an extensive study of the Gnutella filesharing network, characterizing its topology, data and query workloads. We observe that Gnutella's query processing approach performs well for popular content, but quite poorly for rare items with few replicas. We then consider an alternate approach based on Distributed Hash Tables (DHTs). We describe our implementation of PIERSearch, a DHT-based system, and propose a hybrid system where Gnutella is used to locate popular items, and PIERSearch for handling rare items. We develop an analytical model of the two approaches, and use it in concert with our Gnutella traces to study the trade-off between query recall and system overhead of the hybrid system. We evaluate a variety of localized schemes for identifying items that are rare and worth handling via the DHT. Lastly, we show in a live deployment on fifty nodes on two continents that it nicely complements Gnutella in its ability to handle rare items.",
"Peer-to-peer (P2P) systems show numerous advantages over centralized systems, such as load balancing, scalability, and fault tolerance, and they require certain functionality, such as search, repair, and message and data transfer. In particular, structured P2P networks perform an exact search in logarithmic time proportional to the number of peers. However, keyword similarity search in a structured P2P network remains a challenge. Similarity search for service discovery can significantly improve service management in a distributed environment. As services are often described informally in text form, keyword similarity search can find the required services or data items more reliably. This paper presents a fast similarity search algorithm for structured P2P systems. The new algorithm, called P2P fast similarity search (P2PFastSS), finds similar keys in any distributed hash table (DHT) using the edit distance metric, and is independent of the underlying P2P routing algorithm. Performance analysis shows that P2PFastSS carries out a similarity search in time proportional to the logarithm of the number of peers. Simulations on PlanetLab confirm these results and show that a similarity search with 34,000 peers performs in less than three seconds on average. Thus, P2PFastSS is suitable for similarity search in large-scale network infrastructures, such as service description matching in service discovery or searching for similar terms in P2P storage networks.",
"Peer-to-peer systems enable access to data spread over an extremely large number of machines. Most P2P systems support only simple lookup queries. However, many new applications, such as P2P photo sharing and massively multi-player games, would benefit greatly from support for multidimensional range queries. We show how such queries may be supported in a P2P system by adapting traditional spatial-database technologies with novel P2P routing networks and load-balancing algorithms. We show how to adapt two popular spatial-database solutions - kd-trees and space-filling curves - and experimentally compare their effectiveness."
]
} |
cs0506095 | 2950169286 | Recursive loops in a logic program present a challenging problem to the PLP framework. On the one hand, they loop forever so that the PLP backward-chaining inferences would never stop. On the other hand, they generate cyclic influences, which are disallowed in Bayesian networks. Therefore, in existing PLP approaches logic programs with recursive loops are considered to be problematic and thus are excluded. In this paper, we propose an approach that makes use of recursive loops to build a stationary dynamic Bayesian network. Our work stems from an observation that recursive loops in a logic program imply a time sequence and thus can be used to model a stationary dynamic Bayesian network without using explicit time parameters. We introduce a Bayesian knowledge base with logic clauses of the form @math , which naturally represents the knowledge that the @math s have direct influences on @math in the context @math under the type constraints @math . We then use the well-founded model of a logic program to define the direct influence relation and apply SLG-resolution to compute the space of random variables together with their parental connections. We introduce a novel notion of influence clauses, based on which a declarative semantics for a Bayesian knowledge base is established and algorithms for building a two-slice dynamic Bayesian network from a logic program are developed. | Third, most importantly PKB has no mechanism for handling cyclic influences. In PKB, cyclic influences are defined to be inconsistent (see Definition 9 of the paper @cite_10 ) and thus are excluded (PKB excludes cyclic influences by requiring its programs be acyclic). In BKB, however, cyclic influences are interpreted as feedbacks, thus implying a time sequence. This allows us to derive a stationary DBN from a logic program with recursive loops. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1889756448"
],
"abstract": [
"We present combined-case k-induction, a novel technique for verifying software programs. This technique draws on the strengths of the classical inductive-invariant method and a recent application of k-induction to program verification. In previous work, correctness of programs was established by separately proving a base case and inductive step. We present a new k-induction rule that takes an unstructured, reducible control flow graph (CFG), a natural loop occurring in the CFG, and a positive integer k, and constructs a single CFG in which the given loop is eliminated via an unwinding proportional to k. Recursively applying the proof rule eventually yields a loop-free CFG, which can be checked using SAT- SMT-based techniques. We state soundness of the rule, and investigate its theoretical properties. We then present two implementations of our technique: K-INDUCTOR, a verifier for C programs built on top of the CBMC model checker, and K-BOOGIE, an extension of the Boogie tool. Our experiments, using a large set of benchmarks, demonstrate that our k-induction technique frequently allows program verification to succeed using significantly weaker loop invariants than are required with the standard inductive invariant approach."
]
} |
cs0505011 | 1644495374 | As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use. | Moving application windows among various displays has been the focus of research in multiple ubiquitous computing environments. In i-Land, a room with an interactive electronic wall (DynaWall), computer-enhanced chairs, and an interactive table, three methods were introduced for moving application windows on the DynaWall. @cite_6 @cite_3 Two of these methods, shuffling and throwing, are implemented using gestures. Shuffling is done by drawing a quick left or right stroke above the title bar of a window. This will move the window a distance equal to the width of the window in the gestured direction. Throwing is done by making a short gesture backward, then a longer gesture forward. This will move the window a distance proportional to the ratio between the backward and forward movement. The throwing action requires practice because there is no clear indication of how far something will move prior to using it. The final method for moving windows in i-Land is taking. If a user's hand is placed on a window for approximately half a second, that window shrinks into the size of an icon. The next time the user touches any display, the window will grow behind the hand back to its original size. | {
"cite_N": [
"@cite_3",
"@cite_6"
],
"mid": [
"2153377083",
"1999303623"
],
"abstract": [
"We envision a nomadic model of interaction where the personal computer fits in your pocket. Such a computer is extremely limited in screen space. A technique is described for \"spilling\" the display of a hand held computer onto a much larger table top display surface. Because our model of nomadic computing frequently involves the use of untrusted display services we restrict interactive input to the hand held. Navigation techniques such as scrolling or turning the display can be expressed through the table top. The orientation and position of the hand held on the table top is detected using three conductive feet that appear to the touch table like three finger touches. An algorithm is given for detecting the three touch positions from the table's sensing mechanism.",
"A problem of matching gestures, where there are one or few samples per class, is considered in this paper. The proposed approach shows that much better results are achieved if the distance between the pattern of frame-wise distances of two gesture sequences with a third (anchor) sequence from the modelbase is considered. Such a measure is called as conditional distance and these distance pattern are referred to as \"warp vectors\". If these warp vectors are similar, then so are the sequences; if not, they are dissimilar. At the algorithmic core, there are two dynamic time warping processes, one to compute the warp vectors with the anchor sequences and the other to compare these warp vectors. In order to reduce the complexity a speedup strategy is proposed by pre-selecting \"good\" anchor sequences. Conditional distance is used for individual and sentence level gesture matching. Both single and multiple subject datasets are used. Experiments show improved performance above 82 spanning 179 classes. HighlightsWe propose a new distance measure called conditional distance between two gestures sequences when we have only one or a few samples per gesture class.Conditional distance is the distance between query and model gesture sequences in the presence of a third (anchor) gesture sequence.We propose speedup strategy for computing conditional distances by pre-selecting the anchor.We also propose a condition distance based simultaneous gesture segmentation and recognition called conditional level building.We show results of 82 on a multiple subject dataset spanning 179 classes."
]
} |
cs0505011 | 1644495374 | As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use. | In Stanford's iRoom, the PointRight system allows users to use a single mouse and keyboard to control multiple displays. @cite_0 Changing displays is accomplished by simply moving the cursor off the edge of a screen. Currently, iRoom does not move applications across displays, but this mouse technique could be extended to dragging application windows as well. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2006563349"
],
"abstract": [
"We describe the design of and experience with PointRight, a peer-to-peer pointer and keyboard redirection system that operates in multi-machine, multi-user environments. PointRight employs a geometric model for redirecting input across screens driven by multiple independent machines and operating systems. It was created for interactive workspaces that include large, shared displays and individual laptops, but is a general tool that supports many different configurations and modes of use. Although previous systems have provided for re-routing pointer and keyboard control, in this paper we present a more general and flexible system, along with an analysis of the types of re-binding that must be handled by any pointer redirection system This paper describes the system, the ways in which it has been used, and the lessons that have been learned from its use over the last two years."
]
} |
cs0505011 | 1644495374 | As computers become more ubiquitous, traditional two-dimensional interfaces must be replaced with interfaces based on a three-dimensional metaphor. However, these interfaces must still be as simple and functional as their two-dimensional predecessors. This paper introduces SWiM, a new interface for moving application windows between various screens, such as wall displays, laptop monitors, and desktop displays, in a three-dimensional physical environment. SWiM was designed based on the results of initial "paper and pencil" user tests of three possible interfaces. The results of these tests led to a map-like interface where users select the destination display for their application from various icons. If the destination is a mobile display it is not displayed on the map. Instead users can select the screen's name from a list of all possible destination displays. User testing of SWiM was conducted to discover whether it is easy to learn and use. Users that were asked to use SWiM without any instructions found the interface as intuitive to use as users who were given a demonstration. The results show that SWiM combines simplicity and functionality to create an interface that is easy to learn and easy to use. | Another approach for manipulating objects (text, icons and files) on a digital whiteboard is ``Pick-and-Drop''. @cite_1 Using Pick-and-Drop, the user can move an object by selecting it on a screen with a stylus (a small animation is provided where the object is lifted and a shadow of the object appears), then placing it on another screen by touching the desired screen with the stylus again. The benefits of this approach include a more tangible copy paste buffer and a more direct approach than using FTP or other file transfer techniques. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2108715885"
],
"abstract": [
"This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment."
]
} |
cs0504099 | 1836465448 | The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only O(1/n) (instead of Theta(1/sqrt(n log n)) as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to O(1/(K_n sqrt(n log n))), where each cell gets a transmission opportunity at least once every K_n slots, and K_n tends to infinity as n tends to infinity. | Throughout this paper, we refer to the work of Gupta and Kumar on the capacity of random ad hoc networks @cite_4 . In this work, the authors assume a simplified link layer model in which each packet reception is successful if the receiver has an SINR of at least @math . The authors assume that each packet is decoded at every hop along the path from source to destination. No co-operative communication strategy is used, and interference signal from other simultaneous transmissions is treated just like noise. For this communication model, the authors propose a routing and scheduling strategy, and show that a per-node throughput of @math can be achieved. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2161725792"
],
"abstract": [
"Gupta and Kumar (2000) introduced a random network model for studying the way throughput scales in a wireless network when the nodes are fixed, and showed that the throughput per source-destination pair is spl otimes (1 spl radic nlogn). Grossglauser and Tse (2001) showed that when nodes are mobile it is possible to have a constant or spl otimes (1) throughput scaling per source-destination pair. The focus of this paper is on characterizing the delay and determining the throughput-delay trade-off in such fixed and mobile ad hoc networks. For the Gupta-Kumar fixed network model, we show that the optimal throughput-delay trade-off is given by D(n) = spl otimes (nT(n)), where T(n) and D(n) are the throughput and delay respectively. For the Grossglauser-Tse mobile network model, we show that the delay scales as spl otimes (n sup 1 2 v(n)), where v(n) is the velocity of the mobile nodes. We then describe a scheme that achieves the optimal order of delay for any given throughput. The scheme varies (i) the number of hops, (ii) the transmission range and (iii) the degree of node mobility to achieve the optimal throughput-delay trade-off. The scheme produces a range of models that capture the Gupta-Kumar model at one extreme and the Grossglauser-Tse model at the other. In the course of our work, we recover previous results of Gupta and Kumar, and Grossglauser and Tse using simpler techniques, which might be of a separate interest."
]
} |
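
The cumulative per-hop loss effect described in the abstract above can be illustrated with a short simulation sketch. The 2% per-hop loss probability and the assumption that a route in an n-node random network spans on the order of sqrt(n / log n) hops are illustrative choices for this sketch, not values taken from the paper.

```python
import math

def end_to_end_success(p_hop_loss: float, hops: int) -> float:
    """Probability that a packet survives all hops when each hop
    independently drops it with probability p_hop_loss."""
    return (1.0 - p_hop_loss) ** hops

# Illustrative assumption: route length ~ sqrt(n / log n) hops, and a
# fixed 2% loss probability on every hop (the paper only requires a
# fixed fraction of hops to have a fixed non-zero loss probability).
for n in (100, 1_000, 10_000, 100_000):
    hops = int(math.sqrt(n / math.log(n)))
    print(f"n={n:>7}  hops~{hops:>3}  end-to-end success={end_to_end_success(0.02, hops):.3f}")
```

Because the delivery probability decays geometrically in the hop count, the fraction of injected packets that reach their destinations vanishes as the network grows, which is the mechanism behind the degraded throughput scaling argued in the abstract.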
cs0504099 | 1836465448 | The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only $O(1/n)$ (instead of $\Theta(1/\sqrt{n \log n})$ as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to $O(1/(K_n \sqrt{n \log n}))$, where each cell gets a transmission opportunity at least once every $K_n$ slots, and $K_n$ tends to infinity as $n$ tends to infinity. | In @cite_9 , the authors discuss the limitations of the work in @cite_4 by taking a network information-theoretic approach. The authors discuss how several co-operative strategies, such as interference cancellation, network coding, etc., could be used to improve the throughput. However, these tools cannot be exploited fully with current technology, which relies on point-to-point coding and treats all forms of interference as noise. The authors also discuss how determining the network capacity from an information-theoretic viewpoint is a difficult problem, since even the capacity of a three-node relay network is unknown. In Theorem 3.6 in @cite_9 , the authors determine the same bound on the capacity of a random network as obtained in @cite_4 . | {
"cite_N": [
"@cite_9",
"@cite_4"
],
"mid": [
"2121223271",
"2952966314"
],
"abstract": [
"This paper deals with throughput scaling laws for random ad hoc wireless networks in a rich scattering environment. We develop schemes to optimize the ratio lambda(n) of achievable network sum capacity to the sum of the point-to-point capacities of source-destinations (S-D) pairs operating in isolation. Our focus in this paper is on fixed signal-to-noise ratio (SNR) networks, i.e., networks where the worst case SNR over the S-D pairs is fixed independent of n. For such fixed SNR networks, which include fixed area networks as a special case, we show that collaborative strategies yield a scaling law of lambda(n)=Omega(1 n1 3) in contrast to multihop strategies which yield a scaling law of lambda(n)=Theta(1 radicn). While networks where worst case SNR goes to zero do not preclude the possibility of collaboration, multihop strategies achieve optimal throughput. The plausible reason is that the gains due to collaboration cannot offset the effect of vanishing receive SNR. This suggests that for fixed SNR networks, a network designer should look for network protocols that exploit collaboration",
"A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa's strategies (i.e. by a random binning scheme induced by Costa's auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies (\"lattice precoding\") can achieve positive rates independent of the interferences, and in fact in some cases - which depend on the noise variance and power constraints - they are optimal. In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Korner-Marton modulo-two sum problem, where linear coding is potentially better than random binning. Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a \"helper\" to the other user), and for the \"common interference\" case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit."
]
} |
cs0504099 | 1836465448 | The problem of determining asymptotic bounds on the capacity of a random ad hoc network is considered. Previous approaches assumed a threshold-based link layer model in which a packet transmission is successful if the SINR at the receiver is greater than a fixed threshold. In reality, the mapping from SINR to packet success probability is continuous. Hence, over each hop, for every finite SINR, there is a non-zero probability of packet loss. With this more realistic link model, it is shown that for a broad class of routing and scheduling schemes, a fixed fraction of hops on each route have a fixed non-zero packet loss probability. In a large network, a packet travels an asymptotically large number of hops from source to destination. Consequently, it is shown that the cumulative effect of per-hop packet loss results in a per-node throughput of only $O(1/n)$ (instead of $\Theta(1/\sqrt{n \log n})$ as shown previously for the threshold-based link model). A scheduling scheme is then proposed to counter this effect. The proposed scheme improves the link SINR by using conservative spatial reuse, and improves the per-node throughput to $O(1/(K_n \sqrt{n \log n}))$, where each cell gets a transmission opportunity at least once every $K_n$ slots, and $K_n$ tends to infinity as $n$ tends to infinity. | However, just as in @cite_4 , all the above-mentioned works assume that over each link a certain non-zero rate can be achieved. They do not take into account the fact that, in reality, such a rate is achieved with a probability of bit error arbitrarily close (but not equal) to zero. Once the coding and modulation scheme is fixed, the function corresponding to the probability of bit error is also fixed. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2119200834"
],
"abstract": [
"In this paper we propose the following approach to the dimensioning of the radio part of the downlink in OFDMA networks. First, we use information theory to characterize the bit-rate in the channel from a base station to its mobile. It depends on the power and bandwidth allocated to this mobile. Then, we describe the resource (power and bandwidth) allocation problem and characterise feasible configurations of bit-rates of all users. As the key element, we propose some particular sufficient condition (in a multi-Erlang form) for a given configuration of bit-rates to be feasible. Finally, we consider an Erlang's loss model, in which streaming arrivals whose admission would lead to the violation of this sufficient condition are blocked and lost. In this model, the blocking probabilities can be calculated using Kaufman-Roberts algorithm. We propose it to evaluate the minimal density of base stations assuring acceptable blocking probabilities for a streaming traffic of a given load per surface unit. We validate this approach by comparison of the blocking probabilities to these simulated in the similar model in which the admission control is based on the original feasibility property (instead of its sufficient condition). Our sufficient bit-rate feasibility condition can also be used to dimension the network with respect to the elastic traffic."
]
} |
cs0504045 | 2952143109 | Internet worms have become a widespread threat to system and network operations. In order to fight them more efficiently, it is necessary to analyze newly discovered worms and attack patterns. This paper shows how techniques based on Kolmogorov Complexity can help in the analysis of internet worms and network traffic. Using compression, different species of worms can be clustered by type. This allows us to determine whether an unknown worm binary could in fact be a later version of an existing worm in an extremely simple, automated, manner. This may become a useful tool in the initial analysis of malicious binaries. Furthermore, compression can also be useful to distinguish different types of network traffic and can thus help to detect traffic anomalies: Certain anomalies may be detected by looking at the compressibility of a network session alone. We furthermore show how to use compression to detect malicious network sessions that are very similar to known intrusion attempts. This technique could become a useful tool to detect new variations of an attack and thus help to prevent IDS evasion. We provide two new plugins for Snort which demonstrate both approaches. | Evans and Barnett @cite_10 compare the complexity of legal FTP traffic to the complexity of attacks against FTP servers. To achieve this they analyzed the headers of legal and illegal FTP traffic. For this they gathered several hundred bytes of good and bad traffic and compressed it using compress. Our approach differs in that we use the entire packet or even entire TCP sessions. We use this as we believe that in the real world, it is hard to collect several hundred bytes of bad traffic from a single attack session using headers alone. Attacks exploiting vulnerabilities in a server are often very short and will not cause any other malicious traffic on the same port. This is especially the case in non-interactive protocols such as HTTP where all interactions consist of a request and reply only. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2118688880"
],
"abstract": [
"This paper presents a dangerous low-cost traffic analysis attack in packet-based networks, such as the Internet. The attack is mountable in any scenario where a shared routing resource exists among users. A real-world attack successfully compromised the privacy of a user without requiring significant resources in terms of access, memory, or computational power. The effectiveness of our attack is demonstrated in a scenario where the user's DSL router uses FCFS scheduling policy. Specifically, we show that by using a low-rate sequence of probes, a remote attacker can obtain significant traffic-timing and volume information about a particular user, just by observing the round trip time of the probes. We also observe that even when the scheduling policy is changed to round-robin, while the correlation reduces significantly, the attacker can still reliably deduce user's traffic pattern. Most of the router scheduling policies designed to date are evaluated mostly on the metrics of throughput, delay and fairness. Our work is aimed to demonstrate a need for considering an additional metric that quantifies the information leak between the individual traffic flows through the router."
]
} |
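
The compression-based similarity idea discussed in the row above can be sketched with the standard normalized compression distance, using zlib as the compressor. Treating NCD with zlib as a stand-in for the paper's exact procedure, and the toy HTTP payloads below, are assumptions made for illustration only.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, used as a rough Kolmogorov-complexity estimate."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small values suggest x and y are similar."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy payloads standing in for captured attack sessions and benign traffic.
a = b"GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir HTTP/1.0\r\n" * 4
b = b"GET /scripts/..%252f../winnt/system32/cmd.exe?/c+dir HTTP/1.0\r\n" * 4
benign = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n\r\n" * 4

print("attack variant vs variant:", round(ncd(a, b), 3))      # expected small
print("attack variant vs benign :", round(ncd(a, benign), 3))  # expected larger
```

Clustering worm binaries by type then amounts to grouping samples whose pairwise distances fall below some threshold; the threshold itself would have to be chosen empirically.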
cs0504045 | 2952143109 | Internet worms have become a widespread threat to system and network operations. In order to fight them more efficiently, it is necessary to analyze newly discovered worms and attack patterns. This paper shows how techniques based on Kolmogorov Complexity can help in the analysis of internet worms and network traffic. Using compression, different species of worms can be clustered by type. This allows us to determine whether an unknown worm binary could in fact be a later version of an existing worm in an extremely simple, automated, manner. This may become a useful tool in the initial analysis of malicious binaries. Furthermore, compression can also be useful to distinguish different types of network traffic and can thus help to detect traffic anomalies: Certain anomalies may be detected by looking at the compressibility of a network session alone. We furthermore show how to use compression to detect malicious network sessions that are very similar to known intrusion attempts. This technique could become a useful tool to detect new variations of an attack and thus help to prevent IDS evasion. We provide two new plugins for Snort which demonstrate both approaches. | Kulkarni, Evans and Barnett @cite_7 also try to track down denial of service attacks using Kolmogorov complexity. They now estimate the Kolmogorov complexity by computing an estimate of the entropy of 1's contained in the packet. They then track the complexity over time using the method of a complexity differential. For this they sample certain packets from a single flow and then compute the complexity differential once. Here, we always use compression and do not aim to detect DDOS attacks. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2005406777"
],
"abstract": [
"This paper describes an approach to detecting distributed denial of service (DDoS) attacks that is based on fundamentals of Information Theory, specifically Kolmogorov Complexity. A theorem derived using principles of Kolmogorov Complexity states that the joint complexity measure of random strings is lower than the sum of the complexities of the individual strings when the strings exhibit some correlation. Furthermore, the joint complexity measure varies inversely with the amount of correlation. We propose a distributed active network-based algorithm that exploits this property to correlate arbitrary traffic flows in the network to detect possible denial-of-service attacks. One of the strengths of this algorithm is that it does not require special filtering rules and hence it can be used to detect any type of DDoS attack. We implement and investigate the performance of the algorithm in an active network. Our results show that DDoS attacks can be detected in a manner that is not sensitive to legitimate background traffic."
]
} |
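
The entropy-of-ones estimate attributed to Kulkarni, Evans and Barnett in the row above can be sketched as follows; the binary-entropy formula is standard, but applying it to a raw payload as a crude complexity proxy is an illustrative assumption rather than a reproduction of their algorithm.

```python
import math

def bit_entropy(payload: bytes) -> float:
    """Binary entropy of the fraction of 1-bits in the payload,
    a crude stand-in for its Kolmogorov complexity."""
    if not payload:
        return 0.0
    ones = sum(bin(byte).count("1") for byte in payload)
    p = ones / (8 * len(payload))
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(bit_entropy(b"\x00" * 64))      # highly regular payload -> low entropy
print(bit_entropy(bytes(range(64))))  # more varied payload -> higher entropy
```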
cs0504063 | 2950442545 | In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web. | Menczer @cite_30 describes some disadvantages of current Web search engines on the dynamic Web, e.g., the low ratio of fresh or relevant documents. He proposes to complement the search engines with intelligent crawlers, or web mining agents, to overcome those disadvantages. Search engines take static snapshots of the Web with relatively large time intervals between two snapshots. Intelligent web mining agents are different: they can find the required recent information online and may evolve intelligent behavior by exploiting the Web linkage and textual information. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2001834587"
],
"abstract": [
"While search engines have become the major decision support tools for the Internet, there is a growing disparity between the image of the World Wide Web stored in search engine repositories and the actual dynamic, distributed nature of Web data. We propose to attack this problem using an adaptive population of intelligent agents mining the Web online at query time. We discuss the benefits and shortcomings of using dynamic search strategies versus the traditional static methods in which search and retrieval are disjoint. This paper presents a public Web intelligence tool called MySpiders, a threaded multiagent system designed for information discovery. The performance of the system is evaluated by comparing its effectiveness in locating recent, relevant documents with that of search engines. We present results suggesting that augmenting search engines with adaptive populations of intelligent search agents can lead to a significant competitive advantage. We also discuss some of the challenges of evaluating such a system on current Web data, introduce three novel metrics for this purpose, and outline some of the lessons learned in the process."
]
} |
cs0504063 | 2950442545 | In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web. | Risvik and Michelsen @cite_0 mention that because of the exponential growth of the Web there is an ever-increasing need for more intelligent, (topic-)specific algorithms for crawling, like focused crawling and document classification. With these algorithms, crawlers and search engines can operate more efficiently in a topically limited document space. The authors also state that in such vertical regions the dynamics of the Web pages is more homogeneous. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2110073539"
],
"abstract": [
"This paper presents a comparative study of strategies for Web crawling. We show that a combination of breadth-first ordering with the largest sites first is a practical alternative since it is fast, simple to implement, and able to retrieve the best ranked pages at a rate that is closer to the optimal than other alternatives. Our study was performed on a large sample of the Chilean Web which was crawled by using simulators, so that all strategies were compared under the same conditions, and actual crawls to validate our conclusions. We also explored the effects of large scale parallelism in the page retrieval task and multiple-page requests in a single connection for effective amortization of latency times."
]
} |
cs0504063 | 2950442545 | In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web. | Menczer @cite_30 also introduces a recency metric which is 1 if all of the documents are recent (i.e., not changed after the last download) and goes to 0 as the downloaded documents become more and more obsolete. Trivially, immediately after a few minutes' run of an online crawler the value of this metric will be 1, while the value for the search engine will be lower. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2155467656"
],
"abstract": [
"In web search, recency ranking refers to ranking documents by relevance which takes freshness into account. In this paper, we propose a retrieval system which automatically detects and responds to recency sensitive queries. The system detects recency sensitive queries using a high precision classifier. The system responds to recency sensitive queries by using a machine learned ranking model trained for such queries. We use multiple recency features to provide temporal evidence which effectively represents document recency. Furthermore, we propose several training methodologies important for training recency sensitive rankers. Finally, we develop new evaluation metrics for recency sensitive queries. Our experiments demonstrate the efficacy of the proposed approaches."
]
} |
cs0504063 | 2950442545 | In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web. | @cite_31 present a mathematical crawler model in which the number of obsolete pages can be minimized with a nonlinear equation system. They solved the nonlinear equations with different parameter settings on realistic model data. Their model uses different buckets for documents having different change rates and therefore does not need any theoretical model of the change rate of pages. The main limitations of this work are the following: | {
"cite_N": [
"@cite_31"
],
"mid": [
"2018928332"
],
"abstract": [
"This paper outlines the design of a web crawler implemented for IBM Almaden's WebFountain project and describes an optimization model for controlling the crawl strategy. This crawler is scalable and incremental. The model makes no assumptions about the statistical behaviour of web page changes, but rather uses an adaptive approach to maintain data on actual change rates which are in turn used as inputs for the optimization. Computational results with simulated but realistic data show that there is no magic bullet' different, but equally plausible, objectives lead to con icting optimal' strategies. However, we nd that there are compromise objectives which lead to good strategies that are robust against a number of criteria."
]
} |
cs0504063 | 2950442545 | In this paper we compare the performance characteristics of our selection-based learning algorithm for Web crawlers with the characteristics of the reinforcement learning algorithm. The task of the crawlers is to find new information on the Web. The selection algorithm, called weblog update, modifies the starting URL lists of our crawlers based on the found URLs containing new information. The reinforcement learning algorithm modifies the URL orderings of the crawlers based on the received reinforcements for submitted documents. We performed simulations based on data collected from the Web. The collected portion of the Web is typical and exhibits scale-free small world (SFSW) structure. We have found that on this SFSW, the weblog update algorithm performs better than the reinforcement learning algorithm. It finds the new information faster than the reinforcement learning algorithm and has a better ratio of new information to all submitted documents. We believe that the advantage of the selection algorithm over the reinforcement learning algorithm is due to the small world property of the Web. | by solving the nonlinear equations, the content of web pages cannot be taken into consideration; the model cannot be extended easily to (topic-)specific crawlers, which would be highly advantageous on the exponentially growing web @cite_9 , @cite_0 , @cite_30 ; and rapidly changing documents (like those on news sites) are not considered to be in any bucket, therefore increasingly important parts of the web are excluded from the searches. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_30"
],
"mid": [
"2142057089",
"1071368427",
"2158601853"
],
"abstract": [
"Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split.",
"User-generated content can assist epidemiological surveillance in the early detection and prevalence estimation of infectious diseases, such as influenza. Google Flu Trends embodies the first public platform for transforming search queries to indications about the current state of flu in various places all over the world. However, the original model significantly mispredicted influenza-like illness rates in the US during the 2012–13 flu season. In this work, we build on the previous modeling attempt, proposing substantial improvements. Firstly, we investigate the performance of a widely used linear regularized regression solver, known as the Elastic Net. Then, we expand on this model by incorporating the queries selected by the Elastic Net into a nonlinear regression framework, based on a composite Gaussian Process. Finally, we augment the query-only predictions with an autoregressive model, injecting prior knowledge about the disease. We assess predictive performance using five consecutive flu seasons spanning from 2008 to 2013 and qualitatively explain certain shortcomings of the previous approach. Our results indicate that a nonlinear query modeling approach delivers the lowest cumulative nowcasting error, and also suggest that query information significantly improves autoregressive inferences, obtaining state-of-the-art performance.",
"The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance, or page rank is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources (e.g. to store, maintain and read the link matrix). We introduce a new algorithm OPIC that works on-line, and uses much less resources. In particular, it does not require storing the link matrix. It is on-line in that it continuously refines its estimate of page importance while the web graph is visited. Thus it can be used to focus crawling to the most interesting pages. We prove the correctness of OPIC. We present Adaptive OPIC that also works on-line but adapts dynamically to changes of the web. A variant of this algorithm is now used by Xyleme.We report on experiments with synthetic data. In particular, we study the convergence and adaptiveness of the algorithms for various scheduling strategies for the pages to visit. We also report on experiments based on crawls of significant portions of the web."
]
} |
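
The seed-list selection idea described in the abstract above (reseed crawlers from the URLs that recently yielded new documents) can be sketched as below. The data structures, the fake fetcher, and the random tie-breaking are illustrative assumptions, not the authors' implementation of the weblog update algorithm.

```python
import random
from collections import defaultdict

# weblog[url] counts how often starting a crawl from `url` led to new documents.
weblog = defaultdict(int)

def crawl_step(start_url, fetch_new_urls):
    """Crawl from start_url; credit it in the weblog for every new URL found."""
    new_urls = fetch_new_urls(start_url)
    weblog[start_url] += len(new_urls)
    return new_urls

def next_start_urls(k, frontier):
    """Weblog update: prefer starting points that have produced new content,
    breaking ties at random so unexplored sites still get a chance."""
    ranked = sorted(frontier, key=lambda u: (weblog[u], random.random()), reverse=True)
    return ranked[:k]

# Toy usage with a fake fetcher that sometimes discovers new pages.
def fake_fetch(url):
    return [f"{url}/new{i}" for i in range(random.randint(0, 3))]

random.seed(1)
frontier = [f"http://site{i}.example" for i in range(10)]
for _ in range(20):
    for start in next_start_urls(3, frontier):
        crawl_step(start, fake_fetch)
print(sorted(weblog.items(), key=lambda kv: -kv[1])[:3])
```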
cs0504101 | 2493383280 | We identify a new class of hard 3-SAT instances, namely random 3-SAT problems having exactly one solution and as few clauses as possible. It is numerically shown that the running time of complete methods as well as of local search algorithms for such problems is larger than for random instances around the phase transition point. We therefore provide instances with an exponential complexity in the so-called "easy" region, below the critical value of $m/n$. This sheds new light on the connection between the phase transition phenomenon and NP-completeness. | Most of the studies of the random 3-SAT ensemble have been concerned with the computational cost at a constant @math as a function of the ratio @math , where the characteristic phase transition-like curve is observed. This is in a way surprising because for the computational complexity (and also for practical applications) it is the scaling of the running time as the problem size increases which is important, i.e., changing @math at fixed @math . Exponential scaling with @math has been numerically observed near the critical point @cite_14 @cite_5 for random 3-SAT as well as above it (albeit with a smaller exponent). Recently the scaling with @math has been studied and the transition from polynomial to exponential complexity has been observed below @math @cite_41 , again for random 3-SAT. | {
"cite_N": [
"@cite_41",
"@cite_5",
"@cite_14"
],
"mid": [
"2115831572",
"2028633127",
"1510651616"
],
"abstract": [
"For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 - epsiv)chi colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of 1,...,k n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ges 2chi, but like an error-correcting code for k les (2 - epsiv)chi. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring. And that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs.",
"Random graph models with limited choice have been studied extensively with the goal of understanding the mechanism of the emergence of the giant component. One of the standard models are the Achlioptas random graph processes on a fixed set of (n ) vertices. Here at each step, one chooses two edges uniformly at random and then decides which one to add to the existing configuration according to some criterion. An important class of such rules are the bounded-size rules where for a fixed (K 1 ), all components of size greater than (K ) are treated equally. While a great deal of work has gone into analyzing the subcritical and supercritical regimes, the nature of the critical scaling window, the size and complexity (deviation from trees) of the components in the critical regime and nature of the merging dynamics has not been well understood. In this work we study such questions for general bounded-size rules. Our first main contribution is the construction of an extension of Aldous’s standard multiplicative coalescent process which describes the asymptotic evolution of the vector of sizes and surplus of all components. We show that this process, referred to as the standard augmented multiplicative coalescent (AMC) is ‘nearly’ Feller with a suitable topology on the state space. Our second main result proves the convergence of suitably scaled component size and surplus vector, for any bounded-size rule, to the standard AMC. This result is new even for the classical Erdős–Renyi setting. The key ingredients here are a precise analysis of the asymptotic behavior of various susceptibility functions near criticality and certain bounds from (The barely subcritical regime. Arxiv preprint, 2012) on the size of the largest component in the barely subcritical regime.",
"A variational approach to finite connectivity spin-glass-like models is developed and applied to describe the structure of optimal solutions in random satisfiability problems. Our variational scheme accurately reproduces the known replica symmetric results and also allows for the inclusion of replica symmetry breaking effects. For the 3-SAT problem, we find two transitions as the ratio α of logical clauses per Boolean variables increases. At the first one ( a_s 3.96 ), a non-trivial organization of the solution space in geometrically separated clusters emerges. The multiplicity of these clusters as well as the typical distances between different solutions are calculated. At the second threshold ( a_c 4.48 ), satisfying assignments disappear and a finite fraction ( B_o 0.13 ) of variables are overconstrained and take the same values in all optimal (though unsatisfying) assignments. These values have to be compared to ( a_c 4.27 ), ( B_o 0.4 ) obtained from numerical experiments on small instances. Within the present variational approach, the SAT-UNSAT transition naturally appears as a mixture of a first and a second order transition. For the mixed 2+p-SAT with p<2 5, the behavior is as expected much simpler: a unique smooth transition from SAT to UNSAT takes place at ( a_c = 1 (1 - p) )."
]
} |
cs0504101 | 2493383280 | We identify a new class of hard 3-SAT instances, namely random 3-SAT problems having exactly one solution and as few clauses as possible. It is numerically shown that the running time of complete methods as well as of local search algorithms for such problems is larger than for random instances around the phase transition point. We therefore provide instances with an exponential complexity in the so-called "easy" region, below the critical value of $m/n$. This sheds new light on the connection between the phase transition phenomenon and NP-completeness. | There has been numerical evidence @cite_29 @cite_30 @cite_11 that below @math short instances of 3-SAT as well as of graph coloring @cite_51 can be hard. With respect to the formula size, an interesting rigorous result is @cite_48 @cite_21 that an ordered DPLL algorithm needs exponential time @math to find a resolution proof of an unsatisfiable 3-SAT instance. Note that the coefficient of the exponential growth increases with decreasing @math , i.e., short formulas are harder. For our ensemble of single-solution formulas we will find the same result. | {
"cite_N": [
"@cite_30",
"@cite_48",
"@cite_29",
"@cite_21",
"@cite_51",
"@cite_11"
],
"mid": [
"2001495051",
"1983171306",
"2001663593",
"2078137578",
"1533811829",
"2295840581"
],
"abstract": [
"For each k ≤ 4, we give τ k > 0 such that a random k-CNF formula F with n variables and ⌊r k n⌋ clauses is satisfiable with high probability, but ORDERED-DLL takes exponential time on F with uniformly positive probability. Using results of [2], this can be strengthened to a high probability result for certain natural backtracking schemes and extended to many other DPLL algorithms.",
"We present an algorithm for solving 3SAT instances. Several algorithms have been proved to work whp (with high probability) for various SAT distributions. However, an algorithm that works whp has a drawback. Indeed for typical instances it works well, however for some rare inputs it does not provide a solution at all. Alternatively, one could require that the algorithm always produce a correct answer but perform well on average. Expected polynomial time formalizes this notion. We prove that for some natural distribution on 3CNF formulas, called planted 3SAT, our algorithm has expected polynomial (in fact, almost linear) running time. The planted 3SAT distribution is the set of satisfiable 3CNF formulas generated in the following manner. First, a truth assignment is picked uniformly at random. Then, each clause satisfied by it is included in the formula with probability p. Extending previous work for the planted 3SAT distribution, we present, for the first time for a satisfiable SAT distribution, an expected polynomial time algorithm. Namely, it solves all 3SAT instances, and over the planted distribution (with p = d n2, d > 0 a sufficiently large constant) it runs in expected polynomial time. Our results extend to k-SAT for any constant k.",
"par>We prove some non-approximability results for restrictions of basic combinatorial optimization problems to instances of bounded “degreeror bounded “width.” Specifically: We prove that the Max 3SAT problem on instances where each variable occurs in at most B clauses, is hard to approximate to within a factor @math , unless @math . H stad [18] proved that the problem is approximable to within a factor @math in polynomial time, and that is hard to approximate to within a factor @math . Our result uses a new randomized reduction from general instances of Max 3SAT to bounded-occurrences instances. The randomized reduction applies to other Max SNP problems as well. We observe that the Set Cover problem on instances where each set has size at most B is hard to approximate to within a factor @math unless @math . The result follows from an appropriate setting of parameters in Feige's reduction [11]. This is essentially tight in light of the existence of @math -approximate algorithms [20, 23, 9] We present a new PCP construction, based on applying parallel repetition to the inner verifier,'' and we provide a tight analysis for it. Using the new construction, and some modifications to known reductions from PCP to Hitting Set, we prove that Hitting Set with sets of size B is hard to approximate to within a factor @math . The problem can be approximated to within a factor B [19], and it is the Vertex Cover problem for B =2. The relationship between hardness of approximation and set size seems to have not been explored before. We observe that the Independent Set problem on graphs having degree at most B is hard to approximate to within a factor @math , unless P = NP . This follows from a comination of results by Clementi and Trevisan [28] and Reingold, Vadhan and Wigderson [27]. It had been observed that the problem is hard to approximate to within a factor @math unless P = NP [1]. An algorithm achieving factor @math is also known [21, 2, 30, 16 .",
"Experiments on solvingr-SAT random formulae have provided evidence of a satisfiability threshold phenomenon with respect to the ratio of the number of clauses to the number of variables of formulae. Presently, only the threshold of 2-SAT formulae has been proved to exist and has been computed to be equal to 1. For 3-SAT formulae and more generally forr-SAT formulae, lower and upper bounds of the threshold have been established. The best established bounds concern 3-SAT. For an observed threshold of about 4.25, the best lower bound is 3.003 and the best upper bound 4.76. In this paper we establish a general upper bound of the threshold forr-SAT formulae giving a value for 3-SAT of 4.64, significantly improving the previous best upper bound. For this we have defined a more restrictive structure than a satisfying truth assignment for characterizing the satisfiability of a SAT formula which we have called negatively prime solution (NPS). By merely applying the first moment method to negatively prime solutions of a randomr-SAT formula we obtain our bound.",
"We consider worst case time bounds for several NP-complete problems, based on a constraint satisfaction (CSP) formulation of these problems: (a, b)-CSP instances consist of a set of variables, each with up to a possible values, and constraints disallowing certain b-tuples of variable values; a problem is solved by assigning values to all variables satisfying all constraints, or by showing that no such assignment exist. 3-SAT is equivalent to (2, 3)-CSP while 3-coloring and various related problems are special cases of (3, 2)-CSP; there is also a natural duality transformation from (a, b)-CSP to (b, a)-CSP. We show that n-variable (3, 2)-CSP instances can be solved in time O(1.3645n), that satisfying assignments to (d, 2)-CSP instances can be found in randomized expected time O((0.4518d)n); that 3-coloring of n-vertex graphs can be solved in time O(1.3289n); that 3-list-coloring of n-vertex graphs can be solved in time O(1.3645n); that 3-edge-coloring of n-vertex graphs can be solved in time O(2n 2), and that 3-satisfiability of a formula with t 3-clauses can be solved in time O(nO(1) + 1.3645t).",
"We show an exponential separation between two well-studied models of algebraic computation, namely read-once oblivious algebraic branching programs (ROABPs) and multilinear depth three circuits. In particular we show the following: 1. There exists an explicit n-variate polynomial computable by linear sized multilinear depth three circuits (with only two product gates) such that every ROABP computing it requires 2^ Omega(n) size. 2. Any multilinear depth three circuit computing IMM_ n,d (the iterated matrix multiplication polynomial formed by multiplying d, n * n symbolic matrices) has n^ Omega(d) size. IMM_ n,d can be easily computed by a poly(n,d) sized ROABP. 3. Further, the proof of 2 yields an exponential separation between multilinear depth four and multilinear depth three circuits: There is an explicit n-variate, degree d polynomial computable by a poly(n,d) sized multilinear depth four circuit such that any multilinear depth three circuit computing it has size n^ Omega(d) . This improves upon the quasi-polynomial separation result by Raz and Yehudayoff [2009] between these two models. The hard polynomial in 1 is constructed using a novel application of expander graphs in conjunction with the evaluation dimension measure used previously in Nisan [1991], Raz [2006,2009], Raz and Yehudayoff [2009], and Forbes and Shpilka [2013], while 2 is proved via a new adaptation of the dimension of the partial derivatives measure used by Nisan and Wigderson [1997]. Our lower bounds hold over any field."
]
} |
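
The single-solution ensemble described in the abstract above can be sampled by brute force for very small n, as sketched below. The construction (add random 3-clauses, rejecting those that kill all solutions, and stop as soon as exactly one assignment survives) is a simplified illustration; it does not attempt to minimize the number of clauses, which the paper's ensemble additionally requires.

```python
import itertools
import random

def random_clause(n):
    """A random 3-clause over variables 1..n; a negative literal means negation."""
    variables = random.sample(range(1, n + 1), 3)
    return tuple(v if random.random() < 0.5 else -v for v in variables)

def satisfies(assignment, clauses):
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in cl) for cl in clauses)

def count_solutions(n, clauses, limit):
    """Count satisfying assignments by brute force, stopping early at `limit`."""
    count = 0
    for a in itertools.product([False, True], repeat=n):
        if satisfies(a, clauses):
            count += 1
            if count >= limit:
                break
    return count

def single_solution_instance(n):
    """Add random 3-clauses (rejecting those leaving no solution) until
    exactly one satisfying assignment remains. Feasible only for tiny n."""
    clauses = []
    while True:
        cl = random_clause(n)
        remaining = count_solutions(n, clauses + [cl], limit=2)
        if remaining >= 1:
            clauses.append(cl)
            if remaining == 1:
                return clauses

random.seed(0)
n = 8
formula = single_solution_instance(n)
print(len(formula), "clauses,", count_solutions(n, formula, limit=2), "solution")
```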
cs0503011 | 2949386080 | In-degree, PageRank, number of visits and other measures of Web page popularity significantly influence the ranking of search results by modern search engines. The assumption is that popularity is closely correlated with quality, a more elusive concept that is difficult to measure directly. Unfortunately, the correlation between popularity and quality is very weak for newly-created pages that have yet to receive many visits and/or in-links. Worse, since discovery of new content is largely done by querying search engines, and because users usually focus their attention on the top few results, newly-created but high-quality pages are effectively "shut out," and it can take a very long time before they become popular. We propose a simple and elegant solution to this problem: the introduction of a controlled amount of randomness into search result ranking methods. Doing so offers new pages a chance to prove their worth, although clearly using too much randomness will degrade result quality and annul any benefits achieved. Hence there is a tradeoff between exploration to estimate the quality of new pages and exploitation of pages already known to be of high quality. We study this tradeoff both analytically and via simulation, in the context of an economic objective function based on aggregate result quality amortized over time. We show that a modest amount of randomness leads to improved search results. | The exploration/exploitation tradeoff that arises in our context is akin to problems studied in the field of reinforcement learning @cite_20 . However, direct application of reinforcement learning algorithms appears prohibitively expensive at Web scales. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2020920737"
],
"abstract": [
"We provide a fresh look at the problem of exploration in reinforcement learning, drawing on ideas from information theory. First, we show that Boltzmann-style exploration, one of the main exploration methods used in reinforcement learning, is optimal from an information-theoretic point of view, in that it optimally trades expected return for the coding cost of the policy. Second, we address the problem of curiosity-driven learning. We propose that, in addition to maximizing the expected return, a learner should choose a policy that also maximizes the learner’s predictive power. This makes the world both interesting and exploitable. Optimal policies then have the form of Boltzmann-style exploration with a bonus, containing a novel exploration–exploitation trade-off which emerges naturally from the proposed optimization principle. Importantly, this exploration–exploitation trade-off persists in the optimal deterministic policy, i.e., when there is no exploration due to randomness. As a result, exploration is understood as an emerging behavior that optimizes information gain, rather than being modeled as pure randomization of action choices."
]
} |
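
The controlled-randomness idea in the abstract above can be sketched as a simple randomized re-ranking rule. The function name, the exploration probability epsilon, the number of explore slots, and the way "new" pages are identified are illustrative assumptions, not the paper's exact scheme.

```python
import random

def rerank(results, new_pages, epsilon=0.1, explore_slots=3):
    """Return results mostly ordered by estimated popularity/quality, but with
    probability epsilon per explore slot, promote a random not-yet-popular page."""
    ranked = list(results)  # assumed already sorted by a popularity-based score
    candidates = [p for p in new_pages if p not in ranked[:explore_slots]]
    for slot in range(min(explore_slots, len(ranked))):
        if candidates and random.random() < epsilon:
            ranked.insert(slot, candidates.pop(random.randrange(len(candidates))))
    return ranked

popular = [f"established-{i}" for i in range(10)]
fresh = [f"new-{i}" for i in range(5)]
print(rerank(popular, fresh))
```

The amount of exploration is controlled by epsilon: too little and new pages never accumulate visits, too much and average result quality drops, which is exactly the tradeoff the paper studies.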
cs0503047 | 2098985161 | We consider the capacity problem for wireless networks. Networks are modeled as random unit-disk graphs, and the capacity problem is formulated as one of finding the maximum value of a multicommodity flow. In this paper, we develop a proof technique based on which we are able to obtain a tight characterization of the solution to the linear program associated with the multiflow problem, to within constants independent of network size. We also use this proof method to analyze network capacity for a variety of transmitter/receiver architectures, for which we obtain some conclusive results. These results contain as a special case (and strengthen) those of Gupta and Kumar for random networks, for which a new derivation is provided using only elementary counting and discrete probability tools. | This work is primarily motivated by our struggle to understand the results of Gupta and Kumar on the capacity of wireless networks @cite_19 . And the main idea behind our approach is simple: the transport capacity problem posed in @cite_19 , in the context of random networks, is essentially a throughput stability problem---the goal is to determine how much data can be injected by each node into the network while keeping the system stable---and this throughput stability problem admits a very simple formulation in terms of flow networks. Note also that because of the mechanism for generating source-destination pairs, all connections have the same average length (one half of one network diameter), and thus we do not need to deal with the bit-meters/sec metric considered in @cite_19 . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2114106914"
],
"abstract": [
"We develop a new metric for quantifying end-to-end throughput in multihop wireless networks, which we term random access transport capacity, since the interference model presumes uncoordinated transmissions. The metric quantifies the average maximum rate of successful end-to-end transmissions, multiplied by the communication distance, and normalized by the network area. We show that a simple upper bound on this quantity is computable in closed-form in terms of key network parameters when the number of retransmissions is not restricted and the hops are assumed to be equally spaced on a line between the source and destination. We also derive the optimum number of hops and optimal per hop success probability and show that our result follows the well-known square root scaling law while providing exact expressions for the preconstants, which contain most of the design-relevant network parameters. Numerical results demonstrate that the upper bound is accurate for the purpose of determining the optimal hop count and success (or outage) probability."
]
} |
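
The flow-network formulation described above can be illustrated with a minimal sketch: build a random geometric (unit-disk) graph and compute a max flow between one source-destination pair with networkx. The radius constant 1.5 and unit link capacities are assumptions for the sketch, and a single-commodity max flow is only a stand-in for the paper's multicommodity linear program.

```python
import math
import random
import networkx as nx

n, seed = 200, 1
# Unit-disk (random geometric) graph; a radius on the order of
# sqrt(log n / n) is the usual connectivity-threshold scaling.
radius = 1.5 * math.sqrt(math.log(n) / n)
G = nx.random_geometric_graph(n, radius, seed=seed)

# Give every undirected link unit capacity by adding both directions.
D = nx.DiGraph()
for u, v in G.edges():
    D.add_edge(u, v, capacity=1.0)
    D.add_edge(v, u, capacity=1.0)

# Max flow between one randomly chosen source-destination pair.
random.seed(seed)
s, t = random.sample(list(G.nodes()), 2)
if nx.has_path(G, s, t):
    value, _ = nx.maximum_flow(D, s, t)
    print(f"nodes={n}, radius={radius:.3f}, s-t max flow = {value}")
else:
    print("graph not connected for this draw; increase the radius")
```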
cs0503047 | 2098985161 | We consider the capacity problem for wireless networks. Networks are modeled as random unit-disk graphs, and the capacity problem is formulated as one of finding the maximum value of a multicommodity flow. In this paper, we develop a proof technique based on which we are able to obtain a tight characterization of the solution to the linear program associated with the multiflow problem, to within constants independent of network size. We also use this proof method to analyze network capacity for a variety of transmitter/receiver architectures, for which we obtain some conclusive results. These results contain as a special case (and strengthen) those of Gupta and Kumar for random networks, for which a new derivation is provided using only elementary counting and discrete probability tools. | As mentioned before, @cite_19 sparked significant interest in these problems. Follow-up results from the same group were reported in @cite_10 @cite_29 . Some information-theoretic bounds for large-area networks were obtained in @cite_30 . When nodes are allowed to move, assuming transmission delays proportional to the mixing time of the network, the total network throughput is @math , and therefore the network can carry a non-vanishing rate per node @cite_2 . Using a linear programming formulation, non-asymptotic versions of the results in @cite_19 are given in @cite_22 ; an extended version of that work can be found in @cite_15 . An alternative method for deriving transport capacity was presented in @cite_1 . The capacity of large Gaussian relay networks was found in @cite_3 . Preliminary versions of our work based on network flows have appeared in @cite_9 @cite_32 ; and network flow techniques have been proposed to study network capacity problems (cf., e.g., @cite_34 and [Ch. 14.10] of Cover and Thomas), and network coding problems @cite_0 . From the network coding literature, of particular relevance to this work is the work on multiple unicast sessions @cite_26 . | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_22",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_34",
"@cite_10"
],
"mid": [
"2534339435",
"2138203492",
"2097463269",
"2105831729",
"2323365737",
"1505858209",
"2002531683",
"2131992719",
"2002425684",
"2032566227",
"2049732373",
"2144431033",
"2115678412",
"2009182597"
],
"abstract": [
"This paper considers network communications under a hard timeliness constraint , where a source node streams perishable information to a destination node over a directed acyclic graph subject to a hard delay constraint. Transmission along any edge incurs unit delay, and it is required that every information bit generated at the source at the beginning of time @math to be received and recovered by the destination at the end of time @math , where @math is the maximum allowed end-to-end delay. We study the corresponding delay-constrained unicast capacity problem. This paper presents the first example showing that network coding (NC) can achieve strictly higher delay-constrained throughput than routing even for the single unicast setting and the NC gain can be arbitrarily close to 2 in some instances. This is in sharp contrast to the delay-unconstrained ( @math ) single-unicast case where the classic min-cut max-flow theorem implies that coding cannot improve throughput over routing. Motivated by the above findings, a series of investigation on the delay-constrained capacity problem is also made, including: 1) an equivalent multiple-unicast representation based on a time-expanded graph approach; 2) a new delay-constrained capacity upper bound and its connections to the existing routing-based results [ 2011]; 3) an example showing that the penalty of using random linear NC can be unbounded; and 4) a counter example of the tree-packing Edmonds’ theorem in the new delay-constrained setting. Built upon the time-expanded graph approach, we also discuss how our results can be readily extended to cyclic networks. Overall, our results suggest that delay-constrained communication is fundamentally different from the well-understood delay-unconstrained one and call for investigation participation.",
"The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding does not only lead to suboptimal performance-it leads to an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.",
"Gupta and Kumar (see IEEE Transactions an Information Theory, vol.46, no.2, p.388-404, 2000) determined the capacity of wireless networks under certain assumptions, among them point-to-point coding, which excludes for example multi-access and broadcast codes. We consider essentially the same physical model of a wireless network under a different traffic pattern, namely the relay traffic pattern, but we allow for arbitrarily complex network coding. In our model, there is only one active source destination pair, while all other nodes assist this transmission. We show code constructions leading to achievable rates and derive upper bounds from the max-flow min-cut theorem. It is shown that lower and upper bounds meet asymptotically as the number of nodes in the network goes to infinity, thus proving that the capacity of the wireless network with n nodes under the relay traffic pattern behaves like log n bits per second. This demonstrates also that network coding is essential: under the point-to-point coding assumption considered by , the achievable rate is constant, independent of the number of nodes. Moreover, the result of this paper has implications' and extensions to fading channels and to sensor networks.",
"We introduce a new class of problems called network information flow which is inspired by computer network applications. Consider a point-to-point communication network on which a number of information sources are to be multicast to certain sets of destinations. We assume that the information sources are mutually independent. The problem is to characterize the admissible coding rate region. This model subsumes all previously studied models along the same line. We study the problem with one information source, and we have obtained a simple characterization of the admissible coding rate region. Our result can be regarded as the max-flow min-cut theorem for network information flow. Contrary to one's intuition, our work reveals that it is in general not optimal to regard the information to be multicast as a \"fluid\" which can simply be routed or replicated. Rather, by employing coding at the nodes, which we refer to as network coding, bandwidth can in general be saved. This finding may have significant impact on future design of switching systems.",
"In this paper, we study the asymptotic throughput capacity of a static multi-channel multi-interface infrastructure wireless mesh network (InfWMN) wherein each infrastructure node has m interfaces and c channels of unequal bandwidth are available. First, an upper bound on the InfWMN per-user capacity is established. Then, the feasible lower bound is derived by construction. We prove that both lower and upper bounds are tight. We limit our analysis for more practical case of @math . However, for the asymptotic upper bound, our analysis can be used for the general case in which there is no constraint on m and c. Our study shows that in such a network with Nc randomly distributed mesh clients, Nr regularly placed mesh routers, and Ng gateways, the asymptotic per-client throughput capacity has different bounds, which depend on the ratio between the total available bandwidth for the network and the sum of m first greatest data rates of c available channels, i.e., @math . The results of this paper are more general compared to the existing published researches. In addition, in the case that @math , our results reduce to the previously reported studies. This implies that our study is comprehensive compared to the formerly published researches.",
"In this paper, we consider a class of single-source multicast relay networks. We assume that all outgoing channels of a node in the network to its neighbors are orthogonal while the incoming signals from its neighbors can interfere with each other. We first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. We show that the achievable rate of our scheme is within a constant bit gap from the information theoretic cut-set bound, where the constant depends only on the network topology, but not on the transmit power, noise variance, and channel gains. This is similar to a recent result by Avestimehr, Diggavi, and Tse, who showed an approximate capacity characterization for general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. Using the idea used in the Gaussian case, we also consider a linear finite-field symmetric network with interference and characterize its capacity using a linear coding scheme.",
"Introduction. The problem discussed in this paper was formulated by T. Harris as follows: \"Consider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, find a maximal flow from one given city to the other.\" While this can be set up as a linear programming problem with as many equations as there are cities in the network, and hence can be solved by the simplex method (1), it turns out that in the cases of most practical interest, where the network is planar in a certain restricted sense, a much simpler and more efficient hand computing procedure can be described. In §1 we prove the minimal cut theorem, which establishes that an obvious upper bound for flows over an arbitrary network can always be achieved. The proof is non-constructive. However, by specializing the network (§2), we obtain as a consequence of the minimal cut theorem an effective computational scheme. Finally, we observe in §3 the duality between the capacity problem and that of finding the shortest path, via a network, between two given points.",
"We consider the problem of determining rates of growth for the maximum stable throughput achievable in dense wireless networks. We formulate this problem as one of finding maximum flows on random unit-disk graphs. Equipped with the max-flow min-cut theorem as our basic analysis tool, we obtain rates of growth under three models of communication: (a) omnidirectional transmissions; (b) \"simple\" directional transmissions, in which sending nodes generate a single beam aimed at a particular receiver; and (c) \"complex\" directional transmissions, in which sending nodes generate multiple beams aimed at multiple receivers. Our main finding is that an increase of Θlog2n in maximum stable throughput is all that can be achieved by allowing arbitrarily complex signal processing (in the form of generation of directed beams) at the transmitters and receivers. We conclude therefore that neither directional antennas, nor the ability to communicate simultaneously with multiple nodes, can be expected in practice to effectively circumvent the constriction on capacity in dense networks that results from the geometric layout of nodes in space.",
"In most network models for quality of service support, the communication links interconnecting the switches and gateways are assumed to have fixed bandwidth and zero error rate. This assumption of steadiness, especially in a heterogeneous internet-working environment, might be invalid owing to subnetwork multiple-access mechanism, link-level flow error control, and user mobility. Techniques are presented in this paper to characterize and analyze work-conserving communication nodes with varying output rate. In the deterministic approach, the notion of \"fluctuation constraint,\" analogous to the \"burstiness constraint\" for traffic characterization, is introduced to characterize the node. In the statistical approach, the variable-rate output is modelled as an \"exponentially bounded fluctuation\" process in a way similar to the \"exponentially bounded burstiness\" method for traffic modelling. Based on these concepts, deterministic and statistical bounds on queue size and packet delay in isolated variable-rate communication server-nodes are derived, including cases of single-input and multiple-input under first-come-first-serve queueing. Queue size bounds are shown to be useful for buffer requirement and packet loss probability estimation at individual nodes. Our formulations also facilitate the computation of end-to-end performance bounds across a feedforward network of variable-rate server-nodes. Several numerical examples of interest are given in the discussion.",
"We present a novel modeling approach to derive closed-form throughput expressions for CSMA networks with hidden terminals. The key modeling principle is to break the interdependence of events in a wireless network using conditional expressions that capture the effect of a specific factor each, yet preserve the required dependences when combined together. Different from existing models that use numerical aggregation techniques, our approach is the first to jointly characterize the three main critical factors affecting flow throughput (referred to as hidden terminals, information asymmetry and flow-in-the-middle) within a single analytical expression. We have developed a symbolic implementation of the model, that we use for validation against realistic simulations and experiments with real wireless hardware, observing high model accuracy in the evaluated scenarios. The derived closed-form expressions enable new analytical studies of capacity and protocol performance that would not be possible with prior models. We illustrate this through an application of network utility maximization in complex networks with collisions, hidden terminals, asymmetric interference and flow-in-the-middle instances. Despite that such problematic scenarios make utility maximization a challenging problem, the model-based optimization yields vast fairness gains and an average per-flow throughput gain higher than 500 with respect to 802.11 in the evaluated networks.",
"The class of Gupta-Kumar results, which predict the throughput capacity in wireless networks, is restricted to asymptotic regimes. This tutorial presents a methodology to address a corresponding non-asymptotic analysis based on the framework of the stochastic network calculus, in a rigorous mathematical manner. In particular, we derive explicit closed-form results on the distribution of the end-to-end capacity and delay, for a fixed source-destination pair, in a network with broad assumptions on its topology and degree of spatial correlations. The results are non-asymptotic in that they hold for finite time scales and network sizes, as well as bursty arrivals. The generality of the results enables the research of several interesting problems, concerning for instance the effects of time scales or randomness in topology on the network capacity.",
"This paper presents an analytical study of the stable throughput for multiple broadcast sessions in a multi-hop wireless tandem network with random access. Intermediate nodes leverage on the broadcast nature of wireless medium access to perform inter-session network coding among different flows. This problem is challenging due to the interaction among nodes, and has been addressed so far only in the saturated mode where all nodes always have packet to send, which results in infinite packet delay. In this paper, we provide a novel model based on multi-class queueing networks to investigate the problem in unsaturated mode. We devise a theoretical framework for computing maximum stable throughput of network coding for a slotted ALOHA-based random access system. Using our formulation, we compare the performance of network coding and traditional routing. Our results show that network coding leads to high throughput gain over traditional routing. We also define a new metric, network unbalance ratio (NUR), that indicates the unbalance status of the utilization factors at different nodes. We show that although the throughput gain of the network coding compared to the traditional routing decreases when the number of nodes tends to infinity, NUR of the former outperforms the latter. We carry out simulations to confirm our theoretical analysis.",
"Gupta and Kumar (2000) introduced a random model to study throughput scaling in a wireless network with static nodes, and showed that the throughput per source-destination pair is Theta(1 radic(nlogn)). Grossglauser and Tse (2001) showed that when nodes are mobile it is possible to have a constant throughput scaling per source-destination pair. In most applications, delay is also a key metric of network performance. It is expected that high throughput is achieved at the cost of high delay and that one can be improved at the cost of the other. The focus of this paper is on studying this tradeoff for wireless networks in a general framework. Optimal throughput-delay scaling laws for static and mobile wireless networks are established. For static networks, it is shown that the optimal throughput-delay tradeoff is given by D(n)=Theta(nT(n)), where T(n) and D(n) are the throughput and delay scaling, respectively. For mobile networks, a simple proof of the throughput scaling of Theta(1) for the Grossglauser-Tse scheme is given and the associated delay scaling is shown to be Theta(nlogn). The optimal throughput-delay tradeoff for mobile networks is also established. To capture physical movement in the real world, a random-walk (RW) model for node mobility is assumed. It is shown that for throughput of Oscr(1 radic(nlogn)), which can also be achieved in static networks, the throughput-delay tradeoff is the same as in static networks, i.e., D(n)=Theta(nT(n)). Surprisingly, for almost any throughput of a higher order, the delay is shown to be Theta(nlogn), which is the delay for throughput of Theta(1). Our result, thus, suggests that the use of mobility to increase throughput, even slightly, in real-world networks would necessitate an abrupt and very large increase in delay.",
"Abstract This paper proposes a mathematical justification of the phenomenon of extreme congestion at a very limited number of nodes in very large networks. It is argued that this phenomenon occurs as a combination of the negative curvature property of the network together with minimum-length routing. More specifically, it is shown that in a large n-dimensional hyperbolic ball B of radius R viewed as a roughly similar model of a Gromov hyperbolic network, the proportion of traffic paths transiting through a small ball near the center is Θ(1), whereas in a Euclidean ball, the same proportion scales as Θ(1 R n−1). This discrepancy persists for the traffic load, which at the center of the hyperbolic ball scales as volume2(B), whereas the same traffic load scales as volume1+1 n (B) in the Euclidean ball. This provides a theoretical justification of the experimental exponent discrepancy observed by Narayan and Saniee between traffic loads in Gromov-hyperbolic networks from the Rocketfuel database and synthetic ..."
]
} |
cs0503061 | 2949324727 | We introduce the use, monitoring, and enforcement of integrity constraints in trust management-style authorization systems. We consider what portions of the policy state must be monitored to detect violations of integrity constraints. Then we address the fact that not all participants in a trust management system can be trusted to assist in such monitoring, and show how many integrity constraints can be monitored in a conservative manner so that trusted participants detect and report if the system enters a policy state from which evolution in unmonitored portions of the policy could lead to a constraint violation. | In , we listed several papers presenting various trust management systems. None of these incorporates a notion of integrity constraints. The work in trust management that is most closely related is @cite_13 . As we discussed at the beginning of , that work is complementary to ours. It studies the problem of determining, given a state , a role monitor , and a constraint @math , whether there is a reachable state in which @math is violated. By contrast, we analyze the problem of which roles must have their definitions monitored to detect when such a state is entered. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2170496240"
],
"abstract": [
"We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application. This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services."
]
} |
cs0502003 | 1709954365 | We consider the simulation of wireless sensor networks (WSN) using a new approach. We present Shawn, an open-source discrete-event simulator that has considerable differences to all other existing simulators. Shawn is very powerful in simulating large scale networks with an abstract point of view. It is, to the best of our knowledge, the first simulator to support generic high-level algorithms as well as distributed protocols on exactly the same underlying networks. | The "TinyOS mote simulator" (TOSSIM) simulates TinyOS @cite_16 motes at the bit level and is hence a platform-specific simulator/emulator. It directly compiles code written for TinyOS to an executable file that can be run on standard PC equipment. Using this technique, developers can test their implementation without having to deploy it on real sensor network hardware. TOSSIM can run simulations with a few thousand virtual TinyOS nodes. It ships with a GUI ("TinyViz") that can visualize and interact with running simulations. Just recently, PowerTOSSIM @cite_3 , a power modeling extension, has been integrated into TOSSIM. PowerTOSSIM models the power consumed by TinyOS applications and includes a detailed model of the power consumption of the Mica2 @cite_13 motes. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_3"
],
"mid": [
"2110936068",
"2623629680",
"2124567303"
],
"abstract": [
"Developing sensor network applications demands a new set of tools to aid programmers. A number of simulation environments have been developed that provide varying degrees of scalability, realism, and detail for understanding the behavior of sensor networks. To date, however, none of these tools have addressed one of the most important aspects of sensor application design: that of power consumption. While simple approximations of overall power usage can be derived from estimates of node duty cycle and communication rates, these techniques often fail to capture the detailed, low-level energy requirements of the CPU, radio, sensors, and other peripherals. In this paper, we present, a scalable simulation environment for wireless sensor networks that provides an accurate, per-node estimate of power consumption. PowerTOSSIM is an extension to TOSSIM, an event-driven simulation environment for TinyOS applications. In PowerTOSSIM, TinyOS components corresponding to specific hardware peripherals (such as the radio, EEPROM, LEDs, and so forth) are instrumented to obtain a trace of each device's activity during the simulation runPowerTOSSIM employs a novel code-transformation technique to estimate the number of CPU cycles executed by each node, eliminating the need for expensive instruction-level simulation of sensor nodes. PowerTOSSIM includes a detailed model of hardware energy consumption based on the Mica2 sensor node platform. Through instrumentation of actual sensor nodes, we demonstrate that PowerTOSSIM provides accurate estimation of power consumption for a range of applications and scales to support very large simulations.",
"Convolutional neural networks (CNNs) have become the dominant neural network architecture for solving many state-of-the-art (SOA) visual processing tasks. Even though graphical processing units are most often used in training and deploying CNNs, their power efficiency is less than 10 GOp s W for single-frame runtime inference. We propose a flexible and efficient CNN accelerator architecture called NullHop that implements SOA CNNs useful for low-power and low-latency application scenarios. NullHop exploits the sparsity of neuron activations in CNNs to accelerate the computation and reduce memory requirements. The flexible architecture allows high utilization of available computing resources across kernel sizes ranging from @math to @math . NullHop can process up to 128 input and 128 output feature maps per layer in a single pass. We implemented the proposed architecture on a Xilinx Zynq field-programmable gate array (FPGA) platform and presented the results showing how our implementation reduces external memory transfers and compute time in five different CNNs ranging from small ones up to the widely known large VGG16 and VGG19 CNNs. Postsynthesis simulations using Mentor Modelsim in a 28-nm process with a clock frequency of 500 MHz show that the VGG19 network achieves over 450 GOp s. By exploiting sparsity, NullHop achieves an efficiency of 368 , maintains over 98 utilization of the multiply–accumulate units, and achieves a power efficiency of over 3 TOp s W in a core area of 6.3 mm2. As further proof of NullHop’s usability, we interfaced its FPGA implementation with a neuromorphic event camera for real-time interactive demonstrations.",
"Power dissipation has become one of the most critical factors for the continued development of both high-end and low-end computer systems. We present a complete system power simulator, called SoftWatt, that models the CPU, memory hierarchy, and a low-power disk subsystem and quantifies the power behavior of both the application and operating system. This tool, built on top of the SimOS infrastructure, uses validated analytical energy models to identify the power hotspots in the system components, capture relative contributions of the user and kernel code to the system power profile, identify the power-hungry operating system services and characterize the variance in kernel power profile with respect to workload. Our results using Spec JVM98 benchmark suite emphasize the importance of complete system simulation to understand the power impact of architecture and operating system on application execution."
]
} |
cs0502025 | 2950320586 | The software approach to developing Digital Signal Processing (DSP) applications brings some great features such as flexibility, re-usability of resources and easy upgrading of applications. However, it requires long and tedious tests and verification phases because of the increasing complexity of the software. This implies the need of a software programming environment capable of putting together DSP modules and providing facilities to debug, verify and validate the code. The objective of the work is to provide such facilities as simulation and verification for developing DSP software applications. This led us to develop an extension toolkit, Epspectra, built upon Pspectra, one of the first toolkits available to design basic software radio applications on standard PC workstations. In this paper, we first present Epspectra, an Esterel-based extension of Pspectra that makes the design and implementation of portable DSP applications easier. It allows drastic reduction of testing and verification time while requiring relatively little expertise in formal verification methods. Second, we demonstrate the use of Epspectra, taking as an example the radio interface part of a GSM base station. We also present the verification procedures for the three safety properties of the implementation programs which have complex control-paths. These have to obey strict scheduling rules. In addition, Epspectra achieves the verification of the targeted application since the same model is used for the executable code generation and for the formal verification. | @cite_3 proposed to dynamically select a suitable partitioning according to the property to be proved, avoiding exponential explosion of the analysis caused by in-depth detailed partitioning. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1582451030"
],
"abstract": [
"We apply linear relation analysis [CH78, HPR97] to the verification of declarative synchronous programs [Hal98]. In this approach, state partitioning plays an important role: on one hand the precision of the results highly depends on the fineness of the partitioning; on the other hand, a too much detailed partitioning may result in an exponential explosion of the analysis. In this paper, we propose to dynamically select a suitable partitioning according to the property to be proved."
]
} |
cs0502056 | 2949328284 | The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded substantial funding for digital library research. In this paper we apply social network analysis to the co-authorship network of past digital library conferences and define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL). | Social network analysis is based on the premise that the relationships between social actors can be described by a graph. The graph's nodes represent social actors and the graph's edges connect pairs of nodes and thus represent social interactions. This representation allows researchers to apply graph theory @cite_12 to the analysis of what would otherwise be considered an inherently elusive and poorly understood problem: the tangled web of our social interactions. In this article, we will assume such a graph representation and use the terms , , and interchangeably. The terms , , and are also used interchangeably. | {
"cite_N": [
"@cite_12"
],
"mid": [
"1584092895"
],
"abstract": [
"Social network analysis uses techniques from graph theory to analyze the structure of relationships among social actors such as individuals or groups. We investigate the effect of the layout of a social network on the inferences drawn by observers about the number of social groupings evident and the centrality of various actors in the network. We conducted an experiment in which eighty subjects provided answers about three drawings. The subjects were not told that the drawings were chosen from five different layouts of the same graph. We found that the layout has a significant effect on their inferences and present some initial results about the way certain Euclidean features will affect perceptions of structural features of the network. There is no “best” layout for a social network; when layouts are designed one must take into account the most important features of the network to be presented as well as the network itself."
]
} |
cs0502056 | 2949328284 | The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded substantial funding for digital library research. In this paper we apply social network analysis to the co-authorship network of past digital library conferences and define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in Joint Conference on Digital Libraries (JCDL). | An early example of a co-authorship network is the Erdős Number Project, in which the smallest number of co-authorship links between any individual mathematician and the Hungarian mathematician Erdős is calculated @cite_23 . (A mathematician's "Erdős Number" is analogous to an actor's "Bacon Number".) Newman studied and compared the co-authorship graph of arXiv, Medline, SPIRES, and NCSTRL @cite_19 @cite_3 and found a number of network differences between experimental and theoretical disciplines. Co-authorship analysis has also been applied to various ACM conferences: Information Retrieval (SIGIR) @cite_9 , Management of Data (SIGMOD) @cite_17 and Hypertext @cite_14 , as well as mathematics and neuroscience @cite_6 , information systems @cite_5 , and the field of social network analysis @cite_27 . International co-authorship networks have been studied in the Journal of the American Society for Information Science & Technology (JASIST) @cite_4 and the Science Citation Index @cite_33 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_17"
],
"mid": [
"1608828583",
"2049086518",
"2145845082",
"2382843593",
"2068557399",
"1972830890",
"2129364433",
"2162010993",
"1991638834",
"2024296872",
"2109480754"
],
"abstract": [
"This article introduces a suite of approaches and measures to study the impact of co-authorship teams based on the number of publications and their citations on a local and global scale. In particular, we present a novel weighted graph representation that encodes coupled author-paper networks as a weighted co-authorship graph. This weighted graph representation is applied to a dataset that captures the emergence of a new field of science and comprises 614 articles published by 1036 unique authors between 1974 and 2004. To characterize the properties and evolution of this field, we first use four different measures of centrality to identify the impact of authors. A global statistical analysis is performed to characterize the distribution of paper production and paper citations and its correlation with the co-authorship team size. The size of co-authorship clusters over time is examined. Finally, a novel local, author-centered measure based on entropy is applied to determine the global evolution of the field and the identification of the contribution of a single author's impact across all of its co-authorship relations. A visualization of the growth of the weighted co-author network, and the results obtained from the statistical analysis indicate a drift toward a more cooperative, global collaboration process as the main drive in the production of scientific knowledge. © 2005 Wiley Periodicals, Inc. Complexity 10: 57–67, 2005",
"In this study, we propose and validate social networks based theoretical model for exploring scholars' collaboration (co-authorship) network properties associated with their citation-based research performance (i.e., g-index). Using structural holes theory, we focus on how a scholar's egocentric network properties of density, efficiency and constraint within the network associate with their scholarly performance. For our analysis, we use publication data of high impact factor journals in the field of ''Information Science & Library Science'' between 2000 and 2009, extracted from Scopus. The resulting database contained 4837 publications reflecting the contributions of 8069 authors. Results from our data analysis suggest that research performance of scholars' is significantly correlated with scholars' ego-network measures. In particular, scholars with more co-authors and those who exhibit higher levels of betweenness centrality (i.e., the extent to which a co-author is between another pair of co-authors) perform better in terms of research (i.e., higher g-index). Furthermore, scholars with efficient collaboration networks who maintain a strong co-authorship relationship with one primary co-author within a group of linked co-authors (i.e., co-authors that have joint publications) perform better than those researchers with many relationships to the same group of linked co-authors.",
"The co-authorship network of scientists represents a prototype of complex evolving networks. In addition, it offers one of the most extensive database to date on social networks. By mapping the electronic database containing all relevant journals in mathematics and neuro-science for an 8-year period (1991–98), we infer the dynamic and the structural mechanisms that govern the evolution and topology of this complex system. Three complementary approaches allow us to obtain a detailed characterization. First, empirical measurements allow us to uncover the topological measures that characterize the network at a given moment, as well as the time evolution of these quantities. The results indicate that the network is scale-free, and that the network evolution is governed by preferential attachment, affecting both internal and external links. However, in contrast with most model predictions the average degree increases in time, and the node separation decreases. Second, we propose a simple model that captures the network's time evolution. In some limits the model can be solved analytically, predicting a two-regime scaling in agreement with the measurements. Third, numerical simulations are used to uncover the behavior of quantities that could not be predicted analytically. The combined numerical and analytical results underline the important role internal links play in determining the observed scaling behavior and network topology. The results and methodologies developed in the context of the co-authorship network could be useful for a systematic study of other complex evolving networks as well, such as the world wide web, Internet, or other social networks.",
"We present insights from a bibliometric analysis and scientific paper publication mining of 551 papers in Requirements Engineering (RE) series of conference (11 years from 2005 to 2015). We study cross-disciplinary and interdisciplinary nature of RE re- search by analyzing the cited disciplines in the reference section of each paper. We apply topic modeling on a corpus consisting of 551 abstracts and extract topics as frequently co-occurring and connected terms. We use topic modeling to study the structure and composition of RE research and analyze popular topics in industry as well as research track. Co-authorship in papers is an indicator of collaboration and interaction between scientists as well as institutions and we analyze co-authorship data to in- vestigate university-industry collaboration, internal and external collaborations. We present results on the distribution of the num- ber of co-authors in each paper as well as distribution of authors across world regions. We present our analysis on the public or proprietary dataset as well as the domain of the dataset used in studies published in Requirements Engineering (RE) series of conferences.",
"As part of the celebration of twenty-five years of ACM SIGIR conferences we performed a content analysis of all papers published in the proceedings of SIGIR conferences, including those from 2002. From this we determined, using information retrieval approaches of course, which topics had come and gone over the last two and a half decades, and which topics are currently \"hot\". We also performed a co-authorship analysis among authors of the 853 SIGIR conference papers to determine which author is the most \"central\" in terms of a co-authorship graph and is our equivalent of Paul Erdos in Mathematics. In the first section we report on the content analysis, leading to our prediction as to the most topical paper likely to appear at SIGIR2003. In the second section we present details of our co-authorship analysis, revealing who is the \"Christopher Lee\" of SIGIR, and in the final section we give pointers to where readers who are SIGIR conference paper authors may find details of where they fit into the coauthorship graph.",
"Many studies on coauthorship networks focus on network topology and network statistical mechanics. This article takes a different approach by studying micro-level network properties with the aim of applying centrality measures to impact analysis. Using coauthorship data from 16 journals in the field of library and information science (LIS) with a time span of 20 years (1988–2007), we construct an evolving coauthorship network and calculate four centrality measures (closeness centrality, betweenness centrality, degree centrality, and PageRank) for authors in this network. We find that the four centrality measures are significantly correlated with citation counts. We also discuss the usability of centrality measures in author ranking and suggest that centrality measures can be useful indicators for impact analysis. © 2009 Wiley Periodicals, Inc.",
"A critical aspect of malware forensics is authorship analysis. The successful outcome of such analysis is usually determined by the reverse engineer’s skills and by the volume and complexity of the code under analysis. To assist reverse engineers in such a tedious and error-prone task, it is desirable to develop reliable and automated tools for supporting the practice of malware authorship attribution. In a recent work, machine learning was used to rank and select syntax-based features such as n-grams and flow graphs. The experimental results showed that the top ranked features were unique for each author, which was regarded as an evidence that those features capture the author’s programming styles. In this paper, however, we show that the uniqueness of features does not necessarily correspond to authorship. Specifically, our analysis demonstrates that many “unique” features selected using this method are clearly unrelated to the authors’ programming styles, for example, unique IDs or random but unique function names generated by the compiler; furthermore, the overall accuracy is generally unsatisfactory. Motivated by this discovery, we propose a layered Onion Approach for Binary Authorship Attribution called OBA2. The novelty of our approach lies in the three complementary layers: preprocessing, syntax-based attribution, and semantic-based attribution. Experiments show that our method produces results that not only are more accurate but have a meaningful connection to the authors’ styles. a 2014 The Author. Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http: creativecommons.org licenses by-nc-nd 3.0 ).",
"Recent graph-theoretic approaches have demonstrated remarkable successes for ranking networked entities, but most of their applications are limited to homogeneous networks such as the network of citations between publications. This paper proposes a novel method for co-ranking authors and their publications using several networks: the social network connecting the authors, the citation network connecting the publications, as well as the authorship network that ties the previous two together. The new co-ranking framework is based on coupling two random walks, that separately rank authors and documents following the PageRankparadigm. As a result, improved rankings of documents and their authors depend on each other in a mutually reinforcing way, thus taking advantage of the additional information implicit in the heterogeneous network of authors and documents.",
"Authorship analysis by means of textual features is an important task in linguistic studies. We employ complex networks theory to tackle this disputed problem. In this work, we focus on some measurable quantities of word co-occurrence network of each book for authorship characterization. Based on the network features, attribution probability is defined for authorship identification. Furthermore, two scaling exponents, q-parameter and α-exponent, are combined to classify personal writing style with acceptable high resolution power. The q-parameter, generally known as the nonextensivity measure, is calculated for degree distribution and the α-exponent comes from a power law relationship between number of links and number of nodes in the co-occurrence network constructed for different books written by each author. The applicability of the presented method is evaluated in an experiment with thirty six books of five Persian litterateurs. Our results show high accuracy rate in authorship attribution.",
"The objective of this work was to test the relationship between characteristics of an author's network of coauthors to identify which enhance the h-index. We randomly selected a sample of 238 authors from the Web of Science, calculated their h-index as well as the h-index of all co-authors from their h-index articles, and calculated an adjacency matrix where the relation between co-authors is the number of articles they published together. Our model was highly predictive of the variability in the h-index (R 2 = 0.69). Most of the variance was explained by number of co-authors. Other significant variables were those associated with highly productive co-authors. Contrary to our hypothesis, network structure as measured by components was not predictive. This analysis suggests that the highest h-index will be achieved by working with many co-authors, at least some with high h-indexes themselves. Little improvement in h-index is to be gained by structuring a co-author network to maintain separate research communities.",
"The problem of predicting links or interactions between objects in a network, is an important task in network analysis. Along this line, link prediction between co-authors in a co-author network is a frequently studied problem. In most of these studies, authors are considered in a homogeneous network, .e., only one type of objects(author type) and one type of links (co-authorship) exist in the network. However, in a real bibliographic network, there are multiple types of objects ( .g., venues, topics, papers) and multiple types of links among these objects. In this paper, we study the problem of co-author relationship prediction in the heterogeneous bibliographic network, and a new methodology called, .e., meta path-based relationship prediction model, is proposed to solve this problem. First, meta path-based topological features are systematically extracted from the network. Then, a supervised model is used to learn the best weights associated with different topological features in deciding the co-author relationships. We present experiments on a real bibliographic network, the DBLP network, which show that metapath-based heterogeneous topological features can generate more accurate prediction results as compared to homogeneous topological features. In addition, the level of significance of each topological feature can be learned from the model, which is helpful in understanding the mechanism behind the relationship building."
]
} |
cs0502088 | 2950691337 | In [Hitzler and Wendt 2002, 2005], a new methodology has been proposed which allows to derive uniform characterizations of different declarative semantics for logic programs with negation. One result from this work is that the well-founded semantics can formally be understood as a stratified version of the Fitting (or Kripke-Kleene) semantics. The constructions leading to this result, however, show a certain asymmetry which is not readily understood. We will study this situation here with the result that we will obtain a coherent picture of relations between different semantics for normal logic programs. | Loyer, Spyratos and Stamate, in @cite_11 , presented a parametrized approach to different semantics. It allows to substitute the preference for falsehood by preference for truth in the stable and well-founded semantics, but uses entirely different means than presented here. Its purpose is also different --- while we focus on the strenghtening of the mathematical foundations for the field, the work in @cite_11 is motivated by the need to deal with open vs. closed world assumption in some application settings. The exact relationship between their approach and ours remains to be worked out. | {
"cite_N": [
"@cite_11"
],
"mid": [
"1983914828"
],
"abstract": [
"In this paper we recast the classical Darondeau---Degano's causal semantics of concurrency in a coalgebraic setting, where we derive a compact model. Our construction is inspired by the one of Montanari and Pistore yielding causal automata, but we show that it is instance of an existing categorical framework for modeling the semantics of nominal calculi, whose relevance is further demonstrated. The key idea is to represent events as names, and the occurrence of a new event as name generation. We model causal semantics as a coalgebra over a presheaf, along the lines of the Fiore---Turi approach to the semantics of nominal calculi. More specifically, we take a suitable category of finite posets, representing causal relations over events, and we equip it with an endofunctor that allocates new events and relates them to their causes. Presheaves over this category express the relationship between processes and causal relations among the processes' events. We use the allocation operator to define a category of well-behaved coalgebras: it models the occurrence of a new event along each transition. Then we turn the causal transition relation into a coalgebra in this category, where labels only exhibit maximal events with respect to the source states' poset, and we show that its bisimilarity is essentially Darondeau---Degano's strong causal bisimilarity. This coalgebra is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, where states only retain the poset of the most recent events for each atomic subprocess, and are isomorphic up to order-preserving permutations. Remarkably, this reduction of states is automatically performed along the equivalence."
]
} |
cs0501006 | 2949144408 | The paper considers various formalisms based on Automata, Temporal Logic and Regular Expressions for specifying queries over sequences. Unlike traditional binary semantics, the paper presents a similarity based semantics for these formalisms. More specifically, a distance measure in the range [0,1] is associated with a sequence, query pair denoting how closely the sequence satisfies the query. These measures are defined using a spectrum of normed vector distance measures. Various distance measures based on the syntax and the traditional semantics of the query are presented. Efficient algorithms for computing these distance measures are presented. These algorithms can be employed for retrieval of sequences from a database that closely satisfy a given query. | There have been various formalisms for representing uncertainty (see @cite_21 ) such as probability measures, Dempster-Shafer belief functions, plausibility measures, etc. Our similarity measures for temporal logics and automata can possibly be categorized under plausibility measures and they are quite different from probability measures. The book @cite_21 also describes logics for reasoning about uncertainty. Also, probabilistic versions of Propositional Dynamic Logics were presented in @cite_25 . However, these works do not consider logics and formalisms on sequences, and do not use the various vector distance measures considered in this paper. | {
"cite_N": [
"@cite_21",
"@cite_25"
],
"mid": [
"2169160043",
"2021343121"
],
"abstract": [
"Different formalisms for solving problems of inference under uncertainty have been developed so far. The most popular numerical approach is the theory of Bayesian inference [Lauritzen and Spiegelhalter, 1988]. More general approaches are the Dempster-Shafer theory of evidence [Shafer, 1976], and possibility theory [Dubois and Prade, 1990], which is closely related to fuzzy systems. For these systems computer implementations are available. In competition with these numerical methods are different symbolic approaches. Many of them are based on different types of non-monotonic logic.",
"This thesis presents a logical formalism for representing and reasoning with probabilistic knowledge. The formalism differs from previous efforts in this area in a number of ways. Most previous work has investigated ways of assigning probabilities to the sentences of a logical language. Such an assignment fails to capture an important class of probabilistic assertions, empirical generalizations. Such generalizations are particularly important for AI, since they can be accumulated through experience with the world. Thus, they offer the possibility of reasoning in very general domains, domains where no experts are available to gather subjective probabilities from. A logic is developed which can represent these empirical generalizations. Reasoning can be performed through a proof theory which is shown to be sound and complete. Furthermore, the logic can represent and reason with a very general set of assertions, including many non-numeric assertions. This also is important for AI as numbers are usually not available. The logic makes it clear that there is an essential difference between empirical, or statistical, probabilities and probabilities assigned to sentences, e.g., subjective probabilities. The second part of the formalism is an inductive mechanism for assigning degrees of belief to sentences based on the empirical generalizations expressed in the logic. these degrees of belief have a strong advantage over subjective probabilities: they are founded on objective statistical knowledge about the world. Furthermore, the mechanism of assigning degrees of belief gives a natural answer to the question \"Where do the probabilities come from:\" they come from our experience with the world. The two parts of the formalism offer combined, interacting, but still clearly separated, plausible inductive inference and sound deductive inference."
]
} |
cs0501006 | 2949144408 | The paper considers various formalisms based on Automata, Temporal Logic and Regular Expressions for specifying queries over sequences. Unlike traditional binary semantics, the paper presents a similarity based semantics for these formalisms. More specifically, a distance measure in the range [0,1] is associated with a sequence, query pair denoting how closely the sequence satisfies the query. These measures are defined using a spectrum of normed vector distance measures. Various distance measures based on the syntax and the traditional semantics of the query are presented. Efficient algorithms for computing these distance measures are presented. These algorithms can be employed for retrieval of sequences from a database that closely satisfy a given query. | Since the appearance of a preliminary version of this paper @cite_19 , other non-probabilistic quantitative versions of temporal logic have been proposed in @cite_23 @cite_13 . Both these works consider infinite computations and branching time temporal logics. The similarity measure they give, for the linear time fragment of their logic, corresponds to the infinity norm among the vector distance functions. On the contrary, we consider formalisms and logics on finite sequences and give similarity based measures that use a spectrum of vector distance measures. We also present methods for computing similarity values of a database sequence with respect to queries given in the different formalisms. | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_23"
],
"mid": [
"2339807279",
"2049399166",
"2140028191"
],
"abstract": [
"This paper introduces a framework for inference of timed temporal logic properties from data. The dataset is given as a finite set of pairs of finite-time system traces and labels, where the labels indicate whether the traces exhibit some desired behavior (e.g., a ship traveling along a safe route). We propose a decision-tree based approach for learning signal temporal logic classifiers. The method produces binary decision trees that represent the inferred formulae. Each node of the tree contains a test associated with the satisfaction of a simple formula, optimally tuned from a predefined finite set of primitives. Optimality is assessed using heuristic impurity measures, which capture how well the current primitive splits the data with respect to the traces' labels. We propose extensions of the usual impurity measures from machine learning literature to handle classification of system traces by leveraging upon the robustness degree concept. The proposed incremental construction procedure greatly improves the execution time and the accuracy compared to existing algorithms. We present two case studies that illustrate the usefulness and the computational advantages of the algorithms. The first is an anomaly detection problem in a maritime environment. The second is a fault detection problem in an automotive powertrain system.",
"In this paper, we consider the robust interpretation of Metric Temporal Logic (MTL) formulas over signals that take values in metric spaces. For such signals, which are generated by systems whose states are equipped with non-trivial metrics, for example continuous or hybrid, robustness is not only natural, but also a critical measure of system performance. Thus, we propose multi-valued semantics for MTL formulas, which capture not only the usual Boolean satisfiability of the formula, but also topological information regarding the distance, @e, from unsatisfiability. We prove that any other signal that remains @e-close to the initial one also satisfies the same MTL specification under the usual Boolean semantics. Finally, our framework is applied to the problem of testing formulas of two fragments of MTL, namely Metric Interval Temporal Logic (MITL) and closed Metric Temporal Logic (clMTL), over continuous-time signals using only discrete-time analysis. The motivating idea behind our approach is that if the continuous-time signal fulfills certain conditions and the discrete-time signal robustly satisfies the temporal logic specification, then the corresponding continuous-time signal should also satisfy the same temporal logic specification.",
"A unifying framework for the study of real-time logics is developed. In analogy to the untimed case, the underlying classical theory of timed state sequences is identified, it is shown to be nonelementarily decidable, and its complexity and expressiveness are used as a point of reference. Two orthogonal extensions of PTL (timed propositional temporal logic and metric temporal logic) that inherit its appeal are defined: they capture elementary, yet expressively complete, fragments of the theory of timed state sequences, and thus are excellent candidates for practical real-time specification languages."
]
} |
cs0412007 | 2140668275 | Mapping the Internet generally consists in sampling the network from a limited set of sources by using traceroute-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. In this paper, we explore these biases and provide a statistical analysis of their origin. We derive an analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with broad distributions of connectivity. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in network models with different topologies. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. Moreover, we characterize the level of redundancy and completeness of the exploration process as a function of the topological properties of the network. Finally, we study numerically how the fraction of vertices and edges discovered in the sampled graph depends on the particular deployments of probing sources. The results might hint at the steps toward more efficient mapping strategies. | In this section, we briefly review some recent works devoted to the sampling of graphs by shortest path probing procedures. @cite_20 have shown that biases can seriously affect the estimation of degree distributions. In particular, power-law like distributions can be observed for subgraphs of Erdős-Rényi random graphs when the subgraph is the product of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process. In a recent work @cite_23 , Clauset and Moore have given analytical foundations to the numerical work of @cite_20 . They have modeled the single source probing to all possible destinations using differential equations. For an Erdős-Rényi random graph with average degree @math , they have found that the connectivity distribution of the obtained spanning tree displays a power-law behavior @math , with an exponential cut-off setting in at a characteristic degree @math . | {
"cite_N": [
"@cite_23",
"@cite_20"
],
"mid": [
"2107648668",
"2120511087"
],
"abstract": [
"Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient. We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.",
"Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by found empirically that the resulting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson. In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both δ-regular and Poisson-distributed random graphs. Thus, our work puts the observations of on a rigorous footing, and extends them to nearly arbitrary degree distributions."
]
} |
cs0412007 | 2140668275 | Mapping the Internet generally consists in sampling the network from a limited set of sources by using traceroute-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. In this paper, we explore these biases and provide a statistical analysis of their origin. We derive an analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular, we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with broad distributions of connectivity. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in network models with different topologies. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. Moreover, we characterize the level of redundancy and completeness of the exploration process as a function of the topological properties of the network. Finally, we study numerically how the fraction of vertices and edges discovered in the sampled graph depends on the particular deployments of probing sources. The results might hint at the steps toward more efficient mapping strategies. | In a slightly different context, Petermann and De Los Rios have studied a traceroute-like procedure on various examples of scale-free graphs @cite_10 , showing that, in the case of a single source, power-law distributions with underestimated exponents are obtained. Analytical estimates of the measured exponents as a function of the true ones were also derived. Finally, a recent preprint by Guillaume and Latapy @cite_27 reports on shortest-path explorations of synthetic graphs, focusing on the comparison of the properties of the resulting sampled graph with those of the original network. The proportion of discovered vertices and edges in the graph as a function of the number of sources and targets also gives hints for optimizing the exploration process. | {
"cite_N": [
"@cite_27",
"@cite_10"
],
"mid": [
"1540064387",
"2107648668"
],
"abstract": [
"Mapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at the steps toward more efficient mapping strategies.",
"Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient. We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present. When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias."
]
} |
cond-mat0412368 | 2949195487 | Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them, however, is generally expensive. We propose here a measure of similarities between vertices based on random walks which has several important advantages: it captures well the community structure in a network, it can be computed efficiently, it works at various scales, and it can be used in an agglomerative algorithm to compute efficiently the community structure of a network. We propose such an algorithm which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2 log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and edges in the input graph). Experimental evaluation shows that our algorithm surpasses previously proposed ones concerning the quality of the obtained community structures and that it stands among the best ones concerning the running time. This is very promising because our algorithm can be improved in several ways, which we sketch at the end of the paper. | At present, one can process graphs with up to a few hundred thousand vertices using the method in @cite_17 . All other algorithms have more limited performance (they generally cannot handle more than a few thousand vertices). | {
"cite_N": [
"@cite_17"
],
"mid": [
"2606413522"
],
"abstract": [
"Processing a one trillion-edge graph has recently been demonstrated by distributed graph engines running on clusters of tens to hundreds of nodes. In this paper, we employ a single heterogeneous machine with fast storage media (e.g., NVMe SSD) and massively parallel coprocessors (e.g., Xeon Phi) to reach similar dimensions. By fully exploiting the heterogeneous devices, we design a new graph processing engine, named Mosaic, for a single machine. We propose a new locality-optimizing, space-efficient graph representation---Hilbert-ordered tiles, and a hybrid execution model that enables vertex-centric operations in fast host processors and edge-centric operations in massively parallel coprocessors. Our evaluation shows that for smaller graphs, Mosaic consistently outperforms other state-of-the-art out-of-core engines by 3.2-58.6x and shows comparable performance to distributed graph engines. Furthermore, Mosaic can complete one iteration of the Pagerank algorithm on a trillion-edge graph in 21 minutes, outperforming a distributed disk-based engine by 9.2×."
]
} |
cs0412021 | 2950884425 | A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency. | Lhomme @cite_15 defines arc B-consistency, which formalizes bounds propagation for both integer and real constraints, and proposes an efficient propagation algorithm implementing it, together with a complexity analysis and experimental results. However, his study focuses on constraints defined by numeric relations (i.e., numeric CSPs). Unlike our definition of CSPs, constraints in numeric CSPs cannot be given extensionally and must be defined by numeric relations, which can be interpreted in either the real or the finite integer domain. Numeric CSPs also restrict the domain of variables to be a single interval. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1548650523"
],
"abstract": [
"Many problems can be expressed in terms of a numeric constraint satisfaction problem over finite or continuous domains (numeric CSP). The purpose of this paper is to show that the consistency techniques that have been developed for CSPs can be adapted to numeric CSPs. Since the numeric domains are ordered the underlying idea is to handle domains only by their bounds. The semantics that have been elaborated, plus the complexity analysis and good experimental results, confirm that these techniques can be used in real applications."
]
} |
cs0412021 | 2950884425 | A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency. | Maher @cite_9 introduces the notion of propagation completeness, together with a general framework that unifies a wide range of consistency notions. These include hull consistency of real constraints and consistency of integer constraints. Propagation completeness aims to capture the timeliness property of propagation. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1594857028"
],
"abstract": [
"We develop a framework for addressing correctness and timeliness-of-propagation issues for reactive constraints - global constraints or user-defined constraints that are implemented through constraint propagation. The notion of propagation completeness is introduced to capture timeliness of constraint propagation. A generalized form of arc-consistency is formulated which unifies many local consistency conditions in the literature. We show that propagation complete implementations of reactive constraints achieve this arc-consistency when propagation quiesces. Finally, we use the framework to state and prove an impossibility result: that CHR cannot implement a common relation with a desirable degree of timely constraint propagation."
]
} |
cs0412021 | 2950884425 | A widely adopted approach to solving constraint satisfaction problems combines systematic tree search with constraint propagation for pruning the search space. Constraint propagation is performed by propagators implementing a certain notion of consistency. Bounds consistency is the method of choice for building propagators for arithmetic constraints and several global constraints in the finite integer domain. However, there has been some confusion in the definition of bounds consistency. In this paper we clarify the differences and similarities among the three commonly used notions of bounds consistency. | The application of bounds consistency is not limited to integer and real constraints. Bounds consistency has been formalized for solving set constraints @cite_13 , and more recently, multiset constraints @cite_12 . | {
"cite_N": [
"@cite_13",
"@cite_12"
],
"mid": [
"2085716817",
"1500096329"
],
"abstract": [
"Local consistency techniques have been introduced in logic programming in order to extend the application domain of logic programming languages. The existing languages based on these techniques consider arithmetic constraints applied to variables ranging over finite integer domains. This makes difficult a natural and concise modelling as well as an efficient solving of a class of NP-complete combinatorial search problems dealing with sets. To overcome these problems, we propose a solution which consists in extending the notion of integer domains to that of set domains (sets of sets). We specify a set domain by an interval whose lower and upper bounds are known sets, ordered by set inclusion. We define the formal and practical framework of a new constraint logic programming language over set domains, called Conjunto. Conjunto comprises the usual set operation symbols (∪, ∩, \), and the set inclusion relation (⊆). Set expressions built using the operation symbols are interpreted as relations (s ∪ s1 = s2, ...). In addition, Conjunto provides us with a set of constraints called graduated constraints (e.g. the set cardinality) which map sets onto arithmetic terms. This allows us to handle optimization problems by applying a cost function to the quantifiable, i.e., arithmetic, terms which are associated to set terms. The constraint solving in Conjunto is based on local consistency techniques using interval reasoning which are extended to handle set constraints. The main contribution of this paper concerns the formal definition of the language and its design and implementation as a practical language.",
"We study from a formal perspective the consistency and propagation of constraints involving multiset variables. That is, variables whose values are multisets. These help us model problems more naturally and can, for example, prevent introducing unnecessary symmetry into a model. We identify a number of different representations for multiset variables and compare them. We then propose a definition of local consistency for constraints involving multiset, set and integer variables. This definition is a generalization of the notion of bounds consistency for integer variables. We show how this local consistency property can be enforced by means of some simple inference rules which tighten bounds on the variables. We also study a number of global constraints on set and multiset variables. Surprisingly, unlike finite domain variables, the decomposition of global constraints over set or multiset variables often does not hinder constraint propagation."
]
} |
cs0412041 | 1670256845 | An efficient and flexible engine for computing fixed points is critical for many practical applications. In this paper, we firstly present a goal-directed fixed point computation strategy in the logic programming paradigm. The strategy adopts a tabled resolution (or memorized resolution) to mimic the efficient semi-naive bottom-up computation. Its main idea is to dynamically identify and record those clauses that will lead to recursive variant calls, and then repetitively apply those alternatives incrementally until the fixed point is reached. Secondly, there are many situations in which a fixed point contains a large or even infinite number of solutions. In these cases, a fixed point computation engine may not be efficient enough or feasible at all. We present a mode-declaration scheme which provides the capabilities to reduce a fixed point from a big solution set to a preferred small one, or from an infeasible infinite set to a finite one. The mode declaration scheme can be characterized as a meta-level operation over the original fixed point. We show the correctness of the mode declaration scheme. Thirdly, the mode-declaration scheme provides a new declarative method for dynamic programming, which is typically used for solving optimization problems. There is no need to define the value of an optimal solution recursively; instead, defining a general solution suffices. The optimal value as well as its corresponding concrete solution can be derived implicitly and automatically using a mode-directed fixed point computation engine. Finally, this fixed point computation engine has been successfully implemented in a commercial Prolog system. Experimental results are shown to indicate that the mode declaration improves both time and space performance in solving dynamic programming problems. | The huge implementation effort required for OLDT and SLG can be avoided by choosing alternative methods for tabled resolution that maintain a single computation tree similar to traditional SLD resolution, rather than maintaining a forest of SLD trees. SLDT resolution @cite_23 @cite_9 was the first attempt in this direction. The main idea behind SLDT is to steal the backtracking point---using the terminology in @cite_23 @cite_9 ---of the previous tabled call when a variant call is found, to avoid exploring the current recursive clause which may lead to non-termination. However, because the variant call avoids applying the same recursive clause as the previous call, the computation may be incomplete. Thus, repeated computation of tabled calls is required to make up for the lost answers and to make sure that the fixed point is complete. SLDT does not propose a complete theory regarding when a tabled call is completely evaluated; rather, it relies on blindly recomputing the tabled calls to ensure completeness. SLDT resolution was implemented in early versions of the B-Prolog system. However, this resolution strategy has recently been discarded; instead, a variant of DRA resolution @cite_22 has been adopted in the latest version of the B-Prolog system @cite_1 . | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_22",
"@cite_23"
],
"mid": [
"2070598037",
"1518621415",
"2096979400",
"2495949634"
],
"abstract": [
"SLD resolution with negation as finite failure (SLDNF) reflects the procedural interpretation of predicate calculus as a programming language and forms the computational basis for Prolog systems. Despite its advantages for stack-based memory management, SLDNF is often not appropriate for query evaluation for three reasons: (a) it may not terminate due to infinite positive recursion; (b) it may be terminate due to infinite recursion through negation; and (c) it may repeatedly evaluate the same literal in a rule body, leading to unacceptable performance. We address all three problems for goal-oriented query evaluation of general logic programs by presenting tabled evaluation with delaying, called SLG resolution. It has three distinctive features: (i) SLG resolution is a partial deduction procedure, consisting of seven fundamental transformations. A query is transformed step by step into a set of answers. The use of transformations separates logical issues of query evaluation from procedural ones. SLG allows an arbitrary computation rule for selecting a literal from a rule body and an arbitrary control strategy for selecting transformations to apply. (ii) SLG resolution is sound and search space complete with respect to the well-founded partial model for all non-floundering queries, and preserves all three-valued stable models. To evaluate a query under differenc three-valued stable models, SLG resolution can be enhanced by further processing of the answers of subgoals relevant to a query. (iii) SLG resolution avoids both positive and negative loops and always terminates for programs with the bounded-term-size property. It has a polynomial time data complexity for well-founded negation of function-free programs. Through a delaying mechanism for handling ground negative literals involved in loops, SLG resolution avoids the repetition of any of its derivation steps. Restricted forms of SLG resolution are identified for definite, locally stratified, and modularly stratified programs, shedding light on the role each transformation plays.",
"Tabled evaluations ensure termination of logic programs with finite models by keeping track of which subgoals have been called. Given several variant subgoals in an evaluation, only the first one encountered will use program clause resolution; the rest uses answer resolution. This use of answer resolution prevents infinite looping which happens in SLD. Given the asynchronicity of answer generation and answer return, tabling systems face an important scheduling choice not present in traditional top-down evaluation: How does the order of returning answers to consuming subgoals affect program efficiency.",
"Delaying-based tabling mechanisms, such as the one adopted in XSB, are non-linear in the sense that the computation state of delayed calls has to be preserved. In this paper, we present the implementation of a linear tabling mechanism. The key idea is to let a call execute from the backtracking point of a former variant call if such a call exists. The linear tabling mechanism has the following advantages over non-linear ones: (1) it is relatively easy to implement; (2) it imposes no overhead on standard Prolog programs; and (3) the cut operator works as for standard Prolog programs and thus it is possible to use the cut operator to express negation-as-failure and conditionals in tabled programs. The weakness of the linear mechanism is the necessity of re-computation for computing fix-points. However, we have found that re-computation can be avoided for a large portion of calls of directly-recursive tabled predicates. We have implemented the linear tabling mechanism in B-Prolog. Experimental comparison shows that B-Prolog is close in speed to XSB and outperforms XSB when re-computation can be avoided. Concerning space efficiency, B-Prolog is an order of magnitude better than XSB for some programs.",
"SLG resolution, a type of tabled resolution and a technique of logic programming (LP), has polynomial data complexity for ground Datalog queries with negation, making it suitable for deductive database (DDB). It evaluates non-stratified negation according to the three-valued Well-Founded Semantics, making it a suitable starting point for non-monotonic reasoning (NMR). Furthermore, SLG has an efficient partial implementation in the SLG-WAM which, in the XSB logic programming system, has proven an order of magnitude faster than current DDR systems for in-memory queries. Building on SLG resolution, we formulate a method for distributed tabled resolution termed Multi-Processor SLG (SLGMP). Since SLG is modeled as a forest of trees, it then becomes natural to think of these trees as executing at various places over a distributed network in SLGMP. Incremental completion, which is necessary for efficient sequential evaluation, can be modeled through the use of a subgoal dependency graph (SDG), or its approximation. However the subgoal dependency graph is a global property of a forest; in a distributed environment each processor should maintain as small a view of the SDG as possible. The formulation of what and when dependency information must be maintained and propagated in order for distributed completion to be performed safely is the central contribution of SLGMP. Specifically, subgoals in SLGMP are properly numbered such that most of the dependencies among subgoals are represented by the subgoal numbers. Dependency information that is not represented by subgoal numbers is maintained explicitly at each processor and propagated by each processor. SLGMP resolution aims at efficiently evaluating normal logic programs in a distributed environment. SLGMP operations are explicitly defined and soundness and completeness is proven for SLGMP with respect to SLG for programs which terminate for SLG evaluation. The resulting framework can serve as a basis for query processing and non-monotonic reasoning within a distributed environment. We also implemented Distributed XSB, a prototype implementation of SLGMP. Distributed XSB, as a distributed tabled evaluation model, is really a distributed problem solving system, where the data to solve the problem is distributed and each participating process cooperates with other participants (perhaps including itself), by sending and receiving data. Distributed XSB proposes a distributed data computing model, where there may be cyclic dependencies among participating processes and the dependencies can be both negative and positive."
]
} |
cs0412042 | 2952981805 | In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3. | Constraint satisfaction problems (CSPs) have always played a central role in this direction of research, since the CSP framework contains many natural computational problems, for example, from graph theory and propositional logic. Moreover, certain CSPs were used to build foundations for the theory of complexity for optimization problems @cite_13 , and some CSPs provided material for the first optimal inapproximability results @cite_8 (see also the survey @cite_15 ). In a CSP, informally speaking, one is given a finite collection of constraints on overlapping sets of variables, and the goal is to decide whether there is an assignment of values from a given domain to the variables satisfying all constraints (decision problem) or to find an assignment satisfying the maximum number of constraints (optimization problem). In this paper we will focus on the optimization problems, which are known as maximum constraint satisfaction problems, Max CSP for short. The most well-known examples of such problems are Max @math -Sat and Max Cut . Let us now formally define these problems. | {
"cite_N": [
"@cite_15",
"@cite_13",
"@cite_8"
],
"mid": [
"2068190866",
"1818081266",
"2962951564"
],
"abstract": [
"We study optimization problems that may be expressed as \"Boolean constraint satisfaction problems.\" An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the \"underlying\" constraints as well as on the goal of the optimization task. Here we consider four possible goals: Max CSP (Min CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). Max Ones (Min Ones) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in Max CSP were obtained by Creignou [ J. Comput. System Sci., 51 (1995), pp. 511--522]. In this work we determine tight bounds on the \"approximability\" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in Max Ones, Min CSP, and Min Ones. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, Max Cut, Max Clique, Min Cut, Nearest Codeword, etc. Our results unify recent results on the (in-)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis to many statements on the behavior of natural optimization problems that have so far been observed only empirically.",
"Random instances of constraint satisfaction problems (CSPs) appear to be hard for all known algorithms when the number of constraints per variable lies in a certain interval. Contributing to the general understanding of the structure of the solution space of a CSP in the satisfiable regime, we formulate a set of technical conditions on a large family of random CSPs and prove bounds on three most interesting thresholds for the density of such an ensemble: namely, the satisfiability threshold, the threshold for clustering of the solution space, and the threshold for an appropriate reconstruction problem on the CSPs. The bounds become asymptoticlally tight as the number of degrees of freedom in each clause diverges. The families are general enough to include commonly studied problems such as random instances of Not-All-Equal SAT, k-XOR formulae, hypergraph 2-coloring, and graph k-coloring. An important new ingredient is a condition involving the Fourier expansion of clauses, which characterizes the class of ...",
"An instance of the Valued Constraint Satisfaction Problem (VCSP) is given by a finite set of variables, a finite domain of labels, and a sum of functions, each function depending on a subset of the variables. Each function can take finite values specifying costs of assignments of labels to its variables or the infinite value, which indicates infeasible assignments. The goal is to find an assignment of labels to the variables that minimizes the sum. We study (assuming that P a#x2260; NP) how the complexity of this very general problem depends on the set of functions allowed in the instances, the so-called constraint language. The case when all allowed functions take values in 0, a#x221E; corresponds to ordinary CSPs, where one deals only with the feasibility issue and there is no optimization. This case is the subject of the Algebraic CSP Dichotomy Conjecture predicting for which constraint languages CSPs are tractable and for which NP-hard. The case when all allowed functions take only finite values corresponds to finite-valued CSP, where the feasibility aspect is trivial and one deals only with the optimization issue. The complexity of finite-valued CSPs was fully classified by Thapper and Zivny. An algebraic necessary condition for tractability of a general-valued CSP with a fixed constraint language was recently given by Kozik and Ochremiak. As our main result, we prove that if a constraint language satisfies this algebraic necessary condition, and the feasibility CSP corresponding to the VCSP with this language is tractable, then the VCSP is tractable. The algorithm is a simple combination of the assumed algorithm for the feasibility CSP and the standard LP relaxation. As a corollary, we obtain that a dichotomy for ordinary CSPs would imply a dichotomy for general-valued CSPs."
]
} |
cs0412042 | 2952981805 | In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3. | Note that throughout the paper the values 0 and 1 taken by any predicate will be considered, rather unusually, as integers, not as Boolean values, and addition will always denote the addition of integers. It is easy to check that, in the Boolean case, our problem coincides with the Max CSP problem considered in @cite_18 @cite_21 @cite_2 . We say that a predicate is non-trivial if it is not identically 0. Throughout the paper, we assume that @math is finite and contains only non-trivial predicates. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_2"
],
"mid": [
"2097646889",
"2952621790",
"2000931246"
],
"abstract": [
"By the breakthrough work of Hastad [J ACM 48(4) (2001), 798–859], several constraint satisfaction problems are now known to have the following approximation resistance property: Satisfying more clauses than what picking a random assignment would achieve is NP-hard. This is the case for example for Max E3-Sat, Max E3-Lin, and Max E4-Set Splitting. A notable exception to this extreme hardness is constraint satisfaction over two variables (2-CSP); as a corollary of the celebrated Goemans-Williamson algorithm [J ACM 42(6) (1995), 1115–1145], we know that every Boolean 2-CSP has a nontrivial approximation algorithm whose performance ratio is better than that obtained by picking a random assignment to the variables. An intriguing question then is whether this is also the case for 2-CSPs over larger, non-Boolean domains. This question is still open, and is equivalent to whether the generalization of Max 2-SAT to domains of size d, can be approximated to a factor better than (1 − 1 d2). In an attempt to make progress towards this question, in this paper we prove, first, that a slight restriction of this problem, namely, a generalization of linear inequations with two variables per constraint, is not approximation resistant, and, second, that the Not-All-Equal Sat problem over domain size d with three variables per constraint, is approximation resistant, for every d ≥ 3. In the Boolean case, Not-All-Equal Sat with three variables per constraint is equivalent to Max 2-SAT and thus has a nontrivial approximation algorithm; for larger domain sizes, Max 2-SAT can be reduced to Not-All-Equal Sat with three variables per constraint. Our approximation algorithm implies that a wide class of 2-CSPs called regular 2-CSPs can all be approximated beyond their random assignment threshold. © 2004 Wiley Periodicals, Inc. Random Struct. Alg. 2004",
"Let @math be a nontrivial @math -ary predicate. Consider a random instance of the constraint satisfaction problem @math on @math variables with @math constraints, each being @math applied to @math randomly chosen literals. Provided the constraint density satisfies @math , such an instance is unsatisfiable with high probability. The problem is to efficiently find a proof of unsatisfiability. We show that whenever the predicate @math supports a @math - probability distribution on its satisfying assignments, the sum of squares (SOS) algorithm of degree @math (which runs in time @math ) refute a random instance of @math . In particular, the polynomial-time SOS algorithm requires @math constraints to refute random instances of CSP @math when @math supports a @math -wise uniform distribution on its satisfying assignments. Together with recent work of [LRS15], our result also implies that polynomial-size semidefinite programming relaxation for refutation requires at least @math constraints. Our results (which also extend with no change to CSPs over larger alphabets) subsume all previously known lower bounds for semialgebraic refutation of random CSPs. For every constraint predicate @math , they give a three-way hardness tradeoff between the density of constraints, the SOS degree (hence running time), and the strength of the refutation. By recent algorithmic results of [AOW15] and [RRS16], this full three-way tradeoff is , up to lower-order factors.",
"Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that kF5.191. This upper bound was improved by to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601q . In general, by letting the U This work was performed while the first author was visiting the School of Computer Science, Carleton Ž University, and was partially supported by NSERC Natural Sciences and Engineering Research Council . of Canada , and by a grant from the University of Patras for sabbatical leaves. The second and third Ž authors were supported in part by grants from NSERC Natural Sciences and Engineering Research . Council of Canada . During the last stages of this research, the first and last authors were also partially Ž . supported by EU ESPRIT Long-Term Research Project ALCOM-IT Project No. 20244 . †An extended abstract of this paper was published in the Proceedings of the Fourth Annual European Ž Symposium on Algorithms, ESA’96, September 25]27, 1996, Barcelona, Spain Springer-Verlag, LNCS, . pp. 27]38 . That extended abstract was coauthored by the first three authors of the present paper. Correspondence to: L. M. Kirousis Q 1998 John Wiley & Sons, Inc. CCC 1042-9832r98r030253-17 253"
]
} |
cs0412042 | 2952981805 | In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3. | The Max CSP framework has been well studied in the Boolean case. Many fundamental results have been obtained, concerning both complexity classifications and approximation properties (see, e.g., @cite_18 @cite_21 @cite_8 @cite_3 @cite_2 @cite_26 ). In the non-Boolean case, a number of results have been obtained that concern exact (superpolynomial) algorithms or approximation properties (see, e.g., @cite_5 @cite_1 @cite_0 @cite_10 ). The main research problem we will look at in this paper is the following. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_10"
],
"mid": [
"2097646889",
"2084350425",
"2068190866",
"22128777",
"2007385254",
"2008724760",
"2953362411",
"1990426442",
"2529974411",
"2953012544"
],
"abstract": [
"By the breakthrough work of Hastad [J ACM 48(4) (2001), 798–859], several constraint satisfaction problems are now known to have the following approximation resistance property: Satisfying more clauses than what picking a random assignment would achieve is NP-hard. This is the case for example for Max E3-Sat, Max E3-Lin, and Max E4-Set Splitting. A notable exception to this extreme hardness is constraint satisfaction over two variables (2-CSP); as a corollary of the celebrated Goemans-Williamson algorithm [J ACM 42(6) (1995), 1115–1145], we know that every Boolean 2-CSP has a nontrivial approximation algorithm whose performance ratio is better than that obtained by picking a random assignment to the variables. An intriguing question then is whether this is also the case for 2-CSPs over larger, non-Boolean domains. This question is still open, and is equivalent to whether the generalization of Max 2-SAT to domains of size d, can be approximated to a factor better than (1 − 1 d2). In an attempt to make progress towards this question, in this paper we prove, first, that a slight restriction of this problem, namely, a generalization of linear inequations with two variables per constraint, is not approximation resistant, and, second, that the Not-All-Equal Sat problem over domain size d with three variables per constraint, is approximation resistant, for every d ≥ 3. In the Boolean case, Not-All-Equal Sat with three variables per constraint is equivalent to Max 2-SAT and thus has a nontrivial approximation algorithm; for larger domain sizes, Max 2-SAT can be reduced to Not-All-Equal Sat with three variables per constraint. Our approximation algorithm implies that a wide class of 2-CSPs called regular 2-CSPs can all be approximated beyond their random assignment threshold. © 2004 Wiley Periodicals, Inc. Random Struct. Alg. 2004",
"We consider the problem MAX CSP over multi-valued domains with variables ranging over sets of size si ≤ s and constraints involving kj ≤ k variables. We study two algorithms with approximation ratios A and B. respectively, so we obtain a solution with approximation ratio max (A, B).The first algorithm is based on the linear programming algorithm of Serna, Trevisan, and Xhafa [Proc. 15th Annual Symp. on Theoret. Aspects of Comput. Sci., 1998, pp. 488-498] and gives ratio A which is bounded below by s1-k. For k = 2, our bound in terms of the individual set sizes is the minimum over all constraints involving two variables of (1 2√s1+ 1 2√s2)2, where s1 and s2 are the set sizes for the two variables.We then give a simple combinatorial algorithm which has approximation ratio B, with B > A e. The bound is greater than s1-k e in general, and greater than s1-k(1 - (s - 1) 2(k - 1)) for s ≤ k - 1, thus close to the s1-k linear programming bound for large k. For k = 2, the bound is 4 9 if s = 2, 1 2(s - 1) if s ≥ 3, and in general greater than the minimum of 1 4S1 + 1 4s2 over constraints with set sizes s1 and s2, thus within a factor of two of the linear programming bound.For the case of k = 2 and s = 2 we prove an integrality gap of 4 9 (1 + O(n-1 2)). This shows that our analysis is tight for any method that uses the linear programming upper bound.",
"We study optimization problems that may be expressed as \"Boolean constraint satisfaction problems.\" An instance of a Boolean constraint satisfaction problem is given by m constraints applied to n Boolean variables. Different computational problems arise from constraint satisfaction problems depending on the nature of the \"underlying\" constraints as well as on the goal of the optimization task. Here we consider four possible goals: Max CSP (Min CSP) is the class of problems where the goal is to find an assignment maximizing the number of satisfied constraints (minimizing the number of unsatisfied constraints). Max Ones (Min Ones) is the class of optimization problems where the goal is to find an assignment satisfying all constraints with maximum (minimum) number of variables set to 1. Each class consists of infinitely many problems and a problem within a class is specified by a finite collection of finite Boolean functions that describe the possible constraints that may be used. Tight bounds on the approximability of every problem in Max CSP were obtained by Creignou [ J. Comput. System Sci., 51 (1995), pp. 511--522]. In this work we determine tight bounds on the \"approximability\" (i.e., the ratio to within which each problem may be approximated in polynomial time) of every problem in Max Ones, Min CSP, and Min Ones. Combined with the result of Creignou, this completely classifies all optimization problems derived from Boolean constraint satisfaction. Our results capture a diverse collection of optimization problems such as MAX 3-SAT, Max Cut, Max Clique, Min Cut, Nearest Codeword, etc. Our results unify recent results on the (in-)approximability of these optimization problems and yield a compact presentation of most known results. Moreover, these results provide a formal basis to many statements on the behavior of natural optimization problems that have so far been observed only empirically.",
"Given an instance @math of a CSP, a tester for @math distinguishes assignments satisfying @math from those which are far from any assignment satisfying @math . The efficiency of a tester is measured by its query complexity, the number of variable assignments queried by the algorithm. In this paper, we characterize the hardness of testing Boolean CSPs in terms of the algebra generated by the relations used to form constraints. In terms of computational complexity, we show that if a non-trivial Boolean CSP is sublinear-query testable (resp., not sublinear-query testable), then the CSP is in NL (resp., P-complete, ⊕L-complete or NL-complete) and that if a sublinear-query testable Boolean CSP is constant-query testable (resp., not constant-query testable), then counting the number of solutions of the CSP is in P (resp., @math P-complete). Also, we conjecture that a CSP instance is testable in sublinear time if its Gaifman graph has bounded treewidth. We confirm the conjecture when a near-unanimity operation is a polymorphism of the CSP.",
"We initiate a study of when the value of mathematical relaxations such as linear and semi-definite programs for constraint satisfaction problems (CSPs) is approximately preserved when restricting the instance to a sub-instance induced by a small random subsample of the variables. Let C be a family of CSPs such as 3SAT, Max-Cut, etc., and let Π be a mathematical program that is a relaxation for C, in the sense that for every instance P ∈ C, Π(P) is a number in [0, 1] upper bounding the maximum fraction of satisfiable constraints of P. Loosely speaking, we say that subsampling holds for C and Π if for every sufficiently dense instance P ∈ C and every e > 0, if we let P' be the instance obtained by restricting P to a sufficiently large constant number of variables, then Π(P') ∈ (1 ± e)Π(P). We say that weak subsampling holds if the above guarantee is replaced with Π(P') = 1 − θ(γ) whenever Π(P) = 1 − γ, where θ hides only absolute constants. We obtain both positive and negative results, showing that: 1. Subsampling holds for the BasicLP and BasicSDP programs. BasicSDP is a variant of the semi-definite program considered by Raghavendra (2008), who showed it gives an optimal approximation factor for every constraint-satisfaction problem under the unique games conjecture. BasicLP is the linear programming analog of BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of unique games type. 3. There are non-unique CSPs for which even weak subsampling fails for the above tighter semi-definite programs. Also there are unique CSPs for which (even weak) subsampling fails for the Sherali-Adams linear programming hierarchy. As a corollary of our weak subsampling for strong semi-definite programs, we obtain a polynomial-time algorithm to certify that random geometric graphs (of the type considered by Feige and Schechtman, 2002) of max-cut value 1 − γ have a cut value at most 1 − γ 10. More generally, our results give an approach to obtaining average-case algorithms for CSPs using semi-definite programming hierarchies.",
"In this paper we study a fundamental open problem in the area of probabilistic checkable proofs: What is the smallest s such that NP ⊆ naPCP1,s[O(log n),3]? In the language of hardness of approximation, this problem is equivalent to determining the smallest s such that getting an s-approximation for satisfiable 3-bit constraint satisfaction problems (\"3-CSPs\") is NP-hard. The previous best upper bound and lower bound for s are 20 27+µ by Khot and Saket [KS06], and 5 8 (assuming NP subseteq BPP) by Zwick [Zwi98]. In this paper we close the gap assuming Khot's d-to-1 Conjecture. Formally, we prove that if Khot's d-to-1 Conjecture holds for any finite constant integer d, then NP naPCP1,5 8+ µ[O(log n),3] for any constant µ > 0. Our conditional result also solves Hastad's open question [Has01] on determining the inapproximability of satisfiable Max-NTW (\"Not Two\") instances and confirms Zwick's conjecture [Zwi98] that the 5 8-approximation algorithm for satisfiable 3-CSPs is optimal.",
"We report new results on the complexity of the valued constraint satisfaction problem (VCSP). Under the unique games conjecture, the approximability of finite-valued VCSP is fairly well-understood. However, there is yet no characterisation of VCSPs that can be solved exactly in polynomial time. This is unsatisfactory, since such results are interesting from a combinatorial optimisation perspective; there are deep connections with, for instance, submodular and bisubmodular minimisation. We consider the Min and Max CSP problems (i.e. where the cost functions only attain values in 0,1 ) over four-element domains and identify all tractable fragments. Similar classifications were previously known for two- and three-element domains. In the process, we introduce a new class of tractable VCSPs based on a generalisation of submodularity. We also extend and modify a graph-based technique by Kolmogorov and Zivny (originally introduced by Takhanov) for efficiently obtaining hardness results in our setting. This allow us to prove the result without relying on computer-assisted case analyses (which otherwise are fairly common when studying the complexity and approximability of VCSPs.) The hardness results are further simplified by the introduction of powerful reduction techniques.",
"Abstract We present in this paper a unified processing for real, integer, and Boolean constraints based on a general narrowing algorithm which applies to any n-ary relation on R. The basic idea is to define, for every such relation ρ, a narrowing function ρ based on the approximation of ρ by a Cartesian product of intervals whose bounds are floating-point numbers. We then focus on nonconvex relations and establish several properties. The more important of these properties is applied to justify the computation of usual relations defined in terms of intersections of simpler relations. We extend the scope of the narrowing algorithm used in the language BNR-Prolog to integer and disequality constraints, to Boolean constraints, and to relations mixing numerical and Boolean values. As a result, we propose a new Constraint Logic Programming language called CLP(BNR), where BNR stands for Booleans, Naturals, and Reals. In this language, constraints are expressed in a unique structure, allowing the mixing of real numbers, integers, and Booleans. We end with the presentation of several examples showing the advantages of such an approach from the point of view of the expressiveness, and give some preliminary computational results from a prototype.",
"We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as nΩ(1)-rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT. This is a nearly-exponential improvement over previous results; previously, the best known lower bounds were quasi-polynomial in n (Chan, Lee, Raghavendra, Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for \"lifting\" query lower bounds to communication problems. The main ingredient in our results is a new structural result on \"high-entropy rectangles\" that may of independent interest in communication complexity.",
"An important question in the study of constraint satisfaction problems (CSP) is understanding how the graph or hypergraph describing the incidence structure of the constraints influences the complexity of the problem. For binary CSP instances (i.e., where each constraint involves only two variables), the situation is well understood: the complexity of the problem essentially depends on the treewidth of the graph of the constraints. However, this is not the correct answer if constraints with unbounded number of variables are allowed, and in particular, for CSP instances arising from query evaluation problems in database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be CSP restricted to instances whose hypergraph is in H. Our goal is to characterize those classes of hypergraphs for which CSP(H) is polynomial-time solvable or fixed-parameter tractable, parameterized by the number of variables. Note that in the applications related to database query evaluation, we usually assume that the number of variables is much smaller than the size of the instance, thus parameterization by the number of variables is a meaningful question. The most general known property of H that makes CSP(H) polynomial-time solvable is bounded fractional hypertree width. Here we introduce a new hypergraph measure called submodular width, and show that bounded submodular width of H implies that CSP(H) is fixed-parameter tractable. In a matching hardness result, we show that if H has unbounded submodular width, then CSP(H) is not fixed-parameter tractable, unless the Exponential Time Hypothesis fails."
]
} |
cs0412042 | 2952981805 | In the maximum constraint satisfaction problem (Max CSP), one is given a finite collection of (possibly weighted) constraints on overlapping sets of variables, and the goal is to assign values from a given domain to the variables so as to maximize the number (or the total weight, for the weighted case) of satisfied constraints. This problem is NP-hard in general, and, therefore, it is natural to study how restricting the allowed types of constraints affects the approximability of the problem. It is known that every Boolean (that is, two-valued) Max CSP problem with a finite set of allowed constraint types is either solvable exactly in polynomial time or else APX-complete (and hence can have no polynomial time approximation scheme unless P=NP). It has been an open problem for several years whether this result can be extended to non-Boolean Max CSP, which is much more difficult to analyze than the Boolean case. In this paper, we make the first step in this direction by establishing this result for Max CSP over a three-element domain. Moreover, we present a simple description of all polynomial-time solvable cases of our problem. This description uses the well-known algebraic combinatorial property of supermodularity. We also show that every hard three-valued Max CSP problem contains, in a certain specified sense, one of the two basic hard Max CSP problems which are the Maximum k-colourable subgraph problems for k=2,3. | For the Boolean case, Problem was solved in @cite_18 @cite_21 @cite_2 . It appears that a Boolean @math also exhibits a dichotomy in that it either is solvable exactly in polynomial time or else does not admit a PTAS (polynomial-time approximation scheme) unless P=NP. These papers also describe the boundary between the two cases. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_2"
],
"mid": [
"1881931158",
"2000217912",
"2152035036"
],
"abstract": [
"Generalised Satisfiability Problems (or Boolean Constraint Satisfaction Problems), introduced by Schaefer in 1978, are a general class of problem which allow the systematic study of the complexity of satisfiability problems with different types of constraints. In 1979, Valiant introduced the complexity class parity P, the problem of counting the number of solutions to NP problems modulo two. Others have since considered the question of counting modulo other integers. We give a dichotomy theorem for the complexity of counting the number of solutions to Generalised Satisfiability Problems modulo integers. This follows from an earlier result of Creignou and Hermann which gave a counting dichotomy for these types of problem, and the dichotomy itself is almost identical. Specifically, counting the number of solutions to a Generalised Satisfiability Problem can be done in polynomial time if all the relations are affine. Otherwise, except for one special case with k = 2, it is #_kP-complete.",
"For certain subclasses of NP, @math P, or #P characterized by local constraints, it is known that if there exist any problems within that subclass that are not polynomial time computable, then all the problems in the subclass are NP-complete, @math P-complete, or #P-complete. Such dichotomy results have been proved for characterizations such as constraint satisfaction problems and directed and undirected graph homomorphism problems, often with additional restrictions. Here we give a dichotomy result for the more expressive framework of Holant problems. For example, these additionally allow for the expression of matching problems, which have had pivotal roles in the development of complexity theory. As our main result we prove the dichotomy theorem that, for the class @math P, every set of symmetric Holant signatures of any arities that is not polynomial time computable is @math P-complete. The result exploits some special properties of the class @math P and characterizes four distinct tractable ...",
"We consider the problem of finding a characterization for polynomial time computable queries on finite structures in terms of logical definability. It is well known that fixpoint logic provides such a characterization in the presence of a built-in linear order, but without linear order even very simple polynomial time queries involving counting are not expressible in fixpoint logic. Our approach to the problem is based on generalized quantifiers. A generalized quantifier isn-ary if it binds any number of formulas, but at mostnvariables in each formula. We prove that, for each natural numbern, there is a query on finite structures which is expressible in fixpoint logic, but not in the extension of first-order logic by any set ofn-ary quantifiers. It follows that the expressive power of fixpoint logic cannot be captured by adding finitely many quantifiers to first-order logic. Furthermore, we prove that, for each natural numbern, there is a polynomial time computable query which is not definable in any extension of fixpoint logic byn-ary quantifiers. In particular, this rules out the possibility of characterizing PTIME in terms of definability in fixpoint logic extended by a finite set of generalized quantifiers."
]
} |
cs0411010 | 2952058472 | We propose a new simple logic that can be used to specify local security properties, i.e. security properties that refer to a single participant of the protocol specification. Our technique allows a protocol designer to provide a formal specification of the desired security properties, and integrate it naturally into the design process of cryptographic protocols. Furthermore, the logic can be used for formal verification. We illustrate the utility of our technique by exposing new attacks on the well studied protocol TMN. | In this section we discuss some related work. In @cite_18 , Roscoe identifies two ways of specifying protocol security goals: firstly, using extensional specifications, and secondly using intensional specifications. An extensional specification describes the intended service provided by the protocol in terms of behavioural equivalence @cite_8 @cite_7 @cite_0 . On the other hand, an intensional specification describes the underlying mechanism of a protocol, in terms of states or events @cite_2 @cite_5 @cite_18 @cite_6 @cite_17 @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_13",
"@cite_17"
],
"mid": [
"2124900567",
"1926128235",
"1508967933",
"2594004772",
"2023930035",
"2165446401",
"1577867985",
"2069717895",
"2011329056"
],
"abstract": [
"A novel approach to malware detection by recognizing known inter-process and intra-process malicious functionalities in software behavior is proposed. It encompasses two essential tasks: the specification of a functionality that may involve a joint activity of several apparently independent processes, and efficient recognition of the specified functionality in the process behavior. The robustness of the proposed technology is achieved by the generalization of the specification domain that is separated from the detection domain. The functionalities of interest are defined in the abstract system domain through activity diagrams, thus resulting in formal specifications that are rather generic and less prone to false negatives. To facilitate the detection, we developed a procedure that automatically generates a Colored Petri Net recognizing the specified functionality in the system call domain. The separation of specification and recognition domains results in signature expressiveness and recognition efficiency. The approach is illustrated by the analysis, specification and consequent recognition of several common malicious functionalities including self-replication engines and popular payloads. A prototype IDS implementing the proposed approach has been developed and successfully tested on a set of real malware.",
"When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication.",
"Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.",
"We present a formal model for modeling and reasoning about security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance traveled. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than the standard Dolev-Yao intruder. We have formalized our model in Isabelle HOL and used it to verify protocols for authenticated ranging, distance bounding, and broadcast authentication based on delayed key disclosure.",
"We present a method for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. We also address the issue of maintaining a causal order among client requests. We illustrate a security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service and propose an approach to counter this attack. An important and novel feature of our techniques is that the client need not be able to identify or authenticate even a single server. Instead, the client is required to possess only a single public key for the service. We demonstrate the performance of our techniques with a service we have implemented using one of our protocols.",
"An intensional model for the programming language PCF is described in which the types of PCF are interpreted by games and the terms by certain history-free strategies. This model is shown to capture definability in PCF. More precisely, every compact strategy in the model is definable in a certain simple extension of PCF. We then introduce an intrinsic preorder on strategies and show that it satisfies some striking properties such that the intrinsic preorder on function types coincides with the pointwise preorder. We then obtain an order-extensional fully abstract model of PCF by quotienting the intensional model by the intrinsic preorder. This is the first syntax-independent description of the fully abstract model for PCF. (Hyland and Ong have obtained very similar results by a somewhat different route, independently and at the same time.) We then consider the effective version of our model and prove a universality theorem: every element of the effective extensional model is definable in PCF. Equivalently, every recursive strategy is definable up to observational equivalence.",
"Many languages and algebras have been proposed in recent years for the specification of authorization policies. For some proposals, such as XACML, the main motivation is to address real-world requirements, typically by providing a complex policy language with somewhat informal evaluation methods; others try to provide a greater degree of formality --- particularly with respect to policy evaluation --- but support far fewer features. In short, there are very few proposals that combine a rich set of language features with a well-defined semantics, and even fewer that do this for authorization policies for attribute-based access control in open environments. In this paper, we decompose the problem of policy specification into two distinct sub-languages: the policy target language (PTL) for target specification, which determines when a policy should be evaluated; and the policy composition language (PCL) for building more complex policies from existing ones. We define syntax and semantics for two such languages and demonstrate that they can be both simple and expressive. PTaCL, the language obtained by combining the features of these two sub-languages, supports the specification of a wide range of policies. However, the power of PTaCL means that it is possible to define policies that could produce unexpected results. We provide an analysis of how PTL should be restricted and how policies written in PCL should be evaluated to minimize the likelihood of undesirable results.",
"A secure function evaluation protocol allows two parties to jointly compute a function f(x,y) of their inputs in a manner not leaking more information than necessary. A major result in this field is: “any function f that can be computed using polynomial resources can be computed securely using polynomial resources” (where “resources” refers to communication and computation). This result follows by a general transformation from any circuit for f to a secure protocol that evaluates f . Although the resources used by protocols resulting from this transformation are polynomial in the circuit size, they are much higher (in general) than those required for an insecure computation of f . We propose a new methodology for designing secure protocols, utilizing the communication complexity tree (or branching program) representation of f . We start with an efficient (insecure) protocol for f and transform it into a secure protocol. In other words, any function f that can be computed using communication complexity c can be can be computed securely using communication complexity that is polynomial in c and a security parameter''. We show several simple applications of this new methodology resulting in protocols efficient either in communication or in computation. In particular, we exemplify a protocol for the Millionaires problem, where two participants want to compare their values but reveal no other information. Our protocol is more efficient than previously known ones in either communication or computation.",
"Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization. We present a formal model for modeling and reasoning about such physical security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization."
]
} |