aid: string (length 9-15)
mid: string (length 7-10)
abstract: string (length 78-2.56k)
related_work: string (length 92-1.77k)
ref_abstract: dict
1906.00850
2947982151
Interest in smart cities is rapidly rising due to the global rise in urbanization and the wide-scale instrumentation of modern cities. Due to the considerable infrastructural cost of setting up smart cities and smart communities, researchers are exploring the use of existing vehicles on the roads as "message ferries" to transport data for smart community applications, avoiding the cost of installing new communication infrastructure. In this paper, we propose an opportunistic data ferry selection algorithm that strives to select vehicles that can minimize the overall delay for data delivery from a source to a given destination. Our proposed opportunistic algorithm utilizes an ensemble of online hiring algorithms, which are run together in passive mode, to select the online hiring algorithm that has performed the best in recent history. The proposed ensemble-based algorithm is evaluated empirically using real-world traces from taxis plying routes in Shanghai, China, and its performance is compared against a baseline of four state-of-the-art online hiring algorithms. A number of experiments are conducted and our results indicate that the proposed algorithm can reduce the overall delay compared to the baseline by an impressive 13% to 258%.
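The ensemble idea in the abstract can be sketched as follows. This is a simplified illustration, not the paper's algorithm: `ThresholdHire` is a hypothetical secretary-style hiring rule, and `PassiveEnsemble` simply replays every rule on the same candidate stream and routes each new request to the rule with the lowest mean delay over a sliding window.

```python
from collections import deque

class ThresholdHire:
    """Secretary-style rule (illustrative): observe the first k candidates,
    then hire the first later candidate whose estimated delay beats that
    benchmark."""
    def __init__(self, k):
        self.k = k

    def select(self, delays):
        # delays: estimated delivery delays of candidates in arrival order
        if len(delays) <= self.k:
            return len(delays) - 1          # fallback: take the last candidate
        benchmark = min(delays[:self.k])
        for i in range(self.k, len(delays)):
            if delays[i] <= benchmark:
                return i
        return len(delays) - 1

class PassiveEnsemble:
    """Run several hiring rules in passive mode and, for each new request,
    follow the rule with the lowest mean delay over a sliding window."""
    def __init__(self, rules, window=10):
        self.rules = rules
        self.history = [deque(maxlen=window) for _ in rules]

    def select(self, delays):
        # score each rule by its recent history (lower delay is better)
        scores = [sum(h) / len(h) if h else 0.0 for h in self.history]
        best = min(range(len(self.rules)), key=lambda j: scores[j])
        choice = self.rules[best].select(delays)
        # passively record the delay every rule *would* have obtained
        for j, rule in enumerate(self.rules):
            self.history[j].append(delays[rule.select(delays)])
        return choice
```

For example, `PassiveEnsemble([ThresholdHire(1), ThresholdHire(3)]).select([5.0, 2.0, 8.0, 1.0])` picks candidate index 1 on the first request, since both rules start with empty histories and ties go to the first rule.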
In @cite_1 , the authors state that the existing network infrastructure in smart cities cannot sustain the traffic generated by sensors; overcoming this problem would require investment in telecommunication infrastructure. Instead, the authors propose exploiting buses in a Delay Tolerant Network (DTN) to transfer data in smart cities. In @cite_5 , the authors introduce mobile cloud servers, installed on vehicles, and use them in relief efforts for large-scale disasters to collect and share data. These mobile cloud servers convey data among isolated shelters while traveling and finally return to the disaster relief headquarters, which is connected to the Internet; vehicles exchange data while waiting there.
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2782962104", "2607377528" ], "abstract": [ "During large-scale disasters, such as the Great East Japan Earthquake in 2011 or Kumamoto huge Earthquake in 2016, many regions were isolated from critical information exchanges due to problems with communication infrastructures. In those serious disasters, quick and flexible disaster recovery network is required to deliver the disaster related information after disaster. In this paper, mobile cloud computing for vehicle server for information exchange among isolated shelters in such cases is introduced. The vehicle with mobile cloud server traverses the isolated shelters and exchanges information and returns to the disaster headquarter which is connected to Internet. DTN function is introduced to store, carry and exchange message as a message ferry among the shelters even in the challenged network environment where wired and wireless communication means are completely damaged. The prototype system is constructed using Wi-Fi network as mobility network and a note PC mobile cloud server and IBR-DTN and DTN2 software as the DTN function.", "Sensors in future smart cities will continuously monitor the environment in order to prevent critical situations and waste of resources or to offer new services to end users. Likely, the existing networks will not be able to sustain such a traffic without huge investments in the telecommunication infrastructure. One possible solution to overcome these problems is to apply the Delay Tolerant Network (DTN) paradigm. This paper presents the Sink and Delay Aware Bus (S&DA-Bus) routing protocol, a DTN routing protocol designed for smart cities able to exploit mobility of people, vehicles and buses roaming around the city. Particular attention is put on the public transportation system: S&DA-Bus takes advantage of the predictable and quasi-periodic mobility that characterizes it." ] }
@cite_18 conduct a study on using taxi cabs as oblivious data mules for data collection and delivery in smart cities. Since the taxi cabs are used without any selection criteria, there is no guarantee on data communications. The authors use real taxi traces in the city of Rome and divide the city into blocks of size @math meter @math . Relying only on opportunistic connections between vehicles and nodes, they claim a coverage of 80%. The aforementioned papers mostly utilize multiple relays for transferring data between source-destination locations, and they do not approach the ferry selection problem from an online perspective. Conversely, in this paper we propose an approach where each vehicle transfers a data bundle from source to destination without having to use relays, and decisions are made in an online fashion. These assumptions are practical as more vehicles are equipped with on-board units (OBUs) and GPS units that provide exact or probabilistic information about the path of the vehicle. Additionally, this paper considers online hiring algorithms for data ferry selection.
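The block-coverage metric described above can be illustrated with a small helper. The function name, its parameters, and the metre-based grid coordinates are all assumptions for illustration, not the cited study's code:

```python
def grid_coverage(fixes, origin, cell_m, n_rows, n_cols):
    """Fraction of grid cells touched by at least one GPS fix.
    fixes: iterable of (x, y) positions in metres relative to `origin`;
    the city area is an n_rows x n_cols grid of square cells of side cell_m."""
    visited = set()
    ox, oy = origin
    for x, y in fixes:
        col = int((x - ox) // cell_m)
        row = int((y - oy) // cell_m)
        if 0 <= row < n_rows and 0 <= col < n_cols:
            visited.add((row, col))
    return len(visited) / (n_rows * n_cols)
```

For instance, three fixes falling in three distinct cells of a 2x2 grid give a coverage of 0.75; the cited study reports the analogous fraction over downtown Rome across a 24-hour window.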
{ "cite_N": [ "@cite_18" ], "mid": [ "2254736503" ], "abstract": [ "How to deliver data to, or collect data from the hundreds of thousands of sensors and actuators integrated in “things” spread across virtually every smart city streets (garbage cans, storm drains, advertising panels, etc.)? The answer to the question is neither straightforward nor unique, given the scale of the issue, the lack of a single administrative entity for such tiny devices (arguably run by a multiplicity of distinct and independent service providers), and the cost and power concerns that their direct connectivity to the cellular network might pose. This paper posits that one possible alternative consists in connecting such devices to their data collection gateways using “oblivious data mules”, namely transport fleets such as taxi cabs which (unlike most data mules considered in past work) have no relation whatsoever with the smart city service providers, nor are required to follow any pre-established or optimized path, nor are willing to share their LTE connectivity. We experimentally evaluate data collection and delivery performance using real world traces gathered over a six month period in the city of Rome. Results suggest that even relatively small fleets, such as an average of about 120 vehicles, operating in parallel in a very large and irregular city such as Rome, can achieve an 80% coverage of the downtown area in less than 24 h." ] }
1906.00852
2947767238
Conventional application of convolutional neural networks (CNNs) for image classification and recognition is based on the assumption that all target classes are equal (i.e., no hierarchy) and exclusive of one another (i.e., no overlap). CNN-based image classifiers built on this assumption, therefore, cannot take into account an innate hierarchy among target classes (e.g., cats and dogs in animal image classification) or additional information that can be easily derived from the data (e.g., numbers larger than five in the recognition of handwritten digits), thereby resulting in scalability issues when the number of target classes is large. Combining two related but slightly different ideas of hierarchical classification and logical learning by auxiliary inputs, we propose a new learning framework called hierarchical auxiliary learning, which not only addresses the scalability issues with a large number of classes but can also further reduce classification/recognition errors with a reasonable number of classes. In hierarchical auxiliary learning, target classes are semantically or non-semantically grouped into superclasses, which turns the original problem of mapping between an image and its target class into a new problem of mapping between a pair of an image and its superclass and the target class. To take advantage of superclasses, we introduce an auxiliary block into a neural network, which generates auxiliary scores used as additional information for final classification/recognition; in this paper, we add the auxiliary block between the last residual block and the fully-connected output layer of the ResNet. Experimental results demonstrate that the proposed hierarchical auxiliary learning can reduce classification errors by up to 0.56, 1.6 and 3.56 percent with MNIST, SVHN and CIFAR-10 datasets, respectively.
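A minimal sketch of the auxiliary-block idea, in plain Python for illustration (the function names and the tiny linear "heads" are assumptions; the paper attaches the block to a ResNet): the auxiliary head produces superclass scores, which are concatenated with the features before the final classifier.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def matvec(W, x):
    # W: list of rows; each row has len(x) weights
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def hierarchical_aux_forward(feat, W_aux, W_out):
    """feat: features from the last residual block (list of floats).
    The auxiliary head (W_aux) produces superclass scores, which are
    concatenated with the features before the final classifier (W_out)."""
    aux_scores = softmax(matvec(W_aux, feat))   # auxiliary superclass scores
    augmented = feat + aux_scores               # list concatenation
    class_probs = softmax(matvec(W_out, augmented))
    return class_probs, aux_scores
```

The design choice being sketched: the final classifier sees both the raw features and the superclass evidence, so a confident superclass prediction can steer the fine-grained decision.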
It is well known that transferring learned information to a new task as auxiliary information enables efficient learning of the new task @cite_21 , while providing acquired information from a wider network to a thinner network improves the performance of the thinner network @cite_3 .
{ "cite_N": [ "@cite_21", "@cite_3" ], "mid": [ "2294193936", "1775792793" ], "abstract": [ "We consider an interesting problem in this paper that uses transfer learning in two directions to compensate missing knowledge from the target domain. Transfer learning tends to be exploited as a powerful tool that mitigates the discrepancy between different databases used for knowledge transfer. It can also be used for knowledge transfer between different modalities within one database. However, in either case, transfer learning will fail if the target data are missing. To overcome this, we consider knowledge transfer between different databases and modalities simultaneously in a single framework, where missing target data from one database are recovered to facilitate recognition task. We referred to this framework as Latent Low-rank Transfer Subspace Learning method (L2TSL). We first propose to use a low-rank constraint as well as dictionary learning in a learned subspace to guide the knowledge transfer between and within different databases. We then introduce a latent factor to uncover the underlying structure of the missing target data. Next, transfer learning in two directions is proposed to integrate auxiliary database for transfer learning with missing target data. Experimental results of multi-modalities knowledge transfer with missing target data demonstrate that our method can successfully inherit knowledge from the auxiliary database to complete the target domain, and therefore enhance the performance when recognizing data from the modality without any training data.", "Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. 
To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage." ] }
Auxiliary information from the input data also improves performance. In stage-wise learning, coarse-to-fine images subsampled from the original images are fed to the network step by step to enhance the learning process @cite_22 . The ROCK architecture introduces an auxiliary block that performs multiple auxiliary tasks, extracting useful information from the input and injecting it back into the main task @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_22" ], "mid": [ "2891303672", "2787420051" ], "abstract": [ "Multi-Task Learning (MTL) is appealing for deep learning regularization. In this paper, we tackle a specific MTL context denoted as primary MTL, where the ultimate goal is to improve the performance of a given primary task by leveraging several other auxiliary tasks. Our main methodological contribution is to introduce ROCK, a new generic multi-modal fusion block for deep learning tailored to the primary MTL context. ROCK architecture is based on a residual connection, which makes forward prediction explicitly impacted by the intermediate auxiliary representations. The auxiliary predictor's architecture is also specifically designed to our primary MTL context, by incorporating intensive pooling operators for maximizing complementarity of intermediate representations. Extensive experiments on NYUv2 dataset (object detection with scene classification, depth prediction, and surface normal estimation as auxiliary tasks) validate the relevance of the approach and its superiority to flat MTL approaches. Our method outperforms state-of-the-art object detection models on NYUv2 by a large margin, and is also able to handle large-scale heterogeneous inputs (real and synthetic images) and missing annotation modalities.", "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must be used for classification. 
Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack." ] }
Numerous approaches have been proposed to utilize hierarchical class information as well. One approach connects multi-layer perceptrons (MLPs) and lets each MLP sequentially learn a hierarchical class, with each rear layer taking the output of the preceding layer as its input. Another inserts a coarse-category component and fine-category components after a shared layer: classes are grouped into K coarse categories, and K fine-category components are each targeted at one coarse category. In @cite_9 , a CNN learns labels generated by maximum-margin clustering at the root node, and images in the same cluster are classified at the leaf nodes.
{ "cite_N": [ "@cite_9" ], "mid": [ "2756815061" ], "abstract": [ "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100." ] }
B-CNN learns from coarse to fine features by computing a loss between superclasses and the outputs of the branches of the architecture @cite_2 , where the B-CNN loss is the weighted sum of the losses over all branches. In @cite_4 , an ultrametric tree based on the semantic meaning of all classes is proposed to exploit hierarchical class information. The probability of each node of the ultrametric tree is the sum of the probabilities of the leaves that have a path to that node, i.e., the leaves of its subtree.
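The two constructions above reduce to simple sums, sketched here for concreteness (the function names and argument shapes are illustrative, not the cited papers' code):

```python
def branch_loss(branch_losses, weights):
    """B-CNN-style total loss: a weighted sum of per-branch losses,
    ordered coarse to fine; the weights balance the branches."""
    return sum(w * l for w, l in zip(weights, branch_losses))

def node_probability(leaf_probs, subtree_leaves):
    """Ultrametric-tree style: an internal node's probability is the
    sum of the probabilities of the leaves under it."""
    return sum(leaf_probs[i] for i in subtree_leaves)
```

For example, with branch losses [1.0, 2.0, 4.0] and weights [0.2, 0.3, 0.5], the total loss is 2.8; with leaf probabilities [0.1, 0.2, 0.3, 0.4], an internal node covering the last two leaves gets probability 0.7.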
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2903794034", "2756815061" ], "abstract": [ "In this paper, we proposed a novel Probabilistic Attribute Tree-CNN (PAT-CNN) to explicitly deal with the large intra-class variations caused by identity-related attributes, e.g., age, race, and gender. Specifically, a novel PAT module with an associated PAT loss was proposed to learn features in a hierarchical tree structure organized according to attributes, where the final features are less affected by the attributes. Then, expression-related features are extracted from leaf nodes. Samples are probabilistically assigned to tree nodes at different levels such that expression-related features can be learned from all samples weighted by probabilities. We further proposed a semi-supervised strategy to learn the PAT-CNN from limited attribute-annotated samples to make the best use of available data. Experimental results on five facial expression datasets have demonstrated that the proposed PAT-CNN outperforms the baseline models by explicitly modeling attributes. More impressively, the PAT-CNN using a single model achieves the best performance for faces in the wild on the SFEW dataset, compared with the state-of-the-art methods using an ensemble of hundreds of CNNs.", "Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. 
So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100." ] }
Furthermore, auxiliary inputs are used to check logical reasoning in @cite_20 . Auxiliary inputs based on human knowledge are provided so that the network learns logical reasoning: the network first verifies the logical information against the auxiliary inputs and then proceeds to the next stage.
{ "cite_N": [ "@cite_20" ], "mid": [ "2809918697" ], "abstract": [ "This paper describes a neural network design using auxiliary inputs, namely the indicators, that act as the hints to explain the predicted outcome through logical reasoning, mimicking the human behavior of deductive reasoning. Besides the original network input and output, we add an auxiliary input that reflects the specific logic of the data to formulate a reasoning process for cross-validation. We found that one can design either meaningful indicators, or even meaningless ones, when using such auxiliary inputs, upon which one can use as the basis of reasoning to explain the predicted outputs. As a result, one can formulate different reasonings to explain the predicted results by designing different sets of auxiliary inputs without the loss of trustworthiness of the outcome. This is similar to human explanation process where one can explain the same observation from different perspectives with reasons. We demonstrate our network concept by using the MNIST data with different sets of auxiliary inputs, where a series of design guidelines are concluded. Later, we validated our results by using a set of images taken from a robotic grasping platform. We found that our network enhanced the last 1-2% of the prediction accuracy while eliminating questionable predictions with self-conflicting logics. Future application of our network with auxiliary inputs can be applied to robotic detection problems such as autonomous object grasping, where the logical reasoning can be introduced to optimize robotic learning." ] }
1906.00928
2947601766
We consider the problem of learning a causal graph in the presence of measurement error. This setting is for example common in genomics, where gene expression is corrupted through the measurement process. We develop a provably consistent procedure for estimating the causal structure in a linear Gaussian structural equation model from corrupted observations on its nodes, under a variety of measurement error models. We provide an estimator based on the method-of-moments, which can be used in conjunction with constraint-based causal structure discovery algorithms. We prove asymptotic consistency of the procedure and also discuss finite-sample considerations. We demonstrate our method's performance through simulations and on real data, where we recover the underlying gene regulatory network from zero-inflated single-cell RNA-seq data.
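The method-of-moments idea can be illustrated in the simplest special case (a sketch under the assumption of independent additive measurement noise with known variances, not the paper's full estimator): if X = Z + e with e independent of Z, then Cov(X) = Cov(Z) + diag(noise variances), so an estimate of the latent covariance subtracts the noise variances from the diagonal before handing the matrix to a constraint-based structure-discovery algorithm.

```python
def corrected_covariance(cov_obs, noise_vars):
    """Under independent additive measurement error,
    Cov(X) = Cov(Z) + diag(noise_vars); recover an estimate of the
    latent covariance by subtracting noise variances on the diagonal.
    cov_obs: square matrix as a list of lists."""
    n = len(cov_obs)
    return [[cov_obs[i][j] - (noise_vars[i] if i == j else 0.0)
             for j in range(n)] for i in range(n)]
```

Off-diagonal entries are untouched because independent measurement noise only inflates variances, not covariances; this is why the correction is purely diagonal in this special case.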
In the presence of latent variables, identifiability is further weakened (only the so-called PAG is identifiable) and various algorithms have been developed for learning a PAG @cite_13 @cite_22 @cite_4 @cite_19 . However, these algorithms cannot estimate causal relations among the latent variables, which is our problem of interest. @cite_28 study identifiability of directed Gaussian graphical models in the presence of a single latent variable. @cite_6 , @cite_21 , @cite_26 and @cite_23 all consider the problem of learning causal edges among latent variables from the observed variables, i.e. models as in Figure a or generalizations thereof, but under assumptions that may not hold for our applications of interest, namely that the measurement error is independent of the latent variables @cite_6 , that the observed variables are a linear function of the latent variables @cite_21 , that the observed variables are binary @cite_26 , or that each latent variable is non-Gaussian with sufficient outgoing edges to guarantee identifiability @cite_23 .
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_28", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_13" ], "mid": [ "2963254467", "2763292376", "2146531590", "2626207843", "1505105018", "2542071751", "2805530975", "2137099275", "2770947558" ], "abstract": [ "We study parameter identifiability of directed Gaussian graphical models with one latent variable. In the scenario we consider, the latent variable is a confounder that forms a source node of the graph and is a parent to all other nodes, which correspond to the observed variables. We give a graphical condition that is sufficient for the Jacobian matrix of the parametrization map to be full rank, which entails that the parametrization is generically finite-to-one, a fact that is sometimes also referred to as local identifiability. We also derive a graphical condition that is necessary for such identifiability. Finally, we give a condition under which generic parameter identifiability can be determined from identifiability of a model associated with a subgraph. The power of these criteria is assessed via an exhaustive algebraic computational study on models with 4, 5, and 6 observable variables.", "Suppose we observe samples of a subset of a collection of random variables. No additional information is provided about the number of latent variables, nor of the relationship between the latent and observed variables. Is it possible to discover the number of latent components, and to learn a statistical model over the entire collection of variables? We address this question in the setting in which the latent and observed variables are jointly Gaussian, with the conditional statistics of the observed variables conditioned on the latent variables being specified by a graphical model. As a first step we give natural conditions under which such latent-variable Gaussian graphical models are identifiable given marginal statistics of only the observed variables. 
Essentially these conditions require that the conditional graphical model among the observed variables is sparse, while the effect of the latent variables is \"spread out\" over most of the observed variables. Next we propose a tractable convex program based on regularized maximum-likelihood for model selection in this latent-variable setting; the regularizer uses both the @math norm and the nuclear norm. Our modeling framework can be viewed as a combination of dimensionality reduction (to identify latent variables) and graphical modeling (to capture remaining statistical structure not attributable to the latent variables), and it consistently estimates both the number of latent components and the conditional graphical model structure among the observed variables. These results are applicable in the high-dimensional setting in which the number of latent and observed variables grows with the number of samples of the observed variables. The geometric properties of the algebraic varieties of sparse matrices and of low-rank matrices play an important role in our analysis.", "By taking into account the nonlinear effect of the cause, the inner noise effect, and the measurement distortion effect in the observed variables, the post-nonlinear (PNL) causal model has demonstrated its excellent performance in distinguishing the cause from effect. However, its identifiability has not been properly addressed, and how to apply it in the case of more than two variables is also a problem. In this paper, we conduct a systematic investigation on its identifiability in the two-variable case. We show that this model is identifiable in most cases; by enumerating all possible situations in which the model is not identifiable, we provide sufficient conditions for its identifiability. Simulations are given to support the theoretical results. 
Moreover, in the case of more than two variables, we show that the whole causal structure can be found by applying the PNL causal model to each structure in the Markov equivalence class and testing if the disturbance is independent of the direct causes for each variable. In this way the exhaustive search over all possible causal structures is avoided.", "Measurement error in the observed values of the variables can greatly change the output of various causal discovery methods. This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance. In this paper, we study precise sufficient identifiability conditions for the measurement-error-free causal model and show what information of the causal model can be recovered from observed data. In particular, we present two different sets of identifiability conditions, based on the second-order statistics and higher-order statistics of the data, respectively. The former was inspired by the relationship between the generating model of the measurement-error-contaminated data and the factor analysis model, and the latter makes use of the identifiability result of the over-complete independent component analysis problem.", "This work considers the problem of learning linear Bayesian networks when some of the variables are unobserved. Identifiability and efficient recovery from low-order observable moments are established under a novel graphical constraint. 
The constraint concerns the expansion properties of the underlying directed acyclic graph (DAG) between observed and unobserved variables in the network, and it is satisfied by many natural families of DAGs that include multi-level DAGs, DAGs with effective depth one, as well as certain families of polytrees.", "Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are @math observed contexts and @math arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality @math ( @math ). The arm choice and the latent confounder causally determines the reward while the observed context is correlated with the confounder. Under this model, the @math mean reward matrix @math (for each context in @math and each arm in @math ) factorizes into non-negative factors @math ( @math ) and @math ( @math ). This insight enables us to propose an @math -greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms), that achieves a balance between learning this low-dimensional structure and selecting the best arm to minimize regret. Our algorithm achieves a regret of @math at time @math , as compared to @math for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of @math . These are the first regret guarantees for online matrix completion with bandit feedback, when the rank is greater than one. 
We further compare the performance of our algorithm with the state of the art, on synthetic and real world data-sets.", "Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and as a result are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control relative degree of statistical independence between blocks of variables and individual variables within blocks. We derive this objective as a generalization of the evidence lower bound, which allows us to explicitly represent the trade-offs between mutual information between data and representation, KL divergence between representation and prior, and coverage of the support of the empirical data distribution. Experiments on a variety of datasets demonstrate that our objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.", "We describe anytime search procedures that (1) find disjoint subsets of recorded variables for which the members of each subset are d-separated by a single common unrecorded cause, if such exists; (2) return information about the causal relations among the latent factors so identified. We prove the procedure is point-wise consistent assuming (a) the causal relations can be represented by a directed acyclic graph (DAG) satisfying the Markov Assumption and the Faithfulness Assumption; (b) unrecorded variables are not caused by recorded variables; and (c) dependencies are linear. 
We compare the procedure with standard approaches over a variety of simulated structures and sample sizes, and illustrate its practical value with brief studies of social science data sets. Finally, we consider generalizations for non-linear systems.", "Machine learning models are vulnerable to adversarial examples: minor, in many cases imperceptible, perturbations to classification inputs. Among other suspected causes, adversarial examples exploit ML models that offer no well-defined indication as to how well a particular prediction is supported by training data, yet are forced to confidently extrapolate predictions in areas of high entropy. In contrast, Bayesian ML models, such as Gaussian Processes (GP), inherently model the uncertainty accompanying a prediction in the well-studied framework of Bayesian Inference. This paper is first to explore adversarial examples and their impact on uncertainty estimates for Gaussian Processes. To this end, we first present three novel attacks on Gaussian Processes: GPJM and GPFGS exploit forward derivatives in GP latent functions, and Latent Space Approximation Networks mimic the latent space representation in unsupervised GP models to facilitate attacks. Further, we show that these new attacks compute adversarial examples that transfer to non-GP classification models, and vice versa. Finally, we show that GP uncertainty estimates not only differ between adversarial examples and benign data, but also between adversarial examples computed by different algorithms." ] }
1906.00679
2947597348
The holy grail of networking is to create networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to the progress in the field of machine learning (ML), which has now already disrupted a number of industries and revolutionized practically all fields of research. But are the ML models foolproof and robust to security attacks to be in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities for the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize the readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of the cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines to design secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
With the well-known attacks proposed in the literature @cite_8 , the bar for launching new attacks has been lowered, since the same canned attacks can be reused by others. Although Sommer and Paxson @cite_14 were probably right in 2010 to downplay the potential of security attacks on ML, saying that ``exploiting the specifics of a machine learning implementation requires significant effort, time, and expertise on the attacker's side,'' the danger is real now that an attack can be launched on ML-based implementations with minimal effort, time, and expertise.
{ "cite_N": [ "@cite_14", "@cite_8" ], "mid": [ "2969695741", "2603766943" ], "abstract": [ "Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.", "Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, all existing adversarial example attacks require knowledge of either the model internals or its training data. 
We introduce the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge. Indeed, the only capability of our black-box adversary is to observe labels given by the DNN to chosen inputs. Our attack strategy consists in training a local model to substitute for the target DNN, using inputs synthetically generated by an adversary and labeled by the target DNN. We use the local substitute to craft adversarial examples, and find that they are misclassified by the targeted DNN. To perform a real-world and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online deep learning API. We find that their DNN misclassifies 84.24% of the adversarial examples crafted with our substitute. We demonstrate the general applicability of our strategy to many ML techniques by conducting the same attack against models hosted by Amazon and Google, using logistic regression substitutes. They yield adversarial examples misclassified by Amazon and Google at rates of 96.19% and 88.94%. We also find that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder." ] }
1906.00679
2947597348
The holy grail of networking is to create networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to the progress in the field of machine learning (ML), which has now already disrupted a number of industries and revolutionized practically all fields of research. But are the ML models foolproof and robust to security attacks to be in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily-crafted adversarial perturbations, which does not bode well for the future of ML-based cognitive networks unless ML vulnerabilities for the cognitive networking environment are identified, addressed, and fixed. The purpose of this article is to highlight the problem of insecure ML and to sensitize the readers to the danger of adversarial ML by showing how an easily-crafted adversarial ML example can compromise the operations of the cognitive self-driving network. In this paper, we demonstrate adversarial attacks on two simple yet representative cognitive networking applications (namely, intrusion detection and network traffic classification). We also provide some guidelines to design secure ML models for cognitive networks that are robust to adversarial attacks on the ML pipeline of cognitive networks.
All classification schemes depicted in the taxonomy are directly related to the intent and goals of the adversary. Most of the existing adversarial ML attacks are white-box attacks, which are later converted to black-box attacks by exploiting the transferability property of adversarial examples @cite_7 . The transferability property of adversarial ML means that adversarial perturbations generated for one ML model will often mislead other unseen ML models. Related research has been carried out on adversarial pattern recognition for more than a decade, and even before that there was a smattering of works focused on performing ML in the presence of malicious errors @cite_8 .
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2950774971", "2903785932" ], "abstract": [ "It has been well demonstrated that adversarial examples, i.e., natural images with visually imperceptible perturbations added, generally exist for deep networks to fail on image classification. In this paper, we extend adversarial examples to semantic segmentation and object detection which are much more difficult. Our observation is that both segmentation and detection are based on classifying multiple targets on an image (e.g., the basic target is a pixel or a receptive field in segmentation, and an object proposal in detection), which inspires us to optimize a loss function over a set of pixels or proposals for generating adversarial perturbations. Based on this idea, we propose a novel algorithm named Dense Adversary Generation (DAG), which generates a large family of adversarial examples, and applies to a wide range of state-of-the-art deep networks for segmentation and detection. We also find that the adversarial perturbations can be transferred across networks with different training data, based on different architectures, and even for different recognition tasks. In particular, the transferability across networks with the same architecture is more significant than in other cases. Besides, summing up heterogeneous perturbations often leads to better transfer performance, which provides an effective method of black-box adversarial attack.", "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. 
Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at this https URL." ] }
1906.00860
2947878631
We prove the linear stability of slowly rotating Kerr black holes as solutions of the Einstein vacuum equation: linearized perturbations of a Kerr metric decay at an inverse polynomial rate to a linearized Kerr metric plus a pure gauge term. We work in a natural wave map DeTurck gauge and show that the pure gauge term can be taken to lie in a fixed 7-dimensional space with a simple geometric interpretation. Our proof rests on a robust general framework, based on recent advances in microlocal analysis and non-elliptic Fredholm theory, for the analysis of resolvents of operators on asymptotically flat spaces. With the mode stability of the Schwarzschild metric as well as of certain scalar and 1-form wave operators on the Schwarzschild spacetime as an input, we establish the linear stability of slowly rotating Kerr black holes using perturbative arguments; in particular, our proof does not make any use of special algebraic properties of the Kerr metric. The heart of the paper is a detailed description of the resolvent of the linearization of a suitable hyperbolic gauge-fixed Einstein operator at low energies. As in previous work by the second and third authors on the nonlinear stability of cosmological black holes, constraint damping plays an important role. Here, it eliminates certain pathological generalized zero energy states; it also ensures that solutions of our hyperbolic formulation of the linearized Einstein equation have the stated asymptotics and decay for general initial data and forcing terms, which is a useful feature in nonlinear and numerical applications.
In the algebraically more complicated but analytically less degenerate context of cosmological black holes, we recall that Sá Barreto--Zworski @cite_113 studied the distribution of resonances of SdS black holes; exponential decay of linear scalar waves to constants was proved by Bony--Häfner @cite_36 and Melrose--Sá Barreto--Vasy @cite_112 on SdS and by Dyatlov @cite_70 @cite_31 on KdS spacetimes, and substantially refined by Dyatlov @cite_73 to a full resonance expansion. (See @cite_56 for a physical space approach giving superpolynomial energy decay.) Tensor-valued and nonlinear equations on KdS spacetimes were studied in a series of works by Hintz--Vasy @cite_63 @cite_24 @cite_102 @cite_116 @cite_28 . For a physical space approach to resonances, see Warnick @cite_110 , and for the Maxwell equation on SdS spacetimes, see Keller @cite_66 .
{ "cite_N": [ "@cite_36", "@cite_70", "@cite_28", "@cite_112", "@cite_102", "@cite_113", "@cite_56", "@cite_24", "@cite_116", "@cite_63", "@cite_110", "@cite_31", "@cite_73", "@cite_66" ], "mid": [ "1666285156", "2150477501", "2722540036", "1868264628", "1697896460", "2103138450", "2046838574", "2200779991", "326163211", "2083456890", "1968751168", "2962947744", "2028523941", "1921678059" ], "abstract": [ "This paper contains the first two parts (I-II) of a three-part series concerning the scalar wave equation = 0 on a fixed Kerr background. We here restrict to two cases: (II1) |a| << M, general or (II2) |a| < M, axisymmetric. In either case, we prove a version of 'integrated local energy decay', specifically, that the 4-integral of an energy-type density (degenerating in a neighborhood of the Schwarzschild photon sphere and at infinity), integrated over the domain of dependence of a spacelike hypersurface connecting the future event horizon with spacelike infinity or a sphere on null infinity, is bounded by a natural (non-degenerate) energy flux of through . (The case (II1) has in fact been treated previously in our Clay Lecture notes: Lectures on black holes and linear waves, arXiv:0811.0354.) In our forthcoming Part III, the restriction to axisymmetry for the general |a| < M case is removed. The complete proof is surveyed in our companion paper The black hole stability problem for linear scalar perturbations, which includes the essential details of our forthcoming Part III. Together with previous work (see our: A new physical-space approach to decay for the wave equation with applications to black hole spacetimes, in XVIth International Congress on Mathematical Physics, Pavel Exner ed., Prague 2009 pp. 
421-433, 2009, arxiv:0910.4957), this result leads, under suitable assumptions on initial data of , to polynomial decay bounds for the energy flux of through the foliation of the black hole exterior defined by the time translates of a spacelike hypersurface terminating on null infinity, as well as to pointwise decay estimates, of a definitive form useful for nonlinear applications.", "These lecture notes, based on a course given at the Zurich Clay Summer School (June 23-July 18, 2008), review our current mathematical understanding of the global behaviour of waves on black hole exterior backgrounds. Interest in this problem stems from its relationship to the non-linear stability of the black hole spacetimes themselves as solutions to the Einstein equations, one of the central open problems of general relativity. After an introductory discussion of the Schwarzschild geometry and the black hole concept, the classical theorem of Kay and Wald on the boundedness of scalar waves on the exterior region of Schwarzschild is reviewed. The original proof is presented, followed by a new more robust proof of a stronger boundedness statement. The problem of decay of scalar waves on Schwarzschild is then addressed, and a theorem proving quantitative decay is stated and its proof sketched. This decay statement is carefully contrasted with the type of statements derived heuristically in the physics literature for the asymptotic tails of individual spherical harmonics. Following this, our recent proof of the boundedness of solutions to the wave equation on axisymmetric stationary backgrounds (including slowly-rotating Kerr and Kerr-Newman) is reviewed and a new decay result for slowly-rotating Kerr spacetimes is stated and proved. This last result was announced at the summer school and appears in print here for the first time. A discussion of the analogue of these problems for spacetimes with a positive cosmological constant follows. 
Finally, a general framework is given for capturing the red-shift effect for non-extremal black holes. This unifies and extends some of the analysis of the previous sections. The notes end with a collection of open problems.", "In this work, we consider solutions of the Maxwell equations on the Schwarzschild-de Sitter family of black hole spacetimes. We prove that, in the static region bounded by black hole and cosmological horizons, solutions of the Maxwell equations decay to stationary Coulomb solutions at a super-polynomial rate, with decay measured according to ingoing and outgoing null coordinates. Our method employs a differential transformation of Maxwell tensor components to obtain higher-order quantities satisfying a Fackerell-Ipser equation, in the style of Chandrasekhar and the more recent work of Pasqualotto. The analysis of the Fackerell-Ipser equation is accomplished by means of the vector field method, with decay estimates for the higher-order quantities leading to decay estimates for components of the Maxwell tensor.", "We consider solutions to the linear wave equation @math on a non-extremal maximally extended Schwarzschild-de Sitter spacetime arising from arbitrary smooth initial data prescribed on an arbitrary Cauchy hypersurface. (In particular, no symmetry is assumed on initial data, and the support of the solutions may contain the sphere of bifurcation of the black white hole horizons and the cosmological horizons.) We prove that in the region bounded by a set of black white hole horizons and cosmological horizons, solutions @math converge pointwise to a constant faster than any given polynomial rate, where the decay is measured with respect to natural future-directed advanced and retarded time coordinates. We also give such uniform decay bounds for the energy associated to the Killing field as well as for the energy measured by local observers crossing the event horizon. 
The results in particular include decay rates along the horizons themselves. Finally, we discuss the relation of these results to previous heuristic analysis of Price and", "This paper concludes the series begun in [M. Dafermos and I. Rodnianski, Decay for solutions of the wave equation on Kerr exterior spacetimes I-II: the cases |a| << M or axisymmetry, arXiv:1010.5132], providing the complete proof of definitive boundedness and decay results for the scalar wave equation on Kerr backgrounds in the general subextremal |a| < M case without symmetry assumptions. The essential ideas of the proof (together with explicit constructions of the most difficult multiplier currents) have been announced in our survey [M. Dafermos and I. Rodnianski, The black hole stability problem for linear scalar perturbations, in Proceedings of the 12th Marcel Grossmann Meeting on General Relativity, T. (ed.), World Scientific, Singapore, 2011, pp. 132-189, arXiv:1010.5137]. Our proof appeals also to the quantitative mode-stability proven in [Y. Shlapentokh-Rothman, Quantitative Mode Stability for the Wave Equation on the Kerr Spacetime, arXiv:1302.6902, to appear, Ann. Henri Poincare], together with a streamlined continuity argument in the parameter a, appearing here for the first time. While serving as Part III of a series, this paper repeats all necessary notations so that it can be read independently of previous work.", "This book consists of two independent works: Part I is 'Solutions of the Einstein Vacuum Equations', by Lydia Bieri. Part II is 'Solutions of the Einstein-Maxwell Equations', by Nina Zipser. A famous result of Christodoulou and Klainerman is the global nonlinear stability of Minkowski spacetime. In this book, Bieri and Zipser provide two extensions to this result. In the first part, Bieri solves the Cauchy problem for the Einstein vacuum equations with more general, asymptotically flat initial data, and describes precisely the asymptotic behavior. 
In particular, she assumes less decay in the power of @math and one less derivative than in the Christodoulou-Klainerman result. She proves that in this case, too, the initial data, being globally close to the trivial data, yields a solution which is a complete spacetime, tending to the Minkowski spacetime at infinity along any geodesic. In contrast to the original situation, certain estimates in this proof are borderline in view of decay, indicating that the conditions in the main theorem on the decay at infinity on the initial data are sharp. In the second part, Zipser proves the existence of smooth, global solutions to the Einstein-Maxwell equations. A nontrivial solution of these equations is a curved spacetime with an electromagnetic field. To prove the existence of solutions to the Einstein-Maxwell equations, Zipser follows the argument and methodology introduced by Christodoulou and Klainerman. To generalize the original results, she needs to contend with the additional curvature terms that arise due to the presence of the electromagnetic field @math ; in her case the Ricci curvature of the spacetime is not identically zero but rather represented by a quadratic in the components of @math . In particular the Ricci curvature is a constant multiple of the stress-energy tensor for @math . Furthermore, the traceless part of the Riemann curvature tensor no longer satisfies the homogeneous Bianchi equations but rather inhomogeneous equations including components of the spacetime Ricci curvature. Therefore, the second part of this book focuses primarily on the derivation of estimates for the new terms that arise due to the presence of the electromagnetic field.", "We prove sharp pointwise t−3 decay for scalar linear perturbations of a Schwarzschild black hole without symmetry assumptions on the data. We also consider electromagnetic and gravitational perturbations for which we obtain decay rates t−4, and t−6, respectively. 
We proceed by decomposition into angular momentum l and summation of the decay estimates on the Regge-Wheeler equation for fixed l. We encounter a dichotomy: the decay law in time is entirely determined by the asymptotic behavior of the Regge-Wheeler potential in the far field, whereas the growth of the constants in l is dictated by the behavior of the Regge-Wheeler potential in a small neighborhood around its maximum. In other words, the tails are controlled by small energies, whereas the number of angular derivatives needed on the data is determined by energies close to the top of the Regge-Wheeler potential. This dichotomy corresponds to the well-known principle that for initial times the decay reflects the presence of complex resonances generated by the potential maximum, whereas for later times the tails are determined by the far field. However, we do not invoke complex resonances at all, but rely instead on semiclassical Sigal-Soffer type propagation estimates based on a Mourre bound near the top energy.", "Adapting and extending the techniques developed in recent work with Vasy for the study of the Cauchy horizon of cosmological spacetimes, we obtain boundedness, regularity and decay of linear scalar waves on subextremal Reissner-Nordstr \"om and (slowly rotating) Kerr spacetimes, without any symmetry assumptions; in particular, we provide simple microlocal and scattering theoretic proofs of analogous results by Franzen. We show polynomial decay of linear waves relative to a Sobolev space of order slightly above @math . This complements the generic @math blow-up result of Luk and Oh.", "We develop a definitive physical-space scattering theory for the scalar wave equation on Kerr exterior backgrounds in the general subextremal case |a|<M. 
In particular, we prove results corresponding to \"existence and uniqueness of scattering states\" and \"asymptotic completeness\" and we show moreover that the resulting \"scattering matrix\" mapping radiation fields on the past horizon and past null infinity to radiation fields on the future horizon and future null infinity is a bounded operator. The latter allows us to give a time-domain theory of superradiant reflection. The boundedness of the scattering matrix shows in particular that the maximal amplification of solutions associated to ingoing finite-energy wave packets on past null infinity is bounded. On the frequency side, this corresponds to the novel statement that the suitably normalised reflection and transmission coefficients are uniformly bounded independently of the frequency parameters. We further complement this with a demonstration that superradiant reflection indeed amplifies the energy radiated to future null infinity of suitable wave-packets as above. The results make essential use of a refinement of our recent proof [M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman, Decay for solutions of the wave equation on Kerr exterior spacetimes III: the full subextremal case |a|<M, arXiv:1402.6034] of boundedness and decay for solutions of the Cauchy problem so as to apply in the class of solutions where only a degenerate energy is assumed finite. We show in contrast that the analogous scattering maps cannot be defined for the class of finite non-degenerate energy solutions. This is due to the fact that the celebrated horizon red-shift effect acts as a blue-shift instability when solving the wave equation backwards.", "We establish a Bohr–Sommerfeld type condition for quasi-normal modes of a slowly rotating Kerr–de Sitter black hole, providing their full asymptotic description in any strip of fixed width. 
In particular, we observe a Zeeman-like splitting of the high multiplicity modes at a = 0 (Schwarzschild–de Sitter), once spherical symmetry is broken. The numerical results presented in Appendix B show that the asymptotics are in fact accurate at very low energies and agree with the numerical results established by other methods in the physics literature. We also prove that solutions of the wave equation can be asymptotically expanded in terms of quasi-normal modes; this confirms the validity of the interpretation of their real parts as frequencies of oscillations, and imaginary parts as decay rates of gravitational waves.", "A well-known open problem in general relativity, dating back to 1972, has been to prove Price’s law for an appropriate model of gravitational collapse. This law postulates inverse-power decay rates for the gravitational radiation flux through the event horizon and null infinity with respect to appropriately normalized advanced and retarded time coordinates. It is intimately related both to astrophysical observations of black holes and to the fate of observers who dare cross the event horizon. In this paper, we prove a well-defined (upper bound) formulation of Price’s law for the collapse of a self-gravitating scalar field with spherically symmetric initial data. We also allow the presence of an additional gravitationally coupled Maxwell field. Our results are obtained by a new mathematical technique for understanding the long-time behavior of large data solutions to the resulting coupled non-linear hyperbolic system of p.d.e.’s in 2 independent variables. The technique is based on the interaction of the conformal geometry, the celebrated red-shift effect, and local energy conservation; we feel it may be relevant for the problem of non-linear stability of the Kerr solution. 
When combined with previous work of the first author concerning the internal structure of charged black holes, which had assumed the validity of Price’s law, our results can be applied to the strong cosmic censorship conjecture for the Einstein-Maxwell-real scalar field system with complete spacelike asymptotically flat spherically symmetric initial data. Under Christodoulou’s C0-formulation, the conjecture is proven to be false.", "We review our recent work on linear stability for scalar perturba- tions of Kerr spacetimes, that is to say, boundedness and decay properties for solutions of the scalar wave equation 2g = 0 on Kerr exterior backgrounds (M,ga,M). We begin with the very slowly rotating caseSaS ≪M, where first boundedness and then decay has been shown in rapid developments over the last two years, following earlier progress in the Schwarzschild case a= 0. We then turn to the general subextremal range SaS <M, where we give here for the first time the essential elements of a proof of definitive decay bounds for solutions . These developments give hope that the problem of the non-linear stability of the Kerr family of black holes might soon be addressed. This paper accompanies a talk by one of the authors (I.R.) at the 12th Marcel Grossmann Meeting, Paris, June 2009.", "The stability of the inner Reissner-Nordstroem geometry is studied with test massless integer-spin fields. In contrast to previous mathematical treatments we present physical arguments for the processes involved and show that ray tracing and simple first-order scattering suffice to elucidate most of the results. Monochromatic waves which are of small amplitude and ingoing near the outer horizon develop infinite energy densities near the inner Cauchy horizon (as measured by a freely falling observer). 
Previous work has shown that certain derivatives of the field in a general (nonmonochromatic) disturbance must fall off exponentially near the inner (Cauchy) horizon (r = r_-) if energy densities are to remain finite. Thus the solution is unstable to physically reasonable perturbations which arise outside the black hole because such perturbations, if localized near past null infinity (I^-), cannot be localized near r_+, the outer horizon. The mass-energy of an infalling disturbance would generate multipole moments on the black hole. Price, Sibgatullin, and Alekseev have shown that such moments are radiated away as "tails" which travel outward and are rescattered inward yielding a wave field with a time dependence t^{-p}, p > 0. This decay in time is sufficiently slow that the tails yield infinite energy densities on the Cauchy horizon. (The amplification of the low-frequency tails upon interacting with the time-dependent potential between the horizons is an important feature guaranteeing the infinite energy density.) The interior structure of the analytically extended solution is thus disrupted by finite external disturbances. have further shown that even perturbations which are localized as they cross the outer horizon produce singularities at the inner horizon. It is shown that this singularity arises when the incoming radiation is first scattered just inside the outer horizon." ] }
It is essential, however, for the construction that the scattering data (and the resulting solution spacetime) converge to stationarity exponentially fast, in advanced and retarded time, their rate of decay intimately related to the surface gravity of the event horizon. This can be traced back to the celebrated redshift effect, which in the context of backwards evolution is seen as a blueshift." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows Models. We approach the learning problem by analyzing a more general model which interpolates between the single-parameter Mallows Model and the @math -parameter Mallows model. We call our model the Mallows Block Model -- referring to the Block Models that are popular in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
There has been a significant volume of research work on algorithmic and learning problems related to our work. In the , a finite set @math of rankings is given, and we want to compute the ranking @math . This problem is known to be NP-hard, but it admits a polynomial-time @math -approximation algorithm and a PTAS. When the rankings are i.i.d. samples from a Mallows distribution, consensus ranking is equivalent to computing the maximum likelihood ranking, which does not depend on the spread parameter. Intuitively, the problem of finding the central ranking should not be hard if the probability mass is concentrated around the central ranking. @cite_8 proposed a branch-and-bound technique which relies on this observation. @cite_9 proposed a dynamic programming approach that computes the consensus ranking efficiently under the Mallows model. @cite_10 showed that the central ranking can be recovered from a logarithmic number of i.i.d. samples from a Mallows distribution (see also Theorem ).
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_8" ], "mid": [ "2113815377", "2952852844", "2487418934" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics.", "The probability that a user will click a search result depends both on its relevance and its position on the results page. The position based model explains this behavior by ascribing to every item an attraction probability, and to every position an examination probability. To be clicked, a result must be both attractive and examined. The probabilities of an item-position pair being clicked thus form the entries of a rank- @math matrix. We propose the learning problem of a Bernoulli rank- @math bandit where at each step, the learning agent chooses a pair of row and column arms, and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank- @math bandit problem considered in recent work that proposed an elimination based algorithm Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on \"benign\" instances. These are the instances where the minimum of the average row and column rewards @math is bounded away from zero. 
The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as @math . In this paper we propose Rank1ElimKL which simply replaces the (crude) confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences, and with the help of a novel result concerning the scaling of KL divergences we prove that with this change, our algorithm will be competitive no matter the value of @math . Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of even Rank1Elim, while experiments with models derived from real data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.", "Abstract A partition ( C 1 , C 2 , … , C q ) of G = ( V , E ) into clusters of strong (respectively, weak) diameter d , such that the supergraph obtained by contracting each C i is l -colorable is called a strong (resp., weak) ( d , l ) -network-decomposition. Network-decompositions were introduced in a seminal paper by Awerbuch, Goldberg, Luby and Plotkin in 1989. showed that strong ( d , l ) -network-decompositions with d = l = exp ⁡ O ( log ⁡ n log ⁡ log ⁡ n ) can be computed in distributed deterministic time O ( d ) . Even more importantly, they demonstrated that network-decompositions can be used for a great variety of applications in the message-passing model of distributed computing. The result of was improved by Panconesi and Srinivasan in 1992: in the latter result d = l = exp ⁡ O ( log ⁡ n ) , and the running time is O ( d ) as well. In another remarkable breakthrough Linial and Saks (in 1992) showed that weak ( O ( log ⁡ n ) , O ( log ⁡ n ) ) -network-decompositions can be computed in distributed randomized time O ( log 2 ⁡ n ) . Much more recently Barenboim (2012) devised a distributed randomized constant-time algorithm for computing strong network decompositions with d = O ( 1 ) . 
However, the parameter l in his result is O ( n 1 2 + ϵ ) . In this paper we drastically improve the result of Barenboim and devise a distributed randomized constant-time algorithm for computing strong ( O ( 1 ) , O ( n ϵ ) ) -network-decompositions. As a corollary we derive a constant-time randomized O ( n ϵ ) -approximation algorithm for the distributed minimum coloring problem, improving the previously best-known O ( n 1 2 + ϵ ) approximation guarantee. We also derive other improved distributed algorithms for a variety of problems. Most notably, for the extremely well-studied distributed minimum dominating set problem currently there is no known deterministic polylogarithmic-time algorithm. We devise a deterministic polylogarithmic-time approximation algorithm for this problem, addressing an open problem of Lenzen and Wattenhofer (2010)." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows Models. We approach the learning problem by analyzing a more general model which interpolates between the single-parameter Mallows Model and the @math -parameter Mallows model. We call our model the Mallows Block Model -- referring to the Block Models that are popular in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
@cite_5 considered learning the spread parameter of a Mallows model based on a single sample, assuming that the central ranking is known. He studied the asymptotic behavior of his estimator and proved consistency. We strengthen this result by showing that our parameter estimator, based on a single sample, can achieve optimal error for the Mallows Block Model (Corollary ).
{ "cite_N": [ "@cite_5" ], "mid": [ "2113815377" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics." ] }
1906.01009
2948602696
The Mallows model, introduced in the seminal paper of Mallows 1957, is one of the most fundamental ranking distributions over the symmetric group @math . To analyze more complex ranking data, several studies considered the Generalized Mallows model defined by Fligner and Verducci 1986. Despite the significant research interest in ranking distributions, the exact sample complexity of estimating the parameters of a Mallows and a Generalized Mallows Model is not well-understood. The main result of the paper is a tight sample complexity bound for learning Mallows and Generalized Mallows Models. We approach the learning problem by analyzing a more general model which interpolates between the single-parameter Mallows Model and the @math -parameter Mallows model. We call our model the Mallows Block Model -- referring to the Block Models that are popular in theoretical statistics. Our sample complexity analysis gives a tight bound for learning the Mallows Block Model for any number of blocks. We provide essentially matching lower bounds for our sample complexity results. As a corollary of our analysis, it turns out that, if the central ranking is known, a single sample from the Mallows Block Model is sufficient to estimate the spread parameters with an error that goes to zero as the size of the permutations goes to infinity. In addition, we calculate the exact rate of the parameter estimation error.
The parameter estimation of the Generalized Mallows Model has been examined from a practical point of view by @cite_7 , but no theoretical guarantees for the sample complexity have been provided. Several other ranking models are routinely used in analyzing ranking data, such as the Plackett-Luce model, the Babington-Smith model, spectral analysis based methods, and non-parametric methods. However, to the best of our knowledge, none of these ranking methods have been analyzed from the point of view of distribution learning, which comes with a guarantee on some information-theoretic distance. considered the problem of learning the parameters of the Plackett-Luce model and came up with high-probability bounds for their estimator that are tight, in the sense that no algorithm can achieve lower estimation error with fewer samples.
{ "cite_N": [ "@cite_7" ], "mid": [ "2113815377" ], "abstract": [ "We analyze the generalized Mallows model, a popular exponential model over rankings. Estimating the central (or consensus) ranking from data is NP-hard. We obtain the following new results: (1) We show that search methods can estimate both the central ranking pi0 and the model parameters theta exactly. The search is n! in the worst case, but is tractable when the true distribution is concentrated around its mode; (2) We show that the generalized Mallows model is jointly exponential in (pi0; theta), and introduce the conjugate prior for this model class; (3) The sufficient statistics are the pairwise marginal probabilities that item i is preferred to item j. Preliminary experiments confirm the theoretical predictions and compare the new algorithm and existing heuristics." ] }
1906.00777
2947226932
Drone base station (DBS) is a promising technique to extend wireless connections for uncovered users of terrestrial radio access networks (RAN). To improve user fairness and network performance, in this paper, we design 3D trajectories of multiple DBSs in the drone assisted radio access network (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both horizontal and vertical directions, as well as the state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which AoI association, D2U communication scheduling, horizontal trajectories and flying heights of DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start slot scheduling are incorporated in the proposed algorithm to improve trajectory design performance and ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning can achieve a 10-15 dB reduction in average D2U pathloss and reduce the D2U pathloss standard deviation by 68%, which indicates improvements in network performance and user fairness.
Promoted by the advancements in flying control and communication technologies, both industry and academia are devoting considerable effort to exploiting the full potential of DA-RAN @cite_11 . As the foundation for drone communication and DA-RAN research, Al-Hourani et al. built the D2U pathloss model for DBSs based on abundant field test data from various scenarios @cite_23 . A closed-form expression of the D2U pathloss model suiting different scenarios is proposed, in which the probabilities of both LoS and NLoS D2U links are considered. In follow-up work, they further formulated the pathloss model for D2B communication in the suburban scenario @cite_8 , where the D2B links are dominated by LoS links. Leveraging the pathloss models in @cite_23 and @cite_8 , various studies have emerged in both static DBS deployment and DBS trajectory planning.
{ "cite_N": [ "@cite_8", "@cite_23", "@cite_11" ], "mid": [ "2206930994", "2039409843", "2084503286" ], "abstract": [ "In this paper, the deployment of an unmanned aerial vehicle (UAV) as a flying base station used to provide the fly wireless communications to a given geographical area is analyzed. In particular, the coexistence between the UAV, that is transmitting data in the downlink, and an underlaid device-to-device (D2D) communication network is considered. For this model, a tractable analytical framework for the coverage and rate analysis is derived. Two scenarios are considered: a static UAV and a mobile UAV. In the first scenario, the average coverage probability and the system sum-rate for the users in the area are derived as a function of the UAV altitude and the number of D2D users. In the second scenario, using the disk covering problem, the minimum number of stop points that the UAV needs to visit in order to completely cover the area is computed. Furthermore, considering multiple retransmissions for the UAV and D2D users, the overall outage probability of the D2D users is derived. Simulation and analytical results show that, depending on the density of D2D users, the optimal values for the UAV altitude, which lead to the maximum system sum-rate and coverage probability, exist. Moreover, our results also show that, by enabling the UAV to intelligently move over the target area, the total required transmit power of UAV while covering the entire area, can be minimized. Finally, in order to provide full coverage for the area of interest, the tradeoff between the coverage and delay, in terms of the number of stop points, is discussed.", "We consider a collection of single-antenna ground nodes communicating with a multi-antenna unmanned aerial vehicle (UAV) over a multiple-access ground-to-air communications link. The UAV uses beamforming to mitigate inter-user interference and achieve spatial division multiple access (SDMA). 
First, we consider a simple scenario with two static ground nodes and analytically investigate the effect of the UAV's heading on the system sum rate. We then study a more general setting with multiple mobile ground-based terminals, and develop an algorithm for dynamically adjusting the UAV heading to maximize the approximate ergodic sum rate of the uplink channel, using a prediction filter to track the positions of the mobile ground nodes. For the common scenario where a strong line-of-sight (LOS) channel exists between the ground nodes and UAV, we use an asymptotic analysis to find simplified versions of the algorithm for low and high SNR. We present simulation results that demonstrate the benefits of adapting the UAV heading in order to optimize the uplink communications performance. The simulation results also show that the simplified algorithms provide near-optimal performance.", "A robust and accurate positioning solution is required to increase the safety in GPS-denied environments. Although there is a lot of available research in this area, little has been done for confined environments such as tunnels. Therefore, we organized a measurement campaign in a basement tunnel of Linkoping university, in which we obtained ultra-wideband (UWB) complex impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios. This paper is focused on time-of-arrival (TOA) ranging since this technique can provide the most accurate range estimates, which are required for range-based positioning. We describe the measurement setup and procedure, select the threshold for TOA estimation, analyze the channel propagation parameters obtained from the power delay profile (PDP), and provide statistical model for ranging. According to our results, the rise-time should be used for NLOS identification, and the maximum excess delay should be used for NLOS error mitigation. 
However, the NLOS condition cannot be perfectly determined, so the distance likelihood has to be represented in a Gaussian mixture form. We also compared these results with measurements from a mine tunnel, and found a similar behavior." ] }
1906.00777
2947226932
Drone base station (DBS) is a promising technique to extend wireless connections for uncovered users of terrestrial radio access networks (RAN). To improve user fairness and network performance, in this paper, we design 3D trajectories of multiple DBSs in the drone assisted radio access network (DA-RAN), where DBSs fly over associated areas of interest (AoIs) and relay communications between the base station (BS) and users in AoIs. We formulate the multi-DBS 3D trajectory planning and scheduling as a mixed integer non-linear programming (MINLP) problem with the objective of minimizing the average DBS-to-user (D2U) pathloss. The 3D trajectory variations in both horizontal and vertical directions, as well as the state-of-the-art DBS-related channel models, are considered in the formulation. To address the non-convexity and NP-hardness of the MINLP problem, we first decouple it into multiple integer linear programming (ILP) and quasi-convex sub-problems in which AoI association, D2U communication scheduling, horizontal trajectories and flying heights of DBSs are respectively optimized. Then, we design a multi-DBS 3D trajectory planning and scheduling algorithm to solve the sub-problems iteratively based on the block coordinate descent (BCD) method. A k-means-based initial trajectory generation and a search-based start slot scheduling are incorporated in the proposed algorithm to improve trajectory design performance and ensure the inter-DBS distance constraint, respectively. Extensive simulations are conducted to investigate the impacts of DBS quantity, horizontal speed and initial trajectory on the trajectory planning results. Compared with static DBS deployment, the proposed trajectory planning can achieve a 10-15 dB reduction in average D2U pathloss and reduce the D2U pathloss standard deviation by 68%, which indicates improvements in network performance and user fairness.
In most static DBS deployment works, terrestrial user QoS or network performance is improved by optimizing the hovering positions of single or multiple DBSs. For instance, through a clustering based approach, Mozaffari et al. designed the optimal locations of DBSs that maximize the information collection gain from terrestrial IoT devices @cite_13 . In @cite_28 , Zhang et al. optimized the DBS density in a DBS network to maximize the network throughput while satisfying the efficiency requirements of the cellular network. Zhou et al. studied the downlink coverage features of DBSs using Nakagami-m fading models, and calculated the optimal height and density of multiple DBSs to achieve the maximal coverage probability @cite_16 . Although various works have investigated static DBS deployments in different scenarios with different methods, the D2B link quality constraint is simplified or ignored by most works. In the works considering the D2B links, the D2B channel models are either the same as the D2U pathloss model @cite_24 or traditional terrestrial channel models @cite_10 . In this paper, we further implement the specific D2B channel model derived in @cite_8 to highlight the D2B channel features.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_28", "@cite_24", "@cite_16", "@cite_10" ], "mid": [ "2281709771", "1975618234", "2758501291", "2136340918", "2015301486", "2076773434" ], "abstract": [ "In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability, which is the probability that a user at an arbitrary location in the plane will find the content that it requires in one of the BSs that it is covered by. We consider the problem of optimally placing content in all BSs jointly. As this problem is not convex, we provide a heuristic scheme by finding the optimal placement policy for one type of base station conditioned on the placement in all other types. We demonstrate that these individual optimization problems are convex and we provide an analytical solution. As an illustration, we find the optimal placement policy of the small base stations (SBSs) depending on the placement policy of the macro base stations (MBSs). We show how the hit probability evolves as the deployment density of the SBSs varies. We show that the heuristic of placing the most popular content in the MBSs is almost optimal after deploying the SBSs with optimal placement policies. Also, for the SBSs no such heuristic can be used; the optimal placement is significantly better than storing the most popular content. Finally, we show that solving the individual problems to find the optimal placement policies for different types of BSs iteratively, namely repeatedly updating the placement policies, does not improve the performance.", "We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. 
In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. 
Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.", "Drone base stations (DBSs) can enhance network coverage and area capacity by moving supply towards demand when required. This degree of freedom could be especially useful for future applications with extreme demands, such as ultra reliable and low latency communications (uRLLC). However, deployment of DBSs can face several challenges. One issue is finding the 3D placement of such BSs to satisfy dynamic requirements of the system. Second, the availability of reliable wireless backhaul links and the related resource allocation are principal issues that should be considered. Finally, association of the users with BSs becomes an involved problem due to mobility of DBSs. In this paper, we consider a macro-BS (MBS) and several DBSs that rely on the wireless links to the MBS for backhauling. Considering regular and uRLLC users, we propose an algorithm to find efficient 3D locations of DBSs in addition to the user-BS associations and wireless backhaul bandwidth allocations to maximize the sum logarithmic rate of the users. To this end, a decomposition method is employed to first find the user-BS association and bandwidth allocations. Then DBS locations are updated using a heuristic particle swarm optimization algorithm. Simulation results show the effectiveness of the proposed method and provide useful insights on the effects of traffic distributions and antenna beamwidth.", "We consider the problem of resource allocation in downlink OFDMA systems for multi service and unknown environment. Due to users' mobility and intercell interference, the base station cannot predict neither the Signal to Noise Ratio (SNR) of each user in future time slots nor their probability distribution functions. In addition, the traffic is bursty in general with unknown arrival. 
The probability distribution functions of the SNR, channel state and traffic arrival density are then unknown. Achieving a multi service Quality of Service (QoS) while optimizing the performance of the system (e.g. total throughput) is a hard and interesting task since it depends on the unknown future traffic and SNR values. In this paper we solve this problem by modeling the multiuser queuing system as a discrete time linear dynamic system. We develop a robust H∞ controller to regulate the queues of different users. The queues and Packet Drop Rates (PDR) are controlled by proposing a minimum data rate according to the demanded service type of each user. The data rate vector proposed by the controller is then fed as a constraint to an instantaneous resource allocation framework. This instantaneous problem is formulated as a convex optimization problem for instantaneous subcarrier and power allocation decisions. Simulation results show small delays and better fairness among users.", "We propose joint spatial division and multiplexing (JSDM), an approach to multiuser MIMO downlink that exploits the structure of the correlation of the channel vectors in order to allow for a large number of antennas at the base station while requiring reduced-dimensional channel state information at the transmitter (CSIT). JSDM achieves significant savings both in the downlink training and in the CSIT uplink feedback, thus making the use of large antenna arrays at the base station potentially suitable also for frequency division duplexing (FDD) systems, for which uplink downlink channel reciprocity cannot be exploited. In the proposed scheme, the multiuser MIMO downlink precoder is obtained by concatenating a prebeamforming matrix, which depends only on the channel second-order statistics, with a classical multiuser precoder, based on the instantaneous knowledge of the resulting reduced dimensional “effective” channel matrix. 
We prove a simple condition under which JSDM incurs no loss of optimality with respect to the full CSIT case. For linear uniformly spaced arrays, we show that such condition is approached in the large number of antennas limit. For this case, we use Szego's asymptotic theory of Toeplitz matrices to show that a DFT-based prebeamforming matrix is near-optimal, requiring only coarse information about the users angles of arrival and angular spread. Finally, we extend these ideas to the case of a 2-D base station antenna array, with 3-D beamforming, including multiple beams in the elevation angle direction. We provide guidelines for the prebeamforming optimization and calculate the system spectral efficiency under proportional fairness and max-min fairness criteria, showing extremely attractive performance. Our numerical results are obtained via asymptotic random matrix theory, avoiding lengthy Monte Carlo simulations and providing accurate results for realistic (finite) number of antennas and users.", "We consider spatial stochastic models of downlink heterogeneous cellular networks (HCNs) with multiple tiers, where the base stations (BSs) of each tier have a particular spatial density, transmission power and path-loss exponent. Prior works on such spatial models of HCNs assume, due to its tractability, that the BSs are deployed according to homogeneous Poisson point processes. This means that the BSs are located independently of each other and their spatial correlation is ignored. In the current paper, we propose two spatial models for the analysis of downlink HCNs, in which the BSs are deployed according to α-Ginibre point processes. The α-Ginibre point processes constitute a class of determinantal point processes and account for the repulsion between the BSs. Besides, the degree of repulsion is adjustable according to the value of α ∈ (0,1].
In one proposed model, the BSs of different tiers are deployed according to mutually independent α-Ginibre processes, where the α can take different values for the different tiers. In the other model, all the BSs are deployed according to an α-Ginibre point process and they are classified into multiple tiers by mutually independent marks. For these proposed models, we derive computable representations for the coverage probability of a typical user: the probability that the downlink signal-to-interference-plus-noise ratio for the typical user achieves a target threshold. We exhibit the results of some numerical experiments and compare the proposed models and the Poisson based model." ] }
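The BCD decomposition described in the trajectory-planning abstract above can be illustrated with a toy placement loop. This is a minimal sketch under strong simplifying assumptions (a single DBS, a hypothetical log-distance pathloss with an assumed 30 dB offset, and a centroid initialization standing in for the k-means-based one), not the paper's MINLP algorithm:

```python
import math

def pathloss_db(dbs, user):
    """Hypothetical log-distance D2U pathloss (assumed 30 dB offset)."""
    (x, y, h), (ux, uy) = dbs, user
    d = math.sqrt((x - ux) ** 2 + (y - uy) ** 2 + h ** 2)
    return 20.0 * math.log10(max(d, 1.0)) + 30.0

def avg_pathloss(dbs, users):
    return sum(pathloss_db(dbs, u) for u in users) / len(users)

def refine_xy(x, y, h, users):
    # Block 1: pattern search over the horizontal position, height fixed.
    step = 5.0
    while step > 0.1:
        cur = avg_pathloss((x, y, h), users)
        cands = [(x + dx, y + dy) for dx in (-step, 0.0, step)
                 for dy in (-step, 0.0, step)]
        nx, ny = min(cands, key=lambda p: avg_pathloss((p[0], p[1], h), users))
        if avg_pathloss((nx, ny, h), users) < cur - 1e-9:
            x, y = nx, ny          # strict improvement: accept the move
        else:
            step /= 2.0            # no improvement: shrink the search step
    return x, y

def refine_h(x, y, users, h_range=(20.0, 200.0)):
    # Block 2: line search over the flying height, horizontal position fixed.
    lo, hi = h_range
    heights = [lo + i * (hi - lo) / 50 for i in range(51)]
    return min(heights, key=lambda h: avg_pathloss((x, y, h), users))

def bcd_place_dbs(users, rounds=10):
    # Initialize at the user centroid (a stand-in for the k-means init).
    x = sum(u[0] for u in users) / len(users)
    y = sum(u[1] for u in users) / len(users)
    h = 100.0
    for _ in range(rounds):  # alternate the two blocks, BCD-style
        x, y = refine_xy(x, y, h, users)
        h = refine_h(x, y, users)
    return (x, y, h), avg_pathloss((x, y, h), users)
```

Because each block can only decrease the average pathloss, the alternation converges, mirroring why the paper's iterative sub-problem solving is well behaved.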
1906.01012
2948721154
Action recognition has so far mainly focused on the classification of hand-selected, pre-clipped actions, reaching impressive results in this field. But with performance ceilings being reached on current datasets, it also appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome those problems is to move towards less restricted scenarios. In this context we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end we put the process of collecting data on its feet again and start with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The uniqueness of the dataset is attributed to the fact that the whole process of collecting the data and training does not involve any human intervention. To address the problem of semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Action recognition has long been a challenging topic, and many innovative approaches, mainly for the task of action classification @cite_30 @cite_26 @cite_32 , have emerged in the research community. But we are clearly still far from the real-world task of learning arbitrary action classes from video data. One limitation may be the lack of real-world datasets based on truly random collections of videos.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_32" ], "mid": [ "1981781955", "2486913577", "2146048167" ], "abstract": [ "Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.", "We consider the problem of detecting and localizing a human action from continuous action video from depth cameras. We believe that this problem is more challenging than the problem of traditional action recognition as we do not have the information about the starting and ending frames of an action class. 
Another challenge which makes the problem difficult, is the latency in detection of actions. In this paper, we introduce a greedy approach to detect the action class, invariant of their temporal scale in the testing sequences using class templates and basic skeleton based feature representation from the depth stream data generated using Microsoft Kinect. We evaluate the proposed method on the standard G3D and UTKinect-Action datasets consisting of five and ten actions, respectively. Our results demonstrate that the proposed approach performs well for action detection and recognition under different temporal scales, and is able to outperform the state of the art methods at low latency.", "Action recognition has often been posed as a classification problem, which assumes that a video sequence only have one action class label and different actions are independent. However, a single human body can perform multiple concurrent actions at the same time, and different actions interact with each other. This paper proposes a concurrent action detection model where the action detection is formulated as a structural prediction problem. In this model, an interval in a video sequence can be described by multiple action labels. An detected action interval is determined both by the unary local detector and the relations with other actions. We use a wavelet feature to represent the action sequence, and design a composite temporal logic descriptor to describe the action relations. The model parameters are trained by structural SVM learning. Given a long video sequence, a sequential decision window search algorithm is designed to detect the actions. Experiments on our new collected concurrent action dataset demonstrate the strength of our method." ] }
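The collection scheme described in the abstract above (searching the subtitles of freely available videos for the annotated class names, with no human intervention) can be mocked up as follows. The subtitle tuple layout and the `window` padding are assumptions for illustration, not the dataset's actual mining pipeline:

```python
def mine_training_clips(subtitles, class_names, window=5.0):
    """Return weakly labeled candidate clips whose subtitles mention a class.

    subtitles: list of (video_id, start_sec, end_sec, text) entries
               (an assumed layout for this sketch).
    """
    clips = []
    for video_id, start, end, text in subtitles:
        lowered = text.lower()
        for cls in class_names:
            if cls.lower() in lowered:
                # Pad the subtitle interval to capture the surrounding action.
                clips.append((video_id, max(0.0, start - window),
                              end + window, cls))
    return clips
```

Clips mined this way are noisy by construction, which is exactly the semantic-inconsistency problem the proposed class hierarchy is meant to absorb.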
1906.01012
2948721154
Action recognition has so far mainly focused on the classification of hand-selected, pre-clipped actions, reaching impressive results in this field. But with performance ceilings being reached on current datasets, it also appears that the next steps in the field will have to go beyond this fully supervised classification. One way to overcome those problems is to move towards less restricted scenarios. In this context we present a large-scale real-world dataset designed to evaluate learning techniques for human action recognition beyond hand-crafted datasets. To this end we put the process of collecting data on its feet again and start with the annotation of a test set of 250 cooking videos. The training data is then gathered by searching for the respective annotated classes within the subtitles of freely available videos. The uniqueness of the dataset is attributed to the fact that the whole process of collecting the data and training does not involve any human intervention. To address the problem of semantic inconsistencies that arise with this kind of training data, we further propose a semantic hierarchical structure for the mined classes.
Apart from first generation datasets @cite_2 @cite_23 , where actors were required to perform certain actions in a controlled environment, current datasets such as HMDB @cite_22 , UCF @cite_27 or the recently released Kinetics dataset @cite_24 are mainly acquired from web sources such as YouTube clips or movies, with the aim of representing realistic scenarios for training and testing. Here, videos were usually first searched for by predefined action queries and later clipped and organized to capture the atomic actions or their repetitions. Other datasets such as Thumos @cite_15 , MPI Cooking @cite_6 , Breakfast @cite_16 or the recently released Epic Kitchen dataset @cite_13 focus on the labeling of one or more action segments in single long videos, trying to temporally detect or segment predefined action classes within the video.
{ "cite_N": [ "@cite_22", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2780470340", "2963524571", "2619082050", "2949827582", "2618799552", "2949310145", "2777542469", "2108710284", "2949594863" ], "abstract": [ "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pre-train action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics, UCF-101 and HMDB-51, models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models.
On THUMOS14 and ActivityNet-v1.3, our localization model improves the mAP of the baseline model by 8.6% and 2.5%, respectively.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.2% on HMDB-51 and 97.9% on UCF-101.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos.
We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories.
While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6% mAP, underscoring the need for developing new approaches for video understanding.", "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 437 15-minute video clips, where actions are localized in space and time, resulting in 1.59M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.8% mAP, underscoring the need for developing new approaches for video understanding.", "We propose a new task of unsupervised action detection by action matching. Given two long videos, the objective is to temporally detect all pairs of matching video segments. A pair of video segments are matched if they share the same human action. The task is category independent---it does not matter what action is being performed---and no supervision is used to discover such video segments.
Unsupervised action detection by action matching allows us to align videos in a meaningful manner. As such, it can be used to discover new action categories or as an action proposal technique within, say, an action detection pipeline. Moreover, it is a useful pre-processing step for generating video highlights, e.g., from sports videos. We present an effective and efficient method for unsupervised action detection. We use an unsupervised temporal encoding method and exploit the temporal consistency in human actions to obtain candidate action segments. We evaluate our method on this challenging task using three activity recognition benchmarks, namely, the MPII Cooking activities dataset, the THUMOS15 action detection benchmark and a new dataset called the IKEA dataset. On the MPII Cooking dataset we detect action segments with a precision of 21.6% and recall of 11.7% over 946 long video pairs and over 5000 ground truth action segments. Similarly, on the THUMOS dataset we obtain 18.4% precision and 25.1% recall over 5094 ground truth action segment pairs.", "This paper is the first to address the problem of unsupervised action localization in videos. Given unlabeled data without bounding box annotations, we propose a novel approach that: 1) Discovers action class labels and 2) Spatio-temporally localizes actions in videos. It begins by computing local video features to apply spectral clustering on a set of unlabeled training videos. For each cluster of videos, an undirected graph is constructed to extract a dominant set, which are known for high internal homogeneity and in-homogeneity between vertices outside it. Next, a discriminative clustering approach is applied, by training a classifier for each cluster, to iteratively select videos from the non-dominant set and obtain complete video action classes.
Once classes are discovered, training videos within each cluster are selected to perform automatic spatio-temporal annotations, by first over-segmenting videos in each discovered class into supervoxels and constructing a directed graph to apply a variant of knapsack problem with temporal constraints. Knapsack optimization jointly collects a subset of supervoxels, by enforcing the annotated action to be spatio-temporally connected and its volume to be the size of an actor. These annotations are used to train SVM action classifiers. During testing, actions are localized using a similar Knapsack approach, where supervoxels are grouped together and SVM, learned using videos from discovered action classes, is used to recognize these actions. We evaluate our approach on UCF-Sports, Sub-JHMDB, JHMDB, THUMOS13 and UCF101 datasets. Our experiments suggest that despite using no action class labels and no bounding box annotations, we are able to get competitive results to the state-of-the-art supervised methods.", "We are given a set of video clips, each one annotated with an ordered list of actions, such as “walk” then “sit” then “answer phone” extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. 
We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies.", "We are given a set of video clips, each one annotated with an ordered list of actions, such as \"walk\" then \"sit\" then \"answer phone\" extracted from, for example, the associated text script. We seek to temporally localize the individual actions in each clip as well as to learn a discriminative classifier for each action. We formulate the problem as a weakly supervised temporal assignment with ordering constraints. Each video clip is divided into small time intervals and each time interval of each video clip is assigned one action label, while respecting the order in which the action labels appear in the given annotations. We show that the action label assignment can be determined together with learning a classifier for each action in a discriminative manner. We evaluate the proposed model on a new and challenging dataset of 937 video clips with a total of 787720 frames containing sequences of 16 different actions from 69 Hollywood movies." ] }
cs0003054
2949339967
The idle computers on a local area, campus area, or even wide area network represent a significant computational resource---one that is, however, also unreliable, heterogeneous, and opportunistic. This type of resource has been used effectively for embarrassingly parallel problems but not for more tightly coupled problems. We describe an algorithm that allows branch-and-bound problems to be solved in such environments. In designing this algorithm, we faced two challenges: (1) scalability, to effectively exploit the variably sized pools of resources available, and (2) fault tolerance, to ensure the reliability of services. We achieve scalability through a fully decentralized algorithm, by using a membership protocol for managing dynamically available resources. However, this fully decentralized design makes achieving reliability even more challenging. We guarantee fault tolerance in the sense that the loss of up to all but one resource will not affect the quality of the solution. For propagating information efficiently, we use epidemic communication for both the membership protocol and the fault-tolerance mechanism. We have developed a simulation framework that allows us to evaluate design alternatives. Results obtained in this framework suggest that our techniques can execute scalably and reliably.
The only fully decentralized, fault-tolerant B&B algorithm for distributed-memory architectures is DIB (Distributed Implementation of Backtracking) @cite_0 . DIB was designed for a wide range of tree-based applications, such as recursive backtrack, branch-and-bound, and alpha-beta pruning. It is a distributed, asynchronous algorithm that uses a dynamic load-balancing technique. Its failure recovery mechanism is based on keeping track of which machine is responsible for each unsolved problem. Each machine memorizes the problems for which it is responsible, as well as the machines to which it sent problems or from which it received problems. The completion of a problem is reported to the machine the problem came from. Hence, each machine can determine whether the work for which it is responsible is still unsolved, and can redo that work in the case of failure.
{ "cite_N": [ "@cite_0" ], "mid": [ "1984263429" ], "abstract": [ "DIB is a general-purpose package that allows a wide range of applications such as recursive backtrack, branch and bound, and alpha-beta search to be implemented on a multicomputer. It is very easy to use. The application program needs to specify only the root of the recursion tree, the computation to be performed at each node, and how to generate children at each node. In addition, the application program may optionally specify how to synthesize values of tree nodes from their children's values and how to disseminate information (such as bounds) either globally or locally in the tree. DIB uses a distributed algorithm, transparent to the application programmer, that divides the problem into subproblems and dynamically allocates them to any number of (potentially nonhomogeneous) machines. This algorithm requires only minimal support from the distributed operating system. DIB can recover from failures of machines even if they are not detected. DIB currently runs on the Crystal multicomputer at the University of Wisconsin-Madison. Many applications have been implemented quite easily, including exhaustive traversal ( N queens, knight's tour, negamax tree evaluation), branch and bound (traveling salesman) and alpha-beta search (the game of NIM). Speedup is excellent for exhaustive traversal and quite good for branch and bound." ] }
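The record above concerns distributed branch-and-bound. As a minimal illustration of the bound-and-prune step that frameworks like DIB distribute across machines, here is a sequential branch-and-bound sketch for the 0/1 knapsack problem; the function name and problem choice are illustrative, not taken from DIB itself.

```python
# Minimal sequential branch-and-bound for the 0/1 knapsack problem.
# Illustrative only: systems like DIB distribute the subproblem tree
# across machines and track subproblem ownership for fault recovery;
# here we show just the core bounding/pruning logic on one machine.

def knapsack_branch_and_bound(items, capacity):
    """items: list of (value, weight); returns the best total value."""
    # Sort by value density so the fractional relaxation bound is tight.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic bound: fill the remaining room fractionally.
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value
        if i == len(items) or bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        v, w = items[i]
        if w <= room:
            branch(i + 1, value + v, room - w)  # take item i
        branch(i + 1, value, room)              # skip item i

    branch(0, 0, capacity)
    return best
```

In a DIB-style setting, the two recursive calls would become subproblems that can be handed to other machines and redone if a machine fails.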
cs0003008
1524644832
This paper presents a method of computing a revision of a function-free normal logic program. If an added rule is inconsistent with a program, that is, if it leads to a situation such that no stable model exists for the new program, then deletion and addition of rules are performed to avoid inconsistency. We specify a revision by translating a normal logic program into an abductive logic program with abducibles to represent deletion and addition of rules. To compute such deletion and addition, we propose an adaptation of our top-down abductive proof procedure to compute the abducibles relevant to an added rule. We compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by a top-down proof procedure.
There are many procedures to compute stable models, generalized stable models, or abduction. If we used a bottom-up procedure on our translated abductive logic program to compute all the generalized stable models naively, then the sets of abducibles to be compared would be larger, since abducibles of irrelevant temporary rules, and of addable rules leading to inconsistency, would be considered. Therefore, it is better to compute only the abducibles related to the inconsistency. To our knowledge, the only top-down procedure usable for this purpose is Satoh and Iwayama's, since we need bottom-up consistency checking of the addition and deletion of literals while computing abducibles for revision. This task is similar to integrity-constraint checking in @cite_16 , and Satoh and Iwayama's procedure includes this task.
{ "cite_N": [ "@cite_16" ], "mid": [ "176609766" ], "abstract": [ "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be performed in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated. (This paper was written while the first author was at Imperial College.) Introduction: The term \"abduction\" was introduced by the philosopher Charles Peirce [1931] to refer to a particular kind of hypothetical reasoning. In the simplest case, it has the form: from A and A if B, infer B as a possible \"explanation\" of A. Abduction has been given prominence in Charniak and McDermott's [1985] \"Introduction to Artificial Intelligence\", where it has been applied to expert systems and story comprehension. Independently, several authors have developed deductive techniques to drive the generation of abductive hypotheses. Cox and Pietrzykowski [1986] construct hypotheses from the \"dead ends\" of linear resolution proofs. Finger and Genesereth [1985] generate \"deductive solutions to design problems\" using the \"residue\" left behind in resolution proofs. Poole, Goebel and Aleliunas [1987] also use linear resolution to generate hypotheses. All impose the restriction that hypotheses should be consistent with the \"knowledge base\". Abduction is a form of non-monotonic reasoning, because hypotheses which are consistent with one state of a knowledge base may become inconsistent when new knowledge is added. Poole [1988] argues that abduction is preferable to non-monotonic logics for default reasoning. In this view, defaults are hypotheses formulated within classical logic rather than conclusions derived within some form of non-monotonic logic. The similarity between abduction and default reasoning was also pointed out in [Kowalski, 1979]. In this paper we show how abduction can be integrated with logic programming, and we concentrate on the use of abduction to generalise negation by failure. Conditional Answers Compared with Abduction: In the simplest case, a logic program consists of a set of Horn clauses, which are used backward to reduce goals to subgoals. The initial goal is solved when there are no subgoals left." ] }
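To make the abduction task in this record concrete, here is a brute-force propositional sketch: find subset-minimal sets of abducible atoms that let a goal be derived from definite rules while respecting simple integrity constraints. This is not Satoh and Iwayama's top-down procedure, only an illustration of the problem it solves; all names and the rule encoding are invented for the example.

```python
from itertools import chain, combinations

# Brute-force propositional abduction sketch (not the top-down
# procedure discussed above): find subset-minimal sets of abducible
# atoms that make a goal derivable from Horn rules while keeping
# every integrity constraint (a "forbidden" atom set) unviolated.

def closure(rules, facts):
    """Forward-chain definite rules (head, body) from a set of facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def minimal_explanations(rules, abducibles, goal, constraints=()):
    """Return subset-minimal sets of abducibles deriving the goal."""
    candidates = []
    # Enumerate subsets in nondecreasing size, so the subset check
    # below suffices for minimality.
    subsets = chain.from_iterable(
        combinations(abducibles, r) for r in range(len(abducibles) + 1))
    for subset in subsets:
        model = closure(rules, subset)
        if goal in model and not any(set(c) <= model for c in constraints):
            s = set(subset)
            if not any(prev <= s for prev in candidates):
                candidates.append(s)
    return candidates
```

For example, with rules `wet :- rain` and `wet :- sprinkler` and abducibles `rain`, `sprinkler`, the goal `wet` has the two minimal explanations `{rain}` and `{sprinkler}`; forbidding `rain` via a constraint leaves only `{sprinkler}`.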
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Dealing with preferences on rules seems to necessitate a two-level approach. This in fact is a characteristic of many approaches found in the literature. The majority of these approaches treat preference at the meta-level by defining alternative semantics. @cite_1 proposes a modification of well-founded semantics in which dynamic preferences may be given for rules employing @math . @cite_12 and @cite_5 propose different prioritized versions of answer set semantics. In @cite_12 static preferences are addressed first, by defining the reduct of a logic program @math , which is a subset of @math that is most preferred. For the following example, their approach gives two answer sets (one with @math and one with @math ) which seems to be counter-intuitive; ours in contrast has a single answer set containing @math . Moreover, the dynamic case is addressed by specifying a transformation of a dynamic program to a set of static programs.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_12" ], "mid": [ "2124627636", "2174235632", "1565029141" ], "abstract": [ "In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity.", "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.", "We extend answer set semantics to deal with inconsistent programs (containing classical negation), by finding a \"best\" answer set. Within the context of inconsistent programs, it is natural to have a partial order on rules, representing a preference for satisfying certain rules, possibly at the cost of violating less important ones. We show that such a rule order induces a natural order on extended answer sets, the minimal elements of which we call preferred answer sets. We characterize the expressiveness of the resulting semantics and show that it can simulate negation as failure as well as disjunction. We illustrate an application of the approach by considering database repairs, where minimal repairs are shown to correspond to preferred answer sets." ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
Brewka and Eiter @cite_5 address static preferences on rules in extended logic programs. They begin with a strict partial order on a set of rules, but define preference with respect to total orders that conform to the original partial order. Preferred answer sets are then selected from among the collection of answer sets of the (unprioritised) program. In contrast, we deal only with the original partial order, which is translated into the object theory. As well, only preferred extensions are produced in our approach; there is no need for meta-level filtering of extensions.
{ "cite_N": [ "@cite_5" ], "mid": [ "2174235632" ], "abstract": [ "We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s p t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems." ] }
cs0003028
2953269011
We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.
A two-level approach is also found in @cite_7 , where a methodology for directly encoding preferences in logic programs is proposed. The ``second-order flavour'' of this approach stems from the reification of rules and preferences. For example, a rule ( p :- r, s, not q ) is expressed by the formula ( default (n, p, [r, s], [q]) ), where @math is the name of the rule. The Prolog-like list notation @math and @math raises the possibility of an infinite Herbrand universe; this is problematic for systems like smodels and dlv that rely on finite Herbrand universes.
{ "cite_N": [ "@cite_7" ], "mid": [ "2124627636" ], "abstract": [ "Abstract In this paper, we address the issue of how Gelfond and Lifschitz's answer set semantics for extended logic programs can be suitably modified to handle prioritized programs. In such programs an ordering on the program rules is used to express preferences. We show how this ordering can be used to define preferred answer sets and thus to increase the set of consequences of a program. We define a strong and a weak notion of preferred answer sets. The first takes preferences more seriously, while the second guarantees the existence of a preferred answer set for programs possessing at least one answer set. Adding priorities to rules is not new, and has been explored in different contexts. However, we show that many approaches to priority handling, most of which are inherited from closely related formalisms like default logic, are not suitable and fail on intuitive examples. Our approach, which obeys abstract, general principles that any approach to prioritized knowledge representation should satisfy, handles them in the expected way. Moreover, we investigate the complexity of our approach. It appears that strong preference on answer sets does not add on the complexity of the principal reasoning tasks, and weak preference leads only to a mild increase in complexity." ] }
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
If we look at a broader context, then finding a stable model is a combinatorial search problem. Other forms of combinatorial search problems are propositional satisfiability, constraint satisfaction, constraint logic programming, and integer linear programming problems, as well as some other logic programming problems such as those expressible in @cite_28 . The difference between these problem formalisms and the stable model semantics is that they do not include default negation. In addition, all but the last are monotonic.
{ "cite_N": [ "@cite_28" ], "mid": [ "49730540" ], "abstract": [ "Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models. Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models." ] }
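The stable model semantics discussed in this record can be illustrated with a brute-force Gelfond-Lifschitz check: a candidate atom set is stable exactly when it equals the least model of its reduct. The sketch below uses an assumed rule encoding of our own and is nothing like smodels' actual search procedure.

```python
from itertools import combinations

# Brute-force stable model enumeration via the Gelfond-Lifschitz
# reduct: a candidate set M is stable iff M equals the least model
# of the program obtained by deleting rules whose negative body
# intersects M and dropping the remaining negative literals.
# A rule is a triple (head, positive_body, negative_body).

def least_model(definite_rules):
    """Least model of a definite program given as (head, body) pairs."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in model and all(b in model for b in pos):
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    models = []
    for r in range(len(atoms) + 1):
        for candidate in combinations(sorted(atoms), r):
            m = set(candidate)
            # Gelfond-Lifschitz reduct of the program w.r.t. m.
            reduct = [(h, pos) for h, pos, neg in rules
                      if not (set(neg) & m)]
            if least_model(reduct) == m:
                models.append(m)
    return models
```

The classic two-rule program `p :- not q` and `q :- not p` then yields exactly the two stable models `{p}` and `{q}`, illustrating groundedness: neither atom can justify its own inclusion.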
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
From an algorithmic standpoint the progenitor of the @math algorithm is the Davis-Putnam (-Logemann-Loveland) procedure @cite_37 for determining the satisfiability of propositional formulas. This procedure can be seen as a backtracking search procedure that makes assumptions about the truth values of the propositional atoms in a formula and that then derives new truth values from these assumptions in order to prune the search space.
{ "cite_N": [ "@cite_37" ], "mid": [ "1973734335" ], "abstract": [ "The Davis-Putnam-Logemann-Loveland algorithm is one of the most popular algorithms for solving the satisfiability problem. Its efficiency depends on its choice of a branching rule. We construct a sequence of instances of the satisfiability problem that fools a variety of ``sensible'' branching rules in the following sense: when the instance has n variables, each of the ``sensible'' branching rules brings about Omega(2^(n/5)) recursive calls of the Davis-Putnam-Logemann-Loveland algorithm, even though only O(1) such calls are necessary." ] }
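As a sketch of the Davis-Putnam-Logemann-Loveland procedure described above, here is a minimal DPLL satisfiability check with unit propagation and naive branching; it deliberately omits the lookahead and branching heuristics that the paper is actually about.

```python
# Minimal DPLL-style satisfiability check (a sketch of the procedure
# discussed above, without lookahead or clever branching heuristics).
# A formula is a list of clauses; a clause is a list of nonzero ints,
# where -n denotes the negation of variable n (DIMACS-style).

def dpll(clauses, assignment=()):
    """Return a satisfying {var: bool} dict, or None if unsatisfiable."""
    assignment = dict(assignment)
    # Unit propagation: repeatedly assign literals forced by unit clauses.
    while True:
        simplified = []
        unit = None
        for clause in clauses:
            literals = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    literals.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not literals:
                return None          # empty clause: conflict
            if len(literals) == 1 and unit is None:
                unit = literals[0]
            simplified.append(literals)
        clauses = simplified
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    if not clauses:
        return assignment            # all clauses satisfied
    # Branch on the first unassigned variable, trying both truth values.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

The adversarial instances in the cited paper are built precisely so that "sensible" choices of `var` in the branching step still force exponentially many recursive calls.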
cs0005010
2951494809
An algorithm for computing the stable model semantics of logic programs is developed. It is shown that one can extend the semantics and the algorithm to handle new and more expressive types of rules. Emphasis is placed on the use of efficient implementation techniques. In particular, an implementation of lookahead that safely avoids testing every literal for failure and that makes the use of lookahead feasible is presented. In addition, a good heuristic is derived from the principle that the search space should be minimized. Due to the lack of competitive algorithms and implementations for the computation of stable models, the system is compared with three satisfiability solvers. This shows that the heuristic can be improved by breaking ties, but leaves open the question of how to break them. It also demonstrates that the more expressive rules of the stable model semantics make the semantics clearly preferable over propositional logic when a problem has a more compact logic program representation. Conjunctive normal form representations are never more compact than logic program ones.
While the extended rules of this work are novel, there are some analogous constructions in the literature. The choice rule can be seen as a generalization of the disjunctive rule of the possible model semantics @cite_43 . The disjunctive rule of disjunctive logic programs @cite_14 also resembles the choice rule, but the semantics is in this case different. The stable models of a disjunctive program are subset minimal, while the stable models of a logic program are grounded, i.e., atoms cannot justify their own inclusion. If a program contains choice rules, then a grounded model is not necessarily subset minimal.
{ "cite_N": [ "@cite_43", "@cite_14" ], "mid": [ "2085084839", "49730540" ], "abstract": [ "We introduce the stable model semantics for disjunctive logic programs and deductive databases, which generalizes the stable model semantics, defined earlier for normal (i.e., non-disjunctive) programs. Depending on whether only total (2-valued) or all partial (3-valued) models are used we obtain the disjunctive stable semantics or the partial disjunctive stable semantics, respectively. The proposed semantics are shown to have the following properties: • For normal programs, the disjunctive (respectively, partial disjunctive) stable semantics coincides with the stable (respectively, partial stable) semantics. • For normal programs, the partial disjunctive stable semantics also coincides with the well-founded semantics. • For locally stratified disjunctive programs both (total and partial) disjunctive stable semantics coincide with the perfect model semantics. • The partial disjunctive stable semantics can be generalized to the class of all disjunctive logic programs. • Both (total and partial) disjunctive stable semantics can be naturally extended to a broader class of disjunctive programs that permit the use of classical negation. • After translation of the program P into a suitable autoepistemic theory ( P ) the disjunctive (respectively, partial disjunctive) stable semantics of P coincides with the autoepistemic (respectively, 3-valued autoepistemic) semantics of ( P ) .", "Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models. Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models." ] }
math0005204
1540167525
We present some new and recent algorithmic results concerning polynomial system solving over various rings. In particular, we present some of the best recent bounds on: (a) the complexity of calculating the complex dimension of an algebraic set, (b) the height of the zero-dimensional part of an algebraic set over C, and (c) the number of connected components of a semi-algebraic set. We also present some results which significantly lower the complexity of deciding the emptiness of hypersurface intersections over C and Q, given the truth of the Generalized Riemann Hypothesis. Furthermore, we state some recent progress on the decidability of the prefixes and , quantified over the positive integers. As an application, we conclude with a result connecting Hilbert's Tenth Problem in three variables and height bounds for integral points on algebraic curves. This paper is based on three lectures presented at the conference corresponding to this proceedings volume. The titles of the lectures were Some Speed-Ups in Computational Algebraic Geometry,'' Diophantine Problems Nearly in the Polynomial Hierarchy,'' and Curves, Surfaces, and the Frontier to Undecidability.''
As for more general relations between @math and its analogue over @math , it is easy to see that the decidability of @math implies the decidability of its analogue over @math . Unfortunately, the converse is currently unknown. Via Lagrange's Theorem (that any positive integer can be written as a sum of four squares) one can easily show that the undecidability of @math implies the undecidability of the analogue of @math over @math . More recently, Zhi-Wei Sun has shown that the @math can be replaced by @math @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1924023668" ], "abstract": [ "A function @math is @math -resilient if all its Fourier coefficients of degree at most @math are zero, i.e., @math is uncorrelated with all low-degree parities. We study the notion of @math @math of Boolean functions, where we say that @math is @math -approximately @math -resilient if @math is @math -close to a @math -valued @math -resilient function in @math distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class @math over the uniform distribution. Roughly speaking, if all functions in a class @math are far from being @math -resilient then @math can be learned agnostically in time @math and conversely, if @math contains a function close to being @math -resilient then agnostic learning of @math in the statistical query (SQ) framework of Kearns has complexity of at least @math . This characterization is based on the duality between @math approximation by degree- @math polynomials and approximate @math -resilience that we establish. In particular, it implies that @math approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary. Focusing on monotone Boolean functions, we exhibit the existence of near-optimal @math -approximately @math -resilient monotone functions for all @math . Prior to our work, it was conceivable even that every monotone function is @math -far from any @math -resilient function. Furthermore, we construct simple, explicit monotone functions based on @math and @math that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas." ] }
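The argument in this record invokes Lagrange's four-square theorem. A brute-force witness search illustrates just the number-theoretic fact being used, not the decidability reductions themselves; the function name is ours.

```python
import math

# Brute-force witness for Lagrange's four-square theorem: every
# non-negative integer n can be written as a^2 + b^2 + c^2 + d^2.
# This illustrates the fact invoked above to transfer undecidability
# from the integers to the positive integers.

def four_squares(n):
    """Return (a, b, c, d) with a <= b <= c <= d and sum of squares n."""
    limit = math.isqrt(n) + 1
    for a in range(limit):
        for b in range(a, limit):
            for c in range(b, limit):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    break  # c only grows, so d2 only shrinks
                d = math.isqrt(d2)
                if d >= c and d * d == d2:
                    return (a, b, c, d)
    raise AssertionError("unreachable by Lagrange's theorem")
```

For example, 7 = 1 + 1 + 1 + 4 decomposes as (1, 1, 1, 2).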
cs0005026
1644526253
A one-time pad (OTP) based cipher to ensure both data protection and integrity when mobile code arrives at a remote host is presented. Data protection is required when a mobile agent could retrieve confidential information that would be encrypted in untrusted nodes of the network; in this case, information management cannot rely on carrying an encryption key. Data integrity is a prerequisite because mobile code must be protected against malicious hosts that, by counterfeiting or removing collected data, could conceal information from the server that sent the agent. The algorithm described in this article seems simple enough to be easily implemented. This scheme is based on a non-interactive protocol and allows a remote host to change its own data on the fly while protecting the information against tampering by other hosts.
A strong foundation is a requirement for future work on mobile agents @cite_12 . Designing semantics and type-safe languages for agents in untrusted networks @cite_14 , and supporting permission languages for specifying distributed processes in dynamically evolving networks, such as the languages derived from the @math -calculus @cite_18 , are important for protecting hosts against malicious code. spoonhower:telephony have shown that agents can be used for collaborative applications, reducing network bandwidth requirements. sander:hosts have proposed a way to obtain code privacy using non-interactive evaluation of encrypted functions (EEF). hohl:mess has proposed the possibility of using algorithms to ``mess up'' code.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_12" ], "mid": [ "2071198236", "2006520458", "118562019" ], "abstract": [ "In mobile agent systems, program code together with some process state can autonomously migrate to new hosts. Despite its many practical benefits, mobile agent technology results in significant new security threats from malicious agents and hosts. In this paper, we propose a security architecture to achieve three goals: certification that a server has the authority to execute an agent on behalf of its sender; flexible selection of privileges, so that an agent arriving at a server may be given the privileges necessary to carry out the task for which it has come to the server; and state appraisal, to ensure that an agent has not become malicious as a consequence of alterations to its state. The architecture models the trust relations between the principals of mobile agent systems and includes authentication and authorization mechanisms.", "We describe a foundational language for specifying dynamically evolving networks of distributed processes, Dπ. The language is a distributed extension of the π-calculus which incorporates the notions of remote execution, migration, and site failure. Novel features of Dπ include 1. Communication channels are explicitly located: the use of a channel requires knowledge of both the channel and its location. 2. Names are endowed with permissions: the holder of a name may only use that name in the manner allowed by these permissions. A type system is proposed in which the types control the allocation of permissions; in well-typed processes all names are used in accordance with the permissions allowed by the types. We prove Subject Reduction and Type Safety Theorems for the type system. In the final section we define a semantic theory based on barbed bisimulations and discuss its characterization in terms of a bisimulation relation over a relativized labelled transition system.", "Mobile agent technology offers a new computing paradigm in which a program, in the form of a software agent, can suspend its execution on a host computer, transfer itself to another agent-enabled host on the network, and resume execution on the new host. The use of mobile code has a long history dating back to the use of remote job entry systems in the 1960's. Today's agent incarnations can be characterized in a number of ways ranging from simple distributed objects to highly organized software with embedded intelligence. As the sophistication of mobile software has increased over time, so too have the associated threats to security. This report provides an overview of the range of threats facing the designers of agent platforms and the developers of agent-based applications. The report also identifies generic security objectives, and a range of measures for countering the identified threats and fulfilling these security objectives." ] }
cs0006023
2949089885
We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling, to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy (65% based on errorful, automatically recognized words and prosody, and 71% based on word transcripts, compared to a chance baseline accuracy of 35% and human accuracy of 84%) and a small reduction in word recognition error.
Previous research on DA modeling has generally focused on task-oriented dialogue, with three tasks in particular garnering much of the research effort. The Map Task corpus @cite_49 @cite_12 consists of conversations between two speakers with slightly different maps of an imaginary territory. Their task is to help one speaker reproduce a route drawn only on the other speaker's map, all without being able to see each other's maps. Of the DA modeling algorithms described below, TaylorEtAl:LS98 and Wright:98 were based on Map Task. The VERBMOBIL corpus consists of two-party scheduling dialogues. A number of the DA modeling algorithms described below were developed for VERBMOBIL, including those of MastEtAl:96 , WarnkeEtAl:97 , Reithinger:96 , Reithinger:97 , and Samuel:98 . The ATR Conference corpus is a subset of a larger ATR Dialogue database consisting of simulated dialogues between a secretary and a questioner at international conferences. Researchers using this corpus include Nagata:92 , NagataMorimoto:93 , NagataMorimoto:94 , and KitaEtAl:96 . Table shows the most commonly used versions of the tag sets from those three tasks.
{ "cite_N": [ "@cite_12", "@cite_49" ], "mid": [ "2118142207", "2122514299" ], "abstract": [ "This paper describes a corpus of unscripted, task-oriented dialogues which has been designed, digitally recorded, and transcribed to support the study of spontaneous speech on many levels. The corpus uses the Map Task (Brown, Anderson, Yule, and Shillcock, 1983) in which speakers must collaborate verbally to reproduce on one participant's map a route printed on the other's. In all, the corpus includes four conversations from each of 64 young adults and manipulates the following variables: familiarity of speakers, eye contact between speakers, matching between landmarks on the participants' maps, opportunities for contrastive stress, and phonological characteristics of landmark names. The motivations for the design are set out and basic corpus statistics are presented.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of pre-generated utterances, or (b) using statistics to determine the generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
There is a large body of literature dealing with verification of protocols. Verification systems typically address well-defined properties --such as safety, liveness, and responsiveness @cite_37 -- and aim to detect violations of these properties. In general, the two main approaches for protocol verification are theorem proving and reachability analysis @cite_2 . Theorem proving systems define a set of axioms and relations to prove properties, and include model-based and logic-based formalisms @cite_35 @cite_17 . These systems are useful in many applications. However, they tend to abstract out some network dynamics that we will study (e.g., selective packet loss). Moreover, they do not synthesize network topologies and do not address performance issues per se.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_17", "@cite_2" ], "mid": [ "2033263591", "1508967933", "2291637985", "1926128235" ], "abstract": [ "In this article we present a comprehensive survey of various approaches for the verification of cache coherence protocols based on state enumeration, symbolic model checking, and symbolic state models. Since these techniques search the state space of the protocol exhaustively, the amount of memory required to manipulate that state information and the verification time grow very fast with the number of processors and the complexity of the protocol mechanisms. To be successful for systems of arbitrary complexity, a verification technique must solve this so-called state space explosion problem. The emphasis of our discussion is on the underlying theory in each method of handling the state space explosion problem, and formulating and checking the safety properties (e.g., data consistency) and the liveness properties (absence of deadlock and livelock). We compare the efficiency and discuss the limitations of each technique in terms of memory and computation time. Also, we discuss issues of generality, applicability, automaticity, and amenity for existing tools in each class of methods. No method is truly superior because each method has its own strengths and weaknesses. Finally, refinements that can further reduce the verification time and/or the memory requirement are also discussed.", "Since the 1980s, two approaches have been developed for analyzing security protocols. One of the approaches relies on a computational model that considers issues of complexity and probability. This approach captures a strong notion of security, guaranteed against all probabilistic polynomial-time attacks. The other approach relies on a symbolic model of protocol executions in which cryptographic primitives are treated as black boxes. Since the seminal work of Dolev and Yao, it has been realized that this latter approach enables significantly simpler and often automated proofs. However, the guarantees that it offers have been quite unclear. In this paper, we show that it is possible to obtain the best of both worlds: fully automated proofs and strong, clear security guarantees. Specifically, for the case of protocols that use signatures and asymmetric encryption, we establish that symbolic integrity and secrecy proofs are sound with respect to the computational model. The main new challenges concern secrecy properties for which we obtain the first soundness result for the case of active adversaries. Our proofs are carried out using Casrul, a fully automated tool.", "Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking, and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario when the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework, and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions, and low-quality software systems.", "When formalizing security protocols, different specification languages support very different reasoning methodologies, whose results are not directly or easily comparable. Therefore, establishing clear mappings among different frameworks is highly desirable, as it permits various methodologies to cooperate by interpreting theoretical and practical results of one system in another. In this paper, we examine the non-trivial relationship between two general verification frameworks: multiset rewriting (MSR) and a process algebra (PA) inspired to CCS and the π-calculus. Although defining a simple and general bijection between MSR and PA appears difficult, we show that the sublanguages needed to specify a large class of cryptographic protocols (immediate decryption protocols) admits an effective translation that is not only bijective and trace-preserving, but also induces a weak form of bisimulation across the two languages. In particular, the correspondence sketched in this abstract permits transferring several important trace-based properties such as secrecy and many forms of authentication." ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
There are a good number of publications dealing with conformance testing @cite_19 @cite_23 @cite_22 @cite_5 . However, conformance testing verifies that an implementation (as a black box) adheres to a given specification of the protocol by constructing input/output sequences. Conformance testing is useful during the implementation testing phase --which we do not address in this paper-- but does not address performance issues or topology synthesis for design testing. By contrast, our method synthesizes test scenarios for protocol design, according to evaluation criteria.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_22", "@cite_23" ], "mid": [ "2107360681", "2029755436", "2339842460", "2121954581" ], "abstract": [ "This chapter presents principles and techniques for modelbased black-box conformance testing of real-time systems using the Uppaal model-checking tool-suite. The basis for testing is given as a network of concurrent timed automata specified by the test engineer. Relativized input output conformance serves as the notion of implementation correctness, essentially timed trace inclusion taking environment assumptions into account. Test cases can be generated offline and later executed, or they can be generated and executed online. For both approaches this chapter discusses how to specify test objectives, derive test sequences, apply these to the system under test, and assign a verdict.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I O behavior that is not exhibited by any other state.", "Industrial-sized hybrid systems are typically not amenable to formal verification techniques. For this reason, a common approach is to formally verify abstractions of (parts of) the original system. However, we need to show that this abstraction conforms to the actual system implementation including its physical dynamics. In particular, verified properties of the abstract system need to transfer to the implementation. To this end, we introduce a formal conformance relation, called reachset conformance, which guarantees transference of safety properties, while being a weaker relation than the existing trace inclusion conformance. Based on this formal relation, we present a conformance testing method which allows us to tune the trade-off between accuracy and computational load. Additionally, we present a test selection algorithm that uses a coverage measure to reduce the number of test cases for conformance testing. We experimentally show the benefits of our novel techniques based on an example from autonomous driving.", "The authors present a detailed study of four formal methods (T-, U-, D-, and W-methods) for generating test sequences for protocols. Applications of these methods to the NBS Class 4 Transport Protocol are discussed. An estimation of fault coverage of four protocol-test-sequence generation techniques using Monte Carlo simulation is also presented. The ability of a test sequence to decide whether a protocol implementation conforms to its specification heavily relies on the range of faults that it can capture. Conformance is defined at two levels, namely, weak and strong conformance. This study shows that a test sequence produced by T-method has a poor fault detection capability, whereas test sequences produced by U-, D-, and W-methods have comparable (superior to that for T-method) fault coverage on several classes of randomly generated machines used in this study. Also, some problems with a straightforward application of the four protocol-test-sequence generation methods to real-world communication protocols are pointed out. >" ] }
cs0006029
2952170707
The advent of multipoint (multicast-based) applications and the growth and complexity of the Internet has complicated network protocol design and evaluation. In this paper, we present a method for automatic synthesis of worst and best case scenarios for multipoint protocol performance evaluation. Our method uses a fault-oriented test generation (FOTG) algorithm for searching the protocol and system state space to synthesize these scenarios. The algorithm is based on a global finite state machine (FSM) model. We extend the algorithm with timing semantics to handle end-to-end delays and address performance criteria. We introduce the notion of a virtual LAN to represent delays of the underlying multicast distribution tree. As a case study, we use our method to evaluate variants of the timer suppression mechanism, used in various multipoint protocols, with respect to two performance criteria: overhead of response messages and response time. Simulation results for reliable multicast protocols show that our method provides a scalable way for synthesizing worst-case scenarios automatically. We expect our method to serve as a model for applying systematic scenario generation to other multipoint protocols.
Automatic test generation techniques have been used in several fields. VLSI chip testing @cite_12 uses test vector generation to detect target faults. Test vectors may be generated based on circuit and fault models using the fault-oriented technique, which utilizes implication techniques. These techniques were adopted in @cite_25 to develop fault-oriented test generation (FOTG) for multicast routing. In @cite_25 , FOTG was used to study the correctness of a multicast routing protocol on a LAN. We extend FOTG to study the performance of end-to-end multipoint mechanisms. We introduce the concept of a virtual LAN to represent the underlying network, integrate timing and delay semantics into our model, and use performance criteria to drive our synthesis algorithm.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "1962021926", "2495715420" ], "abstract": [ "We present a new algorithm for automatic test generation for multicast routing. Our algorithm processes a finite state machine (FSM) model of the protocol and uses a mix of forward and backward search techniques to generate the tests. The output tests include a set of topologies, protocol events and network failures, that lead to violation of protocol correctness and behavioral requirements. We target protocol robustness in specific, and do not attempt to verify other properties in this paper. We apply our method to a multicast routing protocol; PIM-DM, and investigate its behavior in the presence of selective packet loss on LANs and router crashes. Our study unveils several robustness violations in PIM-DM, for which we suggest fixes with the aid of the presented algorithm.", "In engineering of safety critical systems, regulatory standards often put requirements on both traceable specification-based testing, and structural coverage on program units. Automated test generation techniques can be used to generate inputs to cover the structural aspects of a program. However, there is no conclusive evidence on how automated test generation compares to manual test design, or how testing based on the program implementation relates to specification-based testing. In this paper, we investigate specification -- and implementation-based testing of embedded software written in the IEC 61131-3 language, a programming standard used in many embedded safety critical software systems. Further, we measure the efficiency and effectiveness in terms of fault detection. For this purpose, a controlled experiment was conducted, comparing tests created by a total of twenty-three software engineering master students. The participants worked individually on manually designing and automatically generating tests for two IEC 61131-3 programs. Tests created by the participants in the experiment were collected and analyzed in terms of mutation score, decision coverage, number of tests, and testing duration. We found that, when compared to implementation-based testing, specification-based testing yields significantly more effective tests in terms of the number of faults detected. Specifically, specification-based tests more effectively detect comparison and value replacement type of faults, compared to implementation-based tests. On the other hand, implementation-based automated test generation leads to fewer tests (up to 85% improvement) created in shorter time than the ones manually created based on the specification." ] }
cs0007002
2949562458
Many problems in robust control and motion planning can be reduced either to finding a sound approximation of the solution space determined by a set of nonlinear inequalities, or to the ``guaranteed tuning problem'' as defined by Jaulin and Walter, which amounts to finding a value for some tuning parameter such that a set of inequalities be verified for all the possible values of some perturbation vector. A classical approach to solve these problems, which satisfies the strong soundness requirement, involves some quantifier elimination procedure such as Collins' Cylindrical Algebraic Decomposition symbolic method. Sound numerical methods using interval arithmetic and local consistency enforcement to prune the search space are presented in this paper as much faster alternatives for both soundly solving systems of nonlinear inequalities, and addressing the guaranteed tuning problem whenever the perturbation vector has dimension one. The use of these methods in camera control is investigated, and experiments with the prototype of a declarative modeller to express camera motion using a cinematic language are reported and commented on.
The method presented by @cite_40 is strongly related to the one we present in the following, since they rely on usual interval constraint solving techniques to compute sound boxes for some constraint system. Starting from a seed that is known to belong to the solution space, they enlarge the domain of the variables around it in such a way that the new box computed is still included in the solution space. They do so by using local consistency techniques to find the points at which the truth value of the constraints changes. Their algorithm is particularly well suited for the application they target, the enlargement of tolerances. It is, however, not designed to solve the guaranteed tuning problem. In addition, it is necessary to obtain a seed for each connected subset of the solution space, and to apply the algorithm on each seed if one is interested in computing several solutions (e.g., to ensure representativeness of the samples).
{ "cite_N": [ "@cite_40" ], "mid": [ "179407972" ], "abstract": [ "We report on a novel technique called spatial coupling and its application in the analysis of random constraint satisfaction problems (CSP). Spatial coupling was invented as an engineering construction in the area of error correcting codes where it has resulted in efficient capacity-achieving codes for a wide range of channels. However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their (rough) satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances - namely the spatially coupled ensemble - which has the same (rough) satisfiability threshold, and is at the same time algorithmically easier to solve. Then by running well-known algorithms on the spatially coupled ensemble we obtain a lower bound on the (rough) satisfiability threshold of the original ensemble. The method is versatile because one can choose the CSP, there is a certain amount of freedom in the construction of the spatially coupled ensemble, and also in the choice of the algorithm. In this work we focus on random K-SAT but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation for the algorithm which is analyzed over the spatially coupled instances. For K = 3, for instance, our lower bound is equal to 3.67 which is better than the current bounds in the literature. Similarly, for graph 3-colorability we get a bound of 2.22 which is also better than the current bounds in the literature." ] }
cs0007004
1931024191
Despite the effort of many researchers in the area of multi-agent systems (MAS) for designing and programming agents, a few years ago the research community began to take into account that common features among different MAS exist. Based on these common features, several tools have tackled the problem of agent development on specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, being flexible and reusable.
JAFIMA (Java Framework for Intelligent and Mobile Agents) @cite_12 takes a different approach from the other tools: it is primarily targeted at expert developers who want to develop agents from scratch based on the abstract classes provided, so the programming effort is greater than in the other tools. The weakest point of JAFIMA is its rule-based mechanism for defining agents' behavior. This mechanism does not support complex behaviors such as on-line planning or learning. Moreover, the abstractions for representing mental states lack flexibility and services for manipulating symbolic data.
{ "cite_N": [ "@cite_12" ], "mid": [ "2061692729" ], "abstract": [ "Almost all agent development to date has been “homegrown” [4] and done from scratch, independently, by each development team. This has led to the following problems: • Lack of an agreed definition: Agents built by different teams have different capabilities. • Duplication of effort: There has been little reuse of agent architectures, designs, or components. • Inability to satisfy industrial strength requirements: Agents must integrate with existing software and computer infrastructure. They must also address security and scaling concerns. Agents are complex and ambitious software systems that will be entrusted with critical applications. As such, agent based systems must be engineered with valid software engineering principles and not constructed in an ad hoc fashion. Agent systems must have a strong foundation based on masterful software patterns. Software patterns arose out of Alexander’s [2] work in architecture and urban planning. Many urban plans and architectures are grandiose and ill-fated. Overly ambitious agent based systems built in an ad hoc fashion risk the same fate. They may never be built, or, due to their fragile nature, they may be built and either never used or used once and then abandoned. A software pattern is a recurring problem and solution; it may address conceptual, architectural or design problems. A pattern is described in a set format to ease its dissemination. The format states the problem addressed by the pattern and the forces acting on it. There is also a context that must be present for the pattern to be valid, a statement of the solution, and any known uses. The following sections summarize some key patterns of agent based systems; for brevity, many of the patterns are presented in an abbreviated “patlet” form. When known uses are not listed for an individual pattern, it means that the pattern has arisen from the JAFIMA activity. The patterns presented in this paper represent progress toward a pattern language or living methodology for intelligent and mobile agents." ] }
cs0007004
1931024191
Despite the effort of many researchers in the area of multi-agent systems (MAS) in designing and programming agents, a few years ago the research community began to recognize that common features exist among different MAS. Based on these common features, several tools have tackled the problem of agent development on specific application domains or specific types of agents. As a consequence, their scope is restricted to a subset of the huge application domain of MAS. In this paper we propose a generic infrastructure for programming agents, named Brainstorm J. The infrastructure has been implemented as an object-oriented framework. As a consequence, our approach supports a broader scope of MAS applications than previous efforts, being flexible and reusable.
A framework such as Brainstorm J is not just a collection of components but also defines a generic design. When programmers use a framework they reuse that design and save time and effort. In addition, because of the bidirectional flow of control, frameworks can contain much more functionality than a traditional library, regardless of whether it is a procedural or class library @cite_9 .
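The "bidirectional flow of control" can be made concrete with a minimal sketch (my illustration, not code from Brainstorm J): the framework owns the control loop and calls back into user-supplied behavior, which is what lets it package a generic design rather than a passive collection of routines.

```python
# Minimal sketch of inversion of control: the framework drives execution
# and calls back into user code, instead of the user calling library code.

class AgentFramework:
    """Generic skeleton: the framework owns the main loop."""
    def __init__(self, behavior):
        self.behavior = behavior  # user-supplied hook

    def run(self, perceptions):
        actions = []
        for p in perceptions:
            # control flows *from* the framework *into* user code and back
            actions.append(self.behavior(p))
        return actions

# The user reuses the generic design and fills in only the varying part.
echo_agent = AgentFramework(lambda percept: f"act-on:{percept}")
print(echo_agent.run(["ping", "pong"]))  # prints ['act-on:ping', 'act-on:pong']
```

The class and hook names are invented for the example; the point is only that the framework, not the caller, decides when user code runs.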
{ "cite_N": [ "@cite_9" ], "mid": [ "2106259924" ], "abstract": [ "Programmers commonly reuse existing frameworks or libraries to reduce software development efforts. One common problem in reusing the existing frameworks or libraries is that the programmers know what type of object that they need, but do not know how to get that object with a specific method sequence. To help programmers to address this issue, we have developed an approach that takes queries of the form \"Source object type → Destination object type\" as input, and suggests relevant method-invocation sequences that can serve as solutions that yield the destination object from the source object given in the query. Our approach interacts with a code search engine (CSE) to gather relevant code samples and performs static analysis over the gathered samples to extract required sequences. As code samples are collected on demand through CSE, our approach is not limited to queries of any specific set of frameworks or libraries. We have implemented our approach with a tool called PARSEWeb, and conducted four different evaluations to show that our approach is effective in addressing programmer's queries. We also show that PARSEWeb performs better than existing related tools: Prospector and Strathcona" ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Our definition of correlation-intractability is related to a definition by Okamoto @cite_21 . Using our terminology, Okamoto considers function ensembles for which it is infeasible to form input-output relations with respect to a specific evasive relation [Def. 19, Ok92] (rather than with respect to all such relations). He uses the assumption that such function ensembles exist, for a specific evasive relation, in [Thm. 20, Ok92].
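The contrast can be stated more explicitly (my paraphrase in the terminology used here; the notation is mine, not quoted from either paper): correlation intractability quantifies over all evasive relations, whereas Okamoto fixes one designated evasive relation.

```latex
% Evasiveness of a binary relation R (w.r.t. a random oracle O):
\forall x:\quad \Pr_{O}\bigl[(x, O(x)) \in R\bigr] \le \mathrm{negl}(|x|).

% Correlation intractability of an ensemble \{f_s\} w.r.t. R:
% no efficient adversary A, given the seed s, can hit the relation.
\forall\ \text{PPT } A:\quad
  \Pr_{s}\bigl[x \leftarrow A(s) \,:\, (x, f_s(x)) \in R\bigr] \le \mathrm{negl}(|s|).
```

Full correlation intractability requires the second condition for every evasive R; the assumption attributed to Okamoto is the weaker statement for one specific evasive R.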
{ "cite_N": [ "@cite_21" ], "mid": [ "1590334370" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
First steps in the direction of identifying and studying useful special-purpose properties of the random oracle have been taken by Canetti @cite_10 . Specifically, Canetti considered a property called "perfect one-wayness", provided a definition of this property, constructions which possess this property (under some reasonable assumptions), and applications for which such functions suffice. Additional constructions have been suggested by Canetti, Micciancio and Reingold @cite_4 . Another context where specific properties of the random oracle were captured and realized is the signature scheme of Gennaro, Halevi and Rabin @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_4" ], "mid": [ "1590334370", "2064423787", "2139033758" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "We show that the existence of one-way functions is necessary and sufficient for the existence of pseudo-random generators in the following sense. Let ƒ be an easily computable function such that when x is chosen randomly: (1) from ƒ( x ) it is hard to recover an x 1 with ƒ( x 1 ) = ƒ( x ) by a small circuit, or; (2) ƒ has small degeneracy and from ƒ( x ) it is hard to recover x by a fast algorithm. From one-way functions of type (1) or (2) we show how to construct pseudo-random generators secure against small circuits or fast algorithms, respectively, and vice-versa. 
Previous results show how to construct pseudo-random generators from one-way functions that have special properties ([Blum, Micali 82], [Yao 82], [Levin 85], [Goldreich, Krawczyk, Luby 88]). We use the results of [Goldreich, Levin 89] in an essential way.", "The random oracle model is a very convenient setting for designing cryptographic protocols. In this idealized model all parties have access to a common, public random function, called a random oracle. Protocols in this model are often very simple and efficient; also the analysis is often clearer. However, we do not have a general mechanism for transforming protocols that are secure in the random oracle model into protocols that are secure in real life. In fact, we do not even know how to meaningfully specify the properties required from such a mechanism. Instead, it is a common practice to simply replace — often without mathematical justification — the random oracle with a ‘cryptographic hash function’ (e.g., MD5 or SHA). Consequently, the resulting protocols have no meaningful proofs of security." ] }
cs0010019
2949302825
We take a critical look at the relationship between the security of cryptographic schemes in the Random Oracle Model, and the security of the schemes that result from implementing the random oracle by so called "cryptographic hash functions". The main result of this paper is a negative one: There exist signature and encryption schemes that are secure in the Random Oracle Model, but for which any implementation of the random oracle results in insecure schemes. In the process of devising the above schemes, we consider possible definitions for the notion of a "good implementation" of a random oracle, pointing out limitations and challenges.
Following the preliminary version of the current work @cite_25 , Hada and Tanaka observed that the existence of even restricted correlation intractable functions (in the non-uniform model) would be enough to prove that 3-round auxiliary-input zero-knowledge AM proof systems only exist for languages in BPP @cite_0 . (Recall that auxiliary-input zero-knowledge is seemingly weaker than black-box zero-knowledge, and so the result of @cite_0 is incomparable to prior work of Goldreich and Krawczyk @cite_3 that showed that constant-round auxiliary-input zero-knowledge AM proof systems only exist for languages in BPP.)
{ "cite_N": [ "@cite_0", "@cite_25", "@cite_3" ], "mid": [ "1590334370", "1987890787", "2962993321" ], "abstract": [ "Correlation intractable function ensembles were introduced in an attempt to capture the \"unpredictability\" property of a random oracle: It is assumed that if R is a random oracle then it is infeasible to find an input x such that the input-output pair (x,R(x)) has some desired property. Since this property is often useful to design many cryptographic applications in the random oracle model, it is desirable that a plausible construction of correlation intractable function ensembles will be provided. However, no plausibility result has been proposed. In this paper, we show that proving the implication, \"if one-way functions exist then correlation intractable function ensembles exist\", is as hard as proving that \"3-round auxiliary-input zero-knowledge Arthur-Merlin proofs exist only for trivial languages such as BPP languages.\" As far as we know, proving the latter claim is a fundamental open problem in the theory of zero-knowledge proofs. Therefore, our result can be viewed as strong evidence that the construction based solely on one-way functions will be impossible, i.e., that any plausibility result will require stronger cryptographic primitives.", "The wide applicability of zero-knowledge interactive proofs comes from the possibility of using these proofs as subroutines in cryptographic protocols. A basic question concerning this use is whether the (sequential and or parallel) composition of zero-knowledge protocols is zero-knowledge too. We demonstrate the limitations of the composition of zero-knowledge protocols by proving that the original definition of zero-knowledge is not closed under sequential composition; and that even the strong formulations of zero-knowledge (e.g., black-box simulation) are not closed under parallel execution. 
We present lower bounds on the round complexity of zero-knowledge proofs, with significant implications for the parallelization of zero-knowledge protocols. We prove that three-round interactive proofs and constant-round Arthur--Merlin proofs that are black-box simulation zero-knowledge exist only for languages in BPP. In particular, it follows that the \"parallel versions\" of the first interactive proof systems presented for quadratic residuosity, graph isomorphism, and any language in NP, are not black-box simulation zero-knowledge, unless the corresponding languages are in BPP. Whether these parallel versions constitute zero-knowledge proofs was an intriguing open question arising from the early works on zero-knowledge. Other consequences are a proof of optimality for the round complexity of various known zero-knowledge protocols and the necessity of using secret coins in the design of \"parallelizable\" constant-round zero-knowledge proofs.", "Zero knowledge plays a central role in cryptography and complexity. The seminal work of Ben- (STOC 1988) shows that zero knowledge can be achieved unconditionally for any language in NEXP, as long as one is willing to make a suitable *physical assumption*: if the provers are spatially isolated, then they can be assumed to be playing independent strategies. Quantum mechanics, however, tells us that this assumption is unrealistic, because spatially-isolated provers could share a quantum entangled state and realize a non-local correlated strategy. The MIP^* model captures this setting. In this work we study the following question: does spatial isolation still suffice to unconditionally achieve zero knowledge even in the presence of quantum entanglement? We answer this question in the affirmative: we prove that every language in NEXP has a 2-prover *zero knowledge* interactive proof that is sound against entangled provers; that is, NEXP ⊆ ZK-MIP^*.
Our proof consists of constructing a zero knowledge interactive PCP with a strong algebraic structure, and then lifting it to the MIP^* model. This lifting relies on a new framework that builds on recent advances in low-degree testing against entangled strategies, and clearly separates classical and quantum tools. Our main technical contribution is the development of algebraic techniques for obtaining unconditional zero knowledge; this includes a zero knowledge variant of the celebrated sumcheck protocol, a key building block in many probabilistic proof systems. A core component of our sumcheck protocol is a new algebraic commitment scheme, whose analysis relies on algebraic complexity theory." ] }
cs0011005
2952710481
This paper presents a practical solution for detecting data races in parallel programs. The solution consists of a combination of execution replay (RecPlay) with automatic on-the-fly data race detection. This combination enables us to perform the data race detection on an unaltered execution (almost no probe effect). Furthermore, the usage of multilevel bitmaps and snooped matrix clocks limits the amount of memory used. As the record phase of RecPlay is highly efficient, there is no need to switch it off, hereby eliminating the possibility of Heisenbugs because tracing can be left on all the time.
Although much theoretical work has been done in the field of data race detection @cite_19 @cite_25 @cite_10 @cite_17 , few implementations for general systems have been proposed. Tools proposed in the past had limited capabilities: they were targeted at programs using one semaphore @cite_11 , programs using only post/wait synchronisation @cite_22 , or programs with nested fork-join parallelism @cite_10 @cite_21 . The tool that comes closest to our data race detection mechanism, apart from @cite_26 for a proprietary system, is an on-the-fly data race detection mechanism for the CVM (Concurrent Virtual Machine) system @cite_24 . That tool only instruments the memory references to distributed shared data (about 1% of the references) and is unable to perform reference identification: it will return the variable that was involved in a data race, but not the instructions that are responsible for the reference.
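The happens-before idea underlying such on-the-fly detectors can be sketched with plain vector clocks (a simplified illustration of the technique, *not* RecPlay's actual implementation, which uses multilevel bitmaps and snooped matrix clocks to limit memory): two accesses to the same variable race if neither happens-before the other and at least one is a write.

```python
# Simplified on-the-fly happens-before race detection with vector clocks.

NTHREADS = 2

def leq(a, b):  # vector-clock ordering: a happens-before-or-equals b
    return all(x <= y for x, y in zip(a, b))

class Detector:
    def __init__(self):
        self.clock = [[0] * NTHREADS for _ in range(NTHREADS)]
        self.last = {}      # var -> list of (tid, clock snapshot, is_write)
        self.races = []

    def access(self, tid, var, is_write):
        c = self.clock[tid]
        c[tid] += 1
        for (t2, c2, w2) in self.last.get(var, []):
            # race: different threads, at least one write, unordered clocks
            if t2 != tid and (is_write or w2):
                if not leq(c2, c) and not leq(c, c2):
                    self.races.append((var, tid, t2))
        self.last.setdefault(var, []).append((tid, list(c), is_write))

    def sync(self, src, dst):  # e.g. unlock in src observed by lock in dst
        self.clock[dst] = [max(a, b)
                           for a, b in zip(self.clock[dst], self.clock[src])]

d = Detector()
d.access(0, "x", True)   # thread 0 writes x
d.access(1, "x", True)   # concurrent write by thread 1 -> race
d.sync(1, 0)             # synchronisation orders later accesses
d.access(0, "x", True)   # ordered after thread 1's write -> no new race
print(d.races)           # prints [('x', 1, 0)]
```

Keeping every past access per variable is the memory cost that the multilevel bitmaps of RecPlay are designed to avoid; the ordering test itself is the same.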
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_17", "@cite_24", "@cite_19", "@cite_10", "@cite_25", "@cite_11" ], "mid": [ "2088270410", "2168171401", "2170200862", "2165365531", "2147506153", "2597739964", "2043201977", "2150248374", "2623550824" ], "abstract": [ "For shared-memory parallel programs that use explicit synchronization, data race detection is an important part of debugging. A data race exists when concurrently executing sections of code access common shared variables. In programs intended to be data race free, they are sources of nondeterminism usually considered bugs. Previous methods for detecting data races in executions of parallel programs can determine when races occurred, but can report many data races that are artifacts of others and not direct manifestations of program bugs. Artifacts exist because some races can cause others and can also make false races appear real. Such artifacts can overwhelm the programmer with information irrelevant for debugging. This paper presents results showing how to identify nonartifact data races by validation and ordering. Data race validation attempts to determine which races involve events that either did execute concurrently or could have (called feasible data races). We show how each detected race can either be guaranteed feasible, or when insufficient information is available, sets of races can be identified within which at least one is guaranteed feasible. Data race ordering attempts to identify races that did not occur only as a result of others. Data races can be partitioned so that it is known whether a race in one partition may have affected a race in another. The first partitions are guaranteed to contain at least one feasible data race that is not an artifact of any kind. By combining validation and ordering, the programmer can be directed to those data races that should be investigated first for debugging. 
Research supported in part by National Science Foundation grant CCR-8815928, Office of Naval Research grant N00014-89-J-1222, and a Digital Equipment Corporation External Research Grant. To appear in Proc. of the Third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Williamsburg, VA, April 1991.", "We describe an integrated approach to support debugging of nondeterministic concurrent programs. Our tool provides reproducible program behavior and incorporates mechanisms to identify synchronization bugs commonly termed data races or access anomalies. Both features are based on partially ordered event logs captured at run time. Our mechanism identifies a race condition that is guaranteed to be unaffected by other races in the considered execution. Data collection and analysis for race detection has no impact on the original computation since it is done in replay mode. The race detection and execution replay mechanisms are integrated in the MOSKITO operating system.", "Detecting data races in shared-memory parallel programs is an important debugging problem. This paper presents a new protocol for run-time detection of data races in executions of shared-memory programs with nested fork-join parallelism and no other inter-thread synchronization. This protocol has significantly smaller worst-case run-time overhead than previous techniques. The worst-case space required by our protocol when monitoring an execution of a program P is O(V N), where V is the number of shared variables in P, and N is the maximum dynamic nesting of parallel constructs in P's execution. The worst-case time required to perform any monitoring operation is O(N).
We formally prove that our new protocol always reports a non-empty subset of the data races in a monitored program execution and describe how this property leads to an effective debugging strategy.", "The authors present a data-race-free-1, shared-memory model that unifies four earlier models: weak ordering, release consistency (with sequentially consistent special operations), the VAX memory model, and data-race-free-0. Data-race-free-1 unifies the models of weak ordering, release consistency, the VAX, and data-race-free-0 by formalizing the intuition that if programs synchronize explicitly and correctly, then sequential consistency can be guaranteed with high performance in a manner that retains the advantages of each of the four models. Data-race-free-1 expresses the programmer's interface more explicitly and formally than weak ordering and the VAX, and allows an implementation not allowed by weak ordering, release consistency, or data-race-free-0. The implementation proposal for data-race-free-1 differs from earlier implementations by permitting the execution of all synchronization operations of a processor even while previous data operations of the processor are in progress. To ensure sequential consistency, two synchronizing processors exchange information to delay later operations of the second processor that conflict with an incomplete data operation of the first processor.", "For shared-memory systems, the most commonly assumed programmer’s model of memory is sequential consistency. The weaker models of weak ordering, release consistency with sequentially consistent synchronization operations, data-race-free-0, and data-race-free-1 provide higher performance by guaranteeing sequential consistency to only a restricted class of programs - mainly programs that do not exhibit data races.
To allow programmers to use the intuition and algorithms already developed for sequentially consistent systems, it is important to determine when a program written for a weak system exhibits no data races. In this paper, we investigate the extension of dynamic data race detection techniques developed for sequentially consistent systems to weak systems. A potential problem is that in the presence of a data race, weak systems fail to guarantee sequential consistency and therefore dynamic techniques may not give meaningful results. However, we reason that in practice a weak system will preserve sequential consistency at least until the “first” data races since it cannot predict if a data race will occur. We formalize this condition and show that it allows data races to be dynamically detected. Further, since this condition is already obeyed by all proposed implementations of weak systems, the full performance of weak systems can be exploited.", "Data race detection has become an important problem in GPU programming. Previous designs of CPU race-checking tools are mainly task parallel and incur high overhead on GPUs due to access instrumentation, especially when monitoring many thousands of threads routinely used by GPU programs. This article presents a novel data-parallel solution designed and optimized for the GPU architecture. It includes compiler support and a set of runtime techniques. It uses value-based checking, which detects the races reported in previous work, finds new races, and supports race-free deterministic GPU execution. More important, race checking is massively data parallel and does not introduce divergent branching or atomic synchronization.
Its slowdown is less than 5 × for over half of the tests and 10 × on average, which is orders of magnitude more efficient than the cuda-memcheck tool by Nvidia and the methods that use fine-grained access instrumentation.", "Even the careful GPU programmer can inadvertently introduce data races while writing and optimizing code. Currently available GPU race checking methods fall short either in terms of their formal guarantees, ease of use, or practicality. Existing symbolic methods: (1) do not fully support existing CUDA kernels, (2) may require user-specified assertions or invariants, (3) often require users to guess which inputs may be safely made concrete, (4) tend to explode in complexity when the number of threads is increased, and (5) explode in the face of thread-ID based decisions, especially in a loop. We present SESA, a new tool combining Symbolic Execution and Static Analysis to analyze C++ CUDA programs that overcomes all these limitations. SESA also scales well to handle non-trivial benchmarks such as Parboil and Lonestar, and is the only tool of its class that handles such practical examples. This paper presents SESA's methodological innovations and practical results.", "The growing scale of concurrency requires automated abstraction techniques to cut down the effort in concurrent system analysis. In this paper, we show that the high degree of behavioral symmetry present in GPU programs allows CUDA race detection to be dramatically simplified through abstraction. Our abstraction techniques is one of automatically creating parametric flows -- control-flow equivalence classes of threads that diverge in the same manner -- and checking for data races only across a pair of threads per parametric flow. 
We have implemented this approach as an extension of our recently proposed GKLEE symbolic analysis framework and show that all our previous results are dramatically improved in that (i) the parametric flow-based analysis takes far less time, and (ii) because of the much higher scalability of the analysis, we can detect even more data race situations that were previously missed by GKLEE because it was forced to downscale examples to limit analysis complexity. Moreover, the parametric flow-based analysis is applicable to other programs with SPMD models.", "Big Data applications suffer from unpredictable and unacceptably high pause times due to Garbage Collection (GC). This is the case in latency-sensitive applications such as on-line credit-card fraud detection, graph-based computing for analysis on social networks, etc. Such pauses compromise latency requirements of the whole application stack and result from applications' aggressive buffering caching of data, exposing an ill-suited GC design, which assumes that most objects will die young and does not consider that applications hold large amounts of middle-lived data in memory. To avoid such pauses, we propose NG2C, a new GC algorithm that combines pretenuring with user-defined dynamic generations. By being able to allocate objects into different generations, NG2C is able to group objects with similar lifetime profiles in the same generation. By allocating objects with similar lifetime profiles close to each other, i.e. in the same generation, we avoid object promotion (copying between generations) and heap fragmentation (which leads to heap compactions) both responsible for most of the duration of HotSpot GC pause times. NG2C is implemented for the OpenJDK 8 HotSpot Java Virtual Machine, as an extension of the Garbage First GC. We evaluate NG2C using Cassandra, Lucene, and GraphChi with three different GCs: Garbage First (G1), Concurrent Mark Sweep (CMS), and NG2C. 
Results show that NG2C decreases the worst observable GC pause time by up to 94.8% for Cassandra, 85.0% for Lucene and 96.45% for GraphChi, when compared to current collectors (G1 and CMS). In addition, NG2C has no negative impact on application throughput or memory usage." ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
Analysis of malfunctioning systems based on their intended logical specification has been studied in the field of artificial intelligence @cite_9 and known as model-based diagnosis, which has some similarities with our work. However, the purpose of model-based diagnosis is to analyze the differences between intended and observed behaviors, while our system does not require that the intended behavior of a program be given as declarations.
{ "cite_N": [ "@cite_9" ], "mid": [ "2133291246" ], "abstract": [ "Several artificial intelligence architectures and systems based on \"deep\" models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge based systems, but they have a main limitation in their computational complexity. One of the ways to face this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively with respect to the original one. We show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis, and, in particular, can improve the performances of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions to be discarded a posteriori and integrating the generation of candidate solutions with discrimination among different candidates. Knowledge compilation is used off-line to produce operational (i.e., easily evaluated) conditions that embed the abductive reasoning strategy and are used in addition to the original model, with the goal of ruling out parts of the search space or focusing on parts of it. The conditions are useful to solve most cases using less time for computing the same solutions, yet preserving all the power of the model-based system for dealing with multiple faults and explaining the solutions. Experimental results showing the advantages of the approach are presented." ] }
cs0012007
2950755945
We have implemented Kima, an automated error correction system for concurrent logic programs. Kima corrects near-misses such as wrong variable occurrences in the absence of explicit declarations of program properties. Strong moding/typing and constraint-based analysis are turning out to play fundamental roles in debugging concurrent logic programs as well as in establishing the consistency of communication protocols and data types. Mode/type analysis of Moded Flat GHC is a constraint satisfaction problem with many simple mode/type constraints, and can be solved efficiently. We proposed a simple and efficient technique which, given a non-well-moded/typed program, diagnoses the "reasons" of inconsistency by finding minimal inconsistent subsets of mode/type constraints. Since each constraint keeps track of the symbol occurrence in the program, a minimal subset also tells possible sources of program errors. Kima realizes automated correction by replacing symbol occurrences around the possible sources and recalculating modes and types of the rewritten programs systematically. As long as bugs are near-misses, Kima proposes a rather small number of alternatives that include an intended program.
Wand proposed an algorithm for diagnosing non-well-typed functional programs @cite_5 . His approach was to extend the unification algorithm for type reconstruction to record which symbol occurrence imposed which constraint. In contrast, our framework is built outside any underlying framework of constraint solving. It does not incur any overhead for well-moded/typed programs or modify the constraint-solving algorithm.
{ "cite_N": [ "@cite_5" ], "mid": [ "2571527823" ], "abstract": [ "We study the compressed sensing reconstruction problem for a broad class of random, band-diagonal sensing matrices. This construction is inspired by the idea of spatial coupling in coding theory. As demonstrated heuristically and numerically by Krzakala [30], message passing algorithms can effectively solve the reconstruction problem for spatially coupled measurements with undersampling rates close to the fraction of nonzero coordinates. We use an approximate message passing (AMP) algorithm and analyze it through the state evolution method. We give a rigorous proof that this approach is successful as soon as the undersampling rate δ exceeds the (upper) Rényi information dimension of the signal, d(p_X). More precisely, for a sequence of signals of diverging dimension n whose empirical distribution converges to p_X, reconstruction is with high probability successful from d(p_X)n + o(n) measurements taken according to a band diagonal matrix. For sparse signals, i.e., sequences of dimension n and k(n) nonzero entries, this implies reconstruction from k(n) + o(n) measurements. For "discrete" signals, i.e., signals whose coordinates take a fixed finite set of values, this implies reconstruction from o(n) measurements. The result is robust with respect to noise, does not apply uniquely to random signals, but requires the knowledge of the empirical distribution of the signal p_X." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used, notation for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
Hansen @cite_3 , @cite_5 , and Kearfott @cite_1 opt for the straightforward @math notation. Hansen mostly presents bounds with few digits, but for instance on page 178 we find @math demonstrating the problems addressed here.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_3" ], "mid": [ "2000931246", "2508647357", "2010193278" ], "abstract": [ "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k, then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that k ≤ 5.191. This upper bound was improved to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601.", "Spanners, emulators, and approximate distance oracles can be viewed as lossy compression schemes that represent an unweighted graph metric in small space, say O(n^{1+Δ}) bits. There is an inherent tradeoff between the sparsity parameter Δ and the stretch function f of the compression scheme, but the qualitative nature of this tradeoff has remained a persistent open problem. It has been known for some time that when Δ ≥ 1/3 there are schemes with constant additive stretch (distance d is stretched to at most f(d) = d + O(1)), and recent results of Abboud and Bodwin show that no such schemes exist when Δ < 1/3. In this paper we show that the lower bound of Abboud and Bodwin is just the first step in a hierarchy of lower bounds that characterize the asymptotic behavior of the optimal stretch function f for sparsity parameter Δ ∈ (0, 1/3). Specifically, for any integer k ≥ 2, any compression scheme with size [EQUATION] has a sublinear additive stretch function f: f(d) = d + Ω(d^{1−1/k}). This lower bound matches Thorup and Zwick's (2006) construction of sublinear additive emulators. It also shows that Elkin and Peleg's (1 + ϵ, β)-spanners have an essentially optimal tradeoff between Δ, ϵ, and β, and that the sublinear additive spanners of Pettie (2009) and Chechik (2013) are not too far from optimal. To complement these lower bounds we present a new construction of (1 + ϵ, O(k/ϵ)^{k−1})-spanners with size [EQUATION]. Our lower bound technique exhibits several interesting degrees of freedom in the framework of Abboud and Bodwin. By carefully exploiting these freedoms, we are able to obtain lower bounds for several related combinatorial objects. We get lower bounds on the size of (β, ϵ)-hopsets, matching Elkin and Neiman's construction (2016), and lower bounds on shortcutting sets for digraphs that preserve the transitive closure. Our lower bound simplifies Hesse's (2003) refutation of Thorup's conjecture (1992), which stated that adding a linear number of shortcuts suffices to reduce the diameter to polylogarithmic. Finally, we show matching upper and lower bounds for graph compression schemes that work for graph metrics with girth at least 2γ + 1. One consequence is that additive O(γ)-spanners with size [EQUATION] cannot be improved in the exponent.", "Let F_k(n,m) be a random k-SAT formula on n variables formed by selecting uniformly and independently m out of all possible k-clauses. It is well-known that for r ≥ 2^k ln 2, F_k(n,rn) is unsatisfiable with probability 1-o(1). We prove that there exists a sequence t_k = O(k) such that for r ≥ 2^k ln 2 − t_k, F_k(n,rn) is satisfiable with probability 1-o(1). Our technique yields an explicit lower bound for every k which for k > 3 improves upon all previously known bounds. For example, when k=10 our lower bound is 704.94 while the upper bound is 708.94." ] }
cs0102023
2122926560
Abstract: This note addresses the input and output of intervals in the sense of interval arithmetic and interval constraints. The most obvious, and so far most widely used, notation for intervals has drawbacks that we remedy with a new notation that we propose to call factored notation. It is more compact and allows one to find a good trade-off between interval width and ease of reading. We describe how such a trade-off can be based on the information yield (in the sense of information theory) of the last decimal shown. 1 Introduction Once upon a time, it was a matter of professional ethics among computers never to write a meaningless decimal. Since then computers have become machines and thereby lost any form of ethics, professional or otherwise. The human computers of yore were helped in their ethical behaviour by the fact that it took effort to write spurious decimals. Now the situation is reversed: the lazy way is to use the default precision of the I/O library function. As a result it is common to see fifteen decimals, all but three of which are meaningless. Of course interval arithmetic is not guilty of such negligence. After all, the very raison d'être of the subject is to be explicit about the precision of computed results. Yet, even interval arithmetic is plagued by phoney decimals, albeit in a more subtle way. Just as conventional computation often needs more care in the presentation of computational results, the most obvious interval notation with default precision needs improvement. As a bounded interval has two bounds, say, l and u, the most straightforward notation is something like [l,u]. Written like this, it may not be immediately obvious what is wrong with writing it that way. But when confronted with a real-life consequence
The standard notation in the Numerica book @cite_2 solves the scanning problem in an interesting way. It uses the idea of the @math notation, but instead writes @math . This variation has the advantage of not introducing new notation. The reason why we still prefer factored notation is clear from the @math example, which, if rewritten as @math , becomes @math . Although it is attractive not to introduce special-purpose notation, there is so much redundancy here that the factored alternative @math seems worth the new notation.
{ "cite_N": [ "@cite_2" ], "mid": [ "2737269238" ], "abstract": [ "We consider document listing on string collections, that is, finding in which strings a given pattern appears. In particular, we focus on repetitive collections: a collection of size @math over alphabet @math is composed of @math copies of a string of size @math , and @math edits are applied on ranges of copies. We introduce the first document listing index with size @math , precisely @math bits, and with useful worst-case time guarantees: Given a pattern of length @math , the index reports the @math strings where it appears in time @math , for any constant @math (and tells in time @math if @math ). Our technique is to augment a range data structure that is commonly used on grammar-based indexes, so that instead of retrieving all the pattern occurrences, it computes useful summaries on them. We show that the idea has independent interest: we introduce the first grammar-based index that, on a text @math with a grammar of size @math , uses @math bits and counts the number of occurrences of a pattern @math in time @math , for any constant @math . We also give the first index using @math bits, where @math is parsed by Lempel-Ziv into @math phrases, counting occurrences in time @math ." ] }
cs0102017
2950044032
Parallel jobs are different from sequential jobs and require a different type of process management. We present here a process management system for parallel programs such as those written using MPI. A primary goal of the system, which we call MPD (for multipurpose daemon), is to be scalable. By this we mean that startup of interactive parallel jobs comprising thousands of processes is quick, that signals can be quickly delivered to processes, and that stdin, stdout, and stderr are managed intuitively. Our primary target is parallel machines made up of clusters of SMPs, but the system is also useful in more tightly integrated environments. We describe how MPD enables much faster startup and better runtime management of parallel jobs. We show how close control of stdio can support the easy implementation of a number of convenient system utilities, even a parallel debugger. We describe a simple but general interface that can be used to separate any process manager from a parallel library, which we use to keep MPD separate from MPICH.
Many systems are intended to manage a collection of computing resources for both single-process and parallel jobs; see the survey by @cite_7 . Typically, these use a daemon that manages individual processes, with emphasis on jobs involving only a single process. Widely used systems include PBS @cite_11 , LSF @cite_0 , DQS @cite_22 , and LoadLeveler/POE @cite_9 . The Condor system @cite_23 is also widely used and supports parallel programs that use PVM @cite_2 or MPI @cite_8 @cite_12 . More specialized systems, such as MOSIX @cite_13 and GLUnix @cite_1 , provide single-system-image support for clusters. Harness @cite_19 @cite_4 shares with MPD the goal of supporting management of parallel jobs. Its primary research goal is to demonstrate the flexibility of the "plug-in" approach to application design, potentially providing a wide range of services. The MPD system focuses more specifically on the design and implementation of services required for process management of parallel jobs, including high-speed startup of large parallel jobs on clusters and scalable standard I/O management. The book @cite_10 provides a good overview of metacomputing systems and issues, and Feitelson @cite_3 surveys support for scheduling parallel processes.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_3", "@cite_0", "@cite_19", "@cite_23", "@cite_2", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2545968212", "2004261242", "2765129600", "2008170189", "1988404188", "2004593585", "2094587335", "2132896973", "2077783617", "2563521659", "2187539704", "2000300079", "1993033064", "2128981930", "2171308769" ], "abstract": [ "Most high-performance, scientific libraries have adopted hybrid parallelization schemes - such as the popular MPI+OpenMP hybridization - to benefit from the capacities of modern distributed-memory machines. While these approaches have shown to achieve high performance, they require a lot of effort to design and maintain sophisticated synchronization communication strategies. On the other hand, task-based programming paradigms aim at delegating this burden to a runtime system for maximizing productivity. In this article, we assess the potential of task-based fast multipole methods (FMM) on clusters of multicore processors. We propose both a hybrid MPI+task FMM parallelization and a pure task-based parallelization where the MPI communications are implicitly handled by the runtime system. The latter approach yields a very compact code following a sequential task-based programming model. We show that task-based approaches can compete with a hybrid MPI+OpenMP highly optimized code and that furthermore the compact task-based scheme fully matches the performance of the sophisticated, hybrid MPI+task version, ensuring performance while maximizing productivity. We illustrate our discussion with the ScalFMM FMM library and the StarPU runtime system.", "Large scale supercomputing applications typically run on clusters using vendor message passing libraries, limiting the application to the availability of memory and CPU resources on that single machine. 
The ability to run inter-cluster parallel code is attractive since it allows the consolidation of multiple large scale resources for computational simulations not possible on a single machine, and it also allows the conglomeration of small subsets of CPU cores for rapid turnaround, for example, in the case of high-availability computing. MPIg is a grid-enabled implementation of the Message Passing Interface (MPI), extending the MPICH implementation of MPI to use Globus Toolkit services such as resource allocation and authentication. To achieve co-availability of resources, HARC, the Highly-Available Resource Co-allocator, is used. Here we examine two applications using MPIg: LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator), which is used with a replica exchange molecular dynamics approach to enhance binding affinity calculations in HIV drug research, and HemeLB, which is a lattice-Boltzmann solver designed to address fluid flow in geometries such as the human cerebral vascular system. The cross-site scalability of both these applications is tested and compared to single-machine performance. In HemeLB, communication costs are hidden by effectively overlapping non-blocking communication with computation, essentially scaling linearly across multiple sites, and LAMMPS scales almost as well when run between two significantly geographically separated sites as it does at a single site.", "This paper reports our observations from a top-tier supercomputer Titan and its Lustre parallel file stores under production load. In summary, we find that supercomputer file systems are highly variable across the machine at fine time scales. This variability has two major implications. First, stragglers lessen the benefit of coupled I/O parallelism (striping). Peak median output bandwidths are obtained with parallel writes to many independent files, with no striping or write-sharing of files across clients (compute nodes). 
I/O parallelism is most effective when the application—or its I/O middleware system—distributes the I/O load so that each client writes separate files on multiple targets, and each target stores files for multiple clients, in a balanced way. Second, our results suggest that the potential benefit of dynamic adaptation is limited. In particular, it is not fruitful to attempt to identify "good spots" in the machine or in the file system: component performance is driven by transient load conditions, and past performance is not a useful predictor of future performance. For example, we do not observe regular diurnal load patterns.", "Parallel computing on volatile distributed resources requires schedulers that consider job and resource characteristics. We study unconventional computing environments containing devices spread throughout a single large organization. The devices are not necessarily typical general purpose machines; instead, they could be processors dedicated to special purpose tasks (for example printing and document processing), but capable of being leveraged for distributed computations. Harvesting their idle cycles can simultaneously help resources cooperate to perform their primary task and enable additional functionality and services. A new burstiness metric characterizes the volatility of the high-priority native tasks. A burstiness-aware scheduling heuristic opportunistically introduces grid jobs (a lower priority workload class) to avoid the higher-priority native applications, and effectively harvests idle cycles. Simulations based on real workload traces indicate that this approach improves makespan by an average of 18.3% over random scheduling, and comes within 7.6% of the theoretical upper bound.", "Unmatched computation and storage performance in new HPC systems have led to a plethora of I/O optimizations ranging from application-side collective I/O to network and disk-level request scheduling on the file system side. 
As we deal with ever larger machines, the interferences produced by multiple applications accessing a shared parallel file system in a concurrent manner become a major problem. These interferences often break single-application I/O optimizations, dramatically degrading application I/O performance and, as a result, lowering machine wide efficiency. This paper focuses on CALCioM, a framework that aims to mitigate I/O interference through the dynamic selection of appropriate scheduling policies. CALCioM allows several applications running on a supercomputer to communicate and coordinate their I/O strategy in order to avoid interfering with one another. In this work, we examine four I/O strategies that can be accommodated in this framework: serializing, interrupting, interfering and coordinating. Experiments on Argonne's BG/P Surveyor machine and on several clusters of the French Grid'5000 show how CALCioM can be used to efficiently and transparently improve the scheduling strategy between two otherwise interfering applications, given specified metrics of machine wide efficiency.", "A continuing challenge to the scientific research and engineering communities is how to fully utilize computational hardware. In particular, the proliferation of clusters of high performance workstations has become an increasingly attractive source of compute power. Developments to take advantage of this environment have previously focused primarily on managing the resources, or on providing interfaces so that a number of machines can be used in parallel to solve large problems. Both approaches are desirable, and indeed should be complementary. Unfortunately, the resource management and parallel processing systems are usually developed by independent groups, and they usually do not interact well together. To bridge this gap, we have developed a framework for interfacing these two sorts of systems. 
Using this framework, we have interfaced PVM, a popular system for parallel programming, with Condor, a powerful resource management system. This combined system is operational, and we have made further developments to provide a single coherent environment.", "We propose and evaluate empirically the performance of a dynamic processor-scheduling policy for multiprogrammed shared-memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. The policy is suitable for implementation in production systems in that: —It interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention. —It deals with thread blocking due to user I/O and page faults. —It ensures fairness in delivering resources to jobs. —Its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems. It provides good performance to very short, sequential (e.g., interactive) requests. We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, and the importance of cooperation between the kernel and the application in reallocating processors between jobs. We have also compared the policies according to other criteria important in real implementations, in particular, fairness and response time to short, sequential requests. 
We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy.", "As high-end computer systems present users with rapidly increasing numbers of processors, possibly also incorporating attached co-processors, programmers are increasingly challenged to express the necessary levels of concurrency with the dominant parallel programming model, Fortran+MPI+OpenMP (or minor variations). In this paper, we examine the languages developed under the DARPA High-Productivity Computing Systems (HPCS) program (Chapel, Fortress, and X10) as representatives of a different parallel programming model which might be more effective on emerging high-performance systems. The application used in this study is the Hartree-Fock method from quantum chemistry, which combines access to distributed data with a task-parallel algorithm and is characterized by significant irregularity in the computational tasks. We present several different implementation strategies for load balancing of the task-parallel computation, as well as distributed array operations, in each of the three languages. We conclude that the HPCS languages provide a wide variety of mechanisms for expressing parallelism, which can be combined at multiple levels, making them quite expressive for this problem.", "Application development for distributed-computing \"Grids\" can benefit from tools that variously hide or enable application-level management of critical aspects of the heterogeneous environment. As part of an investigation of these issues, we have developed MPICH-G2, a Grid-enabled implementation of the Message Passing Interface (MPI) that allows a user to run MPI programs across multiple computers, at the same or different sites, using the same commands that would be used on a parallel computer. 
This library extends the Argonne MPICH implementation of MPI to use services provided by the Globus Toolkit for authentication, authorization, resource allocation, executable staging, and I/O, as well as for process creation, monitoring, and control. Various performance-critical operations, including startup and collective operations, are configured to exploit network topology information. The library also exploits MPI constructs for performance management; for example, the MPI communicator construct is used for application-level discovery of, and adaptation to, both network topology and network quality-of-service mechanisms. We describe the MPICH-G2 design and implementation, present performance results, and review application experiences, including record-setting distributed simulations.", "Increased system size and a greater reliance on utilizing system parallelism to achieve computational needs require innovative system architectures to meet the simulation challenges. As a step towards a new network class of co-processors — intelligent network devices, which manipulate data traversing the data-center network, this paper describes the SHArP technology designed to offload collective operation processing to the network. This is implemented in Mellanox's SwitchIB-2 ASIC, using in-network trees to reduce data from a group of sources, and to distribute the result. Multiple parallel jobs with several partially overlapping groups are supported, each with several reduction operations in-flight. Large performance enhancements are obtained, with an improvement of a factor of 2.1 for an eight byte MPI_Allreduce() operation on 128 hosts, going from 6.01 to 2.83 microseconds. 
Pipelining is used for an improvement of a factor of 3.24 in the latency of a 4096 byte MPI_Allreduce() operation, declining from 46.93 to 14.48 microseconds.", "Over the past decade, the design of microprocessors has been shifting to a new model where the microprocessor has multiple homogeneous processing units, aka cores, as a result of heat dissipation and energy consumption issues. Meanwhile, the demand for heterogeneity increases in computing systems due to the need for high performance computing in recent years. The current trend in gaining high computing power is to incorporate specialized processing resources such as manycore Graphic Processing Units in multicore systems, thus making a computing system heterogeneous. Maximum performance of data-parallel scientific applications on heterogeneous platforms can be achieved by balancing the load between heterogeneous processing elements. Data parallel applications can be load balanced by applying data partitioning with respect to the performance of the platform's computing devices. However, load balancing on such platforms is complicated by several factors, such as contention for shared system resources, non-uniform memory access, limited GPU memory and slow bandwidth of PCIe, which connects the host processor and the GPU. In this thesis, we present methods of performance modeling and performance measurement on dedicated multicore and multi-GPU systems. We model a multicore and multi-GPU system by a set of heterogeneous abstract processors determined by the configuration of the parallel application. Each abstract processor represents a processing unit made of one or a group of processing elements executing one computational kernel of the application. We group processing units by shared resources, and measure the performance of processing units in each group simultaneously, thereby taking into account the influence of resource contention. 
We investigate the impact of resource contention, and the impact of process mapping on systems of NUMA architecture on the performance of processing units. Using the proposed method for measuring performance, we built functional performance models of abstract processors, and partition data of data parallel applications using these performance models to balance the workload. We evaluate the proposed methods with two typical data parallel applications, namely parallel matrix multiplication and numerical simulation of lid-driven cavity flow. Experimental results demonstrate that data partitioning algorithms based on functional performance models built using proposed methods are able to balance the workload of data parallel applications on heterogeneous multicore and multi-GPU platforms.", "Distributed-memory parallel supercomputers are an important platform for the execution of high-performance parallel jobs. In order to submit a job for execution in most supercomputers, one has to specify the number of processors to be allocated to the job. However, most parallel jobs in production today are moldable. A job is moldable when the number of processors it needs to execute can vary, although such a number has to be fixed before the job starts executing. Consequently, users have to decide how many processors to request whenever they submit a moldable job. In this dissertation, we show that the request that submits a moldable job can be automatically selected in a way that often reduces the job's turn-around time. The turn-around time of a job is the time elapsed between the job's submission and its completion. More precisely, we will introduce and evaluate SA, an application scheduler that chooses which request to use to submit a moldable job on behalf of the user. The user provides SA with a set of possible requests that can be used to submit a given moldable job. 
SA estimates the turn-around time of each request based on the current state of the supercomputer, and then forwards to the supercomputer the request with the smallest expected turn-around time. Users are thus relieved by SA of a task unrelated with their final goals, namely that of selecting which request to use. Moreover and more importantly, SA often improves the turn-around time of the job under a variety of conditions. The conditions under which SA was studied cover variations on the characteristics of the job, the state of the supercomputer, and the information available to SA. The emergent behavior generated by having most jobs using SA to craft their requests was also investigated.", "We seek to enable efficient large-scale parallel execution of applications in which a shared filesystem abstraction is used to couple many tasks. Such parallel scripting (many-task computing, MTC) applications suffer poor performance and utilization on large parallel computers because of the volume of filesystem I/O and a lack of appropriate optimizations in the shared filesystem. Thus, we design and implement a scalable MTC data management system that uses aggregated compute node local storage for more efficient data movement strategies. We co-design the data management system with the data-aware scheduler to enable dataflow pattern identification and automatic optimization. The framework reduces the time to solution of parallel stages of an astronomy data analysis application, Montage, by 83.2% on 512 cores; decreases the time to solution of a seismology application, CyberShake, by 7.9% on 2,048 cores; and delivers BLAST performance better than mpiBLAST at various scales up to 32,768 cores, while preserving the flexibility of the original BLAST application.", "Recent studies show that graph processing systems on a single machine can achieve competitive performance compared with cluster-based graph processing systems. 
In this paper, we present NXgraph, an efficient graph processing system on a single machine. We propose the Destination-Sorted Sub-Shard (DSSS) structure to store a graph. To ensure graph data access locality and enable fine-grained scheduling, NXgraph divides vertices and edges into intervals and sub-shards. To reduce write conflicts among different threads and achieve a high degree of parallelism, NXgraph sorts edges within each sub-shard according to their destination vertices. Then, three updating strategies, i.e., Single-Phase Update (SPU), Double-Phase Update (DPU), and Mixed-Phase Update (MPU), are proposed in this paper. NXgraph can adaptively choose the fastest strategy for different graph problems according to the graph size and the available memory resources to fully utilize the memory space and reduce the amount of data transfer. All these three strategies exploit streamlined disk access patterns. Extensive experiments on three real-world graphs and five synthetic graphs show that NXgraph outperforms GraphChi, TurboGraph, VENUS, and GridGraph in various situations. Moreover, NXgraph, running on a single commodity PC, can finish an iteration of PageRank on the Twitter [1] graph with 1.5 billion edges in 2.05 seconds; while PowerGraph, a distributed graph processing system, needs 3.6s to finish the same task on a 64-node cluster.", "SUMMARY This paper presents a number of algorithms to run the fast multipole method (FMM) on NVIDIA CUDA-capable graphical processing units (GPUs) (Nvidia Corporation, Sta. Clara, CA, USA). The FMM is a class of methods to compute pairwise interactions between N particles for a given error tolerance and with computational cost of O(N). The methods described in the paper are applicable to any FMMs in which the multipole-to-local (M2L) operator is a dense matrix and the matrix is precomputed. 
This is the case for example in the black-box fast multipole method (bbFMM), which is a variant of the FMM that can handle a large class of kernels. This example will be used in our benchmarks. In the FMM, two operators represent most of the computational cost, and an optimal implementation typically tries to balance those two operators. One is the nearby interaction calculation (direct sum calculation, line 29 in Listing 1), and the other is the M2L operation. We focus on the M2L. By combining multiple M2L operations and reordering the primitive loops of the M2L so that CUDA threads can reuse or share common data, these approaches reduce the movement of data in the GPU. Because memory bandwidth is the primary bottleneck of these methods, significant performance improvements are realized. Four M2L schemes are detailed and analyzed in the case of a uniform tree. The four schemes are tested and compared with an optimized, OpenMP parallelized, multi-core CPU code. We consider high and low precision calculations by varying the number of Chebyshev nodes used in the bbFMM. The accuracy of the GPU codes is found to be satisfactory and achieved performance over 200 Gflop/s on one NVIDIA Tesla C1060 GPU (Nvidia Corporation, Sta. Clara, CA, USA). This was compared against two quad-core Intel Xeon E5345 processors (Intel Corporation, Sta. Clara, CA, USA) running at 2.33 GHz, for a combined peak performance of 149 Gflop/s for single precision. For the low FMM accuracy case, the observed performance of the CPU code was 37 Gflop/s, whereas for the high FMM accuracy case, the performance was about 8.5 Gflop/s, most likely because of a higher frequency of cache misses. We also present benchmarks on an NVIDIA C2050 GPU (a Fermi processor) (Nvidia Corporation, Sta. Clara, CA, USA) in single and double precision. Copyright © 2011 John Wiley & Sons, Ltd." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Bigrams have been used as features for word sense disambiguation, particularly in the form of collocations where the ambiguous word is one component of the bigram (e.g., @cite_10 , @cite_0 , @cite_9 ). While some of the bigrams we identify are collocations that include the word being disambiguated, there is no requirement that this be the case.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_10" ], "mid": [ "1903115690", "1851555520", "2481930807" ], "abstract": [ "When a trigram backoff language model is created from a large body of text, trigrams and bigrams that occur few times in the training text are often excluded from the model in order to decrease the model size. Generally, the elimination of n-grams with very low counts is believed to not significantly affect model performance. This project investigates the degradation of a trigram backoff model's perplexity and word error rates as bigram and trigram cutoffs are increased. The advantage of reduction in model size is compared to the increase in word error rate and perplexity scores. More importantly, this project also investigates alternative ways of excluding bigrams and trigrams from a backoff language model, using criteria other than the number of times an n-gram occurs in the training text. Specifically, a difference method has been investigated where the difference in the logs of the original and backed off trigram and bigram probabilities is used as a basis for n-gram exclusion from the model. We show that excluding trigrams and bigrams based on a weighted version of this difference method results in better perplexity and word error rate performance than excluding trigrams and bigrams based on counts alone.", "The unavailability of very large corpora with semantically disambiguated words is a major limitation in text processing research. For example, statistical methods for word sense disambiguation of free text are known to achieve high accuracy results when large corpora are available to develop context rules, to train and test them.This paper presents a novel approach to automatically generate arbitrarily large corpora for word senses. 
The method is based on (1) the information provided in WordNet, used to formulate queries consisting of synonyms or definitions of word senses, and (2) the information gathered from Internet using existing search engines. The method was tested on 120 word senses and a precision of 91% was observed.", "Given an image of a handwritten word, a CNN is employed to estimate its n-gram frequency profile, which is the set of n-grams contained in the word. Frequencies for unigrams, bigrams and trigrams are estimated for the entire word and for parts of it. Canonical Correlation Analysis is then used to match the estimated profile to the true profiles of all words in a large dictionary. The CNN that is used employs several novelties such as the use of multiple fully connected branches. Applied to all commonly used handwriting recognition benchmarks, our method outperforms, by a very large margin, all existing methods." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
Decision trees have been used in supervised learning approaches to word sense disambiguation, and have fared well in a number of comparative studies (e.g., @cite_2 , @cite_17 ). In the former they were used with the bag of word feature sets and in the latter they were used with a mixed feature set that included the part-of-speech of neighboring words, three collocations, and the morphology of the ambiguous word. We believe that the approach in this paper is the first time that decision trees based strictly on bigram features have been employed.
{ "cite_N": [ "@cite_17", "@cite_2" ], "mid": [ "1489348810", "1756650108" ], "abstract": [ "This paper describes a supervised algorithm for word sense disambiguation based on hierarchies of decision lists. This algorithm supports a useful degree of conditional branching while minimizing the training data fragmentation typical of decision trees. Classifications are based on a rich set of collocational, morphological and syntactic contextual features, extracted automatically from training data and weighted sensitive to the nature of the feature and feature class. The algorithm is evaluated comprehensively in the SENSEVAL framework, achieving the top performance of all participating supervised systems on the 36 test words where training data is available.", "Abstract Objective The aim of this study was to investigate relations among different aspects in supervised word sense disambiguation (WSD; supervised machine learning for disambiguating the sense of a term in a context) and compare supervised WSD in the biomedical domain with that in the general English domain. Methods The study involves three data sets (a biomedical abbreviation data set, a general biomedical term data set, and a general English data set). The authors implemented three machine-learning algorithms, including (1) naive Bayes (NBL) and decision lists (TDLL), (2) their adaptation of decision lists (ODLL), and (3) their mixed supervised learning (MSL). There were six feature representations (various combinations of collocations, bag of words, oriented bag of words, etc.) and five window sizes (2, 4, 6, 8, and 10). Results Supervised WSD is suitable only when there are enough sense-tagged instances with at least a few dozens of instances for each sense. Collocations combined with neighboring words are appropriate selections for the context.
For terms with unrelated biomedical senses, a large window size such as the whole paragraph should be used, while for general English words a moderate window size between 4 and 10 should be used. The performance of the authors' implementation of decision list classifiers for abbreviations was better than that of traditional decision list classifiers. However, the opposite held for the other two sets. Also, the authors' mixed supervised learning was stable and generally better than others for all sets. Conclusion From this study, it was found that different aspects of supervised WSD depend on each other. The experiment method presented in the study can be used to select the best supervised WSD classifier for each ambiguous term." ] }
cs0103026
2953044264
This paper presents a corpus-based approach to word sense disambiguation where a decision tree assigns a sense to an ambiguous word based on the bigrams that occur nearby. This approach is evaluated using the sense-tagged corpora from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate than the average results reported for 30 of 36 words, and is more accurate than the best results for 19 of 36 words.
The decision list is a closely related approach that has also been applied to word sense disambiguation (e.g., @cite_6 , @cite_14 , @cite_4 ). Rather than building and traversing a tree to perform disambiguation, a list is employed. In the general case a decision list may suffer from less fragmentation during learning than decision trees; as a practical matter this means that the decision list is less likely to be over-trained. However, we believe that fragmentation also reflects on the feature set used for learning. Ours consists of at most approximately 100 binary features. This results in a relatively small feature space that is not as likely to suffer from fragmentation as are larger spaces.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_6" ], "mid": [ "1489348810", "1967148170", "2050806103" ], "abstract": [ "This paper describes a supervised algorithm for word sense disambiguation based on hierarchies of decision lists. This algorithm supports a useful degree of conditional branching while minimizing the training data fragmentation typical of decision trees. Classifications are based on a rich set of collocational, morphological and syntactic contextual features, extracted automatically from training data and weighted sensitive to the nature of the feature and feature class. The algorithm is evaluated comprehensively in the SENSEVAL framework, achieving the top performance of all participating supervised systems on the 36 test words where training data is available.", "Decision trees are probably the most popular and commonly used classification model. They are recursively built following a top-down approach (from general concepts to particular examples) by repeated splits of the training dataset. When this dataset contains numerical attributes, binary splits are usually performed by choosing the threshold value which minimizes the impurity measure used as splitting criterion (e.g. C4.5 gain ratio criterion or CART Gini's index). In this paper we propose the use of multi-way splits for continuous attributes in order to reduce the tree complexity without decreasing classification accuracy. This can be done by intertwining a hierarchical clustering algorithm with the usual greedy decision tree learning.", "Decision trees are the commonly applied tools in the task of data stream classification. The most critical point in decision tree construction algorithm is the choice of the splitting attribute. In majority of algorithms existing in literature the splitting criterion is based on statistical bounds derived for split measure functions. In this paper we propose a totally new kind of splitting criterion.
We derive statistical bounds for arguments of split measure function instead of deriving it for split measure function itself. This approach allows us to properly use the Hoeffding's inequality to obtain the required bounds. Based on this theoretical results we propose the Decision Trees based on the Fractions Approximation algorithm (DTFA). The algorithm exhibits satisfactory results of classification accuracy in numerical experiments. It is also compared with other existing in literature methods, demonstrating noticeably better performance." ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
Inversion of functions on sets is done implicitly by every algorithm for solving systems of equations @cite_10 --- in this case the input set just contains one zero vector. It is mentioned explicitly mostly for computing the solution set of systems of inequalities @cite_32 @cite_37 .
{ "cite_N": [ "@cite_37", "@cite_10", "@cite_32" ], "mid": [ "2079397195", "2104375222", "2159964742" ], "abstract": [ "The method of inversion for arbitrary continuous multilayer nets is developed. The inversion is done by computing iteratively an input vector which minimizes the least-mean-square errors to approximate a given output target. This inversion is not unique for given targets and depends on the starting point in input space. The inversion method turns out to be a valuable tool for the examination of multilayer nets (MLNs). Applications of the inversion method to constraint satisfaction, feature detection, and the testing of reliability and performance of MLNs are outlined. It is concluded that recurrent nets and even time-delay nets might be invertible.", "The problem of inverting trained feedforward neural networks is to find the inputs which yield a given output. In general, this problem is an ill-posed problem. We present a method for dealing with the inverse problem by using mathematical programming techniques. The principal idea behind the method is to formulate the inverse problem as a nonlinear programming problem, a separable programming (SP) problem, or a linear programming problem according to the architectures of networks to be inverted or the types of network inversions to be computed. An important advantage of the method over the existing iterative inversion algorithm is that various designated network inversions of multilayer perceptrons and radial basis function neural networks can be obtained by solving the corresponding SP problems, which can be solved by a modified simplex method. We present several examples to demonstrate the proposed method and applications of network inversions to examine and improve the generalization performance of trained networks. The results show the effectiveness of the proposed method.", "There are many methods for performing neural network inversion.
Multi-element evolutionary inversion procedures are capable of finding numerous inversion points simultaneously. Constrained neural network inversion requires that the inversion solution belong to one or more specified constraint sets. In many cases, iterating between the neural network inversion solution and the constraint set can successfully solve constrained inversion problems. This paper surveys existing methodologies for neural network inversion, which is illustrated by its use as a tool in query-based learning, sonar performance analysis, power system security assessment, control, and generation of codebook vectors." ] }
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
First-order constraints occur frequently in control, and especially robust control. Up to now they have either been solved by specialized methods @cite_19 @cite_3 @cite_1 or by applying general solvers like QEPCAD @cite_27 . In the first case one is usually restricted to conditions like linearity, and in the second case one suffers from the high run-time complexity of computing exact solutions @cite_5 @cite_17 . We know of only one case where general solvers for first-order constraints have been applied to discrete-time systems @cite_26 , but they have been frequently applied to continuous systems @cite_14 @cite_15 @cite_25 . For non-linear discrete-time systems without perturbations or control, interval methods have also proved to be an important tool @cite_20 @cite_13 .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_1", "@cite_3", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2768546550", "2018738327", "1982831910", "2061749308", "2655474054", "2751090697", "2895571900", "2075779886", "2104715607", "2963349772", "2964271484", "1855555568" ], "abstract": [ "First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraint. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step to the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on the basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods. Their numerical behaviors closely match the established theoretical results.", "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case.
This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. As a consequence, the presence of (possibly nonconvex) equality/inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems.", "In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but may diverge in theory. Ideally, one should find a convergent variant which is at least as efficient as the directly extended ADMM in practice. We achieve this goal by designing a convergent semiproximal ADMM (called sPADMM3c for convenience) for convex programming problems having three separable blocks in the objective function with the third part being linear. At each iteration, the proposed sPADMM3c takes one special block coordinate descent (BCD) cycle with the order @math , instead of the usual @math Gauss--Seidel BCD cycle used in the nonconvergent directly extended 3-block ADMM, for updating the variable blocks. Our numerical experiments demonstrate that the convergent method is at least 20% faster than the directly extended ADMM with unit step-length for the vast majority of about 550 large-scale doubly nonnegative semidefinite programming problems with linear equality and/or inequality constraints.
This confirms that at least for conic convex programming, one can design a convergent and efficient ADMM with a special BCD cycle of updating the variable blocks.", "Many problems in control theory can be formulated as formulae in the first-order theory of real closed fields. In this paper we investigate some of the expressive power of this theory. We consider dynamical systems described by polynomial differential equations subjected to constraints on control and system variables and show how to formulate questions in the above framework which can be answered by quantifier elimination. The problems treated in this paper regard stationarity, stability, and following of a polynomially parametrized curve. The software package QEPCAD has been used to solve a number of examples.", "We focus on nonconvex and nonsmooth minimization problems with a composite objective, where the differentiable part of the objective is freed from the usual and restrictive global Lipschitz gradient continuity assumption. This longstanding smoothness restriction is pervasive in first order methods (FOM), and was recently circumvented for convex composite optimization by Bauschke, Bolte and Teboulle, through a simple and elegant framework which captures, all at once, the geometry of the function and of the feasible set. Building on this work, we tackle genuine nonconvex problems. We first complement and extend their approach to derive a full extended descent lemma by introducing the notion of smooth adaptable functions. We then consider a Bregman-based proximal gradient method for the nonconvex composite model with smooth adaptable functions, which is proven to globally converge to a critical point under natural assumptions on the problem's data.
To illustrate the power and potential of our general framework and results, we consider a broad class of quadratic inverse problems with sparsity constraints which arises in many fundamental applications, and we apply our approach to derive new globally convergent schemes for this class.", "A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessian based computations while at the same time provably converging to second-order critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive to the state-of-the-art. Empirical results suggest that our strategy also enjoys a good practical performance.", "We consider the problem of finding an approximate second-order stationary point of a constrained non-convex optimization problem. We first show that, unlike the unconstrained scenario, the vanilla projected gradient descent algorithm may converge to a strict saddle point even when there is only a single linear constraint. We then provide a hardness result by showing that checking (ε, γ)-second order stationarity is NP-hard even in the presence of linear constraints. Despite our hardness result, we identify instances of the problem for which checking second order stationarity can be done efficiently. For such instances, we propose a dynamic second order Frank--Wolfe algorithm which converges to (ε, γ)-second order stationary points in O(max{ε^-2, γ^-3}) iterations.
The proposed algorithm can be used in general constrained non-convex optimization as long as the constrained quadratic sub-problem can be solved efficiently.", "We investigate the computational complexity of two closely related classes of combinatorial optimization problems for linear systems which arise in various fields such as machine learning, operations research and pattern recognition. In the first class (Min ULR) one wishes, given a possibly infeasible system of linear relations, to find a solution that violates as few relations as possible while satisfying all the others. In the second class (Min RVLS) the linear system is supposed to be feasible and one looks for a solution with as few nonzero variables as possible. For both Min ULR and Min RVLS the four basic types of relational operators =, ⩾, > and ≠ are considered. While Min RVLS with equations was mentioned to be NP-hard in (Garey and Johnson, 1979), we established in (Amaldi; 1992; Amaldi and Kann, 1995) that min ULR with equalities and inequalities are NP-hard even when restricted to homogeneous systems with bipolar coefficients. The latter problems have been shown hard to approximate in (, 1993). In this paper we determine strong bounds on the approximability of various variants of Min RVLS and min ULR, including constrained ones where the variables are restricted to take binary values or where some relations are mandatory while others are optional. The various NP-hard versions turn out to have different approximability properties depending on the type of relations and the additional constraints, but none of them can be approximated within any constant factor, unless P = NP. Particular attention is devoted to two interesting special cases that occur in discriminant analysis and machine learning. 
In particular, we disprove a conjecture of van Horn and Martinez (1992) regarding the existence of a polynomial-time algorithm to design linear classifiers (or perceptrons) that involve a close-to-minimum number of features.", "We consider linear problems in fields, ordered fields, discretely valued fields (with finite residue field or residue field of characteristic zero) and fields with finitely many independent orderings and discrete valuations. Most of the fields considered will be of characteristic zero. Formally, linear statements about these structures (with parameters) are given by formulas of the respective first-order language, in which all bound variables occur only linearly. We study symbolic algorithms (linear elimination procedures) that reduce linear formulas to linear formulas of a very simple form, i.e. quantifier-free linear formulas, and algorithms (linear decision procedures) that decide whether a given linear sentence holds in all structures of the given class. For all classes of fields considered, we find linear elimination procedures that run in double exponential space and time. As a consequence, we can show that for fields (with one or several discrete valuations), linear statements can be transferred from characteristic zero to prime characteristic p, provided p is double exponential in the length of the statement. (For similar bounds in the non-linear case, see Brown, 1978.) We find corresponding linear decision procedures in the Berman complexity classes @[email protected]?NSTA(*,2^c^n,dn) for d = 1, 2. In particular, all hese procedures run in exponential space. The technique employed is quantifier elimination via Skolem terms based on Ferrante & Rackoff (1975). Using ideas of Fischer & Rabin (1974), Berman (1977), Furer (1982), we establish lower bounds for these problems showing that our upper bounds are essentially tight. For linear formulas with a bounded number of quantifiers all our algorithms run in polynomial time. 
For linear formulas of bounded quantifier alternation most of the algorithms run in time 2^(O(n^k)) for fixed k.", "(This is a theory paper) In this paper, we consider first-order methods for solving stochastic non-convex optimization problems. The key building block of the proposed algorithms is first-order procedures to extract negative curvature from the Hessian matrix through a principled sequence starting from noise, which are referred to as NEgative-curvature-Originated-from-Noise or NEON and are of independent interest. Based on this building block, we design purely first-order stochastic algorithms for escaping from non-degenerate saddle points with a much better time complexity (almost linear time in the problem's dimensionality). In particular, we develop a general framework of first-order stochastic algorithms with a second-order convergence guarantee based on our new technique and existing algorithms that may only converge to a first-order stationary point. For finding a nearly second-order stationary point such that ‖∇F(x)‖ ≤ ϵ and ∇²F(x) ⪰ −√ϵ I (in high probability), the best time complexity of the presented algorithms is Õ(d/ϵ^3.5), where F(⋅) denotes the objective function and d is the dimensionality of the problem. To the best of our knowledge, this is the first theoretical result of first-order stochastic algorithms with an almost linear time in terms of problem's dimensionality for finding second-order stationary points, which is even competitive with existing stochastic algorithms hinging on the second-order information.", "Nonconvex and nonsmooth optimization problems are frequently encountered in much of statistics, business, science and engineering, but they are not yet widely recognized as a technology in the sense of scalability. A reason for this relatively low degree of popularity is the lack of a well developed system of theory and algorithms to support the applications, as is the case for its convex counterpart.
This paper aims to take one step in the direction of disciplined nonconvex and nonsmooth optimization. In particular, we consider in this paper some constrained nonconvex optimization models in block decision variables, with or without coupled affine constraints. In the absence of coupled constraints, we show a sublinear rate of convergence to an ϵ-stationary solution in the form of variational inequality for a generalized conditional gradient method, where the convergence rate is dependent on the Hölderian continuity of the gradient of the smooth part of the objective. For the model with coupled affine constraints, we introduce corresponding ϵ-stationarity conditions, and apply two proximal-type variants of the ADMM to solve such a model, assuming the proximal ADMM updates can be implemented for all the block variables except for the last block, for which either a gradient step or a majorization–minimization step is implemented. We show an iteration complexity bound of O(1/ϵ²) to reach an ϵ-stationary solution for both algorithms. Moreover, we show that the same iteration complexity of a proximal BCD method follows immediately. Numerical results are provided to illustrate the efficacy of the proposed algorithms for tensor robust PCA and tensor sparse PCA problems.", "We investigate the control of constrained stochastic linear systems when faced with limited information regarding the disturbance process, i.e., when only the first two moments of the disturbance distribution are known. We consider two types of distributionally robust constraints. In the first case, we require that the constraints hold with a given probability for all disturbance distributions sharing the known moments. These constraints are commonly referred to as distributionally robust chance constraints. 
In the second case, we impose conditional value-at-risk (CVaR) constraints to bound the expected constraint violation for all disturbance distributions consistent with the given moment information. Such constraints are referred to as distributionally robust CVaR constraints with second-order moment specifications. We propose a method for designing linear controllers for systems with such constraints that is both computationally tractable and practically meaningful for both finite and infinite horizon problems. We prove in the infinite horizon case that our design procedure produces the globally optimal linear output feedback controller for distributionally robust CVaR and chance constrained problems. The proposed methods are illustrated for a wind blade control design case study for which distributionally robust constraints constitute sensible design objectives." ] }
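The moment-based chance constraints in the last abstract above admit a well-known distribution-free reduction: by the Cantelli (one-sided Chebyshev) inequality, requiring P(w > t) ≤ ε for every distribution of w with mean μ and standard deviation σ is equivalent to the deterministic tightening t ≥ μ + σ·sqrt((1−ε)/ε). A minimal numeric sketch of that reduction (the function name and sample values are ours, not the paper's):

```python
import math

def dr_chance_margin(mu, sigma, eps):
    """Smallest threshold t such that P(w > t) <= eps holds for EVERY
    distribution of w with mean mu and standard deviation sigma
    (Cantelli / one-sided Chebyshev bound)."""
    kappa = math.sqrt((1.0 - eps) / eps)  # distributionally robust quantile factor
    return mu + kappa * sigma

# Example: zero-mean, unit-variance disturbance, 10% violation probability.
t_robust = dr_chance_margin(0.0, 1.0, 0.10)  # kappa = sqrt(0.9/0.1) = 3.0
print(t_robust)  # 3.0 -- versus roughly 1.28 if w were assumed Gaussian
```

The gap between 3.0 and the Gaussian 90% quantile illustrates the price of distributional robustness: the worst-case distribution puts an atom far in the tail, so the tightening factor grows like 1/sqrt(ε) rather than like a Gaussian quantile.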
cs0105021
1678362335
This paper deals with a problem from discrete-time robust control which requires the solution of constraints over the reals that contain both universal and existential quantifiers. For solving this problem we formulate it as a program in a (fictitious) constraint logic programming language with explicit quantifier notation. This allows us to clarify the special structure of the problem, and to extend an algorithm for computing approximate solution sets of first-order constraints over the reals to exploit this structure. As a result we can deal with inputs that are clearly out of reach for current symbolic solvers.
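The paper's method rests on genuine interval arithmetic with quantifier structure; as a much cruder stand-in, the flavor of certifying a universally quantified real constraint by branching on boxes can be sketched as follows (a sampling-based toy of our own, not the paper's algorithm, and not rigorous for functions with spikes narrower than the tolerance):

```python
def forall_positive(f, lo, hi, tol=1e-3):
    """Toy check of 'for all x in [lo, hi]: f(x) > 0' by bisection.
    Boxes are split until they are narrower than tol and f is positive at
    their endpoints and midpoint; a nonpositive midpoint value is treated
    as a counterexample."""
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        m = 0.5 * (a + b)
        if min(f(a), f(m), f(b)) > 0 and (b - a) < tol:
            continue                      # box accepted (up to sampling)
        if f(m) <= 0:
            return False                  # found a (near-)counterexample
        stack.append((a, m))              # otherwise split and recurse
        stack.append((m, b))
    return True

# x^2 - x + 0.3 > 0 holds on [0, 1] (minimum 0.05 at x = 0.5) ...
print(forall_positive(lambda x: x * x - x + 0.3, 0.0, 1.0))  # True
# ... while x^2 - x + 0.2 dips to -0.05 at x = 0.5.
print(forall_positive(lambda x: x * x - x + 0.2, 0.0, 1.0))  # False
```

A real solver would replace the midpoint sampling with an interval extension of f, so that acceptance of a box is a proof rather than a heuristic, and would add an inner branching level for the existential quantifiers.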
Apart from the method used in this paper @cite_8 , there have been several successful attempts at solving special cases of first-order constraints, for example using classical interval techniques @cite_36 @cite_21 or constraint satisfaction @cite_0 , and very often in the context of robust control @cite_2 @cite_30 @cite_28 @cite_11 .
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_36", "@cite_28", "@cite_21", "@cite_0", "@cite_2", "@cite_11" ], "mid": [ "2768546550", "2018738327", "2126442135", "1982831910", "1977512600", "2088419249", "2026924428", "1504204411" ], "abstract": [ "First-order methods have been popularly used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraints. In this paper, we develop two first-order methods for constrained convex programs, for which the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classic augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but employ different primal variable updates. The first method, at each iteration, performs a single proximal gradient step to the primal variable, and the second method is a block update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on the basis pursuit denoising and a convex quadratically constrained quadratic program to show the empirical performance of the proposed methods.", "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. 
As a consequence, the presence of (possibly nonconvex) equality/inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems.", "We present a simple probabilistic algorithm for solving k-SAT and more generally, for solving constraint satisfaction problems (CSP). The algorithm follows a simple local search paradigm (S. , 1992): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, by successively choosing a random literal from such a clause and flipping the corresponding bit, try to find a satisfying assignment. If no satisfying assignment is found after O(n) steps, start over again. Our analysis shows that for any satisfiable k-CNF-formula with n variables this process has to be repeated only t times, on the average, to find a satisfying assignment, where t is within a polynomial factor of (2(1−1/k))^n. This is the fastest (and also the simplest) algorithm for 3-SAT known up to date. We consider also the more general case of a CSP with n variables, each variable taking at most d values, and constraints of order l, and analyze the complexity of the corresponding (generalized) algorithm. It turns out that any CSP can be solved with complexity at most (d·(1−1/l)+ε)^n.", "In this paper, we consider conic programming problems whose constraints consist of linear equalities, linear inequalities, a nonpolyhedral cone, and a polyhedral cone. A convenient way for solving this class of problems is to apply the directly extended alternating direction method of multipliers (ADMM) to its dual problem, which has been observed to perform well in numerical computations but may diverge in theory. 
Ideally, one should find a convergent variant which is at least as efficient as the directly extended ADMM in practice. We achieve this goal by designing a convergent semiproximal ADMM (called sPADMM3c for convenience) for convex programming problems having three separable blocks in the objective function with the third part being linear. At each iteration, the proposed sPADMM3c takes one special block coordinate descent (BCD) cycle with the order @math , instead of the usual @math Gauss--Seidel BCD cycle used in the nonconvergent directly extended 3-block ADMM, for updating the variable blocks. Our numerical experiments demonstrate that the convergent method is at least 20% faster than the directly extended ADMM with unit step-length for the vast majority of about 550 large-scale doubly nonnegative semidefinite programming problems with linear equality and/or inequality constraints. This confirms that at least for conic convex programming, one can design a convergent and efficient ADMM with a special BCD cycle of updating the variable blocks.", "A hallmark of multibody dynamics is that most formulations involve a number of constraints. Typically, when redundant generalized coordinates are used, equations of motion are simpler to derive but constraint equations are present. Approaches to dealing with high index differential algebraic equations, based on index reduction techniques, are reviewed and discussed. Constraint violation stabilization techniques that have been developed to control constraint drift are also reviewed. These techniques are used in conjunction with algorithms that do not exactly enforce the constraints. Control theory forms the basis for a number of these methods. Penalty based techniques have also been developed, but the augmented Lagrangian formulation presents a more solid theoretical foundation. 
In contrast to constraint violation stabilization techniques, constraint violation elimination techniques enforce exact satisfaction of the constraints, at least to machine accuracy. Finally, as the finite element method has gained popularity for the solution of multibody systems, new techniques for the enforcement of constraints have been developed in that framework. The goal of this paper is to review the features of these methods, assess their accuracy and efficiency, underline the relationship among the methods, and recommend approaches that seem to perform better than others.", "Many NP-complete constraint satisfaction problems appear to undergo a “phase transition” from solubility to insolubility when the constraint density passes through a critical threshold. In all such cases it is easy to derive upper bounds on the location of the threshold by showing that above a certain density the first moment (expectation) of the number of solutions tends to zero. We show that in the case of certain symmetric constraints, considering the second moment of the number of solutions yields nearly matching lower bounds for the location of the threshold. Specifically, we prove that the threshold for both random hypergraph 2-colorability (Property B) and random Not-All-Equal @math -SAT is @math . As a corollary, we establish that the threshold for random @math -SAT is of order @math , resolving a long-standing open problem.", "where (a_n)_{n∈ℕ₀} is some sequence of nonnegative numbers, (S_n)_{n∈ℕ₀} is the sequence of partial sums, S_0 = 0, S_n = Σ_{k=1}^{n} X_k, of another sequence (X_k)_{k∈ℕ} of i.i.d. random variables, and A ⊂ ℝ is a fixed Borel set such as [0,1] or [0, ∞). Examples of such convolution series are subordinated distributions (Σ_{n=0}^{∞} a_n = 1) which arise as distributions of random sums, and harmonic and ordinary renewal measures (a_0 = 0, a_n = 1/n for all n ∈ ℕ in the first, a_n = 1 for all n ∈ ℕ₀ in the second case). 
These examples are in turn essential for the analysis of the large time behaviour of diverse applied models such as branching and queueing processes; they are also of interest in connection with representation theorems such as the Lévy representation of infinitely divisible distributions. A traditional approach to such problems is via regular variation: If the underlying random variables are nonnegative we can use Laplace transforms and the related Abelian and Tauberian theorems [see, e.g., Stam (1973) in the context of subordination and Feller (1971, XIV.3) in connection with renewal theory; Embrechts, Maejima, and Omey (1984) is a recent treatment of generalized renewal measures along these lines]. The approach of the present paper is based on the Wiener–Lévy–Gel'fand theorem and has occasionally been called the Banach algebra method. In Grübel (1983) we gave a new variant of this method for the special case of lattice distributions, showing that by using the appropriate Banach algebras of sequences, arbitrarily fine expansions are possible under certain assumptions on the higher-order differences of (P(X_1 = n))_{n∈ℕ}. Here we give a corresponding treatment of nonlattice distributions. We restrict ourselves to an analogue of first-order differences and obtain a number of theorems which perhaps are described best as next-term results. To explain this let us consider a special case in more detail.
Furthermore, all exact algorithms known up to now are too slow for big examples, do not provide partial information before computing the total result, cannot satisfactorily deal with interval constants in the input, and often generate huge output. As a remedy we propose an approximation method based on interval arithmetic. It uses a generalization of the notion of cylindrical decomposition—as introduced by G. Collins. We describe an implementation of the method and demonstrate that, for quantified constraints without equalities, it can efficiently give approximate information on problems that are too hard for current exact methods." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
ii) The ideal gas with the Haldane statistics and the Sutherland-Wu equation. The series @math can be interpreted as the grand partition function of the ideal gas with the Haldane exclusion statistics @cite_16 . The finite @math -system appeared in @cite_16 as the thermal equilibrium condition for the distribution functions of the same system. See also @cite_1 for another interpretation. The one variable case ) also appeared in @cite_26 as the thermal equilibrium condition for the distribution function of the Calogero-Sutherland model. As an application of our second formula in Theorem , we can quickly reproduce the "cluster expansion formula" in [Eq. (129)] I , which was originally calculated by the Lagrange inversion formula, as follows: where @math is the solution of ). The Sutherland-Wu equation also plays an important role in the conformal field theory spectra. (See @cite_23 and the references therein.)
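The power-series solution alluded to here can be imitated on a one-variable toy: the fixed point of Q(z) = 1 + z·Q(z)² is the Catalan generating function, and truncated iteration of the defining equation recovers exactly the coefficients that Lagrange inversion gives in closed form. A small sketch (our own illustration, not the paper's multivariable Q-systems):

```python
def solve_q_series(order):
    """Solve the toy one-variable Q-system Q(z) = 1 + z*Q(z)^2 as a power
    series, by iterating the defining equation on truncated coefficient
    lists; each pass fixes one more coefficient."""
    q = [0] * (order + 1)
    for _ in range(order + 1):
        sq = [0] * (order + 1)
        for i in range(order + 1):            # truncated product Q*Q
            for j in range(order + 1 - i):
                sq[i + j] += q[i] * q[j]
        q = [1] + sq[:order]                  # Q = 1 + z * Q^2, truncated
    return q

print(solve_q_series(5))  # [1, 1, 2, 5, 14, 42] -- the Catalan numbers
```

The iteration converges in the formal-power-series topology because the right-hand side only feeds lower-order coefficients into higher ones, the same mechanism that makes the canonical solution of a Q-system unique.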
{ "cite_N": [ "@cite_23", "@cite_16", "@cite_1", "@cite_26" ], "mid": [ "2095564275", "2000277525", "2102499404", "2149433831" ], "abstract": [ "We discuss the relationship between the classical Lagrange theorem in mathematics and the quantum statistical mechanics and thermodynamics of an ideal gas of multispecies quasiparticles with mutual fractional exclusion statistics. First, we show that the thermodynamic potential and the density of the system are analytically expressed in terms of the language of generalized cluster expansions, where the cluster coefficients are determined from Wu’s functional relations for describing the distribution functions of mutual fractional exclusion statistics. Second, we generalize the classical Lagrange theorem for inverting the one complex variable functions to that for the multicomplex variable functions. Third, we explicitly obtain all the exact cluster coefficients by applying the generalized Lagrange theorem.", "We derive an exact integral representation for the grand partition function for an ideal gas with exclusion statistics. Using this we show how Wu's equation for the exclusion statistics appears in the problem. This can be an alternative proof of Wu's equation. We also discuss how singularities are related to the existence of a phase transition of the system.", "We study the properties of the conformal blocks of the conformal field theories with Virasoro or W-extended symmetry. When the conformal blocks contain only second-order degenerate fields, the conformal blocks obey second order differential equations and they can be interpreted as ground-state wave functions of a trigonometric Calogero-Sutherland Hamiltonian with nontrivial braiding properties. A generalized duality property relates the two types of second order degenerate fields. 
By studying this duality we found that the excited states of the Calogero–Sutherland Hamiltonian are characterized by two partitions, or in the case of WA_{k−1} theories by k partitions. By extending the conformal field theories under consideration by a u(1) field, we find that we can put in correspondence the states in the Hilbert space of the extended CFT with the excited non-polynomial eigenstates of the Calogero–Sutherland Hamiltonian. When the action of the Calogero–Sutherland integrals of motion is translated on the Hilbert space, they become identical to the integrals of motion recently discovered by Alba, Fateev, Litvinov and Tarnopolsky in Liouville theory in the context of the AGT conjecture. Upon bosonisation, these integrals of motion can be expressed as a sum of two, or in general k, bosonic Calogero–Sutherland Hamiltonians coupled by an interaction term with a triangular structure. For special values of the coupling constant, the conformal blocks can be expressed in terms of Jack polynomials with pairing properties, and they give electron wave functions for special Fractional Quantum Hall states.", "We consider the solution of the stochastic heat equation ∂_T Z = (1/2) ∂_X² Z − Z Ẇ with delta function initial condition Z(T=0, X) = δ_{X=0} whose logarithm, with appropriate normalization, is the free energy of the continuum directed polymer, or the Hopf-Cole solution of the Kardar-Parisi-Zhang equation with narrow wedge initial conditions. We obtain explicit formulas for the one-dimensional marginal distributions, the crossover distributions, which interpolate between a standard Gaussian distribution (small time) and the GUE Tracy-Widom distribution (large time). The proof is via a rigorous steepest-descent analysis of the Tracy-Widom formula for the asymmetric simple exclusion process with antishock initial data, which is shown to converge to the continuum equations in an appropriate weakly asymmetric limit. 
The limit also describes the crossover behavior between the symmetric and asymmetric exclusion processes." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
Below we list the related works on Conjectures and -- mostly chronologically. However, the list is by no means complete. The series @math in ) admits a natural @math -analogue called the fermionic formula . This is another fascinating subject, but we do not cover it here. See @cite_23 @cite_7 @cite_6 and references therein. It is convenient to refer to the formula ) with the binomial coefficient ) as type I , and to the ones with the binomial coefficient in Remark as type II . (In the context of the -type integrable spin chains, @math and @math represent the numbers of @math -strings and @math -holes of color @math , respectively. Therefore one must demand @math , which implies that the relevant formulae are necessarily of type II.) The manifest expression of the decomposition of @math such as is referred to as type III , where @math is the character of the irreducible @math -module @math with highest weight @math . Since there is no essential distinction between these conjectured formulae for @math and @math , we simply refer to both cases as @math below. At this moment, however, the proofs should be given separately for the non-simply-laced case @cite_34 .
{ "cite_N": [ "@cite_34", "@cite_7", "@cite_23", "@cite_6" ], "mid": [ "2106856555", "1618024583", "1945101555", "2472026787" ], "abstract": [ "We introduce a fermionic formula associated with any quantum affine algebra U_q(X_N^{(r)}). Guided by the interplay between corner transfer matrix and the Bethe ansatz in solvable lattice models, we study several aspects related to representation theory, most crucially, the crystal basis theory. They include one-dimensional sums over both finite and semi-infinite paths, spinon character formulae, Lepowsky—Primc type conjectural formula for vacuum string functions, dilogarithm identities, Q-systems and their solution by characters of various classical subalgebras and so forth. The results expand [HKOTY1] including the twisted cases and more details on inhomogeneous paths consisting of non-perfect crystals. As a most intriguing example, certain inhomogeneous one-dimensional sums conjecturally give rise to branching functions of an integrable G_2^{(1)}-module related to the embedding G_2^{(1)} ↪ B_3^{(1)} ↪ D_4^{(1)}.", "Fermionic formulae originate in the Bethe ansatz in solvable lattice models. They are specific expressions of some q-polynomials as sums of products of q-binomial coefficients. We consider the fermionic formulae associated with the general non-twisted quantum affine algebra U_q(X_n^{(1)}) and discuss several aspects related to representation theories and combinatorics. They include crystal base theory, one dimensional sums, spinon character formulae, Q-system and combinatorial completeness of the string hypothesis for arbitrary X_n.", "Let @math be a smooth scheme over an algebraically closed field @math of characteristic zero and @math a regular function, and write @math Crit @math , as a closed subscheme of @math . 
The motivic vanishing cycle @math is an element of the @math -equivariant motivic Grothendieck ring @math defined by Denef and Loeser math.AG 0006050 and Looijenga math.AG 0006220, and used in Kontsevich and Soibelman's theory of motivic Donaldson-Thomas invariants, arXiv:0811.2435. We prove three main results: (a) @math depends only on the third-order thickenings @math of @math . (b) If @math is another smooth scheme, @math is regular, @math Crit @math , and @math is an embedding with @math and @math an isomorphism, then @math equals @math \"twisted\" by a motive associated to a principal @math -bundle defined using @math , where now we work in a quotient ring @math of @math . (c) If @math is an \"oriented algebraic d-critical locus\" in the sense of Joyce arXiv:1304.4508, there is a natural motive @math , such that if @math is locally modelled on Crit @math , then @math is locally modelled on @math . Using results from arXiv:1305.6302, these imply the existence of natural motives on moduli schemes of coherent sheaves on a Calabi-Yau 3-fold equipped with \"orientation data\", as required in Kontsevich and Soibelman's motivic Donaldson-Thomas theory arXiv:0811.2435, and on intersections of oriented Lagrangians in an algebraic symplectic manifold. This paper is an analogue for motives of results on perverse sheaves of vanishing cycles proved in arXiv:1211.3259. We extend this paper to Artin stacks in arXiv:1312.0090.", "Relations among tautological classes on the moduli space of stable curves are obtained via the study of Witten's r-spin theory for higher r. In order to calculate the quantum product, a new formula relating the r-spin correlators in genus 0 to the representation theory of sl2 is proven. The Givental-Teleman classification of CohFTs is used at two special semisimple points of the associated Frobenius manifold. At the first semisimple point, the R-matrix is exactly solved in terms of hypergeometric series. 
As a result, an explicit formula for Witten's r-spin class is obtained (along with tautological relations in higher degrees). As an application, the r=4 relations are used to bound the Betti numbers of the tautological ring of the moduli of nonsingular curves. At the second semisimple point, the form of the R-matrix implies a polynomiality property in r of Witten's r-spin class. In the Appendix (with F. Janda), a conjecture relating the r=0 limit of Witten's r-spin class to the class of the moduli space of holomorphic differentials is presented." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
2 @cite_24 . Kerov et al. proposed and proved the type II formula for @math by the combinatorial method, where the bijection between the Littlewood-Richardson tableaux and the rigged configurations was constructed.
{ "cite_N": [ "@cite_24" ], "mid": [ "2010163765" ], "abstract": [ "We establish a lower bound for the formula size of quolynomials over arbitrary fields. Our basic formula operations are addition, subtraction, multiplication and division. The proof is based on Neciporuk’s [Soviet Math. Doklady, 7 (1966), pp. 999–1000] lower bound for Boolean functions and uses formal power series. This result immediately yields a lower bound for the formula size of rational functions over infinite fields. We also show how to adapt Neciporuk’s method to rational functions over finite fields. These results are then used to show that, over any field, the @math determinant function has formula size at least @math . We thus have an algebraic analogue to the @math lower bound for the Boolean determinant due to Kloss [Soviet Math. Doklady, 7 (1966), pp. 1537–1540]." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
4 @cite_14 . Ogievetsky and Wiegmann proposed the type III formula of @math for some @math for the exceptional algebras from the reproduction scheme.
{ "cite_N": [ "@cite_14" ], "mid": [ "2180264879" ], "abstract": [ "Let @math be a cyclic @math -algebra of dimension @math with finite dimensional cohomology only in dimension one and two. By the transfer theorem there exists a cyclic @math -algebra structure on the cohomology @math . The inner product plus the higher products of the cyclic @math -algebra defines a superpotential function @math on @math . We associate an analytic Milnor fiber with the formal function @math and define the Euler characteristic of @math to be the Euler characteristic of the étale cohomology of the analytic Milnor fiber. In this paper we prove a Thom-Sebastiani type formula for the Euler characteristic of cyclic @math -algebras. As applications we prove the Joyce-Song formulas about the Behrend function identities for semi-Schur objects in the derived category of coherent sheaves over Calabi-Yau threefolds. A motivic Thom-Sebastiani type formula and conjectural motivic Joyce-Song formulas for the motivic Milnor fiber of cyclic @math -algebras are also discussed." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
8 @cite_22 . Kleber analyzed a combinatorial structure of the type II formula for the simply-laced algebras. In particular, it was proved that the type III formula of @math and the corresponding type II formula are equivalent for @math and @math .
{ "cite_N": [ "@cite_22" ], "mid": [ "1554718120" ], "abstract": [ "We give an algebro-combinatorial proof of a general ver­ sion of Pieri's formula following the approach developed by Fomin and Kirillov in the paper \"Quadratic algebras, Dunkl elements, and Schu­ bert calculus.\" We prove several conjectures posed in their paper. As a consequence, a new proof of classical Pieri's formula for cohomol­ ogy of complex flag manifolds, and that of its analogue for quantum cohomology is obtained in this paper." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
9 @cite_7 @cite_6 . Hatayama et al. gave a characterization of the type I formula as the solution of the @math -system whose components are @math -linear combinations of the @math -characters with the property equivalent to the convergence property. Using it, the equivalence of the type III formula of @math and the type I formula of @math for the classical algebras was shown @cite_7 . In @cite_6 , the type I and type II formulae, and the @math -systems for the twisted algebras @math were proposed. The type III formula of @math for @math , @math , @math , @math was also proposed, and the equivalence to the type I formula was shown in a similar way to the untwisted case.
{ "cite_N": [ "@cite_6", "@cite_7" ], "mid": [ "2065330478", "1945101555" ], "abstract": [ "We prove a comparison formula for the Donaldson-Thomas curve-counting invariants of two smooth and projective Calabi-Yau threefolds related by a flop. By results of Bridgeland any two such varieties are derived equivalent. Furthermore there exist pairs of categories of perverse coherent sheaves on both sides which are swapped by this equivalence. Using the theory developed by Joyce we construct the motivic Hall algebras of these categories. These algebras provide a bridge relating the invariants on both sides of the flop.", "Let @math be a smooth scheme over an algebraically closed field @math of characteristic zero and @math a regular function, and write @math Crit @math , as a closed subscheme of @math . The motivic vanishing cycle @math is an element of the @math -equivariant motivic Grothendieck ring @math defined by Denef and Loeser math.AG 0006050 and Looijenga math.AG 0006220, and used in Kontsevich and Soibelman's theory of motivic Donaldson-Thomas invariants, arXiv:0811.2435. We prove three main results: (a) @math depends only on the third-order thickenings @math of @math . (b) If @math is another smooth scheme, @math is regular, @math Crit @math , and @math is an embedding with @math and @math an isomorphism, then @math equals @math \"twisted\" by a motive associated to a principal @math -bundle defined using @math , where now we work in a quotient ring @math of @math . (c) If @math is an \"oriented algebraic d-critical locus\" in the sense of Joyce arXiv:1304.4508, there is a natural motive @math , such that if @math is locally modelled on Crit @math , then @math is locally modelled on @math . 
Using results from arXiv:1305.6302, these imply the existence of natural motives on moduli schemes of coherent sheaves on a Calabi-Yau 3-fold equipped with \"orientation data\", as required in Kontsevich and Soibelman's motivic Donaldson-Thomas theory arXiv:0811.2435, and on intersections of oriented Lagrangians in an algebraic symplectic manifold. This paper is an analogue for motives of results on perverse sheaves of vanishing cycles proved in arXiv:1211.3259. We extend this paper to Artin stacks in arXiv:1312.0090." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
10 @cite_13 @cite_10 . The second formula in Conjecture was proposed and proved for @math @cite_13 from the formal completeness of the -type Bethe vectors. The same formula was proposed for @math , and the equivalence to the type I formula was proved @cite_10 . The type I formula is formulated in the stated form, and the characterization of the type I formula in @cite_7 was simplified as the solution of the @math -system with the convergence property.
{ "cite_N": [ "@cite_10", "@cite_13", "@cite_7" ], "mid": [ "1994934410", "2591592591", "2000931246" ], "abstract": [ "The ( U _ q ( s l (2)) ) Bethe equation is studied at q = 0. A linear congruence equation is proposed related to the string solutions. The number of its off-diagonal solutions is expressed in terms of an explicit combinatorial formula and coincides with the weight multiplicities of the quantum space.", "This paper considers the problem of designing maximum distance separable (MDS) codes over small fields with constraints on the support of their generator matrices. For any given @math binary matrix @math , the GM-MDS conjecture, proposed by , states that if @math satisfies the so-called MDS condition, then for any field @math of size @math , there exists an @math MDS code whose generator matrix @math , with entries in @math , fits the matrix @math (i.e., @math is the support matrix of @math ). Despite all the attempts by the coding theory community, this conjecture remains still open in general. It was shown, independently by and , that the GM-MDS conjecture holds if the following conjecture, referred to as the TM-MDS conjecture, holds: if @math satisfies the MDS condition, then the determinant of a transform matrix @math , such that @math fits @math , is not identically zero, where @math is a Vandermonde matrix with distinct parameters. In this work, we first reformulate the TM-MDS conjecture in terms of the Wronskian determinant, and then present an algebraic-combinatorial approach based on polynomial-degree reduction for proving this conjecture. Our proof technique's strength is based primarily on reducing inherent combinatorics in the proof. We demonstrate the strength of our technique by proving the TM-MDS conjecture for the cases where the number of rows ( @math ) of @math is upper bounded by @math . 
For this class of special cases of @math where the only additional constraint is on @math , only cases with @math were previously proven theoretically, and the previously used proof techniques are not applicable to cases with @math .", "Let f be a random Boolean formula that is an instance of 3-SAT. We consider the problem of computing the least real number k such that if the ratio of the number of clauses over the number of variables of f strictly exceeds k , then f is almost certainly unsatisfiable. By a well-known and more or less straightforward argument, it can be shown that k ≤ 5.191. This upper bound was improved to 4.758 by first providing new improved bounds for the occupancy problem. There is strong experimental evidence that the value of k is around 4.2. In this work, we define, in terms of the random formula f, a decreasing sequence of random variables such that, if the expected value of any one of them converges to zero, then f is almost certainly unsatisfiable. By letting the expected value of the first term of the sequence converge to zero, we obtain, by simple and elementary computations, an upper bound for k equal to 4.667. From the expected value of the second term of the sequence, we get the value 4.601." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
11 @cite_2 . Chari proved the type III formula of @math for @math for any @math for the classical algebras, and for some @math for the exceptional algebras.
{ "cite_N": [ "@cite_2" ], "mid": [ "1554718120" ], "abstract": [ "We give an algebro-combinatorial proof of a general ver­ sion of Pieri's formula following the approach developed by Fomin and Kirillov in the paper \"Quadratic algebras, Dunkl elements, and Schu­ bert calculus.\" We prove several conjectures posed in their paper. As a consequence, a new proof of classical Pieri's formula for cohomol­ ogy of complex flag manifolds, and that of its analogue for quantum cohomology is obtained in this paper." ] }
math0105145
2090763901
We study a class of systems of functional equations closely related to various kinds of integrable statistical and quantum mechanical models. We call them the finite and infinite @math -systems according to the number of functions and equations. The finite Q-systems appear as the thermal equilibrium conditions (the Sutherland–Wu equation) for certain statistical mechanical systems. Some infinite Q-systems appear as the relations of the normalized characters of the KR modules of the Yangians and the quantum affine algebras. We give two types of power series formulae for the unique solution (resp. the unique canonical solution) for a finite (resp. infinite) Q-system. As an application, we reformulate the Kirillov–Reshetikhin conjecture on the multiplicities formula of the KR modules in terms of the canonical solutions of Q-systems.
12 @cite_18 . Okado constructed bijections between the rigged configurations and the crystals (resp. virtual crystals) corresponding to @math , with @math for @math , for @math and @math (resp. @math ). As a corollary, the type II formula of those @math was proved for @math and @math .
{ "cite_N": [ "@cite_18" ], "mid": [ "2106856555" ], "abstract": [ "We introduce a fermionic formula associated with any quantum affine algebra U q (X N (r) . Guided by the interplay between corner transfer matrix and the Bethe ansatz in solvable lattice models, we study several aspects related to representation theory, most crucially, the crystal basis theory. They include one-dimensional sums over both finite and semi-infinite paths, spinon character formulae, Lepowsky—Primc type conjectural formula for vacuum string functions, dilogarithm identities, Q-systems and their solution by characters of various classical subalgebras and so forth. The results expand [HKOTY1] including the twisted cases and more details on inhomogeneous paths consisting of non-perfect crystals. As a most intriguing example, certain inhomogeneous one-dimensional sums conjecturally give rise to branching functions of an integrable G 2 (1) -module related to the embedding G 2 (1) ↪ B 3 (1) ↪ D 4 1 ." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As mentioned in the introduction, this is one of the few attempts to apply fold unfold techniques in the field of concurrent languages. In fact, in the literature we find only three papers which are relatively closely related to the present one: Ueda and Furukawa UF88 defined transformation systems for the concurrent logic language GHC @cite_7 , Sahlin Sah95 defined a partial evaluator for AKL, while de Francesco and Santone in DFS96 presented a transformation system for CCS @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_7" ], "mid": [ "1601458080", "1607674807" ], "abstract": [ "Rewriting logic extends to concurrent systems with state changes the body of theory developed within the algebraic semantics approach. It is both a foundational tool and the kernel language of several implementation efforts (Cafe, ELAN, Maude). Tile logic extends (unconditional) rewriting logic since it takes into account state changes with side effects and synchronization. It is especially useful for defining compositional models of computation of reactive systems, coordination languages, mobile calculi, and causal and located concurrent systems. In this paper, the two logics are defined and compared using a recently developed algebraic specification methodology, membership equational logic. Given a theory T, the rewriting logic of T is the free monoidal 2-category, and the tile logic of T is the free monoidal double category, both generated by T. An extended version of monoidal 2-categories, called 2VH-categories, is also defined, able to include in an appropriate sense the structure of monoidal double categories. We show that 2VH-categories correspond to an extended version of rewriting logic, which is able to embed tile logic, and which can be implemented in the basic version of rewriting logic using suitable internal strategies. These strategies can be significantly simpler when the theory is uniform. A uniform theory is provided in the paper for CCS, and it is conjectured that uniform theories exist for most process algebras.", "We study the relationship between Concurrent Separation Logic (CSL) and the assume-guarantee (A-G) method (a.k.a. rely-guarantee method). We show in three steps that CSL can be treated as a specialization of the A-G method for well-synchronized concurrent programs. First, we present an A-G based program logic for a low-level language with built-in locking primitives. 
Then we extend the program logic with explicit separation of \"private data\" and \"shared data\", which provides better memory modularity. Finally, we show that CSL (adapted for the low-level language) can be viewed as a specialization of the extended A-G logic by enforcing the invariant that \"shared resources are well-formed outside of critical regions\". This work can also be viewed as a different approach (from Brookes') to proving the soundness of CSL: our CSL inference rules are proved as lemmas in the A-G based logic, whose soundness is established following the syntactic approach to proving soundness of type systems." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
The transformation system we are proposing builds on the systems defined in the papers above and can be considered an extension of them. Differently from the previous cases, our system is defined for a generic (concurrent) constraint language. Thus, together with some new transformations such as the distribution, the backward instantiation and the branch elimination, we also introduce specific operations which allow constraint simplification and elimination (though some constraint simplification is done in @cite_9 as well).
{ "cite_N": [ "@cite_9" ], "mid": [ "2155710590" ], "abstract": [ "We present an important step towards the solution of the problem of inverse procedural modeling by generating parametric context-free L-systems that represent an input 2D model. The L-systemrules efficiently code the regular structures and the parameters represent the properties of the structure transformations. The algorithm takes as input a 2D vector image that is composed of atomic elements, such as curves and poly-lines. Similar elements are recognized and assigned terminal symbols ofan L-systemalphabet. Theterminal symbols’ position and orientation are pair-wise compared and the transformations are stored as points in multiple 4D transformation spaces. By careful analysis of the clusters in the transformation spaces, we detect sequences of elements and code them as L-system rules. The coded elements are then removed from the clusters, the clusters are updated, and then the analysis attempts to code groups of elements in (hierarchies) the same way. The analysis ends with a single group of elements that is coded as an L-system axiom. We recognize and code branching sequences of linearly translated, scaled, and rotated elements and their hierarchies. The L-system not only represents the input image, but it can also be used for various editing operations. By changing the L-system parameters, the image can be randomized, symmetrized, and groups of elements and regular structures can be edited. By changing the terminal and non-terminal symbols, elements or groups of elements can be replaced." ] }
cs0107014
2949367797
We introduce a transformation system for concurrent constraint programming (CCP). We define suitable applicability conditions for the transformations which guarantee that the input output CCP semantics is preserved also when distinguishing deadlocked computations from successful ones and when considering intermediate results of (possibly) non-terminating computations. The system allows us to optimize CCP programs while preserving their intended meaning: In addition to the usual benefits that one has for sequential declarative languages, the transformation of concurrent programs can also lead to the elimination of communication channels and of synchronization points, to the transformation of non-deterministic computations into deterministic ones, and to the crucial saving of computational space. Furthermore, since the transformation system preserves the deadlock behavior of programs, it can be used for proving deadlock freeness of a given program wrt a class of queries. To this aim it is sometimes sufficient to apply our transformations and to specialize the resulting program wrt the given queries in such a way that the obtained program is trivially deadlock free.
As previously mentioned, differently from our case, in @cite_9 a definition is considered which allows the removal of potentially selectable branches; the consequence is that the resulting transformation system is only partially (thus not totally) correct. We should mention that in @cite_9 two preliminary assumptions on the ``scheduling'' are made, in such a way that this limitation is actually less constraining than it might appear.
{ "cite_N": [ "@cite_9" ], "mid": [ "2072469765" ], "abstract": [ "Modern dynamically scheduled processors use branch prediction hardware to speculatively fetch and execute most likely executed paths in a program. Complex branch predictors have been proposed which attempt to identify these paths accurately such that the hardware can benefit from out-of-order (OOO) execution. Recent studies have shown that inspite of such complex prediction schemes, there still exist many frequently executed branches which are difficult to predict. Predicated execution has been proposed as an alternative technique to eliminate some of these branches in various forms ranging from a restrictive support to a full-blown support. We call the restrictive form of predicated execution as guarded execution. In this paper, we propose a new algorithm which uses profiling and selectively performs if-conversion for architectures with guarded execution support. Branch profiling is used to gather the taken, non-taken and misprediction counts for every branch. This combined with block profiling is used to select paths which suffer from heavy mispredictions and are profitable to if-convert. Effects of three different selection criterias, namely size-based, predictability-based and profiled-based, on net cycle improvements, branch mispredictions and mis-speculated instructions are then studied. We also propose new mechanisms to convert unsafe instructions to safe form to enhance the applicability of the technique. Finally, we explain numerous adjustments that were made to the selection criterias to better reflect the OOO processor behavior." ] }
cs0203030
2952475254
We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms. When the paths are known (either given by the adversary or computed as above) our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this paper we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet. Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.
The problem of choosing routes for a fixed set of packets was studied by Srinivasan and Teo @cite_5 and Bertsimas and Gamarnik @cite_13 . For example, @cite_5 presents an algorithm that minimizes the congestion and dilation of the routes up to a constant factor. This result complemented the paper of Leighton, Maggs and Rao @cite_1 which showed that packets could be scheduled along a set of paths in time @math congestion @math dilation @math .
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_13" ], "mid": [ "2133049312", "2140916841", "2117065758" ], "abstract": [ "We study routing and scheduling in packet-switched networks. We assume an adversary that controls the injection time, source, and destination for each packet injected. A set of paths for these packets is admissible if no link in the network is overloaded. We present the first on-line routing algorithm that finds a set of admissible paths whenever this is feasible. Our algorithm calculates a path for each packet as soon as it is injected at its source using a simple shortest path computation. The length of a link reflects its current congestion. We also show how our algorithm can be implemented under today's Internet routing paradigms.When the paths are known (either given by the adversary or computed as above), our goal is to schedule the packets along the given paths so that the packets experience small end-to-end delays. The best previous delay bounds for deterministic and distributed scheduling protocols were exponential in the path length. In this article, we present the first deterministic and distributed scheduling protocol that guarantees a polynomial end-to-end delay for every packet.Finally, we discuss the effects of combining routing with scheduling. We first show that some unstable scheduling protocols remain unstable no matter how the paths are chosen. However, the freedom to choose paths can make a difference. For example, we show that a ring with parallel links is stable for all greedy scheduling protocols if paths are chosen intelligently, whereas this is not the case if the adversary specifies the paths.", "We present polylogarithmic approximations for the R|prec|Cmax and R|prec|∑jwjCj problems, when the precedence constraints are “treelike” – i.e., when the undirected graph underlying the precedences is a forest. 
We also obtain improved bounds for the weighted completion time and flow time for the case of chains with restricted assignment – this generalizes the job shop problem to these objective functions. We use the same lower bound of “congestion+dilation”, as in other job shop scheduling approaches. The first step in our algorithm for the R|prec|Cmax problem with treelike precedences involves using the algorithm of Lenstra, Shmoys and Tardos to obtain a processor assignment with the congestion + dilation value within a constant factor of the optimal. We then show how to generalize the random delays technique of Leighton, Maggs and Rao to the case of trees. For the weighted completion time, we show a certain type of reduction to the makespan problem, which dovetails well with the lower bound we employ for the makespan problem. For the special case of chains, we show a dependent rounding technique which leads to improved bounds on the weighted completion time and new bicriteria bounds for the flow time.", "This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs (s i ,t i ) , what is the maximum throughput capacity of the network, i.e. the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks. Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. 
We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity.Motivated by certain popular wireless protocols used in practice, we also explore \"shortest-path like\" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion aware link metrics consistently yield higher throughput than hop-count based shortest path metrics." ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to ``repair'' a database. We do so by characterizing the possibilities to ``recover'' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted to the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
Coherent integration and proper representation of amalgamated data is extensively studied in the literature (see, e.g., @cite_40 @cite_36 @cite_2 @cite_19 @cite_27 @cite_21 @cite_37 @cite_10 @cite_12 @cite_6 @cite_33 ). Common approaches for dealing with this task are based on techniques of belief revision @cite_21 , methods of resolving contradictions by quantitative considerations (such as ``majority vote'' @cite_37 ) or qualitative ones (e.g., defining priorities on different sources of information or preferring certain data over another @cite_24 @cite_34 ), and approaches that are based on rewriting rules for representing the information in a specific form @cite_27 . As in our case, abduction is used for database updating in @cite_23 and an extended form of abduction is used in @cite_17 @cite_18 to explain modifications in a theory.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_36", "@cite_21", "@cite_6", "@cite_24", "@cite_19", "@cite_40", "@cite_27", "@cite_23", "@cite_2", "@cite_34", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "176609766", "2048333161", "1549828304", "35390552", "2112965492", "2158937425", "2035585180", "2611254175", "1599188306", "1854613159", "2169585110", "2074890228", "2103231373", "2233653089", "2028246200", "1551374365" ], "abstract": [ "Horn clause logic programming can be extended to include abduction with integrity constraints. In the resulting extension of logic programming, negation by failure can be simulated by making negative conditions abducible and by imposing appropriate denials and disjunctions as integrity constraints. This gives an alternative semantics for negation by failure, which generalises the stable model semantics of negation by failure. The abductive extension of logic programming extends negation by failure in three ways: (1) computation can be perfonned in alternative minimal models, (2) positive as well as negative conditions can be made abducible, and (3) other integrity constraints can also be accommodated. * This paper was written while the first author was at Imperial College. 235 Introduction The tenn \"abduction\" was introduced by the philosopher Charles Peirce [1931] to refer to a particular kind of hypothetical reasoning. In the simplest case, it has the fonn: From A and A fB infer B as a possible \"explanation\" of A. Abduction has been given prominence in Charniak and McDennot's [1985] \"Introduction to Artificial Intelligence\", where it has been applied to expert systems and story comprehension. Independently, several authors have developed deductive techniques to drive the generation of abductive hypotheses. Cox and Pietrzykowski [1986] construct hypotheses from the \"dead ends\" of linear resolution proofs. 
Finger and Genesereth [1985] generate \"deductive solutions to design problems\" using the \"residue\" left behind in resolution proofs. Poole, Goebel and Aleliunas [1987] also use linear resolution to generate hypotheses. All impose the restriction that hypotheses should be consistent with the \"knowledge base\". Abduction is a form of non-monotonic reasoning, because hypotheses which are consistent with one state of a knowledge base may become inconsistent when new knowledge is added. Poole [1988] argues that abduction is preferable to non-monotonic logics for default reasoning. In this view, defaults are hypotheses formulated within classical logic rather than conclusions derived within some form of non-monotonic logic. The similarity between abduction and default reasoning was also pointed out in [Kowalski, 1979]. In this paper we show how abduction can be integrated with logic programming, and we concentrate on the use of abduction to generalise negation by failure. Conditional Answers Compared with Abduction: In the simplest case, a logic program consists of a set of Horn Clauses, which are used backward to reduce goals to subgoals. The initial goal is solved when there are no subgoals left;",
We give some guidelines for choosing truth value spaces, assigning truth values and defining global operators to encode integration strategies.", "The process of integrating knowledge coming from different sources has been widely investigated in the literature. Three distinct conceptual approaches to this problem have been most successful: belief revision, merging and update. In this paper we present a framework that integrates these three approaches. In the proposed framework all three operations can be performed. We provide an example that can only be solved by applying more than one single style of knowledge integration and, therefore, cannot be addressed by any one of the approaches alone. The framework has been implemented, and the examples shown in this paper (as well as other examples from the belief revision literature) have been successfully tested.", "This paper investigates several methods for coping with inconsistency caused by multiple source information by introducing suitable consequence relations capable of inferring non trivial conclusions from an inconsistent stratified knowledge base. Some of these methods presuppose a revision step, namely a selection of one or several consistent subsets of formulas, and then classical inference is used for inferring from these subsets. Two alternative methods that do not require any revision step are studied: inference based on arguments and a new approach called safely supported inference, where inconsistency is kept local. These two last methods look suitable when the inconsistency is due to the presence of several sources of information. The paper offers a comparative study of the various inference modes under inconsistency.", "The problem of integrating information from conflicting sources comes up in many current applications, such as cooperative information systems, heterogeneous databases, and multiagent systems. We model this by the operation of merging first-order theories. 
We propose a formal semantics for this operation and show that it has desirable properties, including abiding by majority rule in case of conflict and syntax independence. We apply our semantics to the special case when the theories to be merged represent relational databases under integrity constraints. We then present a way of merging databases that have different or conflicting schemas caused by problems such as synonyms, homonyms or type conflicts mentioned in the schema integration literature.", "Abduction is inference to the best explanation. In the TACITUS project at SRI we have developed an approach to abductive inference, called “weighted abduction”, that has resulted in a significant simplification of how the problem of interpreting texts is conceptualized. The interpretation of a text is the minimal explanation of why the text would be true. More precisely, to interpret a text, one must prove the logical form of the text from what is already mutually known, allowing for coercions, merging redundancies where possible, and making assumptions where necessary. It is shown how such “local pragmatics” problems as reference resolution, the interpretation of compound nominals, the resolution of syntactic ambiguity and metonymy, and schema recognition can be solved in this manner. Moreover, this approach of “interpretation as abduction” can be combined with the older view of “parsing as deduction” to produce an elegant and thorough integration of syntax, semantics, and pragmatics, one that spans the range of linguistic phenomena from phonology to discourse structure. Finally, we discuss means for making the abduction process efficient, possibilities for extending the approach to other pragmatics phenomena, and the semantics of the weights and costs in the abduction scheme.", "levels, such as Petőfi's text grammar and Rumelhart's schemata for stories. 
By defining permissible interrelations between constituents of larger texts at all levels of abstraction, these create meaning structures organized on both hierarchical and associative principles, rather than on a sequential basis. The automated \"understanding\" process consists of instantiating an appropriate set of event templates by associating a given text with the ERGO inventory of event templates and constructing a network of instantiated templates based on intertemplate relational rules and constraints. The network of instantiated templates thus represents the information content of the unstructured original text, and provides the structured input for the event record data base. Details of the automated understanding process presented here in the abstract are given in the following section. 3.2 Processing Principles. The process of data base generation involves two major functions: (1) content analysis of the incoming text (2) event record synthesis (or production) The first involves constructing a meaning representation of the text and the second the extraction of relevant information and its storage in a data base record. The major focus of ERGO is on reports of a particular class of events which describe aircraft movements. The unit of analysis is therefore the report, a textual unit consisting of one or more paragraphs, each containing one or more sentences. The first step in the analytical process involves a lexical lookup: a lexical entry contains morphological, syntactic and semantic information, on the lines of Sager (1973) and (1973). Each sentence is then subjected to a syntactic analysis by means of an Augmented Transition Network (ATN) parser (Woods, 1970). Since event templates are based on propositional structures, the analytical", "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. 
On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.", "The goal of information extraction is to extract database records from text or semi-structured sources. Traditionally, information extraction proceeds by first segmenting each candidate record separately, and then merging records that refer to the same entities. While computationally efficient, this approach is suboptimal, because it ignores the fact that segmenting one candidate record can help to segment similar ones. For example, resolving a well-segmented field with a less-clear one can disambiguate the latter's boundaries. In this paper we propose a joint approach to information extraction, where segmentation of all records and entity resolution are performed together in a single integrated inference process. While a number of previous authors have taken steps in this direction (e.g., (2003), (2004)), to our knowledge this is the first fully joint approach. 
In experiments on the CiteSeer and Cora citation matching datasets, joint inference improved accuracy, and our approach outperformed previous ones. Further, by using Markov logic and the existing algorithms for it, our solution consisted mainly of writing the appropriate logical formulas, and required much less engineering than previous ones.", "When integrating data coming from multiple different sources we are faced with the possibility of inconsistency in databases. In this paper, we use one of the paraconsistent logics introduced in [9,7] (LFI1) as a logical framework to model possibly inconsistent database instances obtained by integrating different sources. We propose a method based on the sound and complete tableau proof system of LFI1 to treat both the integration process and the evolution of the integrated database submitted to users updates. In order to treat the integrated database evolution, we introduce a kind of generalized database context, the evolutionary databases, which are databases having the capability of storing and manipulating inconsistent information and, at the same time, allowing integrity constraints to change in time. We argue that our approach is sufficiently general and can be applied in most circumstances where inconsistency may arise in databases.", "In practical data integration systems, it is common for the data sources being integrated to provide conflicting information about the same entity. Consequently, a major challenge for data integration is to derive the most complete and accurate integrated records from diverse and sometimes conflicting sources. We term this challenge the truth finding problem. We observe that some sources are generally more reliable than others, and therefore a good model of source quality is the key to solving the truth finding problem. In this work, we propose a probabilistic graphical model that can automatically infer true records and source quality without any supervision. 
In contrast to previous methods, our principled approach leverages a generative process of two types of errors (false positive and false negative) by modeling two different aspects of source quality. In so doing, ours is also the first approach designed to merge multi-valued attribute types. Our method is scalable, due to an efficient sampling-based inference algorithm that needs very few iterations in practice and enjoys linear time complexity, with an even faster incremental variant. Experiments on two real world datasets show that our new method outperforms existing state-of-the-art approaches to the truth finding problem.", "We propose a model of abduction based on the revision of the epistemic state of an agent. Explanations must be sufficient to induce belief in the sentence to be explained (for instance, some observation), or ensure its consistency with other beliefs, in a manner that adequately accounts for factual and hypothetical sentences. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. To illustrate the generality of our approach, we reconstruct two of the key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within our framework. This reconstruction provides an alternative semantics for both and extends these systems to accommodate our predictive explanations and semantic preferences on explanations. It also illustrates how more general information can be incorporated in a principled manner.", "The use of statistical AI techniques in authorship recognition (or stylometry) has contributed to literary and historical breakthroughs. These successes have led to the use of these techniques in criminal investigations and prosecutions. 
However, few have studied adversarial attacks and their devastating effect on the robustness of existing classification methods. This paper presents a framework for adversarial attacks including obfuscation attacks, where a subject attempts to hide their identity, and imitation attacks, where a subject attempts to frame another subject by imitating their writing style. The major contribution of this research is that it demonstrates that both attacks work very well. The obfuscation attack reduces the effectiveness of the techniques to the level of random guessing and the imitation attack succeeds with 68-91% probability depending on the stylometric technique used. These results are made more significant by the fact that the experimental subjects were unfamiliar with stylometric techniques, without specialized knowledge in linguistics, and spent little time on the attacks. This paper also provides another significant contribution to the field in using human subjects to empirically validate the claim of high accuracy for current techniques (without attacks) by reproducing results for three representative stylometric methods.", "We describe a legal question answering system which combines legal information retrieval and textual entailment. We have evaluated our system using the data from the first competition on legal information extraction entailment (COLIEE) 2014. The competition focuses on two aspects of legal information processing related to answering yes/no questions from Japanese legal bar exams. The shared task consists of two phases: legal ad hoc information retrieval and textual entailment. The first phase requires the identification of Japan civil law articles relevant to a legal bar exam query. We have implemented two unsupervised baseline models (tf-idf and Latent Dirichlet Allocation (LDA)-based Information Retrieval (IR)), and a supervised model, Ranking SVM, for the task. 
The features of the model are a set of words, and scores of an article based on the corresponding baseline models. The results show that the Ranking SVM model nearly doubles the Mean Average Precision compared with both baseline models. The second phase is to answer “Yes” or “No” to previously unseen queries, by comparing the meanings of queries with relevant articles. The features used for phase two are syntactic/semantic similarities and identification of negation/antonym relations. The results show that our method, combined with rule-based model and the unsupervised model, outperforms the SVM-based supervised model.", "This paper presents a general, consistency-based framework for expressing belief change. The framework has good formal properties while being well-suited for implementation. For belief revision, informally, in revising a knowledge base K by a sentence α, we begin with α and include as much of K as consistently possible. This is done by expressing K and α in disjoint languages, asserting that the languages agree on the truth values of corresponding atoms wherever consistently possible, and then re-expressing the result in the original language of K. There may be more than one way in which the languages of K and α can be so correlated: in choice revision, one such \"extension\" represents the revised state; alternately (skeptical) revision consists of the intersection of all such extensions. Contraction is similarly defined although, interestingly, it is not interdefinable with revision. The framework is general and flexible. For example, one could go on and express other belief change operations such as update and erasure, and the merging of knowledge bases. Further, the framework allows the incorporation of static and dynamic integrity constraints. 
The approach is well-suited for implementation: belief change can be equivalently expressed in terms of a finite knowledge base; and the scope of a belief change operation can be restricted to just those propositions common to the knowledge base and sentence for change. We give a high-level algorithm implementing the procedure, and an expression of the approach in Default Logic. Lastly, we briefly discuss two implementations of the approach.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore of the answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our treatment extends naturally to partially incorrect databases. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. 
To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 22nd VLDB Conference, Mumbai (Bombay), India, 1996" ] }
cs0207085
1745858282
In this paper we consider two points of view on the problem of coherent integration of distributed data. First we give a pure model-theoretic analysis of the possible ways to 'repair' a database. We do so by characterizing the possibilities to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as minimal inconsistent information as reasonably possible. Then we introduce an abductive application to restore the consistency of a given database. This application is based on an abductive solver (A-system) that implements an SLDNFA-resolution procedure, and computes a list of data-facts that should be inserted into the database or retracted from it in order to keep the database consistent. The two approaches for coherent data integration are related by soundness and completeness results.
The use of three-valued logics is also a well-known technique for maintaining incomplete or inconsistent information; such logics are often used for defining fixpoint semantics of incomplete logic programs @cite_32 @cite_3 , and so in principle they can be applied to integrity constraints in an (extended) clause form @cite_11 . Three-valued formalisms such as LFI @cite_0 are also the basis of paraconsistent methods for constructing database repairs @cite_8 , and they are useful in general for pinpointing inconsistencies @cite_7 . As noted above, this is also the role of the three-valued semantics in our case.
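For intuition, the connectives of Kleene's strong three-valued logic, which underlies many of the fixpoint semantics cited above, can be obtained by ordering the truth values F < U < T. The numeric encoding below is one common convention, used here purely as an illustrative sketch rather than as the formulation of any particular cited system.

```python
from fractions import Fraction

# Kleene's strong three-valued logic, encoded as F=0, U=1/2, T=1.
F, U, T = Fraction(0), Fraction(1, 2), Fraction(1)

def k_not(a):
    """Negation flips T and F and leaves U (unknown) fixed."""
    return 1 - a

def k_and(a, b):
    """Conjunction is the minimum under the order F < U < T."""
    return min(a, b)

def k_or(a, b):
    """Disjunction is the maximum under the order F < U < T."""
    return max(a, b)

# A contradiction on a known atom is false, but on an unknown atom
# it stays undetermined -- the third value localizes inconsistency
# instead of making everything derivable:
assert k_and(T, k_not(T)) == F
assert k_and(U, k_not(U)) == U
```

This localizing behavior is precisely what makes three-valued semantics attractive for pinpointing inconsistencies rather than letting them trivialize the whole theory.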
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_32", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "2003531456", "1785931840", "1989393769", "2152131859", "1520574003", "1854613159" ], "abstract": [ "The use of conventional classical logic is misleading for characterizing the behavior of logic programs because a logic program, when queried, will do one of three things: succeed with the query, fail with it, or not respond because it has fallen into infinite backtracking. In [7] Kleene proposed a three-valued logic for use in recursive function theory. The so-called third truth value was really undefined: truth value not determined. This logic is a useful tool in logic-program specification, and in particular, for describing models. (See [11].) Tarski showed that formal languages, like arithmetic, cannot contain their own truth predicate because one could then construct a paradoxical sentence that effectively asserts its own falsehood. Natural languages do allow the use of \"is true\", so by Tarski's argument a semantics for natural language must leave truth-value gaps: some sentences must fail to have a truth value. In [8] Kripke showed how a model having truth-value gaps, using Kleene's three-valued logic, could be specified. The mechanism he used is a famiUar one in program semantics: consider the least fixed point of a certain monotone operator. But that operator must be defined on a space involving three-valued logic, and for Kripke's application it will not be continuous. We apply techniques similar to Kripke's to logic programs. We associate with each program a monotone operator on a space of three-valued logic interpretations, or better partial interpretations. This space is not a complete lattice, and the operators are not, in general, continuous. But least and other fixed points do exist. These fixed points are shown to provide suitable three-valued program models. They relate closely to the least and greatest fixed points of the operators used in [1]. 
Because of the extra machinery involved, our treatment allows for a natural consideration of negation, and indeed, of the other propositional connectives as well. And because of the elaborate structure of fixed points available, we are able to", "The logics of formal inconsistency (LFI’s) are logics that allow to explicitly formalize the concepts of consistency and inconsistency by means of formulas of their language. Contradictoriness, on the other hand, can always be expressed in any logic, provided its language includes a symbol for negation. Besides being able to represent the distinction between contradiction and inconsistency, LFI’s are non-explosive logics, in the sense that a contradiction does not entail arbitrary statements, but yet are gently explosive, in the sense that, adjoining the additional requirement of consistency, then contradictoriness do cause explosion. Several logics can be seen as LFI’s, among them the great majority of paraconsistent systems developed under the Brazilian and Polish tradition. We present here tableau systems for some important LFI’s: bC, Ci and LFI1.", "In this paper we compare the expressive power of elementary representation formats for vague, incomplete or conflicting information. These include Boolean valuation pairs introduced by Lawry and Gonzalez-Rodriguez, orthopairs of sets of variables, Boolean possibility and necessity measures, three-valued valuations, supervaluations. We make explicit their connections with strong Kleene logic and with Belnap logic of conflicting information. The formal similarities between 3-valued approaches to vagueness and formalisms that handle incomplete information often lead to a confusion between degrees of truth and degrees of uncertainty. 
Yet there are important differences that appear at the interpretive level: while truth-functional logics of vagueness are accepted by a part of the scientific community (even if questioned by supervaluationists), the truth-functionality assumption of three-valued calculi for handling incomplete information looks questionable, compared to the non-truth-functional approaches based on Boolean possibility-necessity pairs. This paper aims to clarify the similarities and differences between the two situations. We also study to what extent operations for comparing and merging information items in the form of orthopairs can be expressed by means of operations on valuation pairs, three-valued valuations and underlying possibility distributions. We explore the connections between several representations of imperfect information. In each case we compare the expressive power of these formalisms. In each case we study how to express aggregation operations. We demonstrate the formal similarities among these approaches. We point out the differences in interpretations between these approaches.", "Logic programming with the stable model semantics is put forward as a novel constraint programming paradigm. This paradigm is interesting because it brings advantages of logic programming based knowledge representation techniques to constraint programming and because implementation methods for the stable model semantics for ground (variable-free) programs have advanced significantly in recent years. For a program with variables these methods need a grounding procedure for generating a variable-free program. As a practical approach to handling the grounding problem a subclass of logic programs, domain restricted programs, is proposed. This subclass enables efficient grounding procedures and serves as a basis for integrating built-in predicates and functions often needed in applications. 
It is shown that the novel paradigm embeds classical logical satisfiability and standard (finite domain) constraint satisfaction problems but seems to provide a more expressive framework from a knowledge representation point of view. The first steps towards a programming methodology for the new paradigm are taken by presenting solutions to standard constraint satisfaction problems, combinatorial graph problems and planning problems. An efficient implementation of the paradigm based on domain restricted programs has been developed. This is an extension of a previous implementation of the stable model semantics, the Smodels system, and is publicly available. It contains, e.g., built-in integer arithmetic integrated to stable model computation. The implementation is described briefly and some test results illustrating the current level of performance are reported.", "We introduce a family of partial stable model semantics for logic programs with arbitrary aggregate relations. The semantics are parametrized by the interpretation of aggregate relations in three-valued logic. Any semantics in this family satisfies two important properties: (i) it extends the partial stable semantics for normal logic programs and (ii) total stable models are always minimal. We also give a specific instance of the semantics and show that it has several attractive features.", "When integrating data coming from multiple different sources we are faced with the possibility of inconsistency in databases. In this paper, we use one of the paraconsistent logics introduced in [9,7] (LFI1) as a logical framework to model possibly inconsistent database instances obtained by integrating different sources. We propose a method based on the sound and complete tableau proof system of LFI1 to treat both the integration process and the evolution of the integrated database submitted to users' updates. 
In order to treat the integrated database evolution, we introduce a kind of generalized database context, the evolutionary databases, which are databases having the capability of storing and manipulating inconsistent information and, at the same time, allowing integrity constraints to change in time. We argue that our approach is sufficiently general and can be applied in most circumstances where inconsistency may arise in databases." ] }
A closely related topic is the problem of giving consistent query answers in inconsistent databases @cite_26 @cite_15 @cite_27 . The idea is to answer database queries in a consistent way, i.e., to return only those answers that hold in every repair, without actually computing the repairs of the database.
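Concretely, an answer is consistent when it holds in every repair. The brute-force sketch below enumerates the repairs of a single binary relation under a primary-key constraint and intersects the answer sets; this enumeration is exactly what the cited rewriting-based approaches avoid, and all names here are illustrative rather than taken from any cited system.

```python
from itertools import product

def repairs(rel):
    """All repairs of a binary relation whose first attribute is the
    primary key: keep exactly one tuple from each key-equal group."""
    groups = {}
    for key, val in rel:
        groups.setdefault(key, []).append((key, val))
    return [set(choice) for choice in product(*groups.values())]

def consistent_answers(rel, query):
    """Tuples satisfying `query` in every repair of `rel`."""
    answer_sets = [{t for t in rep if query(t)} for rep in repairs(rel)]
    return set.intersection(*answer_sets)

# Key violation: key 1 maps to both "a" and "b".
r = [(1, "a"), (1, "b"), (2, "a")]
# Query: tuples whose second attribute is "a".
ans = consistent_answers(r, lambda t: t[1] == "a")
# Only (2, "a") holds in every repair; (1, "a") does not, because
# the repair that keeps (1, "b") excludes it.
```

Since the number of repairs is exponential in the number of violated key groups, the practical appeal of the cited work lies in answering such queries directly, without this enumeration.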
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_26" ], "mid": [ "2062180302", "2077518845", "1551374365" ], "abstract": [ "This article deals with the computation of consistent answers to queries on relational databases that violate primary key constraints. A repair of such an inconsistent database is obtained by selecting a maximal number of tuples from each relation without ever selecting two distinct tuples that agree on the primary key. We are interested in the following problem: Given a Boolean conjunctive query q, compute a Boolean first-order (FO) query φ such that for every database db, φ evaluates to true on db if and only if q evaluates to true on every repair of db. Such φ is called a consistent FO rewriting of q. We use novel techniques to characterize classes of queries that have a consistent FO rewriting. In this way, we are able to extend previously known classes and discover new ones. Finally, we use an Ehrenfeucht-Fraïssé game to show the non-existence of a consistent FO rewriting for ∃x∃y(R(x, y) ∧ R(y, c)), where c is a constant and the first coordinate of R is the primary key.", "In this paper we consider the problem of the logical characterization of the notion of consistent answer in a relational database that may violate given integrity constraints. This notion is captured in terms of the possible repaired versions of the database. A method for computing consistent answers is given and its soundness and completeness (for some classes of constraints and queries) proved. The method is based on an iterative procedure whose termination for several classes of constraints is proved as well.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. 
For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore of the answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our treatment extends naturally to partially incorrect databases. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 22nd VLDB Conference, Mumbai (Bombay), India, 1996" ] }
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Among load-based algorithms, a very common approach to load balancing is to choose the server with the least reported load from among a set of servers. This approach performs well in a homogeneous system where task allocation is performed by a single centralized entity (dispatcher) that has complete, up-to-date load information @cite_25 @cite_35 . In a system where multiple dispatchers independently allocate tasks, however, this approach has been shown to behave badly, especially if the load information used is stale @cite_28 @cite_46 @cite_13 @cite_47 . Mitzenmacher describes the "herd behavior" that can occur when servers that have reported low load are inundated with requests from dispatchers until new load information is reported @cite_13 .
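A minimal sketch of this stale-information pathology; the server names and load values are hypothetical:

```python
def pick_least_loaded(reported_loads):
    """Choose the server with the lowest *reported* load.

    When the report is stale and shared, every dispatcher sees the same
    minimum and floods that one server until fresh load information
    arrives -- the "herd behavior" Mitzenmacher describes.
    """
    return min(reported_loads, key=reported_loads.get)

# A stale snapshot seen by all dispatchers: server "c" looks nearly idle.
stale_report = {"a": 12, "b": 9, "c": 1}

# 1000 independent dispatchers, all working from the same stale report,
# herd every single request onto "c" even as its real load skyrockets.
choices = [pick_least_loaded(stale_report) for _ in range(1000)]
print(choices.count("c"))  # -> 1000
```

With fresh, centralized load information the same rule balances well; it is the combination of staleness and independent dispatchers that produces the herd.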
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_28", "@cite_46", "@cite_13", "@cite_25" ], "mid": [ "2109440766", "2120849241", "2154007983", "2963728009", "2080634500", "2149912409" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "Load balancing for distributed servers is a common issue in many applications and has been extensively studied. Several distributed load balancing schemes have been proposed that proactively route individual requests to appropriate servers to best balance the load and shorten request response time. These schemes do not require a centralized load balancer. 
Instead, each server is responsible for determining, for each request it receives from a client, to which server in the pool the request should be forwarded for processing. We propose a new request routing scheme that is more scalable to increasing number of servers and request load than the existing schemes. The method combines random server selection and next-neighbor load sharing techniques that together prevent the staleness of load information from building up when the number of servers increases. Our simulation shows that it outperforms existing schemes under a piggyback-based load update model.", "We consider the problem of load balancing in dynamic distributed systems in cases where new incoming tasks can make use of old information. For example, consider a multiprocessor system where incoming tasks with exponentially distributed service requirements arrive as a Poisson process, the tasks must choose a processor for service, and a task knows when making this choice the processor queue lengths from T seconds ago. What is a good strategy for choosing a processor in order for tasks to minimize their expected time in the system? Such models can also be used to describe settings where there is a transfer delay between the time a task enters a system and the time it reaches a processor for service. Our models are based on considering the behavior of limiting systems where the number of processors goes to infinity. The limiting systems can be shown to accurately describe the behavior of sufficiently large systems and simulations demonstrate that they are reasonably accurate even for systems with a small number of processors. Our studies of specific models demonstrate the importance of using randomness to break symmetry in these systems and yield important rules of thumb for system design. 
The most significant result is that only small amounts of queue length information can be extremely useful in these settings; for example, having incoming tasks choose the least loaded of two randomly chosen processors is extremely effective over a large range of possible system parameters. In contrast, using global information can actually degrade performance unless used carefully; for example, unlike most settings where the load information is current, having tasks go to the apparently least loaded server can significantly hurt performance.", "We consider load balancing in a network of caching servers delivering contents to end users. Randomized load balancing via the so-called power of two choices is a well-known approach in parallel and distributed systems. In this framework, we investigate the tension between storage resources, communication cost, and load balancing performance. To this end, we propose a randomized load balancing scheme which simultaneously considers cache size limitation and proximity in the server redirection process. In contrast to the classical power of two choices setup, since the memory limitation and the proximity constraint cause correlation in the server selection process, we may not benefit from the power of two choices. However, we prove that in certain regimes of problem parameters, our scheme results in the maximum load of order @math (here @math is the network size). This is an exponential improvement compared to the scheme which assigns each request to the nearest available replica. Interestingly, the extra communication cost incurred by our proposed scheme, compared to the nearest replica strategy, is small. Furthermore, our extensive simulations show that the trade-off trend does not depend on the network topology and library popularity profile details.", "In this paper, we study the performance characteristics of simple load sharing algorithms for heterogeneous distributed systems. 
We assume that nonnegligible delays are encountered in transferring jobs from one node to another. We analyze the effects of these delays on the performance of two threshold-based algorithms called Forward and Reverse. We formulate queuing theoretic models for each of the algorithms operating in heterogeneous systems under the assumption that the job arrival process at each node is Poisson and the service times and job transfer times are exponentially distributed. The models are solved using the Matrix-Geometric solution technique. These models are used to study the effects of different parameters and algorithm variations on the mean job response time: e.g., the effects of varying the thresholds, the impact of changing the probe limit, the impact of biasing the probing, and the optimal response times over a large range of loads and delays. Wherever relevant, the results of the models are compared with the M/M/1 model, representing no load balancing (hereafter referred to as NLB), and the M/M/K model, which is an achievable lower bound (hereafter referred to as LB).", "Load sharing is a technique to improve the performance of distributed systems by distributing the system workload from heavily loaded nodes, where service is poor, to lightly loaded nodes in the system. Previous studies have considered two adaptive load sharing policies: sender-initiated and receiver-initiated. In the sender-initiated policy, a heavily loaded node attempts to transfer work to a lightly loaded node and in the receiver-initiated policy a lightly loaded node attempts to get work from a heavily loaded node. Almost all the previous studies assumed the first-come first-served node scheduling policy; furthermore, analysis and simulations in these studies have been done under the assumption that the job service times are exponentially distributed and the job arrivals form a Poisson process (i.e., job inter-arrival times are exponentially distributed). 
The goal of this paper is to fill the void in the existing literature. We study the impact of these assumptions on the performance of the sender-initiated and receiver-initiated policies. We consider three node scheduling policies: first-come first-served (FCFS), shortest job first (SJF), and round robin (RR). Furthermore, we also look at the impact of variance in the inter-arrival times and in the job service times. Our results show that: (i) When non-preemptive node scheduling policies (FCFS and SJF) are used, the receiver-initiated policy is (substantially) more sensitive to variance in inter-arrival times than the sender-initiated policies and the sender-initiated policies are relatively more sensitive to the variance in job service times; (ii) When the preemptive node scheduling policy (RR) is used, the sender-initiated policy provides a better performance than the receiver-initiated policy." ] }
Dahlin proposes load-interpretation algorithms @cite_1 . These algorithms take into account both the age (staleness) of the load information reported by each of a set of distributed homogeneous servers and an estimate of the rate at which new requests arrive at the whole system in order to determine the server to which a request should be allocated.
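Dahlin's estimators are more refined than this, but the idea of discounting a report by its age can be sketched with a simplified, hypothetical correction (a 1/n share of system-wide arrivals since the report; not the cited paper's exact formula):

```python
def interpreted_load(reported_load, age, arrival_rate, n_servers):
    """Correct a stale load report by its age.

    Illustrative only (not Dahlin's exact estimator): assume the server
    received a 1/n share of the system-wide arrivals since the report
    was taken, and add that to the reported load.
    """
    return reported_load + arrival_rate * age / n_servers

def pick_server(reports, now, arrival_rate):
    """reports maps server -> (reported_load, report_time)."""
    n = len(reports)
    return min(
        reports,
        key=lambda s: interpreted_load(
            reports[s][0], now - reports[s][1], arrival_rate, n
        ),
    )

# A very stale low report no longer wins against a fresh, slightly
# higher one: 2 + 10*60/2 = 302 versus 5 + 10*1/2 = 10.
reports = {"stale": (2, 0), "fresh": (5, 59)}
print(pick_server(reports, now=60, arrival_rate=10))  # -> fresh
```

The point of the age correction is exactly the one in the abstract: raw least-reported-load would have herded onto the stale server.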
{ "cite_N": [ "@cite_1" ], "mid": [ "2109440766" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60% when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old." ] }
@cite_16 propose an algorithm that first randomly selects @math servers. The algorithm then weighs these servers by their load information and chooses one with probability inversely proportional to the load reported by that server. When @math , where @math is the total number of servers, the algorithm is shown to perform better than previous load-based algorithms, and for this reason we focus on it in this paper.
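A sketch of this inverse-load choice over a random sample of k servers; the +1 smoothing (to keep an idle server from dividing by zero) and the example loads are our own assumptions, not from the cited work:

```python
import random

def inverse_load_choice(servers, reported_load, k=2, rng=random):
    """Sample k servers uniformly at random, then pick one of them with
    probability inversely proportional to its reported load.

    The +1 in the weight is our own smoothing so a server reporting
    load 0 does not cause a division by zero.
    """
    sample = rng.sample(servers, k)
    weights = [1.0 / (reported_load[s] + 1) for s in sample]
    return rng.choices(sample, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded for reproducibility
loads = {"a": 1, "b": 99}
picks = [inverse_load_choice(["a", "b"], loads, k=2, rng=rng)
         for _ in range(1000)]
# The lightly loaded "a" (weight 1/2) should dominate "b" (weight 1/100).
print(picks.count("a") > picks.count("b"))
```

Unlike strict least-loaded selection, the randomization spreads requests across servers even when every dispatcher holds the same stale report.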
{ "cite_N": [ "@cite_16" ], "mid": [ "2109440766" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60% when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old." ] }
Another approach is to exclude servers that exceed some utilization threshold and to choose from the remaining servers. @cite_6 and @cite_47 classify machines as lightly or heavily utilized and then choose randomly from the lightly utilized servers; this work focuses on local-area distributed systems. The same approach has been used to enhance round-robin DNS load-balancing across a set of widely distributed heterogeneous web servers @cite_2 . Specifically, when a web server surpasses a utilization threshold it sends an alarm signal to the DNS system indicating it is out of commission. The server is excluded from DNS resolution until it sends another signal indicating it is below threshold and free to service requests again. In this work, the maximum capacities of the most capable servers are at most a factor of three greater than those of the least capable servers.
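A minimal sketch of threshold-based exclusion; the threshold value and the fallback behavior when every server is above threshold are our own assumptions (the cited schemes do not dictate them):

```python
import random

def choose_below_threshold(utilization, threshold=0.8, rng=random):
    """Exclude servers above the utilization threshold (the 'alarm'
    state in the DNS scheme above) and choose uniformly from the rest.

    Falling back to the full server set when everyone is overloaded is
    an assumption of this sketch, not part of the cited work.
    """
    eligible = [s for s, u in utilization.items() if u < threshold]
    if not eligible:
        eligible = list(utilization)
    return rng.choice(eligible)

util = {"a": 0.95, "b": 0.30, "c": 0.85}
print(choose_below_threshold(util))  # only "b" is under 0.8 -> b
```

Note that within the eligible set the choice is still uniform, so this approach balances well only when the below-threshold servers have comparable capacities.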
{ "cite_N": [ "@cite_47", "@cite_6", "@cite_2" ], "mid": [ "2109440766", "1597560875", "2786117165" ], "abstract": [ "In this paper we examine the problem of balancing load in a large-scale distributed system when information about server loads may be stale. It is well known that sending each request to the machine with the apparent lowest load can behave badly in such systems, yet this technique is common in practice. Other systems use round-robin or random selection algorithms that entirely ignore load information or that only use a small subset of the load information. Rather than risk extremely bad performance on one hand or ignore the chance to use load information to improve performance on the other, we develop strategies that interpret load information based on its age. Through simulation, we examine several simple algorithms that use such load interpretation strategies under a range of workloads. Our experiments suggest that by properly interpreting load information, systems can (1) match the performance of the most aggressive algorithms when load information is fresh relative to the job arrival rate, (2) outperform the best of the other algorithms we examine by as much as 60 when information is moderately old, (3) significantly outperform random load distribution when information is older still, and (4) avoid pathological behavior even when information is extremely old.", "Energy management for servers is now necessary for technical, financial, and environmental reasons. This paper describes three policies designed to reduce energy consumption in Web servers. The policies employ two power management mechanisms: dynamic voltage scaling (DVS), an existing mechanism, and request batching, a new mechanism introduced in this paper. The first policy uses DVS in isolation, except that we extend recently introduced task-based DVS policies for use in server environments with many concurrent tasks. 
The second policy uses request batching to conserve energy during periods of low workload intensity. The third policy uses both DVS and request batching mechanisms to reduce processor energy usage over a wide range of workload intensities. All the policies trade off system responsiveness to save energy. However, the policies employ the mechanisms in a feedback-driven control framework in order to conserve energy while maintaining a given quality of service level, as defined by a percentile-level response time. We evaluate the policies using Salsa, a web server simulator that has been extensively validated for both energy and response time against measurements from a commodity web server. Three daylong static web workloads from real web server systems are used to quantify the energy savings: the Nagano Olympics98 web server, a financial services company web site, and a disk intensive web workload. Our results show that when required to maintain a 90th-percentile response time of 50ms, the DVS and request batching policies save from 8.7% to 38% and from 3.1% to 27%, respectively, of the CPU energy used by the base system. The two policies provide these savings for complementary workload intensities. The combined policy is effective for all three workloads across a broad range of intensities, saving from 17% to 42% of the CPU energy.", "Motivated by distributed schedulers that combine the power-of-d-choices with late binding and systems that use replication with cancellation-on-start, we study the performance of the LL(d) policy which assigns a job to a server that currently has the least workload among d randomly selected servers in large-scale homogeneous clusters. We consider general job size distributions and propose a partial integro-differential equation to describe the evolution of the system. 
This equation relies on the earlier proven ansatz for LL(d) which asserts that the workload distribution of any finite set of queues becomes independent of one another as the number of servers tends to infinity. Based on this equation we propose a fixed point iteration for the limiting workload distribution and study its convergence." ] }
Another well-studied cluster load-balancing approach is to have heavily loaded servers hand off requests they receive to other, less loaded servers within the cluster, or to have lightly loaded servers attempt to pull tasks from heavily loaded ones (e.g., @cite_9 @cite_10 ). This can be achieved through techniques such as HTTP redirection (e.g., @cite_32 @cite_31 @cite_36 ), packet header rewriting (e.g., @cite_24 ), or remote script execution @cite_43 . HTTP redirection adds a client round trip of latency for every rescheduled request. TCP/IP hand-off and packet header rewriting require changes to the OS kernel or network interface drivers. Remote script execution requires trust between the serving entities.
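A toy sketch of load-triggered HTTP redirection: aside from the standard 200/302 status codes, every name, load value, and the threshold here is hypothetical, and no real HTTP stack is involved. The extra client round trip is visible in the 302 case:

```python
def handle(request_path, my_load, peer_loads, threshold=0.8):
    """Serve the request locally when below the load threshold;
    otherwise answer with an HTTP 302 redirect that points the client
    at the least-loaded peer (per this server's possibly stale view).

    The 302 costs the client an additional round trip before the
    request is actually served, as noted above.
    """
    if my_load < threshold:
        return 200, request_path                 # served locally
    target = min(peer_loads, key=peer_loads.get)  # least-loaded peer
    return 302, "http://%s%s" % (target, request_path)

print(handle("/index.html", 0.5, {"s2": 0.4}))
# -> (200, '/index.html')
print(handle("/index.html", 0.9, {"s2": 0.4, "s3": 0.7}))
# -> (302, 'http://s2/index.html')
```

TCP/IP hand-off variants avoid the extra client round trip by migrating the connection inside the cluster, at the cost of kernel or driver changes.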
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_32", "@cite_24", "@cite_43", "@cite_31", "@cite_10" ], "mid": [ "2151744612", "2759636699", "2159285576", "2120849241", "2167025919", "1969490101", "2963728009" ], "abstract": [ "Users of highly popular Web sites may experience long delays when accessing information. Upgrading content site infrastructure from a single node to a locally distributed Web cluster composed by multiple server nodes provides limited relief, because the cluster wide-area connectivity may become the bottleneck. A better solution is to distribute Web clusters over the Internet by placing content nodes in strategic locations. A geographically distributed architecture where the Domain Name System (DNS) servers evaluate network proximity and users are served from the closest cluster reduces network impact on response time. On the other hand, serving closest requests only may cause unbalanced servers and may increase system impact on response time. To achieve a scalable Web system, we propose to integrate DNS proximity scheduling with an HTTP request redirection mechanism that any Web server can activate. We demonstrate through simulation experiments that this further dispatching mechanism augments the percentage of requests with guaranteed response time, thereby enhancing the Quality of Service of geographically distributed Web sites. However, HTTP request redirection should be used selectively because the additional round-trip increases network impact on latency time experienced by users. As a further contribution, this paper proposes and compares various mechanisms to limit reassignments with no negative consequences on load balancing.", "Load balancing plays an important role in improving scalability and stability in Content Delivery Networks (CDNs) to meet the increasing demand on bandwidth. This paper proposed a modified algorithm that takes into account the equilibrium between load balancing and redirection proximity. 
We extended a fluid queue model which is adopted in the existing literatures to the overall CDN system. In the system, scheduler selects proper replica server for each redistributed request by exploiting load differences between them. Furthermore, through limiting the migration distance for each request, the total costs mainly associated with delay are also effectively optimized. The simulation result indicates that the proposed algorithm can efficiently reduce redirection cost compared to Control-Law Balancing (CLB) algorithm at the expense of a bit of performance sacrifice of queue balancing. Besides, we found that the proposed mechanism has more benefit on queue balancing than CLB algorithm as well, when selecting an appropriate distance threshold.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. asynchronous activation and centralized vs. distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. 
Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "Load balancing for distributed servers is a common issue in many applications and has been extensively studied. Several distributed load balancing schemes have been proposed that proactively route individual requests to appropriate servers to best balance the load and shorten request response time. These schemes do not require a centralized load balancer. Instead, each server is responsible for determining, for each request it receives from a client, to which server in the pool the request should be forwarded for processing. We propose a new request routing scheme that is more scalable to increasing number of servers and request load than the existing schemes. The method combines random server selection and next-neighbor load sharing techniques that together prevent the staleness of load information from building up when the number of servers increases. Our simulation shows that it outperforms existing schemes under a piggyback-based load update model.", "Datacenter networks provide high path diversity for traffic between machines. Load balancing traffic across these paths is important for both, latency- and throughput-sensitive applications. The standard load balancing techniques used today obliviously hash a flow to a random path. When long flows collide on the same path, this might lead to long lasting congestion while other paths could be underutilized, degrading performance of other flows as well. Recent proposals to address this shortcoming incur significant implementation complexity at the host that would actually slow down short flows (MPTCP), depend on relatively slow centralized controllers for rerouting large congesting flows (Hedera), or require custom switch hardware, hindering near-term deployment (DeTail). 
We propose FlowBender, a novel technique that: (1) Load balances distributively at the granularity of flows instead of packets, avoiding excessive packet reordering. (2) Uses end-host-driven rehashing to trigger dynamic flow-to-path assignment. (3) Recovers from link failures within a Retransmit Timeout (RTO). (4) Amounts to less than 50 lines of critical kernel code and is readily deployable in commodity data centers today. (5) Is very robust and simple to tune. We evaluate FlowBender using both simulations and a real testbed implementation, and show that it improves average and tail latencies significantly compared to state of the art techniques without incurring the significant overhead and complexity of other load balancing schemes.", "It is becoming increasingly common to construct network services using redundant resources geographically distributed across the Internet. Content Distribution Networks are a prime example. Such systems distribute client requests to an appropriate server based on a variety of factors---e.g., server load, network proximity, cache locality---in an effort to reduce response time and increase the system capacity under load. This paper explores the design space of strategies employed to redirect requests, and defines a class of new algorithms that carefully balance load, locality, and proximity. We use large-scale detailed simulations to evaluate the various strategies. These simulations clearly demonstrate the effectiveness of our new algorithms, which yield a 60--91% improvement in system capacity when compared with the best published CDN technology, yet user-perceived response latency remains low and the system scales well with the number of servers.", "We consider load balancing in a network of caching servers delivering contents to end users. Randomized load balancing via the so-called power of two choices is a well-known approach in parallel and distributed systems. 
In this framework, we investigate the tension between storage resources, communication cost, and load balancing performance. To this end, we propose a randomized load balancing scheme which simultaneously considers cache size limitation and proximity in the server redirection process. In contrast to the classical power of two choices setup, since the memory limitation and the proximity constraint cause correlation in the server selection process, we may not benefit from the power of two choices. However, we prove that in certain regimes of problem parameters, our scheme results in the maximum load of order @math (here @math is the network size). This is an exponential improvement compared to the scheme which assigns each request to the nearest available replica. Interestingly, the extra communication cost incurred by our proposed scheme, compared to the nearest replica strategy, is small. Furthermore, our extensive simulations show that the trade-off trend does not depend on the network topology and library popularity profile details." ] }
A lot of work has looked at balancing load across multi-server homogeneous web sites by leveraging the DNS service used to provide the mapping between a web page's URL and the IP address of a web server serving the URL. Round-robin DNS was proposed, where the DNS system maps requests to web servers in a round-robin fashion @cite_22 @cite_14 . Because DNS mappings have a Time-to-Live (TTL) field associated with them and tend to be cached at the local name server in each domain, this approach can lead to a large number of client requests from a particular domain getting mapped to the same web server during the TTL period. Thus, round-robin DNS achieves good balance only so long as each domain has the same client request rate. Moreover, round-robin load-balancing does not work in a heterogeneous peer-to-peer context because each serving replica gets a uniform rate of requests regardless of whether it can handle this rate. Work that takes into account domain request rate improves upon round-robin DNS and is described by @cite_34 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_34" ], "mid": [ "1747723070", "2159285576", "2151744612" ], "abstract": [ "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers.", "Replication of information among multiple World Wide Web servers is necessary to support high request rates to popular Web sites. A clustered Web server organization is preferable to multiple independent mirrored servers because it maintains a single interface to the users and has the potential to be more scalable, fault-tolerant and better load-balanced. In this paper, we propose a Web cluster architecture in which the Domain Name System (DNS) server, which dispatches the user requests among the servers through the URL name to the IP address mapping mechanism, is integrated with a redirection request mechanism based on HTTP. This should alleviate the side-effect of caching the IP address mapping at intermediate name servers. We compare many alternative mechanisms, including synchronous vs. asynchronous activation and centralized vs.
distributed decisions on redirection. Moreover, we analyze the reassignment of entire domains or individual client requests, different types of status information and different server selection policies for redirecting requests. Our results show that the combination of centralized and distributed dispatching policies allows the Web server cluster to handle high load skews in the WWW environment.", "Users of highly popular Web sites may experience long delays when accessing information. Upgrading content site infrastructure from a single node to a locally distributed Web cluster composed by multiple server nodes provides limited relief, because the cluster wide-area connectivity may become the bottleneck. A better solution is to distribute Web clusters over the Internet by placing content nodes in strategic locations. A geographically distributed architecture where the Domain Name System (DNS) servers evaluate network proximity and users are served from the closest cluster reduces network impact on response time. On the other hand, serving closest requests only may cause unbalanced servers and may increase system impact on response time. To achieve a scalable Web system, we propose to integrate DNS proximity scheduling with an HTTP request redirection mechanism that any Web server can activate. We demonstrate through simulation experiments that this further dispatching mechanism augments the percentage of requests with guaranteed response time, thereby enhancing the Quality of Service of geographically distributed Web sites. However, HTTP request redirection should be used selectively because the additional round-trip increases network impact on latency time experienced by users. As a further contribution, this paper proposes and compares various mechanisms to limit reassignments with no negative consequences on load balancing." ] }
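The TTL-caching pitfall of round-robin DNS described in the related-work passage above can be sketched in a few lines. This is an illustrative toy, not any cited system's implementation: the server names, request rates, and the 30-second TTL are made up. One high-rate domain caches a mapping and funnels all of its requests to a single server for the whole TTL window, even though the mappings themselves were handed out in perfectly balanced round-robin order.

```python
import itertools

servers = ["s1", "s2", "s3"]
rr = itertools.cycle(servers)            # round-robin mapping order

cache = {}                               # domain -> (server, expiry time)
assigned = {s: 0 for s in servers}       # load each server actually receives

def resolve(domain, now, ttl=30):
    """Return the cached mapping if still fresh; otherwise hand out the
    next server in round-robin order and cache it for `ttl` seconds."""
    server, expiry = cache.get(domain, (None, -1))
    if now >= expiry:
        server = next(rr)
        cache[domain] = (server, now + ttl)
    return server

# One "hot" domain issues 10x the requests of two quiet domains; during
# each TTL window every one of those requests hits the same server.
for t in range(60):
    assigned[resolve("hot.example", t)] += 10
    assigned[resolve("quiet-a.example", t)] += 1
    assigned[resolve("quiet-b.example", t)] += 1

print(assigned)  # {'s1': 600, 's2': 60, 's3': 60}: balanced mappings, skewed load
```

The skew disappears only if every domain generates the same request rate, which is exactly the limitation the related-work text points out.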
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
later extend this work to balance load across a set of widely distributed heterogeneous web servers @cite_2 . This work proposes the use of adaptive TTLs, where the TTL for a DNS mapping is set inversely proportional to the domain's local client request rate for the mapping of interest (as reported by the domain's local name server). The TTL is at the same time set to be proportional to the chosen web server's maximum capacity. So web servers with high maximum capacity will have DNS mappings with longer TTLs, and domains with low request rates will receive mappings with longer TTLs. Max-Cap, the algorithm proposed in this thesis, also uses the maximum capacities of the serving replica nodes to allocate requests proportionally. The main difference is that in the work by , the root DNS scheduler acts as a centralized dispatcher setting all DNS mappings and is assumed to know what the request rate in the requesting domain is like. In the peer-to-peer case the authority node has no idea what the request rate throughout the network is like, nor how large is the set of requesting nodes.
{ "cite_N": [ "@cite_2" ], "mid": [ "1747723070" ], "abstract": [ "With ever increasing Web traffic, a distributed multi server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieve the scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers." ] }
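The adaptive-TTL policy summarized above sets a mapping's TTL proportionally to the chosen server's maximum capacity and inversely proportionally to the requesting domain's local request rate. A minimal sketch of that rule, with an assumed linear scaling and invented clamping bounds (the cited work does not prescribe these exact numbers):

```python
def adaptive_ttl(server_capacity, domain_request_rate,
                 scale=1.0, ttl_min=5.0, ttl_max=3600.0):
    """TTL proportional to server capacity and inversely proportional
    to the domain's request rate, clamped to illustrative bounds."""
    ttl = scale * server_capacity / domain_request_rate
    return max(ttl_min, min(ttl_max, ttl))

# A quiet domain keeps a high-capacity server's mapping cached for a
# long time; a busy domain must re-resolve often, giving the scheduler
# frequent chances to rebalance it onto another server.
quiet = adaptive_ttl(server_capacity=1000, domain_request_rate=2)    # 500.0
busy = adaptive_ttl(server_capacity=1000, domain_request_rate=500)   # clamped to 5.0
print(quiet, busy)
```

The key difference the related-work text draws is that this rule assumes a central DNS scheduler that knows per-domain request rates, whereas Max-Cap's authority node has no such global view.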
cs0209023
2952481296
This paper studies the problem of load-balancing the demand for content in a peer-to-peer network across heterogeneous peer nodes that hold replicas of the content. Previous decentralized load balancing techniques in distributed systems base their decisions on periodic updates containing information about load or available capacity observed at the serving entities. We show that these techniques do not work well in the peer-to-peer context; either they do not address peer node heterogeneity, or they suffer from significant load oscillations. We propose a new decentralized algorithm, Max-Cap, based on the maximum inherent capacities of the replica nodes and show that unlike previous algorithms, it is not tied to the timeliness or frequency of updates. Yet, Max-Cap can handle the heterogeneity of a peer-to-peer environment without suffering from load oscillations.
Lottery scheduling is another technique that, like Max-Cap, uses proportional allocation. This approach has been proposed in the context of resource allocation within an operating system (the Mach microkernel) @cite_23 . Client processes hold tickets that give them access to particular resources in the operating system. Clients are allocated resources by a centralized lottery scheduler proportionally to the number of tickets they own and can donate their tickets to other clients in exchange for tickets at a later point. Max-Cap is similar in that it allocates requests to a replica node proportionally to the maximum capacity of the replica node. The main difference is that in Max-Cap the allocation decision is completely distributed with no opportunity for exchange of resources across replica nodes.
{ "cite_N": [ "@cite_23" ], "mid": [ "2111087562" ], "abstract": [ "This paper presents lottery scheduling, a novel randomized resource allocation mechanism. Lottery scheduling provides efficient, responsive control over the relative execution rates of computations. Such control is beyond the capabilities of conventional schedulers, and is desirable in systems that service requests of varying importance, such as databases, media-based applications, and networks. Lottery scheduling also supports modular resource management by enabling concurrent modules to insulate their resource allocation policies from one another. A currency abstraction is introduced to flexibly name, share, and protect resource rights. We also show that lottery scheduling can be generalized to manage many diverse resources, such as I/O bandwidth, memory, and access to locks. We have implemented a prototype lottery scheduler for the Mach 3.0 microkernel, and found that it provides flexible and responsive control over the relative execution rates of a wide range of applications. The overhead imposed by our unoptimized prototype is comparable to that of the standard Mach timesharing policy." ] }
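The proportional-share draw at the heart of lottery scheduling, as recalled in the related-work passage above, can be sketched directly. Client names and ticket counts are invented, and the real Mach implementation also supports ticket transfers and currencies, which this toy omits:

```python
import random
from collections import Counter

def hold_lottery(tickets, rng):
    """Pick a client with probability proportional to its ticket count."""
    draw = rng.randrange(sum(tickets.values()))
    for client, count in tickets.items():
        if draw < count:
            return client
        draw -= count
    raise AssertionError("unreachable")

rng = random.Random(0)                   # seeded for reproducibility
tickets = {"db": 600, "media": 300, "batch": 100}
wins = Counter(hold_lottery(tickets, rng) for _ in range(10_000))
print(wins)                              # roughly a 6:3:1 win ratio
```

Max-Cap's allocation is proportional in the same sense, but, as the text notes, it is computed in a fully distributed way with no central lottery and no ticket exchange between replicas.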
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Currently, research in classification and clustering methods for XML or semi-structured documents is very active. New document models have been proposed by ( @cite_1 , @cite_7 ) to extend the classical vector model and take into account both the structure and the textual part. It amounts to distinguishing words appearing in different types of XML elements in a generic way, while our approach uses the structure to select (manually) the type of elements relevant to a specific mining objective.
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "1575842006", "2045825058" ], "abstract": [ "A semi-structured document has more structured information compared to an ordinary document, and the relation among semi-structured documents can be fully utilized. In order to take advantage of the structure and link information in a semi-structured document for better mining, a structured link vector model (SLVM) is presented in this paper, where a vector represents a document, and vectors' elements are determined by terms, document structure and neighboring documents. Text mining based on SLVM is described in the procedure of K-means for briefness and clarity: calculating document similarity and calculating cluster center. The clustering based on SLVM performs significantly better than that based on a conventional vector space model in the experiments, and its F value increases from 0.65-0.73 to 0.82-0.86.", "In this paper, we present a probabilistic method that can improve the efficiency of document classification when applied to structured documents. The analysis of the structure of a document is the starting point of document classification. Our method is designed to augment other classification schemes and complement pre-filtering information extraction procedures to reduce uncertainties. To this end, a probabilistic distribution on the structure of XML documents is introduced. We show how to parameterise existing learning methods to describe the structure distribution efficiently. The learned distribution is then used to predict the classes of unseen documents. Novelty detection making use of the structure-based distribution function is also discussed. Demonstration on model documents and on Internet XML documents are presented." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
XML document clustering has been used mostly for visualizing large collections of documents, for example @cite_2 cluster AML (Astronomical Markup Language) documents based only on their links. @cite_3 propose a model similar to @cite_1 but adding in- and out-links to the model, and they use it for clustering rather than classification. @cite_4 also propose a BitCube model for clustering that represents documents based on their ePaths (paths of text elements) and textual content. Their focus is on evaluating time performance rather than clustering effectiveness.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "3603", "2084370216", "2590145195", "2007675602" ], "abstract": [ "In this paper, we describe a new bitmap indexing technique to cluster XML documents. XML is a new standard for exchanging and representing information on the Internet. Documents can be hierarchically represented by XML-elements. XML documents are represented and indexed using a bitmap indexing technique. We define the similarity and popularity operations available in bitmap indexes and propose a method for partitioning a XML document set. Furthermore, a 2-dimensional bitmap index is extended to a 3dimensional bitmap index, called BitCube. We define statistical measurements in the BitCube: mean, mode, standard derivation, and correlation coefficient. Based on these measurements, we also define the slice, project, and dice operations on a BitCube. BitCube can be manipulated efficiently and improves the performance of document retrieval.", "Abstract Self-organization or clustering of data objects can be a powerful aid towards knowledge discovery in distributed databases. The web presents opportunities for such clustering of documents and other data objects. This potential will be even more pronounced when XML becomes widely used over the next few years. Based on clustering of XML links, we explore a visualization approach for discovering knowledge on the web.", "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogazici University Printhouse. 
http://www.issi2015.org/files/downloads/all-papers/1042.pdf, 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Based on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.
This model takes advantage of particular graph invariants that can be linked to melodic themes as metadata in order to characterize all their possible modifications through specific transformations and that can be exploited in filtering algorithms. We provide a similarity function and show through an evaluation stage how it improves existing methods, particularly in the case of same-structured themes." ] }
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
Another direction is clustering the Web documents returned as answers to a query, as an alternative to ranked lists. @cite_11 propose an original algorithm based on a suffix tree structure, which is linear in the size of the collection and incremental, an important feature for supporting online clustering.
{ "cite_N": [ "@cite_11" ], "mid": [ "2100958137" ], "abstract": [ "Users of Web search engines are often forced to sift through the long ordered list of document \"snippets\" returned by the engines. The IR community has explored document clustering as an alternative method of organizing retrieval results, but clustering has yet to be deployed on the major search engines. The paper articulates the unique requirements of Web document clustering and reports on the first evaluation of clustering methods in this domain. A key requirement is that the methods create their clusters based on the short snippets returned by Web search engines. Surprisingly, we find that clusters based on snippets are almost as good as clusters created using the full text of Web documents. To satisfy the stringent requirements of the Web domain, we introduce an incremental, linear time (in the document collection size) algorithm called Suffix Tree Clustering (STC). which creates clusters based on phrases shared between documents. We show that STC is faster than standard clustering methods in this domain, and argue that Web document clustering via STC is both feasible and potentially beneficial." ] }
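As a much-simplified illustration of the phrase-sharing idea behind Suffix Tree Clustering, the sketch below forms "base clusters" from documents that share a word phrase. The real STC algorithm builds these incrementally with a suffix tree over all phrase lengths and then scores and merges base clusters; none of that is attempted here, and the snippets are invented:

```python
from collections import defaultdict

def base_clusters(snippets, n=2):
    """Map each n-word phrase shared by 2+ documents to the set of
    document ids containing it (a crude stand-in for STC base clusters)."""
    clusters = defaultdict(set)
    for doc_id, text in enumerate(snippets):
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            clusters[" ".join(words[i:i + n])].add(doc_id)
    # a base cluster needs at least two documents sharing the phrase
    return {p: ids for p, ids in clusters.items() if len(ids) > 1}

snippets = [
    "suffix tree clustering of web snippets",
    "clustering web snippets with suffix trees",
    "greedy routing on a ring",
]
print(base_clusters(snippets))  # {'web snippets': {0, 1}}
```

Because each new snippet only adds its own phrases, this grouping can be maintained incrementally as results stream in, which is the property the cited work emphasizes for online clustering.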
cs0507024
1892314125
This paper presents some experiments in clustering homogeneous XML documents to validate an existing classification or more generally an organisational structure. Our approach integrates techniques for extracting knowledge from documents with unsupervised classification (clustering) of documents. We focus on the feature selection used for representing documents and its impact on the emerging classification. We mix the selection of structured features with fine textual selection based on syntactic characteristics. We illustrate and evaluate this approach with a collection of Inria activity reports for the year 2003. The objective is to cluster projects into larger groups (Themes), based on the keywords or different chapters of these activity reports. We then compare the results of clustering using different feature selections, with the official theme structure used by Inria.
@cite_5 compare different text feature extractions, and variants of a linear-time clustering algorithm using random seed selection with center adjustment.
{ "cite_N": [ "@cite_5" ], "mid": [ "2070412788" ], "abstract": [ "Clustering is a powerful technique for large-scale topic discovery from text. It involves two phases: first, feature extraction maps each document or record to a point in high-dimensional space, then clustering algorithms automatically group the points into a hierarchy of clusters. We describe an unsupervised, near-linear time text clustering system that offers a number of algorithm choices for each phase. We introduce a methodology for measuring the quality of a cluster hierarchy in terms of F-measure, and present the results of experiments comparing different algorithms. The evaluation considers some feature selection parameters (tf-idf and feature vector length) but focuses on the clustering algorithms, namely techniques from Scatter/Gather (buckshot, fractionation, and split join) and k-means. Our experiments suggest that continuous center adjustment contributes more to cluster quality than seed selection does. It follows that using a simpler seed selection algorithm gives a better time/quality tradeoff. We describe a refinement to center adjustment, “vector average damping,” that further improves cluster quality. We also compare the near-linear time algorithms to a group average greedy agglomerative clustering algorithm to demonstrate the time/quality tradeoff quantitatively." ] }
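The "center adjustment" loop whose importance the abstract above reports can be sketched as a plain k-means pass. The 2-D points below stand in for tf-idf vectors, and the seeds are fixed deterministically here for reproducibility (the cited system draws seeds randomly, which is exactly the step its experiments found to matter less than the adjustment):

```python
import math

def kmeans(points, centers, iters=20):
    """Assign points to nearest centers, then adjust each center to its
    group's mean; repeat. Returns the final centers and groups."""
    centers = list(centers)
    k = len(centers)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                  # assignment step
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[j].append(p)
        for j, g in enumerate(groups):    # center adjustment step
            if g:
                centers[j] = tuple(sum(col) / len(g) for col in zip(*g))
    return centers, groups

points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, groups = kmeans(points, centers=[points[0], points[-1]])
print([len(g) for g in groups])  # [3, 3]: the two natural blobs
```

Even with crude seeds, repeated center adjustment pulls each center to the mean of its blob, which illustrates why the cited experiments favored spending effort there rather than on seed selection.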
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our construction is the first to achieve asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
For routing on a circle, the best-known constructions have @math and @math . Examples include: Chord @cite_15 with distance function @math , a variant of Chord with "bidirectional links" @cite_4 and distance function @math , and the hypercube with distance function @math . In this paper, we improve upon all of these constructions by showing how to route in @math hops in the worst case with @math links per node.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2070219632", "2949856235" ], "abstract": [ "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure.", "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds." ] }
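The unidirectional Chord scheme discussed above can be made concrete: 2**b ids on a ring, fingers at clockwise offsets 2**k, and greedy routing that always takes the largest finger not overshooting the target, so each hop clears the top set bit of the remaining clockwise distance. The bidirectional variant of @cite_4 (not implemented here) additionally allows counter-clockwise steps to shorten routes. B = 10 is an arbitrary illustrative choice:

```python
B = 10
N = 1 << B                      # 2**B ids on the ring

def greedy_hops(src, dst):
    """Hops taken by greedy clockwise routing with fingers at 2**k."""
    hops, cur = 0, src
    while cur != dst:
        d = (dst - cur) % N                  # clockwise distance left
        step = 1 << (d.bit_length() - 1)     # largest finger <= d
        cur = (cur + step) % N
        hops += 1
    return hops

# worst case: a clockwise distance with all B bits set needs B hops
worst = max(greedy_hops(0, t) for t in range(N))
print(worst)  # 10, i.e. exactly B
```

The hop count equals the number of 1-bits in the binary representation of the clockwise distance, which is why the worst case over a 2**B-node ring is exactly B.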
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our construction is the first to achieve asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Routing with distance function @math has been studied for Chord @cite_4 , a popular topology for P2P networks. Chord has @math nodes, with out-degree @math per node. The longest route takes @math hops. In terms of @math and @math , the largest-sized Chord network has @math nodes. Moreover, @math and @math cannot be chosen independently -- they are functionally related. Both @math and @math are @math . Analysis of routing on Chord leaves open the following question:
{ "cite_N": [ "@cite_4" ], "mid": [ "2070219632" ], "abstract": [ "We propose optimal routing algorithms for Chord [1], a popular topology for routing in peer-to-peer networks. Chord is an undirected graph on 2b nodes arranged in a circle, with edges connecting pairs of nodes that are 2k positions apart for any k ≥ 0. The standard Chord routing algorithm uses edges in only one direction. Our algorithms exploit the bidirectionality of edges for optimality. At the heart of the new protocols lie algorithms for writing a positive integer d as the difference of two non-negative integers d′ and d″ such that the total number of 1-bits in the binary representation of d′ and d″ is minimized. Given that Chord is a variant of the hypercube, the optimal routes possess a surprising combinatorial structure." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our construction is the first to achieve asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Xu et al. @cite_16 provide a partial answer to the above question by studying routing with distance function @math over graph topologies. A graph over @math nodes placed in a circle is said to be uniform if the set of clockwise offsets of out-going links is identical for all nodes. Chord is an example of a uniform graph. Xu et al. show that for any uniform graph with @math links per node, routing with distance function @math necessitates @math hops in the worst-case.
{ "cite_N": [ "@cite_16" ], "mid": [ "2950469527" ], "abstract": [ "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erd o s-R 'enyi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math ." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our construction is the first to achieve asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Cordasco et al. @cite_19 extend the result of Xu et al. @cite_16 by showing that routing with distance function @math in a uniform graph over @math nodes satisfies the inequality @math , where @math denotes the out-degree of each node, @math is the length of the longest path, and @math denotes the @math Fibonacci number. It is well-known that @math , where @math is the golden ratio and @math denotes the integer closest to real number @math . It follows that @math . Cordasco et al. show that the inequality is strict if @math . For @math , they construct uniform graphs based upon Fibonacci numbers which achieve an optimal tradeoff between @math and @math .
{ "cite_N": [ "@cite_19", "@cite_16" ], "mid": [ "2949856235", "2099470983" ], "abstract": [ "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds.", "It is proven that the connected pathwidth of any graph @math is at most @math , where @math is the pathwidth of @math . The method is constructive, i.e., it yields an efficient algorithm that for a given path decomposition of width @math computes a connected path decomposition of width at most @math . The running time of the algorithm is @math , where @math is the number of “bags” in the input path decomposition. The motivation for studying connected path decompositions comes from the connection between the pathwidth and the search number of a graph. One of the advantages of the above bound for connected pathwidth is an inequality @math , where @math and @math are the connected search number and the search number of @math , respectively. Moreover, the algorithm presented in this work can be used to convert a given search strategy using @math searchers into a (monotone) connected one using @math searc..." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The results in @cite_4 @cite_16 @cite_19 leave open the question whether there exists any graph construction that permits routes of length @math with distance function @math and/or @math . Papillon provides an answer to the problem by constructing a non-uniform graph --- the set of clockwise offsets of out-going links is different for different nodes.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_4" ], "mid": [ "1512819151", "2950552904", "1623319572" ], "abstract": [ "The emergence of real life graphs with billions of nodes poses significant challenges for managing and querying these graphs. One of the fundamental queries submitted to graphs is the shortest distance query. Online BFS (breadth-first search) and offline pre-computing pairwise shortest distances are prohibitive in time or space complexity for billion-node graphs. In this paper, we study the feasibility of building distance oracles for billion-node graphs. A distance oracle provides approximate answers to shortest distance queries by using a pre-computed data structure for the graph. Sketch-based distance oracles are good candidates because they assign each vertex a sketch of bounded size, which means they have linear space complexity. However, state-of-the-art sketch-based distance oracles lack efficiency or accuracy when dealing with big graphs. In this paper, we address the scalability and accuracy issues by focusing on optimizing the three key factors that affect the performance of distance oracles: landmark selection, distributed BFS, and answer generation. We conduct extensive experiments on both real networks and synthetic networks to show that we can build distance oracles of affordable cost and efficiently answer shortest distance queries even for billion-node graphs.", "We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . 
* We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant eps>0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .", "A (1+e)-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The most relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup, JACM'04) and, subsequently, minor-excluded graphs (Abraham and Gavoille, PODC'06). However, these require Ω(e-1n lg n) space for n-node graphs. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only O(n) space. The big O hides only a fixed constant, independent of e and independent of genus or size of an excluded minor. 
The preprocessing times for our distance oracle are also faster than those for the previously known constructions. For planar graphs, the preprocessing time is O(nlg2 n). However, our constructions have slower query times. For planar graphs, the query time is O(e-2 lg2 n). For all our linear-space results, we can in fact ensure, for any δ > 0, that the space required is only 1 + δ times the space required just to represent the graph itself." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Kleinberg's construction has found applications in the design of overlay routing networks for Distributed Hash Tables. Symphony @cite_13 is an adaptation of Kleinberg's construction in a single dimension. The idea is to place @math nodes in a virtual circle and to equip each node with @math out-going links. In the resulting network, the average path length of routes with distance function @math is @math hops. Note that unlike Kleinberg's network, the space here is virtual and so are the distances and the sense of routing. The same complexity was achieved with a slightly different Kleinberg-style construction by Aspnes et al. @cite_18 . In the same paper, it was also shown that any symmetric, randomized degree- @math network has @math routing complexity.
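The Symphony-style construction described in the paragraph above can be sketched in a few lines. This is a rough illustration under our own simplifying assumptions, not Symphony's actual implementation: each node keeps a successor link plus k long links whose clockwise offsets are drawn from the harmonic density p(x) ∝ 1/x via inverse-transform sampling (x = n**r for uniform r), and the names `symphony_links` / `greedy_route` as well as the per-visit resampling of links are ours.

```python
import random

def symphony_links(u, n, k, rng):
    """k long links per node, clockwise offset drawn from the harmonic
    density p(x) ~ 1/x on [1, n) (inverse-transform: x = n**r for uniform
    r in [0, 1)), plus a successor link that guarantees progress.
    Note: real Symphony fixes links once at node join; resampling per
    visit is a memoryless simplification for this sketch."""
    links = {(u + 1) % n}
    for _ in range(k):
        links.add((u + int(n ** rng.random())) % n)
    return links

def greedy_route(src, dst, n, k, rng):
    """Greedy routing by clockwise distance d(u, v) = (v - u) mod n."""
    hops, u = 0, src
    while u != dst:
        u = min(symphony_links(u, n, k, rng), key=lambda v: (dst - v) % n)
        hops += 1
    return hops
```

The successor link ensures the clockwise distance strictly decreases at every hop, so routes always terminate; with k long links per node the expected route length is the O(log^2 n / k) figure quoted above.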
{ "cite_N": [ "@cite_18", "@cite_13" ], "mid": [ "1992467531", "2107997203" ], "abstract": [ "We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg [5], is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.", "Consider @math nodes connected by wires to make an n-dimensional binary cube. Suppose that initially the nodes contain one packet each addressed to distinct nodes of the cube. We show that there is a distributed randomized algorithm that can route every packet to its destination without two packets passing down the same wire at any one time, and finishes within time @math with overwhelming probability for all such routing requests. Each packet carries with it @math bits of bookkeeping information. 
No other communication among the nodes takes place. The algorithm offers the only scheme known for realizing arbitrary permutations in a sparse N node network in @math time and has evident applications in the design of general purpose parallel computers." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Papillon outperforms all of the above randomized constructions, using degree @math and achieving @math routing. It should be possible to randomize Papillon along similar principles to the Viceroy @cite_14 randomized construction of the butterfly network, though we do not pursue this direction here.
{ "cite_N": [ "@cite_14" ], "mid": [ "1980177572" ], "abstract": [ "In this paper we study randomized algorithms for circuit switching on multistage networks related to the butterfly. We devise algorithms that route messages by constructing circuits (or paths) for the messages with small congestion, dilation, and setup time. Our algorithms are based on the idea of having each message choose a route from two possibilities, a technique that has previously proven successful in simpler load balancing settings. As an application of our techniques, we propose a novel design for a data server." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
With @math out-going links per node, several graphs over @math nodes in a circle support routes with @math greedy hops. Deterministic graphs with this property include: (a) the original Chord @cite_15 topology with distance function @math , (b) Chord with edges treated as bidirectional @cite_4 with distance function @math . This is also the known lower bound on any uniform graph with distance function @math @cite_16 . Randomized graphs with the same tradeoff include randomized-Chord @cite_2 @cite_22 and Symphony @cite_13 -- both with distance function @math . With degree @math , Symphony @cite_13 has routes of length @math on average. The network of @cite_18 also supports routes of length @math on average, with a gap to the known lower bound on their network of @math .
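The deterministic tradeoff just described — Θ(log n) out-going links and Θ(log n) greedy hops on a ring — can be illustrated with a minimal Chord-style sketch. Function names are ours, and real Chord additionally maintains successor lists and handles joins and departures; this only shows the finger layout and the greedy rule:

```python
import math

def chord_fingers(u, n):
    """Out-going links of node u in a Chord-style ring over n nodes:
    clockwise offsets 1, 2, 4, ..., n/2 (Theta(log n) links)."""
    return [(u + (1 << i)) % n for i in range(int(math.log2(n)))]

def greedy_route(src, dst, n):
    """Greedy routing with clockwise distance d(u, v) = (v - u) mod n.
    Overshooting fingers wrap around the ring and look far away, so the
    largest non-overshooting finger is always chosen."""
    hops, u = 0, src
    while u != dst:
        u = min(chord_fingers(u, n), key=lambda v: (dst - v) % n)
        hops += 1
    return hops

# Each hop clears the top set bit of the remaining clockwise distance,
# so the hop count equals the popcount of (dst - src) mod n <= log2(n).
n = 1 << 16
assert greedy_route(0, n - 1, n) == 16  # all 16 bits of the distance set
```

The final assertion is the worst case of the paragraph's bound: with 16 links per node over 2^16 nodes, no greedy route exceeds 16 hops.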
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2900102855", "2568950526", "2951371116", "2950469527", "2037549896", "2787728049", "2240957909" ], "abstract": [ "In the distributed all-pairs shortest paths problem (APSP), every node in the weighted undirected distributed network (the CONGEST model) needs to know the distance from every other node using least number of communication rounds (typically called time complexity ). The problem admits @math -approximation @math -time algorithm and a nearly-tight @math lower bound [Nanongkai, STOC'14; Lenzen and Patt-Shamir PODC'15] @math , @math and @math hide polylogarithmic factors. Note that the lower bounds also hold even in the unweighted case and in the weighted case with polynomial approximation ratios LenzenP_podc13,HolzerW12,PelegRT12,Nanongkai-STOC14 . . For the exact case, Elkin [STOC'17] presented an @math time bound, which was later improved to @math [Huang, Nanongkai, Saranurak FOCS'17]. It was shown that any super-linear lower bound (in @math ) requires a new technique [Censor-Hillel, Khoury, Paz, DISC'17], but otherwise it remained widely open whether there exists a @math -time algorithm for the exact case, which would match the best possible approximation algorithm. This paper resolves this question positively: we present a randomized (Las Vegas) @math -time algorithm, matching the lower bound up to polylogarithmic factors. Like the previous @math bound, our result works for directed graphs with zero (and even negative) edge weights. In addition to the improved running time, our algorithm works in a more general setting than that required by the previous @math bound; in our setting (i) the communication is only along edge directions (as opposed to bidirectional), and (ii) edge weights are arbitrary (as opposed to integers in 1, 2, ... poly(n) ). 
...", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "We consider the problem of topology recognition in wireless (radio) networks modeled as undirected graphs. Topology recognition is a fundamental task in which every node of the network has to output a map of the underlying graph i.e., an isomorphic copy of it, and situate itself in this map. In wireless networks, nodes communicate in synchronous rounds. In each round a node can either transmit a message to all its neighbors, or stay silent and listen. 
At the receiving end, a node @math hears a message from a neighbor @math in a given round, if @math listens in this round, and if @math is its only neighbor that transmits in this round. Nodes have labels which are (not necessarily different) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on wireless networks modeled by trees, and we investigate two problems. What is the shortest labeling scheme that permits topology recognition in all wireless tree networks of diameter @math and maximum degree @math ? What is the fastest topology recognition algorithm working for all wireless tree networks of diameter @math and maximum degree @math , using such a short labeling scheme? We are interested in deterministic topology recognition algorithms. For the first problem, we show that the minimum length of a labeling scheme allowing topology recognition in all trees of maximum degree @math is @math . For such short schemes, used by an algorithm working for the class of trees of diameter @math and maximum degree @math , we show almost matching bounds on the time of topology recognition: an upper bound @math , and a lower bound @math , for any constant @math .", "We consider a system of @math servers inter-connected by some underlying graph topology @math . Tasks arrive at the various servers as independent Poisson processes of rate @math . Each incoming task is irrevocably assigned to whichever server has the smallest number of tasks among the one where it appears and its neighbors in @math . Tasks have unit-mean exponential service times and leave the system upon service completion. The above model has been extensively investigated in the case @math is a clique. Since the servers are exchangeable in that case, the queue length process is quite tractable, and it has been proved that for any @math , the fraction of servers with two or more tasks vanishes in the limit as @math . 
For an arbitrary graph @math , the lack of exchangeability severely complicates the analysis, and the queue length process tends to be worse than for a clique. Accordingly, a graph @math is said to be @math -optimal or @math -optimal when the occupancy process on @math is equivalent to that on a clique on an @math -scale or @math -scale, respectively. We prove that if @math is an Erdős–Rényi random graph with average degree @math , then it is with high probability @math -optimal and @math -optimal if @math and @math as @math , respectively. This demonstrates that optimality can be maintained at @math -scale and @math -scale while reducing the number of connections by nearly a factor @math and @math compared to a clique, provided the topology is suitably random. It is further shown that if @math contains @math bounded-degree nodes, then it cannot be @math -optimal. In addition, we establish that an arbitrary graph @math is @math -optimal when its minimum degree is @math , and may not be @math -optimal even when its minimum degree is @math for any @math .", "ean distance is at most r, for some prescribed r. We show that monotone properties for this class of graphs have sharp thresholds by reducing the problem to bounding the bottleneck matching on two sets of n points distributed uniformly in [0, 1] d . We present upper bounds on the threshold width, and show that our bound is sharp for d = 1 and at most a sublogarithmic factor away for d ≥ 2. Interestingly, the threshold width is much sharper for random geometric graphs than for Bernoulli random graphs. 
Further, a random geometric graph is shown to be a subgraph, with high probability, of another independently drawn random geometric graph with a slightly larger radius; this property is shown to have no analogue for Bernoulli random graphs.", "The well-known @math -disjoint path problem ( @math -DPP) asks for pairwise vertex-disjoint paths between @math specified pairs of vertices @math in a given graph, if they exist. The decision version of the shortest @math -DPP asks for the length of the shortest (in terms of total length) such paths. Similarly the search and counting versions ask for one such and the number of such shortest set of paths, respectively. We restrict attention to the shortest @math -DPP instances on undirected planar graphs where all sources and sinks lie on a single face or on a pair of faces. We provide efficient sequential and parallel algorithms for the search versions of the problem answering one of the main open questions raised by Colin de Verdiere and Schrijver for the general one-face problem. We do so by providing a randomised @math algorithm along with an @math time randomised sequential algorithm. We also obtain deterministic algorithms with similar resource bounds for the counting and search versions. In contrast, previously, only the sequential complexity of decision and search versions of the \"well-ordered\" case has been studied. For the one-face case, sequential versions of our routines have better running times for constantly many terminals. In addition, the earlier best known sequential algorithms (e.g. ) were randomised while ours are also deterministic. The algorithms are based on a bijection between a shortest @math -tuple of disjoint paths in the given graph and cycle covers in a related digraph. This allows us to non-trivially modify established techniques relating counting cycle covers to the determinant. 
We further need to do a controlled inclusion-exclusion to produce a polynomial sum of determinants such that all \"bad\" cycle covers cancel out in the sum allowing us to count \"good\" cycle covers.", "We present a randomized algorithm for dynamic graph connectivity. With failure probability less than @math (for any constant @math we choose), our solution has worst case running time @math per edge insertion, @math per edge deletion, and @math per query, where @math is the number of vertices. The previous best algorithm has worst case running time @math per edge insertion and @math per edge deletion. The improvement is made by reducing the randomness used in the previous result, so that we save a @math factor in update time. Specifically, kapron2013dynamic uses @math copies of a data structure in order to boost a success probability from @math to @math . We show that, in fact though, because of the special structure of their algorithm, this boosting via repetition is unnecessary. Rather, we can still obtain the same correctness guarantee with high probability by arguing via a new invariant, without repetition." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
The construction demonstrates that we can indeed design networks in which greedy routing along these metrics has asymptotically optimal routing complexity. Our contribution is a family of networks that extends the Butterfly network family, so as to facilitate efficient greedy routing. With @math links per node, greedy routes are @math in the worst-case, which is asymptotically optimal. For @math , this beats the lower bound of @cite_18 on symmetric, randomized greedy routing networks (and it meets it for @math ). In the specific case of @math , our greedy routing achieves @math average route length.
{ "cite_N": [ "@cite_18" ], "mid": [ "2160405192" ], "abstract": [ "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e.g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)-greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Ours is the first construction with asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Recent work @cite_9 explores the surprising advantages of greedy routing with lookahead in randomized graphs over @math nodes in a circle. The idea behind lookahead is to take a neighbor's neighbors into account to make routing decisions. It shows that greedy routing with lookahead achieves @math expected route length in Symphony @cite_13 . For other networks which have @math out-going links per node, e.g., randomized-Chord @cite_2 @cite_22 , randomized-hypercubes @cite_2 , skip-graphs @cite_20 and SkipNet @cite_8 , average path length is @math hops. Among these networks, Symphony and randomized-Chord use routing with distance function @math . Other networks use a different distance function (none of them uses @math ). For each of these networks, with @math out-going links per node, it was established that plain greedy routing (without lookahead) is sub-optimal and achieves @math expected route lengths. The results suggest that lookahead has significant impact on routing.
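The lookahead idea above can be sketched concretely. This is a hedged illustration under our own assumptions: `make_links` and `route` are illustrative names, the links are harmonic (small-world style) random offsets rather than any specific cited topology, and restricting both routing modes to progress-making neighbors is a simplification of ours to guarantee termination, not part of the cited NoN algorithm:

```python
import random

def make_links(n, k, seed=0):
    """Fixed randomized links: per node, k clockwise offsets drawn from
    the harmonic density ~ 1/x, plus a successor link (offset 1)."""
    tbl = {}
    for u in range(n):
        rng = random.Random(seed * n + u)
        offs = {1} | {int(n ** rng.random()) for _ in range(k)}
        tbl[u] = [(u + o) % n for o in offs]
    return tbl

def route(src, dst, n, tbl, lookahead=False):
    """Plain greedy hops to the neighbor closest to dst; NoN lookahead
    ranks each neighbor w by the best clockwise distance reachable via
    w's own neighbors. Both modes only consider progress-making
    neighbors (the successor guarantees one exists), so the distance
    strictly decreases and every route terminates."""
    d = lambda v: (dst - v) % n
    hops, u = 0, src
    while u != dst:
        cand = [w for w in tbl[u] if d(w) < d(u)]
        if lookahead:
            u = min(cand, key=lambda w: min([d(w)] +
                    [d(x) for x in tbl[w] if d(x) < d(w)]))
        else:
            u = min(cand, key=d)
        hops += 1
    return hops
```

Plain greedy ranks a neighbor only by its own distance to the target; NoN ranks it by what its neighbors can reach, which is the mechanism behind the shorter expected routes reported in @cite_9.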
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_2", "@cite_13", "@cite_20" ], "mid": [ "2160405192", "2949588463", "2568950526", "2028069703", "2170362389", "2053616686" ], "abstract": [ "Several peer-to-peer networks are based upon randomized graph topologies that permit efficient greedy routing, e.g., randomized hypercubes, randomized Chord, skip-graphs and constructions based upon small-world percolation networks. In each of these networks, a node has out-degree Θ(log n), where n denotes the total number of nodes, and greedy routing is known to take O(log n) hops on average. We establish lower-bounds for greedy routing for these networks, and analyze Neighbor-of-Neighbor (NoN)-greedy routing. The idea behind NoN, as the name suggests, is to take a neighbor's neighbors into account for making better routing decisions. The following picture emerges: Deterministic routing networks like hypercubes and Chord have diameter Θ(log n) and greedy routing is optimal. Randomized routing networks like randomized hypercubes, randomized Chord, and constructions based on small-world percolation networks, have diameter Θ(log n / log log n) with high probability. The expected diameter of Skip graphs is also Θ(log n / log log n). In all of these networks, greedy routing fails to find short routes, requiring Ω(log n) hops with high probability. Surprisingly, the NoN-greedy routing algorithm is able to diminish route-lengths to Θ(log n / log log n) hops, which is asymptotically optimal.", "We study approximate distributed solutions to the weighted all-pairs-shortest-paths (APSP) problem in the CONGEST model. We obtain the following results. @math A deterministic @math -approximation to APSP in @math rounds. This improves over the best previously known algorithm, by both derandomizing it and by reducing the running time by a @math factor. 
In many cases, routing schemes involve relabeling, i.e., assigning new names to nodes and require that these names are used in distance and routing queries. It is known that relabeling is necessary to achieve running times of @math . In the relabeling model, we obtain the following results. @math A randomized @math -approximation to APSP, for any integer @math , running in @math rounds, where @math is the hop diameter of the network. This algorithm simplifies the best previously known result and reduces its approximation ratio from @math to @math . Also, the new algorithm uses labels of asymptotically optimal size, namely @math bits. @math A randomized @math -approximation to APSP, for any integer @math , running in time @math and producing compact routing tables of size @math . The node labels consist of @math bits. This improves on the approximation ratio of @math for tables of that size achieved by the best previously known algorithm, which terminates faster, in @math rounds.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. 
We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "We propose a routing strategy to improve the transportation efficiency on complex networks. Instead of using the routing strategy for shortest path, we give a generalized routing algorithm to find the so-called efficient path, which considers the possible congestion in the nodes along actual paths. Since the nodes with the largest degree are very susceptible to traffic congestion, an effective way to improve traffic and control congestion, as our strategy, can be redistributing traffic load in central nodes to other noncentral nodes. Simulation results indicate that the network capability in processing traffic is improved more than 10 times by optimizing the efficient path, which is in good agreement with the analysis. DOI: 10.1103 PhysRevE.73.046108 PACS numbers: 89.75.Hc Since the seminal work on scale-free networks by Barabasi and Albert BA model1 and on the small-world phenomenon by Watts and Strogatz 2, the structure and dynamics of complex networks have recently attracted a tremendous amount of interest and attention from the physics community see the review papers 3‐5 and references therein. The increasing importance of large communication networks such as the Internet 6, upon which our society survives, calls for the need for high efficiency in handling and delivering information. In this light, to find optimal strategies for traffic routing is one of the important issues we have to address. 
There have been many previous studies to understand and control traffic congestion on networks, with a basic assumption that the network has a homogeneous structure [7-11]. However, many real networks display both scale-free and small-world features, and thus it is of great interest to study the effect of network topology on traffic flow and the effect of traffic on network evolution. present a formalism that can cope simultaneously with the searching and traffic dynamics in parallel transportation systems [12]. This formalism can be used to optimize network structure under a local search algorithm, while to obtain the formalism one should know the global information of the whole networks. Holme and Kim provide an in-depth analysis on the vertex/edge overload cascading breakdowns based on evolving networks, and suggest a method to avoid", "We consider the issue of protection in very large networks displaying randomness in topology. We employ random graph models to describe such networks, and obtain probabilistic bounds on several parameters related to reliability. In particular, we take the case of random regular networks for simplicity and consider the length of primary and backup paths in terms of the number of hops. First, for a randomly picked pair of nodes, we derive a lower bound on the average distance between the pair and discuss the tightness of the bound. In addition, noting that primary and protection paths form cycles, we obtain a lower bound on the average length of the shortest cycle around the pair. Finally, we show that the protected connections of a given maximum finite length are rare. We then generalize our network model so that different degrees are allowed according to some arbitrary distribution, and show that the second moment of degree over the first moment is an important shorthand for behavior of a network. 
Notably, we show that most of the results in regular networks carry over with minor modifications, which significantly broadens the scope of networks to which our approach applies. We present as an example the case of networks with a power-law degree distribution.", "Compared to single-hop networks such as WiFi, multihop infrastructure wireless mesh networks (WMNs) can potentially embrace the broadcast benefits of a wireless medium in a more flexible manner. Rather than being point-to-point, links in the WMNs may originate from a single node and reach more than one other node. Nodes located farther than a one-hop distance and overhearing such transmissions may opportunistically help relay packets for previous hops. This phenomenon is called opportunistic overhearing listening. With multiple radios, a node can also improve its capacity by transmitting over multiple radios simultaneously using orthogonal channels. Capitalizing on these potential advantages requires effective routing and efficient mapping of channels to radios (channel assignment (CA)). While efficient channel assignment can greatly reduce interference from nearby transmitters, effective routing can potentially relieve congestion on paths to the infrastructure. Routing, however, requires that only packets pertaining to a particular connection be routed on a predetermined route. Random network coding (RNC) breaks this constraint by allowing nodes to randomly mix packets overheard so far before forwarding. A relay node thus only needs to know how many packets, and not which packets, it should send. We mathematically formulate the joint problem of random network coding, channel assignment, and broadcast link scheduling, taking into account opportunistic overhearing, the interference constraints, the coding constraints, the number of orthogonal channels, the number of radios per node, and fairness among unicast connections. 
Based on this formulation, we develop a suboptimal, auction-based solution for overall network throughput optimization. Performance evaluation results show that our algorithm can effectively exploit multiple radios and channels and can cope with fairness issues arising from auctions. Our algorithm also shows promising gains over traditional routing solutions in which various channel assignment strategies are used." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
demonstrates that it is possible to construct a graph in which each node has degree @math and in which 1- has routes of length @math in the worst case, for the metrics @math , @math and @math . Furthermore, for all @math , plain greedy on our network design beats even the results obtained in @cite_9 with @math - lookahead .
{ "cite_N": [ "@cite_9" ], "mid": [ "2949856235" ], "abstract": [ "We introduce a family of directed geometric graphs, denoted @math , that depend on two parameters @math and @math . For @math and @math , the @math graph is a strong @math -spanner, with @math . The out-degree of a node in the @math graph is at most @math . Moreover, we show that routing can be achieved locally on @math . Next, we show that all strong @math -spanners are also @math -spanners of the unit disk graph. Simulations for various values of the parameters @math and @math indicate that for random point sets, the spanning ratio of @math is better than the proven theoretical bounds." ] }
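The greedy forwarding rule studied in the abstract above can be sketched in a few lines: each node forwards to the out-neighbor that minimizes the clockwise distance to the destination. As a hypothetical stand-in for the paper's optimal link construction, this sketch gives each node Chord-style links at powers of two, which makes greedy routing terminate in O(log n) hops.

```python
def clockwise(u, v, n):
    """Clockwise distance from u to v on a ring of n nodes."""
    return (v - u) % n

def neighbors(u, n):
    """Illustrative out-links u + 2^i (mod n); the paper's construction differs."""
    return [(u + (1 << i)) % n for i in range(n.bit_length() - 1)]

def greedy_route(src, dst, n):
    """Repeatedly forward to the neighbor closest to dst under the clockwise metric."""
    path = [src]
    while path[-1] != dst:
        u = path[-1]
        nxt = min(neighbors(u, n), key=lambda v: clockwise(v, dst, n))
        if clockwise(nxt, dst, n) >= clockwise(u, dst, n):
            break  # no greedy progress (cannot happen with the +1 link present)
        path.append(nxt)
    return path

path = greedy_route(3, 200, 256)  # e.g. [3, 131, 195, 199, 200]
```

With these links each hop subtracts the largest power of two not exceeding the remaining clockwise distance, so route length is at most log2(n) hops; the point of the paper is that a different link set achieves the asymptotically optimal bound.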
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Deterministic butterflies have been proposed for DHT routing by Xu et al. @cite_16 , who subsequently developed their ideas into Ulysses @cite_6 . for distance function @math has structural similarities with Ulysses -- both are butterfly-based networks. The key differences are as follows: (a) Ulysses does not use @math as its distance function, (b) Ulysses does not use routing, and (c) Ulysses uses more links than for distance function @math -- additional links have been introduced to ameliorate non-uniform edge congestion caused by Ulysses' routing algorithm. In contrast, the congestion-free routing algorithm developed in obviates the need for any additional links in (see Theorem ).
{ "cite_N": [ "@cite_16", "@cite_6" ], "mid": [ "2049130980", "2031684765" ], "abstract": [ "A number of distributed hash table (DHT)-based protocols have been proposed to address the issue of scalability in peer-to-peer networks. In this paper, we present Ulysses, a peer-to-peer network based on the butterfly topology that achieves the theoretical lower bound of log n / log log n on network diameter when the average routing table size at nodes is no more than log n. Compared to existing DHT-based schemes with similar routing table size, Ulysses reduces the network diameter by a factor of log log n, which is 2–4 for typical configurations. This translates into the same amount of reduction on query latency and average traffic per link/node. In addition, Ulysses maintains the same level of robustness in terms of routing in the face of faults and recovering from graceful/ungraceful joins and departures, as provided by existing DHT-based schemes. The performance of the protocol has been evaluated using both analysis and simulation. Copyright © 2004 AEI", "We study a fundamental tradeoff issue in designing a distributed hash table (DHT) in peer-to-peer (P2P) networks: the size of the routing table versus the network diameter. Observing that existing DHT schemes have either 1) a routing table size and network diameter both of O(log_2 n), or 2) a routing table of size d and network diameter of O(n^{1/d}), S. (2001) asked whether this represents the best asymptotic \"state-efficiency\" tradeoffs. We show that some straightforward routing algorithms achieve better asymptotic tradeoffs. However, such algorithms all cause severe congestion on certain network nodes, which is undesirable in a P2P network. We rigorously define the notion of \"congestion\" and conjecture that the above tradeoffs are asymptotically optimal for a congestion-free network. The answer to this conjecture is negative in the strict sense. 
However, it becomes positive if the routing algorithm is required to eliminate congestion in a \"natural\" way by being uniform. We also prove that the tradeoffs are asymptotically optimal for uniform algorithms. Furthermore, for uniform algorithms, we find that the routing table size of O(log_2 n) is a magic threshold point that separates two different \"state-efficiency\" regions. Our third result is to study the exact (instead of asymptotic) optimal tradeoffs for uniform algorithms. We propose a new routing algorithm that reduces the routing table size and the network diameter of Chord both by 21.4% without introducing any other protocol overhead, based on a novel number-theory technique. Our final result is to present Ulysses, a congestion-free nonuniform algorithm that achieves a better asymptotic \"state-efficiency\" tradeoff than existing schemes in the probabilistic sense, even under dynamic node joins/leaves." ] }
cs0507034
2952461782
We study greedy routing over @math nodes placed in a ring, with the distance between two nodes defined to be the clockwise or the absolute distance between them along the ring. Such graphs arise in the context of modeling social networks and in routing networks for peer-to-peer systems. We construct the first network over @math nodes in which greedy routing takes @math hops in the worst-case, with @math out-going links per node. Our result has the first asymptotically optimal greedy routing complexity. Previous constructions required @math hops.
Viceroy @cite_14 is a butterfly network which routes in @math hops in expectation with @math links per node. Mariposa (see reference @cite_23 or @cite_21 ) improves upon Viceroy by providing routes of length @math in the worst-case, with @math out-going links per node. Viceroy and Mariposa are different from other randomized networks in terms of their design philosophy. The topology borrows elements of the geometric embedding of the butterfly in a circle from Viceroy @cite_14 and from @cite_21 , while extending them for greedy routing.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_23" ], "mid": [ "2953216592", "1980177572", "2568950526" ], "abstract": [ "Given @math wireless transceivers located in a plane, a fundamental problem in wireless communications is to construct a strongly connected digraph on them such that the constituent links can be scheduled in fewest possible time slots, assuming the SINR model of interference. In this paper, we provide an algorithm that connects an arbitrary point set in @math slots, improving on the previous best bound of @math due to Moscibroda. This is complemented with a super-constant lower bound on our approach to connectivity. An important feature is that the algorithms allow for bi-directional (half-duplex) communication. One implication of this result is an improved bound of @math on the worst-case capacity of wireless networks, matching the best bound known for the extensively studied average-case. We explore the utility of oblivious power assignments, and show that essentially all such assignments result in a worst case bound of @math slots for connectivity. This rules out a recent claim of a @math bound using oblivious power. On the other hand, using our result we show that @math slots suffice, where @math is the ratio between the largest and the smallest links in a minimum spanning tree of the points. Our results extend to the related problem of minimum latency aggregation scheduling, where we show that aggregation scheduling with @math latency is possible, improving upon the previous best known latency of @math . We also initiate the study of network design problems in the SINR model beyond strong connectivity, obtaining similar bounds for biconnected and @math -edge connected structures.", "In this paper we study randomized algorithms for circuit switching on multistage networks related to the butterfly. We devise algorithms that route messages by constructing circuits (or paths) for the messages with small congestion, dilation, and setup time. 
Our algorithms are based on the idea of having each message choose a route from two possibilities, a technique that has previously proven successful in simpler load balancing settings. As an application of our techniques, we propose a novel design for a data server.", "We consider the task of topology discovery of sparse random graphs using end-to-end random measurements (e.g., delay) between a subset of nodes, referred to as the participants. The rest of the nodes are hidden, and do not provide any information for topology discovery. We consider topology discovery under two routing models: (a) the participants exchange messages along the shortest paths and obtain end-to-end measurements, and (b) additionally, the participants exchange messages along the second shortest path. For scenario (a), our proposed algorithm results in a sub-linear edit-distance guarantee using a sub-linear number of uniformly selected participants. For scenario (b), we obtain a much stronger result, and show that we can achieve consistent reconstruction when a sub-linear number of uniformly selected nodes participate. This implies that accurate discovery of sparse random graphs is tractable using an extremely small number of participants. We finally obtain a lower bound on the number of participants required by any algorithm to reconstruct the original random graph up to a given edit distance. We also demonstrate that while consistent discovery is tractable for sparse random graphs using a small number of participants, in general, there are graphs which cannot be discovered by any algorithm even with a significant number of participants, and with the availability of end-to-end information along all the paths between the participants. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013" ] }
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Ballintijn et al. argue that resource naming should be decoupled from resource identification @cite_7 . Resources are named with human-friendly names, which are based on DNS @cite_9 , while identification is done with object handles, which are globally unique identifiers that need not contain network locations. They use DNS to resolve human-friendly names to object handles and a location service to resolve object handles to network locations. The location service uses a hierarchical architecture for resolving object handles. This two-level approach allows the naming of resources without worrying about replication or migration and the identification of resources without worrying about naming policies.
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2096392987", "2083158002" ], "abstract": [ "To fill the gap between what uniform resource names (URNs) provide and what humans need, we propose a new kind of uniform resource identifier (URI) called human-friendly names (HFNs). In this article, we present the design for a scalable HFN-to-URL (uniform resource locator) resolution mechanism that makes use of the Domain Name System (DNS) and the Globe location service to name and locate resources. This new URI proposes to improve both scalability and usability in naming replicated resources on the Web.", "Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. These problems stem fundamentally from the structure of the legacy DNS. This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. 
Performance measurements from a real-life deployment of the system in PlanetLab shows that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement and thwarts distributed denial of service attacks by promptly redistributing load across nodes." ] }
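The indirection scheme proposed in the paper's abstract above (a resource entry stores a stable host identifier, and one host entry maps that identifier to the host's current network address) can be sketched with a plain dict standing in for the DHT. The function names (`publish`, `resolve`, `move_host`) and all addresses are illustrative, not OpenDHT's API:

```python
dht = {}  # in-memory stand-in for the distributed hash table

def put(key, value):
    dht[key] = value

def get(key):
    return dht[key]

def publish(host_id, address, resource_ids):
    """Store one host entry plus one resource entry per resource."""
    put(("host", host_id), address)
    for rid in resource_ids:
        put(("res", rid), host_id)  # resource maps to host id, not address

def resolve(resource_id):
    """Two lookups: resource id -> host id -> current network address."""
    return get(("host", get(("res", resource_id))))

def move_host(host_id, new_address):
    """A mobile host changing address updates exactly one entry."""
    put(("host", host_id), new_address)

publish("laptop-1", "10.0.0.5", ["r1", "r2", "r3"])
move_host("laptop-1", "192.168.1.9")
# resolve("r2") now returns "192.168.1.9"
```

The trade-off the paper investigates is visible here: resolution costs two lookups instead of one, but a host with many resources updates one entry instead of one per resource when its address changes.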
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Walfish et al. argue for the use of semantic-free references for identifying web documents instead of URLs @cite_8 . The reason is that changes in naming policies or ownership of DNS domain names often result in previous URLs pointing to unrelated or non-existent documents, even when the original documents still exist. Semantic-free references are hashes of public keys or other data, and are resolved to URLs using a distributed hash table based on Chord @cite_13 . Using semantic-free references would allow web documents to link to each other without worrying about changes in the URLs of the documents.
{ "cite_N": [ "@cite_13", "@cite_8" ], "mid": [ "144112633", "1784290353" ], "abstract": [ "The Web relies on the Domain Name System (DNS) to resolve the hostname portion of URLs into IP addresses. This marriage-of-convenience enabled the Web's meteoric rise, but the resulting entanglement is now hindering both infrastructures--the Web is overly constrained by the limitations of DNS, and DNS is unduly burdened by the demands of the Web. There has been much commentary on this sad state-of-affairs, but dissolving the ill-fated union between DNS and the Web requires a new way to resolve Web references. To this end, this paper describes the design and implementation of Semantic Free Referencing (SFR), a reference resolution infrastructure based on distributed hash tables (DHTs).", "Over the last decades, several billion Web pages have been made available on the Web. The ongoing transition from the current Web of unstructured data to the Web of Data yet requires scalable and accurate approaches for the extraction of structured data in RDF (Resource Description Framework) from these websites. One of the key steps towards extracting RDF from text is the disambiguation of named entities. While several approaches aim to tackle this problem, they still achieve poor accuracy. We address this drawback by presenting AGDISTIS, a novel knowledge-base-agnostic approach for named entity disambiguation. Our approach combines the Hypertext-Induced Topic Search (HITS) algorithm with label expansion strategies and string similarity measures. Based on this combination, AGDISTIS can efficiently detect the correct URIs for a given set of named entities within an input text. We evaluate our approach on eight different datasets against state-of-the-art named entity disambiguation frameworks. Our results indicate that we outperform the state-of-the-art approach by up to 29% F-measure." ] }
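The semantic-free referencing idea discussed above can be sketched as follows: a reference is just a cryptographic hash of some bytes (such as a public key), carrying no naming semantics, and a hash table standing in for the Chord-based DHT maps it to the document's current URL. The choice of SHA-1 and the key and URL values here are illustrative assumptions:

```python
import hashlib

def sfr(data: bytes) -> str:
    """Semantic-free reference: a cryptographic hash of arbitrary bytes."""
    return hashlib.sha1(data).hexdigest()

table = {}  # stand-in for the DHT that resolves references to URLs

ref = sfr(b"-----BEGIN PUBLIC KEY----- ...")  # hypothetical key material
table[ref] = "http://example.org/paper.html"

# The URL can change (new domain, new owner) without breaking the reference:
table[ref] = "http://mirror.example.net/paper.html"
```

Because the reference depends only on the hashed bytes, documents linking via `ref` stay valid across renames; only the DHT entry is updated.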
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
Distributed hash tables, also called peer-to-peer structured overlay networks, are distributed systems which map a uniform distribution of identifiers to nodes in the system @cite_3 @cite_13 @cite_20 . Nodes act as peers, with no node having to play a special role, and a distributed hash table can continue operation even as nodes join or leave the system. Lookups and updates to a distributed hash table are scalable, typically taking time logarithmic in the number of nodes in the system. We experimentally evaluated our work using OpenDHT @cite_12 , which is a public distributed hash table service based on Bamboo @cite_5 .
{ "cite_N": [ "@cite_3", "@cite_5", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "1587208850", "2151682391", "2049794981", "2134320193", "2150288915" ], "abstract": [ "Distributed Hash Tables (DHTs) are very efficient distributed systems for routing, but at the same time vulnerable to disruptive nodes. Designers of such systems want them used in open networks, where an adversary can perform a sybil attack by introducing a large number of corrupt nodes in the network, considerably degrading its performance. We introduce a routing strategy that alleviates some of the effects of such an attack by making sure that lookups are performed using a diverse set of nodes. This ensures that at least some of the nodes queried are good, and hence the search makes forward progress. This strategy makes use of latent social information present in the introduction graph of the network.", "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several tens of thousands of nodes. This paper addresses the problem of churn in mobile hash tables. Similar to Internet based peer-to-peer systems a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery techniques used in the internet domain can be deployed in the mobile hash table. 
Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "Distributed hash table (DHT) systems are an important class of peer-to-peer routing infrastructures. They enable scalable wide-area storage and retrieval of information, and will support the rapid development of a wide variety of Internet-scale applications ranging from naming systems and file systems to application-layer multicast. DHT systems essentially build an overlay network, but a path on the overlay between any two nodes can be significantly different from the unicast path between those two nodes on the underlying network. As such, the lookup latency in these systems can be quite high and can adversely impact the performance of applications built on top of such systems. In this paper, we discuss a random sampling technique that incrementally improves lookup latency in DHT systems. Our sampling can be implemented using information gleaned from lookups traversing the overlay network. For this reason, we call our approach lookup-parasitic random sampling (LPRS). LPRS is fast, incurs little network overhead, and requires relatively few modifications to existing DHT systems. For idealized versions of DHT systems like Chord, Tapestry and Pastry, we analytically prove that LPRS can result in lookup latencies proportional to the average unicast latency of the network, provided the underlying physical topology has a power-law latency expansion. We then validate this analysis by implementing LPRS in the Chord simulator. Our simulations reveal that LPRS-Chord exhibits a qualitatively better latency scaling behavior relative to unmodified Chord. Finally, we provide evidence which suggests that the Internet router-level topology resembles power-law latency expansion. This finding implies that LPRS has significant practical applicability as a general latency reduction technique for many DHT systems. 
This finding is also of independent interest since it might inform the design of latency-sensitive topology models for the Internet.", "Decentralized systems, such as structured overlays, are subject to the Sybil attack, in which an adversary creates many false identities to increase its influence. This paper describes a one-hop distributed hash table which uses the social links between users to strongly resist the Sybil attack. The social network is assumed to be fast mixing, meaning that a random walk in the honest part of the network quickly approaches the uniform distribution. As in the related SybilLimit system [25], with a social network of n honest nodes and m honest edges, the protocol can tolerate up to o(n log n) attack edges (social links from honest nodes to compromised nodes). The routing tables contain O(√m log m) entries per node and are constructed efficiently by a distributed protocol. This is the first sublinear solution to this problem. Preliminary simulation results are presented to demonstrate the approach's effectiveness.", "During recent years, Distributed Hash Tables (DHTs) have been extensively studied through simulation and analysis. However, due to their limited deployment, it has not been possible to observe the behavior of a widely-deployed DHT in practice. Recently, the popular eMule file-sharing software incorporated a Kademlia-based DHT, called Kad, which currently has around one million simultaneous users. In this paper, we empirically study the performance of the key DHT operation, lookup, over Kad. First, we analytically derive the benefits of different ways to increase the richness of routing tables in Kademlia-based DHTs. Second, we empirically characterize two aspects of the accuracy of routing tables in Kad, namely completeness and freshness, and characterize their impact on Kad’s lookup performance. 
Finally, we investigate how the efficiency and consistency of lookup in Kad can be improved by performing parallel lookup and maintaining multiple replicas, respectively. Our results pinpoint the best operating point for the degree of lookup parallelism and the degree of replication for Kad." ] }
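The core DHT mapping described in the passage above (keys and nodes share one circular identifier space, and each key is owned by the first node clockwise from it) can be sketched with consistent hashing. The 16-bit id space, SHA-1, and the node names are illustrative choices, not any particular DHT's parameters:

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # size of the (tiny, illustrative) identifier space

def node_id(name: str) -> int:
    """Hash a node name into the circular id space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(key: int, ring: list) -> int:
    """Owner of a key: first node id at or clockwise after the key."""
    i = bisect_left(ring, key)
    return ring[i % len(ring)]  # wrap around past the largest id

ring = sorted(node_id(f"node-{i}") for i in range(8))
owner = successor(node_id("some-resource"), ring)
```

Because ownership depends only on adjacency on the ring, a node joining or leaving moves only the keys between it and its predecessor, which is what lets DHTs keep operating under churn; real systems add finger tables on top of this rule to make lookups logarithmic.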
0706.0580
1672346542
Resources in a cloud can be identified using identifiers based on random numbers. When using a distributed hash table to resolve such identifiers to network locations, the straightforward approach is to store the network location directly in the hash table entry associated with an identifier. When a mobile host contains a large number of resources, this requires that all of the associated hash table entries must be updated when its network address changes. We propose an alternative approach where we store a host identifier in the entry associated with a resource identifier and the actual network address of the host in a separate host entry. This can drastically reduce the time required for updating the distributed hash table when a mobile host changes its network address. We also investigate under which circumstances our approach should or should not be used. We evaluate and confirm the usefulness of our approach with experiments run on top of OpenDHT.
There has also been research on implementing distributed hash tables on top of mobile ad hoc networks @cite_4 @cite_6 . As with Mobile IP @cite_15 and HIP @cite_1 , hosts in mobile ad hoc networks do not change their network address with movement, so there would be no need to update entries in a distributed hash table used for resolving resource identifiers. However, almost the entire Internet is not part of a mobile ad hoc network, so it is of little help to applications that need to run on current networks.
{ "cite_N": [ "@cite_1", "@cite_15", "@cite_4", "@cite_6" ], "mid": [ "2151682391", "2040228414", "2134011626", "2126105048" ], "abstract": [ "Mobile ad-hoc networks (MANETs) and distributed hash-tables (DHTs) share key characteristics in terms of self organization, decentralization, redundancy requirements, and limited infrastructure. However, node mobility and the continually changing physical topology pose a special challenge to scalability and the design of a DHT for mobile ad-hoc network. The mobile hash-table (MHT) [9] addresses this challenge by mapping a data item to a path through the environment. In contrast to existing DHTs, MHT does not maintain routing tables and thereby can be used in networks with highly dynamic topologies. Thus, in mobile environments it stores data items with low maintenance overhead on the moving nodes and allows the MHT to scale up to several tens of thousands of nodes. This paper addresses the problem of churn in mobile hash tables. Similar to Internet based peer-to-peer systems a deployed mobile hash table suffers from suddenly leaving nodes and the need to recover lost data items. We evaluate how redundancy and recovery techniques used in the internet domain can be deployed in the mobile hash table. Furthermore, we show that these redundancy techniques can greatly benefit from the local broadcast properties of typical mobile ad-hoc networks.", "Ad hoc networks have no spatial hierarchy and suffer from frequent link failures which prevent mobile hosts from using traditional routing schemes. Under these conditions, mobile hosts must find routes to destinations without the use of designated routers and also must dynamically adapt the routes to the current link conditions. This article proposes a distributed adaptive routing protocol for finding and maintaining stable routes based on signal strength and location stability in an ad hoc network and presents an architecture for its implementation. 
Interoperability with mobile IP (Internet protocol) is discussed.", "Reliable storage of data with concurrent read write accesses (or query update) is an ever recurring issue in distributed settings. In mobile ad hoc networks, the problem becomes even more challenging due to highly dynamic and unpredictable topology changes. It is precisely this unpredictability that makes probabilistic protocols very appealing for such environments. Inspired by the principles of probabilistic quorum systems, we present a Probabilistic quorum system for ad hoc networks Pan), a collection of protocols for the reliable storage of data in mobile ad hoc networks. Our system behaves in a predictable way due to the gossip-based diffusion mechanism applied for quorum accesses, and the protocol overhead is reduced by adopting an asymmetric quorum construction. We present an analysis of our Pan system, in terms of both reliability and overhead, which can be used to fine tune protocol parameters to obtain the desired tradeoff between efficiency and fault tolerance. We confirm the predictability and tunability of Pan through simulations with ns-2.", "The advances in computer and wireless communication technologies have led to an increasing interest in ad hoc networks which are temporarily constructed by only mobile hosts. In ad hoc networks, since mobile hosts move freely, disconnections occur frequently, and this causes frequent network division. Consequently, data accessibility in ad hoc networks is lower than that in the conventional fixed networks. We propose three replica allocation methods to improve data accessibility by replicating data items on mobile hosts. In these three methods, we take into account the access frequency from mobile hosts to each data item and the status of the network connection. We also show the results of simulation experiments regarding the performance evaluation of our proposed methods." ] }
0706.0430
2950884312
As decentralized computing scenarios get ever more popular, unstructured topologies are natural candidates to consider running mix networks upon. We consider mix network topologies where mixes are placed on the nodes of an unstructured network, such as social networks and scale-free random networks. We explore the efficiency and traffic analysis resistance properties of mix networks based on unstructured topologies as opposed to theoretically optimal structured topologies, under high latency conditions. We consider a mix of directed and undirected network models, as well as one real world case study -- the LiveJournal friendship network topology. Our analysis indicates that mix-networks based on scale-free and small-world topologies have, firstly, mix-route lengths that are roughly comparable to those in expander graphs; second, that compromise of the most central nodes has little effect on anonymization properties, and third, batch sizes required for warding off intersection attacks need to be an order of magnitude higher in unstructured networks in comparison with expander graph topologies.
Borisov @cite_11 analyzes anonymous communication over an overlay network with a de Bruijn graph topology and finds that de Bruijn graphs have strong mixing capabilities.
{ "cite_N": [ "@cite_11" ], "mid": [ "2163598416" ], "abstract": [ "As more of our daily activities are carried out online, it becomes important to develop technologies to protect our online privacy. Anonymity is a key privacy technology, since it serves to hide patterns of communication that can often be as revealing as their contents. This motivates our study of the use of large scale peer-to-peer systems for building anonymous systems. We first develop a novel methodology for studying the anonymity of peer-to-peer systems, based on an information-theoretic anonymity metric and simulation. We use simulations to sample a probability distribution modeling attacker knowledge under conservative assumptions and estimate the entropy-based anonymity metric using the sampled distribution. We then validate this approach against an analytic method for computing entropy. The use of sampling introduces some error, but it can be accurately bounded and therefore we can make rigorous statements about the success of an entire class of attacks. We next apply our methodology to perform the first rigorous analysis of Freenet, a peer-to-peer anonymous publishing system, and identify a number of weaknesses in its design. We show that a targeted attack on high-degree nodes can be very effective at reducing anonymity. We also consider a next generation routing algorithm proposed by the Freenet authors to improve performance and show that it has a significant negative impact on anonymity. Finally, even in the best case scenario, the anonymity levels provided by Freenet are highly variable and, in many cases, little or no anonymity is achieved. To provide more uniform anonymity protection, we propose a new design for peer-to-peer anonymous systems based on structured overlays. We use random walks along the overlay to provide anonymity. We compare the mixing times of random walks on different graph structures and find that de Bruijn graphs are superior to other structures such as the hypercube or butterfly. Using our simulation methodology, we analyze the anonymity achieved by our design running on top of Koorde, a structured overlay based on de Bruijn graphs. We show that it provides anonymity competitive with Freenet in the average case, while ensuring that worst-case anonymity remains at an acceptable level. We also maintain logarithmic guarantees on routing performance." ] }
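The mixing advantage of de Bruijn graphs claimed above is easy to illustrate numerically. The sketch below is our own illustration (it is not code from @cite_11, and all function names are ours): it tracks the exact probability distribution of a random walk on the binary de Bruijn graph B(2, n) and of a lazy walk on the n-dimensional hypercube, both started from a point mass, and measures the total-variation distance to the uniform distribution. On B(2, n) the walk is exactly uniform after n steps, since after n steps the state consists precisely of the n random bits shifted in; the lazy hypercube walk needs on the order of (n/2) log n steps.

```python
def tv_to_uniform(p):
    """Total-variation distance of distribution p to the uniform distribution."""
    n = len(p)
    return 0.5 * sum(abs(x - 1.0 / n) for x in p)

def debruijn_step(p, bits):
    """One step of the random walk on the binary de Bruijn graph B(2, bits):
    from state u move to (2u + b) mod 2^bits with b uniform in {0, 1}."""
    n = 1 << bits
    q = [0.0] * n
    for u, mass in enumerate(p):
        if mass:
            base = (2 * u) % n
            q[base] += mass / 2
            q[base + 1] += mass / 2
    return q

def hypercube_step(p, bits):
    """One step of the lazy random walk on the bits-dimensional hypercube:
    stay put with probability 1/2, otherwise flip one uniformly chosen bit."""
    n = 1 << bits
    q = [0.0] * n
    for u, mass in enumerate(p):
        if mass:
            q[u] += mass / 2
            for i in range(bits):
                q[u ^ (1 << i)] += mass / (2 * bits)
    return q

bits = 10
p_db = [0.0] * (1 << bits)
p_db[0] = 1.0
p_hc = list(p_db)
for _ in range(bits):
    p_db = debruijn_step(p_db, bits)
    p_hc = hypercube_step(p_hc, bits)
print(tv_to_uniform(p_db))  # de Bruijn walk: exactly uniform after `bits` steps
print(tv_to_uniform(p_hc))  # lazy hypercube walk: still far from uniform
```

After n = 10 steps the de Bruijn walk's distance to uniform is zero, while the hypercube walk remains measurably biased toward low Hamming weights, matching the qualitative comparison in the abstract.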
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The chief alternative to iterative approximation is to produce an exact propositional characterization of the abstract transition relation. For example, the method of @cite_3 uses small-domain techniques to translate a first-order transition formula into a propositional one that is equisatisfiable over the state-holding predicates. However, this translation introduces a large number of auxiliary Boolean variables, making it impractical to use BDD-based methods for image computation. Though SAT-based Boolean quantifier elimination methods can be used, the effect is still essentially to enumerate the states in the image. By contrast, the interpolation-based method produces an approximate transition relation with no auxiliary Boolean variables, allowing efficient use of BDD-based methods.
{ "cite_N": [ "@cite_3" ], "mid": [ "1552505815" ], "abstract": [ "The paper presents an approach for shape analysis based on predicate abstraction. Using a predicate base that involves reachability relations between program variables pointing into the heap, we are able to analyze functional properties of programs with destructive heap updates, such as list reversal and various in-place list sorts. The approach allows verification of both safety and liveness properties. The abstraction we use does not require any abstract representation of the heap nodes (e.g. abstract shapes), only reachability relations between the program variables. The computation of the abstract transition relation is precise and automatic yet does not require the use of a theorem prover. Instead, we use a small model theorem to identify a truncated (small) finite-state version of the program whose abstraction is identical to the abstraction of the unbounded-heap version of the same program. The abstraction of the finite-state version is then computed by BDD techniques. For proving liveness properties, we augment the original system by a well-founded ranking function, which is abstracted together with the system. Well-foundedness is then abstracted into strong fairness (compassion). We show that, for a restricted class of programs that still includes many interesting cases, the small model theorem can be applied to this joint abstraction. Independently of the application to shape-analysis examples, we demonstrate the utility of the ranking abstraction method and its advantages over the direct use of ranking functions in a deductive verification of the same property." ] }
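To make the "enumerate the states in the image" remark concrete, here is a toy sketch (entirely our own illustration; the predicates and the transition relation are made up) that computes an abstract image by explicitly enumerating every candidate successor predicate-state, which is what SAT-based quantifier elimination amounts to in the worst case:

```python
from itertools import product

PREDS = 3  # an abstract state is a truth assignment to 3 predicates

def trans(s, t):
    """Hypothetical abstract transition relation over predicate states:
    a 3-bit counter that may either stutter or increment (mod 8)."""
    cur = s[0] + 2 * s[1] + 4 * s[2]
    nxt = t[0] + 2 * t[1] + 4 * t[2]
    return nxt in (cur, (cur + 1) % 8)

def image(states):
    """Abstract image { t | exists s in states with trans(s, t) },
    computed by enumerating all 2^PREDS candidate successors -- the
    worst-case behaviour SAT-based quantifier elimination boils down to."""
    return {t for t in product((0, 1), repeat=PREDS)
            if any(trans(s, t) for s in states)}

init = {(0, 0, 0)}
print(sorted(image(image(init))))  # predicate states reachable in two steps
```

With 3 predicates the enumeration visits 8 candidate states per image; with n predicates it visits 2^n, which is why the interpolation-based approach avoids exact image computation altogether.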
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
The most closely related method is that of Das and Dill @cite_7 . This method analyzes abstract counterexamples (sequences of predicate states), refining the transition relation approximation in such a way as to rule out infeasible transitions. This method is effective, but has the disadvantage that it uses a specific counterexample and does not consider the property being verified. Thus it can easily generate refinements not relevant to the property. The interpolation-based method does not use abstract counterexamples. Rather, it generates facts relevant to proving the given property in a bounded sense. Thus, it tends to generate more relevant refinements, and as a result converges more rapidly.
{ "cite_N": [ "@cite_7" ], "mid": [ "1503537039" ], "abstract": [ "Abstraction can often lead to spurious counterexamples. Counterexample-guided abstraction refinement is a method of strengthening abstractions based on the analysis of these spurious counterexamples. For invariance properties, a counterexample is a finite trace that violates the invariant; it is spurious if it is possible in the abstraction but not in the original system. When proving termination or other liveness properties of infinite-state systems, a useful notion of spurious counterexamples has remained an open problem. For this reason, no counterexample-guided abstraction refinement algorithm was known for termination. In this paper, we address this problem and present the first known automatic counterexample-guided abstraction refinement algorithm for termination proofs. We exploit recent results on transition invariants and transition predicate abstraction. We identify two reasons for spuriousness: abstractions that are too coarse, and candidate transition invariants that are too strong. Our counterexample-guided abstraction refinement algorithm successively weakens candidate transition invariants and refines the abstraction." ] }
0706.0523
2069748505
In predicate abstraction, exact image computation is problematic, requiring in the worst case an exponential number of calls to a decision procedure. For this reason, software model checkers typically use a weak approximation of the image. This can result in a failure to prove a property, even given an adequate set of predicates. We present an interpolant-based method for strengthening the abstract transition relation in case of such failures. This approach guarantees convergence given an adequate set of predicates, without requiring an exact image computation. We show empirically that the method converges more rapidly than an earlier method based on counterexample analysis.
In @cite_8 , interpolants are used to choose new predicates to refine a predicate abstraction. Here, we use interpolants to refine an approximation of the abstract transition relation for a given set of predicates.
{ "cite_N": [ "@cite_8" ], "mid": [ "2151463894" ], "abstract": [ "The success of model checking for large programs depends crucially on the ability to efficiently construct parsimonious abstractions. A predicate abstraction is parsimonious if at each control location, it specifies only relationships between current values of variables, and only those which are required for proving correctness. Previous methods for automatically refining predicate abstractions until sufficient precision is obtained do not systematically construct parsimonious abstractions: predicates usually contain symbolic variables, and are added heuristically and often uniformly to many or all control locations at once. We use Craig interpolation to efficiently construct, from a given abstract error trace which cannot be concretized, a parsominous abstraction that removes the trace. At each location of the trace, we infer the relevant predicates as an interpolant between the two formulas that define the past and the future segment of the trace. Each interpolant is a relationship between current values of program variables, and is relevant only at that particular program location. It can be found by a linear scan of the proof of infeasibility of the trace.We develop our method for programs with arithmetic and pointer expressions, and call-by-value function calls. For function calls, Craig interpolation offers a systematic way of generating relevant predicates that contain only the local variables of the function and the values of the formal parameters when the function was called. We have extended our model checker Blast with predicate discovery by Craig interpolation, and applied it successfully to C programs with more than 130,000 lines of code, which was not possible with approaches that build less parsimonious abstractions." ] }
0706.2434
2143252188
In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
There exists a significant body of literature on networks with Poisson distributed nodes. In @cite_6 , the characteristic function of the interference is obtained when there is no fading and the nodes are Poisson distributed; the probability distribution function of the interference is also provided as an infinite series. The authors of @cite_2 analyze the interference when the contribution of a transmitter located at @math to a receiver located at the origin is exponentially distributed with parameter @math . Using this model, they derive the density function of the interference when the nodes are arranged as a one-dimensional lattice, and they obtain the Laplace transform of the interference when the nodes are Poisson distributed.
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2143252188", "2290648561" ], "abstract": [ "In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.", "A Manhattan Poisson line process divides the plane into an infinite number of rectangular rooms with walls extending infinitely along the axes. When the path loss is dominated by the penetration through each of the walls, a Poisson field of transmitters creates a heavy tailed interference at a randomly picked room, whose distribution is tractable in the Laplace domain. Interference correlation at different rooms is explicitly available. This model gives the first tractable mathematical abstraction to indoor physical environments where wireless signals are shadowed by (common) walls. Applying the analytical results leads to a formula for success probabilities of a transmission attempt between two given rooms." ] }
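For the homogeneous Poisson baseline discussed above, the Rayleigh-fading success probability has the classical closed form p_s = exp(-lam * pi * r^2 * theta^(2/alpha) * Gamma(1 + 2/alpha) * Gamma(1 - 2/alpha)). The sketch below is our own illustrative Monte Carlo check of that formula, not code from the cited works; all parameter values are arbitrary.

```python
import math
import random

def poisson_sample(rng, mean):
    """Knuth's Poisson sampler; adequate while exp(-mean) does not
    underflow double precision (mean up to a few hundred)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def success_prob_mc(lam, r, theta, alpha, radius, trials, seed=1):
    """Empirical P(SIR > theta) for a receiver at the origin whose
    transmitter is at distance r. Interferers form a Poisson point
    process of intensity lam inside a disk of the given radius (edge
    effects are negligible for radius >> r); all links are Rayleigh
    faded, i.e. received powers carry i.i.d. exponential factors."""
    rng = random.Random(seed)
    mean_pts = lam * math.pi * radius ** 2
    wins = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(poisson_sample(rng, mean_pts)):
            # distance of a point placed uniformly at random in the disk
            d = max(radius * math.sqrt(rng.random()), 1e-9)
            interference += rng.expovariate(1.0) * d ** (-alpha)
        signal = rng.expovariate(1.0) * r ** (-alpha)
        if interference == 0.0 or signal / interference > theta:
            wins += 1
    return wins / trials

def success_prob_closed_form(lam, r, theta, alpha):
    """p_s = exp(-lam*pi*r^2 * theta^(2/a) * Gamma(1+2/a) * Gamma(1-2/a))."""
    delta = 2.0 / alpha
    return math.exp(-lam * math.pi * r ** 2 * theta ** delta
                    * math.gamma(1 + delta) * math.gamma(1 - delta))

est = success_prob_mc(lam=0.05, r=1.0, theta=1.0, alpha=4.0,
                      radius=15.0, trials=20000)
print(est, success_prob_closed_form(0.05, 1.0, 1.0, 4.0))  # should agree closely
```

For alpha = 4 the Gamma product reduces to pi/2, and the simulated success probability matches the closed form to within Monte Carlo noise, confirming the baseline against which the clustered (Poisson cluster process) results in the abstract are compared.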